WorldWideScience

Sample records for data-flow processing

  1. Synthesis of a parallel data stream processor from data flow process networks

    NARCIS (Netherlands)

    Zissulescu-Ianculescu, Claudiu

    2008-01-01

    In this talk, we address the problem of synthesizing Process Network specifications to FPGA execution platforms. The process networks we consider are special cases of Kahn Process Networks. We call them COMPAAN Data Flow Process Networks (CDFPN) because they are provided by a translator called the

  2. Adapting high-level language programs for parallel processing using data flow

    Science.gov (United States)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.

  3. Work flow of signal processing data of ground penetrating radar case of rigid pavement measurements

    Science.gov (United States)

    Handayani, Gunawan

    2015-04-01

    The signal processing of Ground Penetrating Radar (GPR) requires a certain work flow to obtain good results. Although GPR data look similar to seismic reflection data, GPR data carry particular signatures that seismic reflection data do not have; this has to do with the coupling between the antennae and the ground surface. Because of this, GPR data should be treated differently from the seismic reflection processing work flow, even though most of the processing steps still follow that work flow (filtering, predictive deconvolution, etc.). This paper presents the work flow for processing GPR data from rigid pavement measurements. The processing starts from the raw data with a de-wow step and DC removal, and continues with the standard steps for suppressing noise, i.e. filtering. Some radargram features particular to rigid pavement along with pile foundations are presented.
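
    The de-wow and DC-removal steps named here are simple trace-by-trace operations. A minimal sketch in Python/NumPy (the running-mean de-wow window and the array shapes are illustrative assumptions, not details from the paper):

      import numpy as np

      def remove_dc(trace):
          # Remove the DC shift by subtracting the mean amplitude of the trace.
          return trace - trace.mean()

      def dewow(trace, window=31):
          # Suppress the low-frequency "wow" by subtracting a running mean;
          # the window length (in samples) is a tuning choice.
          kernel = np.ones(window) / window
          return trace - np.convolve(trace, kernel, mode="same")

      # radargram: one trace per column (placeholder data for illustration)
      radargram = np.random.randn(512, 200)
      processed = np.apply_along_axis(lambda t: dewow(remove_dc(t)), 0, radargram)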

  4. Work flow of signal processing data of ground penetrating radar case of rigid pavement measurements

    International Nuclear Information System (INIS)

    Handayani, Gunawan

    2015-01-01

    The signal processing of Ground Penetrating Radar (GPR) requires a certain work flow to obtain good results. Although GPR data look similar to seismic reflection data, GPR data carry particular signatures that seismic reflection data do not have; this has to do with the coupling between the antennae and the ground surface. Because of this, GPR data should be treated differently from the seismic reflection processing work flow, even though most of the processing steps still follow that work flow (filtering, predictive deconvolution, etc.). This paper presents the work flow for processing GPR data from rigid pavement measurements. The processing starts from the raw data with a de-wow step and DC removal, and continues with the standard steps for suppressing noise, i.e. filtering. Some radargram features particular to rigid pavement along with pile foundations are presented.

  5. Work flow of signal processing data of ground penetrating radar case of rigid pavement measurements

    Energy Technology Data Exchange (ETDEWEB)

    Handayani, Gunawan [The Earth Physics and Complex Systems Research Group, Jl. Ganesa 10, Bandung (Indonesia)], gunawanhandayani@gmail.com

    2015-04-16

    The signal processing of Ground Penetrating Radar (GPR) requires a certain work flow to obtain good results. Although GPR data look similar to seismic reflection data, GPR data carry particular signatures that seismic reflection data do not have; this has to do with the coupling between the antennae and the ground surface. Because of this, GPR data should be treated differently from the seismic reflection processing work flow, even though most of the processing steps still follow that work flow (filtering, predictive deconvolution, etc.). This paper presents the work flow for processing GPR data from rigid pavement measurements. The processing starts from the raw data with a de-wow step and DC removal, and continues with the standard steps for suppressing noise, i.e. filtering. Some radargram features particular to rigid pavement along with pile foundations are presented.

  6. Application of Data Smoothing Method in Signal Processing for Vortex Flow Meters

    Directory of Open Access Journals (Sweden)

    Zhang Jun

    2017-01-01

    The vortex flow meter is a typical flow measurement instrument. Its output signals can easily be impaired by environmental conditions. In order to obtain an improved estimate of the time-averaged velocity from the vortex flow meter, a signal filtering method is applied in this paper. The method is based on a simple Savitzky-Golay smoothing filter algorithm. Based on this algorithm, a numerical program was developed in Python with the scientific library NumPy. Two sample data sets were processed with the program. The results demonstrate that the processed data are acceptable compared with the original data, and an improved estimate of the time-averaged velocity is obtained from the smoothed curves. Overall, this simple data-smoothing program is usable and stable for this filtering task.
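
    The smoothing step described above can be reproduced with SciPy's ready-made Savitzky-Golay filter (the paper implemented its own NumPy routine; the window length and polynomial order below are illustrative assumptions):

      import numpy as np
      from scipy.signal import savgol_filter

      # Simulated vortex flow meter output: a slowly varying velocity signal
      # corrupted by measurement noise (illustrative data only).
      t = np.linspace(0.0, 10.0, 1000)
      signal = 2.0 + 0.3 * np.sin(0.5 * t) + 0.2 * np.random.randn(t.size)

      # Savitzky-Golay smoothing fits a low-order polynomial in a sliding window.
      smoothed = savgol_filter(signal, window_length=51, polyorder=3)

      time_averaged_velocity = smoothed.mean()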

  7. A realization of an automated data flow for data collecting, processing, storing and retrieving

    International Nuclear Information System (INIS)

    Friedsam, H.; Pushor, R.; Ruland, R.

    1986-11-01

    GEONET is a database system developed at the Stanford Linear Accelerator Center for the alignment of the Stanford Linear Collider. It features an automated data flow, ranging from data collection using HP110 handheld computers to processing, storing and retrieving data and finally to adjusted coordinates. This paper gives a brief introduction to the SLC project and the applied survey methods. It emphasizes the hardware and software implementation of GEONET using a network of IBM PC/XT's. 14 refs., 4 figs

  8. Multiverse data-flow control.

    Science.gov (United States)

    Schindler, Benjamin; Waser, Jürgen; Ribičić, Hrvoje; Fuchs, Raphael; Peikert, Ronald

    2013-06-01

    In this paper, we present a data-flow system which supports comparative analysis of time-dependent data and interactive simulation steering. The system creates data on-the-fly to allow for the exploration of different parameters and the investigation of multiple scenarios. Existing data-flow architectures provide no generic approach to handle modules that perform complex temporal processing such as particle tracing or statistical analysis over time. Moreover, there is no solution to create and manage module data, which is associated with alternative scenarios. Our solution is based on generic data-flow algorithms to automate this process, enabling elaborate data-flow procedures, such as simulation, temporal integration or data aggregation over many time steps in many worlds. To hide the complexity from the user, we extend the World Lines interaction techniques to control the novel data-flow architecture. The concept of multiple, special-purpose cursors is introduced to let users intuitively navigate through time and alternative scenarios. Users specify only what they want to see; the decision as to which data are required is handled automatically. The concepts are explained by taking the example of the simulation and analysis of material transport in levee-breach scenarios. To strengthen the general applicability, we demonstrate the investigation of vortices in an offline-simulated dam-break data set.

  9. Measurement system of bubbly flow using ultrasonic velocity profile monitor and video data processing unit. 2. Flow characteristics of bubbly countercurrent flow

    International Nuclear Information System (INIS)

    Aritomi, Masanori; Zhou, Shirong; Nakajima, Makoto; Takeda, Yasushi; Mori, Michitsugu.

    1997-01-01

    The authors have developed a measurement system composed of an ultrasonic velocity profile monitor and a video data processing unit in order to clarify the multi-dimensional flow characteristics of bubbly flows and to offer a data base for validating numerical codes for multi-dimensional two-phase flow. In this paper, the measurement system was applied to bubbly countercurrent flows in a vertical rectangular channel. At first, both bubble and water velocity profiles and void fraction profiles in the channel were investigated statistically. Next, the turbulence intensity in the continuous liquid phase was defined as the standard deviation of the velocity fluctuation, and the two-phase multiplier profile of turbulence intensity in the channel was clarified as the ratio of the standard deviation of flow fluctuation in a bubbly countercurrent flow to that in a single-phase water flow. Finally, the distribution parameter and drift velocity used in the drift flux model for bubbly countercurrent flows were calculated from the obtained velocity profiles of both phases and the void fraction profile, and were compared with the correlation proposed for bubbly countercurrent flows. (author)
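
    The drift-flux quantities named at the end follow from area averages of the measured profiles. A sketch using the standard drift-flux definitions (the profiles below are invented placeholders, not the paper's data):

      import numpy as np

      y = np.linspace(0.0, 1.0, 50)           # normalized position across the gap
      alpha = 0.1 * (1 - (2 * y - 1) ** 2)    # void fraction profile
      u_g = 0.25 * np.ones_like(y)            # bubble velocity profile [m/s]
      u_l = -0.05 * np.ones_like(y)           # water velocity (countercurrent) [m/s]

      def area_avg(q):
          # For a plane channel the area average reduces to an average across the gap.
          return np.trapz(q, y) / (y[-1] - y[0])

      j_g = alpha * u_g              # local gas superficial velocity
      j = j_g + (1 - alpha) * u_l    # local total superficial velocity

      # Standard drift-flux definitions:
      C0 = area_avg(alpha * j) / (area_avg(alpha) * area_avg(j))   # distribution parameter
      v_gj = area_avg(alpha * (u_g - j)) / area_avg(alpha)         # weighted drift velocity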

  10. GEONET - A Realization of an Automated Data Flow for Data Collecting, Processing, Storing, and Retrieving

    International Nuclear Information System (INIS)

    Friedsam, Horst; Pushor, Robert; Ruland, Robert; SLAC

    2005-01-01

    GEONET is a database system developed at the Stanford Linear Accelerator Center for the alignment of the Stanford Linear Collider. It features an automated data flow, ranging from data collection using HP110 handheld computers to processing, storing and retrieving data and finally to adjusted coordinates. This paper gives a brief introduction to the SLC project and the applied survey methods. It emphasizes the hardware and software implementation of GEONET using a network of IBM PC/XT's

  11. Measurement system of bubbly flow using ultrasonic velocity profile monitor and video data processing unit

    International Nuclear Information System (INIS)

    Aritomi, Masanori; Zhou, Shirong; Nakajima, Makoto; Takeda, Yasushi; Mori, Michitsugu; Yoshioka, Yuzuru.

    1996-01-01

    The authors have been developing a measurement system for bubbly flow in order to clarify its multi-dimensional flow characteristics and to offer a data base to validate numerical codes for multi-dimensional two-phase flow. In this paper, the measurement system combining an ultrasonic velocity profile monitor with a video data processing unit is proposed, which can measure simultaneously velocity profiles in both gas and liquid phases, a void fraction profile for bubbly flow in a channel, and an average bubble diameter and void fraction. Furthermore, the proposed measurement system is applied to measure flow characteristics of a bubbly countercurrent flow in a vertical rectangular channel to verify its capability. (author)

  12. Semantic Complex Event Processing over End-to-End Data Flows

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Qunzhi [University of Southern California]; Simmhan, Yogesh; Prasanna, Viktor K.

    2012-04-01

    Emerging Complex Event Processing (CEP) applications in cyber-physical systems like Smart Power Grids present novel challenges for end-to-end analysis over events flowing from heterogeneous information sources to persistent knowledge repositories. CEP for these applications must support two distinctive features - easy specification of patterns over diverse information streams, and integrated pattern detection over real-time and historical events. Existing work on CEP has been limited to relational query patterns, and engines that match events arriving after the query has been registered. We propose SCEPter, a semantic complex event processing framework which uniformly processes queries over continuous and archived events. SCEPter is built around an existing CEP engine with innovative support for semantic event pattern specification and allows their seamless detection over past, present and future events. Specifically, we describe a unified semantic query model that can operate over data flowing through event streams to event repositories. Compile-time and runtime semantic patterns are distinguished and addressed separately for efficiency. Query rewriting is examined and analyzed in the context of temporal boundaries that exist between event streams and their repository to avoid duplicate or missing results. The design and prototype implementation of SCEPter are analyzed using latency and throughput metrics for scenarios from the Smart Grid domain.

  13. Measurement system of bubbly flow using Ultrasonic Velocity Profile Monitor and Video Data Processing Unit. 3. Comparison of flow characteristics between bubbly cocurrent and countercurrent flows

    International Nuclear Information System (INIS)

    Zhou, Shirong; Suzuki, Yumiko; Aritomi, Masanori; Matsuzaki, Mitsuo; Takeda, Yasushi; Mori, Michitsugu

    1998-01-01

    The authors have developed a new measurement system consisting of an Ultrasonic Velocity Profile Monitor (UVP) and a Video Data Processing Unit (VDP) in order to clarify the two-dimensional flow characteristics of bubbly flows and to offer a data base for validating numerical codes for two-dimensional two-phase flow. In the present paper, the proposed measurement system is applied to fully developed bubbly cocurrent flows in a vertical rectangular channel. At first, both bubble and water velocity profiles and void fraction profiles in the channel were investigated statistically. In addition, the two-phase multiplier profile of turbulence intensity, defined as the ratio of the standard deviation of velocity fluctuation in a bubbly flow to that in a single-phase water flow, was examined. Next, these flow characteristics were compared with those in bubbly countercurrent flows reported in our previous paper. Finally, concerning the drift flux model, the distribution parameter and drift velocity were obtained directly from both bubble and water velocity profiles and void fraction profiles, and the results were compared with those in bubbly countercurrent flows. (author)

  14. Massively parallel data processing for quantitative total flow imaging with optical coherence microscopy and tomography

    Science.gov (United States)

    Sylwestrzak, Marcin; Szlag, Daniel; Marchand, Paul J.; Kumar, Ashwin S.; Lasser, Theo

    2017-08-01

    We present an application of massively parallel processing of quantitative flow measurement data acquired using spectral optical coherence microscopy (SOCM). The need for massive signal processing of these particular datasets has been a major hurdle for many applications based on SOCM. In view of this difficulty, we implemented and adapted quantitative total flow estimation algorithms on graphics processing units (GPU) and achieved a 150-fold reduction in processing time when compared to a former CPU implementation. As SOCM constitutes the microscopy counterpart to spectral optical coherence tomography (SOCT), the developed processing procedure can be applied to both imaging modalities. We present the developed DLL library integrated in MATLAB (with an example) and have included the source code for adaptations and future improvements.
    Catalogue identifier: AFBT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFBT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU GPLv3
    No. of lines in distributed program, including test data, etc.: 913552
    No. of bytes in distributed program, including test data, etc.: 270876249
    Distribution format: tar.gz
    Programming language: CUDA/C, MATLAB.
    Computer: Intel x64 CPU, GPU supporting CUDA technology.
    Operating system: 64-bit Windows 7 Professional.
    Has the code been vectorized or parallelized?: Yes, CPU code has been vectorized in MATLAB, CUDA code has been parallelized.
    RAM: Dependent on user's parameters, typically between several gigabytes and several tens of gigabytes
    Classification: 6.5, 18.
    Nature of problem: Speed-up of data processing in optical coherence microscopy
    Solution method: Utilization of GPU for massively parallel data processing
    Additional comments: Compiled DLL library with source code and documentation, example of utilization (MATLAB script with raw data)
    Running time: 1.8 s for one B-scan (150× faster in comparison to the CPU

  15. Image processing system for flow pattern measurements

    International Nuclear Information System (INIS)

    Ushijima, Satoru; Miyanaga, Yoichi; Takeda, Hirofumi

    1989-01-01

    This paper describes the development and application of an image processing system for measuring flow patterns occurring in natural-circulation water flows. In this method, the motions of particles scattered in the flow are visualized by a laser light slit and recorded on normal video tapes. These image data are converted to digital data with an image processor and then transferred to a large computer. The center points and pathlines of the particle images are numerically analyzed, and velocity vectors are obtained from these results. In this image processing system, velocity vectors in a vertical plane are measured simultaneously, so that the two-dimensional behaviors of various eddies, with low velocity and complicated flow patterns usually observed in natural circulation flows, can be determined almost quantitatively. The measured flow patterns, which were obtained from natural circulation flow experiments, agreed with photographs of the particle movements, and the validity of this measuring system was confirmed in this study. (author)
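
    In essence, the velocity-vector step described here differences matched particle centers between successive video frames. A minimal sketch (the nearest-neighbour matching and all names are illustrative assumptions, not the paper's algorithm):

      import numpy as np

      def velocity_vectors(centers_a, centers_b, dt, max_disp=5.0):
          # centers_a, centers_b: (N, 2) arrays of particle centers in two frames.
          # Each particle is matched to its nearest neighbour in the next frame;
          # pairs farther apart than max_disp are discarded as mismatches.
          vectors = []
          for p in centers_a:
              d = np.linalg.norm(centers_b - p, axis=1)
              j = np.argmin(d)
              if d[j] <= max_disp:
                  vectors.append(((p + centers_b[j]) / 2, (centers_b[j] - p) / dt))
          return vectors  # list of (midpoint, velocity) pairs

      centers_a = np.array([[10.0, 12.0], [40.0, 8.0]])
      centers_b = np.array([[11.0, 12.5], [42.0, 9.0]])
      vecs = velocity_vectors(centers_a, centers_b, dt=0.04)  # 25 fps video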

  16. COPASutils: an R package for reading, processing, and visualizing data from COPAS large-particle flow cytometers.

    Directory of Open Access Journals (Sweden)

    Tyler C Shimko

    The R package COPASutils provides a logical workflow for the reading, processing, and visualization of data obtained from the Union Biometrica Complex Object Parametric Analyzer and Sorter (COPAS) or the BioSorter large-particle flow cytometers. Data obtained from these powerful experimental platforms can be unwieldy, making the data difficult to process and visualize with existing tools. Researchers studying small organisms, such as Caenorhabditis elegans, Anopheles gambiae, and Danio rerio, with these devices will benefit from this streamlined and extensible R package. COPASutils offers a powerful suite of functions for the rapid processing and analysis of large high-throughput screening data sets.

  17. Hanford Site Treated Effluent Disposal Facility process flow sheet

    International Nuclear Information System (INIS)

    Bendixsen, R.B.

    1993-04-01

    This report presents a novel method of using precipitation, destruction and recycle factors to prepare a process flow sheet. The 300 Area Treated Effluent Disposal Facility (TEDF) will treat process sewer waste water from the 300 Area of the Hanford Site, located near Richland, Washington, and discharge a permitted effluent flow into the Columbia River. When completed and operating, the TEDF effluent water flow will meet or exceed water quality standards for the 300 Area process sewer effluents. A preliminary safety analysis document (PSAD), a preconstruction requirement, needed a process flow sheet detailing the concentrations of radionuclides, inorganics and organics throughout the process, including the effluents, and providing estimates of stream flow quantities, activities, composition, and properties (i.e., temperature, pressure, specific gravity, pH and heat transfer rates). As the facility begins to operate, data from process samples can be used to provide better estimates of the factors; these can then be entered into the flow sheet, which will estimate more accurate steady-state concentrations for the components. This report shows how the factors were developed and how they were used in developing a flow sheet to estimate component concentrations for the process flows. The report concludes with how TEDF sample data can improve the ability of the flow sheet to accurately predict concentrations of components in the process

  18. Flow Logic for Process Calculi

    DEFF Research Database (Denmark)

    Nielson, Hanne Riis; Nielson, Flemming; Pilegaard, Henrik

    2012-01-01

    Flow Logic is an approach to statically determining the behavior of programs and processes. It borrows methods and techniques from Abstract Interpretation, Data Flow Analysis and Constraint Based Analysis while presenting the analysis in a style more reminiscent of Type Systems. Traditionally developed for programming languages, this article provides a tutorial development of the approach of Flow Logic for process calculi based on a decade of research. We first develop a simple analysis for the π-calculus; this consists of the specification, semantic soundness (in the form of subject reduction) …, and finally, we extend it to a relational analysis. A Flow Logic is a program logic, in the same sense that a Hoare logic is. We conclude with an executive summary presenting the highlights of the approach from this perspective, including a discussion of theoretical properties as well as implementation …

  19. Storing Data Flow Monitoring in Hadoop

    CERN Document Server

    Georgiou, Anastasia

    2013-01-01

    The on-line data flow monitoring for the CMS data acquisition system produces a large amount of data. Only 5% of the data is stored permanently in a relational database, due to performance issues and the cost of using dedicated infrastructure (e.g. Oracle systems). In a commercial environment, companies and organizations need to find new innovative approaches to process such big volumes of data, known as “big data”. The Big Data approach tries to address the problem of large and complex collections of data sets that become difficult to handle using traditional data processing applications. Using these new technologies, it should be possible to store all the monitoring information for a time window of months or a year. This report contains an initial evaluation of Hadoop for storage of data flow monitoring and subsequent data mining.

  20. Visual Modelling of Data Warehousing Flows with UML Profiles

    Science.gov (United States)

    Pardillo, Jesús; Golfarelli, Matteo; Rizzi, Stefano; Trujillo, Juan

    Data warehousing involves complex processes that transform source data through several stages to deliver suitable information ready to be analysed. Though many techniques for visual modelling of data warehouses from the static point of view have been devised, only a few attempts have been made to model the data flows involved in a data warehousing process. Besides, each attempt was mainly aimed at a specific application, such as ETL, OLAP, what-if analysis, or data mining. Data flows are typically very complex in this domain; for this reason, we argue, designers would greatly benefit from a technique for uniformly modelling data warehousing flows for all applications. In this paper, we propose an integrated visual modelling technique for data cubes and data flows. This technique is based on UML profiling; its feasibility is evaluated by means of a prototype implementation.

  1. Data triggered data processing at MFTF-B

    International Nuclear Information System (INIS)

    Jackson, R.J.; Balch, T.R.; Preckshot, G.G.

    1985-01-01

    A primary characteristic of most batch systems is that the input data files must exist before jobs are scheduled. On the Mirror Fusion Test Facility (MFTF-B) at Lawrence Livermore National Laboratory we schedule jobs to process experimental data to be collected during a five minute shot cycle. Our data-driven processing system emulates a coarsely granular data flow architecture. Processing jobs are scheduled before the experimental data is collected. Processing jobs "fire", or execute, as input data becomes available. Similar to UNIX "pipes", data produced by upstream processing nodes may be used as inputs by following nodes. Users, working on our networked SUN workstations, specify data processing templates which define processes and their data dependencies. Data specifications indicate the source of data; actual associations with specific data instantiations are made when the jobs are scheduled. We report here on details of diagnostic data processing and our experiences
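
    The "fire when inputs become available" behaviour described here can be mimicked with a tiny scheduler. A sketch (hypothetical node and data names, not the MFTF-B code):

      # Nodes fire once all of their declared inputs exist; their outputs
      # may in turn feed downstream nodes, much like UNIX pipes.
      def run_dataflow(nodes, initial_data):
          # nodes: list of (name, input_names, output_name, function) tuples
          data = dict(initial_data)
          pending = list(nodes)
          while pending:
              ready = [n for n in pending if all(i in data for i in n[1])]
              if not ready:
                  break  # remaining nodes wait for data not yet collected
              for node in ready:
                  name, inputs, output, fn = node
                  data[output] = fn(*(data[i] for i in inputs))
                  pending.remove(node)
          return data

      nodes = [
          ("calibrate", ["raw"], "calibrated", lambda r: [x * 1.1 for x in r]),
          ("average", ["calibrated"], "mean", lambda c: sum(c) / len(c)),
      ]
      result = run_dataflow(nodes, {"raw": [1.0, 2.0, 3.0]})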

  2. Monitoring the data flow of LHCb’s data acquisition system

    CERN Document Server

    Svantesson, David; Rainer, S

    2010-01-01

    The data acquisition system of the Large Hadron Collider beauty (LHCb) experiment needs to read out a huge amount of data. Monitoring is done for each subsystem, but no system exists to monitor the overall data flow. The aim of this work has been to design a system where the data rates can be viewed continuously, making it possible to do an exact consistency check after the run to ensure no data are lost. This involves collecting and processing all necessary data from each subsystem and integrating it into the experiment control system for display to the operators. The challenge is to communicate with and collect data from all stages of the data acquisition system, which use different techniques and data formats. The size of the system also makes it a challenge to gather all statistics in real time. The system must also be able to support partitioning. The result was a data flow monitoring system that acquires statistics from all stages of the data acquisition, processes them and displays them in the ex...

  3. Data qualification summary for 1985 L-Area AC Flow Tests

    International Nuclear Information System (INIS)

    Edwards, T.B.; Eghbali, D.A.; Liebmann, M.L.; Shine, E.P.

    1992-03-01

    The 1985 L-Area AC Flow Tests were conducted to provide an extended data base for upgrading the reactor system models employed in predicting normal process water flows. This report summarizes the results of the recently completed, formal, technical review of the data from the 1985 L-Area AC Flow Tests as detailed in document SCS-CMAS-910045. The purpose of that review was to provide corroborating technical information as to the quality (fitness for use) of these experimental data. Reference [1] required three volumes to fully document the results of that Data Qualification process. This report has been prepared to provide the important conclusions from that process in a manageable and understandable format. Consult reference [1] if any additional information or detail is needed. This report provides highlights from that study: an overview of the tests and data, a description of the instrumentation used, an explanation of the data qualification methods employed to review the data, and the important conclusions reached from the study. Reference 1: Edwards, T.B., D.A. Eghbali, M.L. Liebmann, and E.P. Shine, "Data Qualification for 1985 L-Area AC Flow Tests," SCS-CMAS-910045, December 31, 1991

  4. Recharge and flow processes in a till aquitard

    DEFF Research Database (Denmark)

    Schrøder, Thomas Morville; Høgh Jensen, Karsten; Dahl, Mette

    1999-01-01

    Eastern Denmark is primarily covered by clay till. The transformation of the excess rainfall into laterally diverted groundwater flow, drain flow, stream flow, and recharge to the underlying aquifer is governed by complicated, interrelated processes. Distributed hydrological models provide a framework for assessing the individual flow components and for establishing the overall water balance. Traditionally such models are calibrated against measurements of stream flow, head in the aquifer and perhaps drainage flow. The head in the near-surface clay till deposits has generally not been measured … the shallow wells and one in the valley adjacent to the stream. Precipitation and stream flow gauging along with potential evaporation estimates from a nearby weather station provide the basic data for the overall water balance assessment. The geological composition was determined from geoelectrical surveys...

  5. Graphical Language for Data Processing

    Science.gov (United States)

    Alphonso, Keith

    2011-01-01

    A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as lidar data, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user-interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system infinitely expandable.

  6. Identification of 3-phase flow patterns of heavy oil from pressure drop and flow rate data

    Energy Technology Data Exchange (ETDEWEB)

    Pacheco, F.; Bannwart, A.C.; Mendes, J.R.P. [Campinas State Univ., Sao Paulo (Brazil); Serapiao, A.B.S. [Sao Paulo State Univ., Sao Paulo (Brazil)

    2008-07-01

    Pipe flow of oil-gas-water mixtures poses a complex thermo-fluid dynamical problem. This paper examined the relationship between phase flow rates, flow pattern identification, and pressure drop in 3-phase water-assisted heavy oil in the presence of a gaseous phase. An artificial intelligence program called a support vector machine (SVM) was used to determine relevant parameters for flow pattern classification. Data from a 3-phase flow of heavy oil with gas and water in a vertical pipe was used in the study. The data were used to train the machine, which then predicted the flow pattern of the remaining data. Tests with different parameters and training data were then performed. The study showed that the proposed SVM flow pattern identification process accurately predicted flow patterns. It was concluded that the SVM took a relatively short amount of time to train. Future research is needed to apply the tool to larger flow datasets. 5 refs., 1 tab., 2 figs.
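
    As a rough illustration of the classification setup described (scikit-learn's SVC stands in for the SVM tool used in the paper; the features and flow-pattern labels below are invented):

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      # Invented feature rows: [oil rate, water rate, gas rate, pressure drop]
      X = np.random.rand(200, 4)
      # Invented flow-pattern labels, e.g. 0 = bubbly, 1 = slug, 2 = annular
      y = np.random.randint(0, 3, size=200)

      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.3, random_state=0)

      clf = SVC(kernel="rbf")    # kernel choice is an assumption
      clf.fit(X_train, y_train)  # training is typically fast, as the paper notes
      print("test accuracy:", clf.score(X_test, y_test))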

  7. Disjunctive Information Flow for Communicating Processes

    DEFF Research Database (Denmark)

    Li, Ximeng; Nielson, Flemming; Nielson, Hanne Riis

    2016-01-01

    The security validation of practical computer systems calls for the ability to specify and verify information flow policies that are dependent on data content. Such policies play an important role in concurrent, communicating systems: consider a scenario where messages are sent to different processes according to their tagging. We devise a security type system that enforces content-dependent information flow policies in the presence of communication and concurrency. The type system soundly guarantees a compositional noninterference property. All theoretical results have been formally proved …

  8. Data triggered data processing at the Mirror Fusion Test Facility

    International Nuclear Information System (INIS)

    Jackson, R.J.; Balch, T.R.; Preckshot, G.G.

    1986-01-01

    A primary characteristic of most batch systems is that the input data files must exist before jobs are scheduled. On the Mirror Fusion Test Facility (MFTF-B) at Lawrence Livermore National Laboratory the authors schedule jobs to process experimental data to be collected during a five minute shot cycle. The data-driven processing system emulates a coarsely granular data flow architecture. Processing jobs are scheduled before the experimental data is collected. Processing jobs "fire", or execute, as input data becomes available. Similar to UNIX "pipes", data produced by upstream processing nodes may be used as inputs by following nodes. Users, working on the networked SUN workstations, specify data processing templates which define processes and their data dependencies. Data specifications indicate the source of data; actual associations with specific data instantiations are made when the jobs are scheduled. The authors report here on details of diagnostic data processing and their experiences

  9. Effect of material flows on energy intensity in process industries

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Liru; Aye, Lu [International Technologies Center (IDTC), Department of Civil and Environmental Engineering, The University of Melbourne, Victoria 3010 (Australia); Lu, Zhongwu [Institute of Materials and Metallurgy, Northeastern University, Shenyang 110004 (China); Zhang, Peihong [Department of Municipal and Environmental Engineering, Shenyang Architecture University, Shenyang 110168 (China)

    2006-09-15

    Many energy-intensive process industries have complex material flows, which have a strong effect on the overall energy intensity of the final product (OEIF). This problem, however, has only been recognised qualitatively due to the lack of quantitative analysis methods. This paper presents an in-depth quantitative analysis of the effect of material flows on energy intensity in process industries. Based on the concept of a standard material flow diagram (SMFD), as used in steel manufacturing, the SMFD for a generic process industry was first developed. Then material flow scenarios were addressed in a practical material flow diagram (PMFD) using the characteristics of practical process industries. The effect of each material flow deviating from a SMFD on the OEIF was analysed. The steps involved in analysing the effect of material flows in a PMFD on its energy intensity are also discussed in detail. Finally, using 1999 statistical data from the Chinese Zhenzhou alumina refinery plant, the PMFD and SMFD for this plant were constructed as a case study. The effect of material flows on the overall energy intensity of alumina (OEIA) was thus analysed quantitatively. To decrease OEIA, the process variations which decrease the product ratios could be employed in all except in multi-supplied fraction cases. In these cases, the fractions from the stream with lower energy intensities should be increased. (author)
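
    One way to make the dependence on material flows explicit (our illustrative formalization, not necessarily the authors' exact notation) is to weight each process's energy intensity by its material throughput per unit of final product:

      # Overall energy intensity as a flow-weighted sum:
      #   OEI = sum_i (m_i / m_product) * e_i
      # where m_i / m_product is the product ratio of process i.
      # All numbers below are purely illustrative.
      processes = {
          # name: (throughput per tonne of final product, energy intensity in GJ/t)
          "mining":    (1.8, 0.5),
          "refining":  (1.3, 9.0),
          "finishing": (1.0, 2.0),
      }
      oei = sum(ratio * intensity for ratio, intensity in processes.values())
      print(f"overall energy intensity: {oei:.2f} GJ per tonne of final product")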

  10. Flow in data racks

    Directory of Open Access Journals (Sweden)

    Manoch Lukáš

    2014-03-01

    This paper deals with the flow in data racks. The aim of this work is to find a new arrangement of the elements regulating the flow in the data rack so that aerodynamic losses and recirculation zones are minimized. The main reason for solving this problem is to reduce the cost of cooling data racks. Another problem to be solved is reverse flow through the servers, which are then not cooled, occurring due to the underpressure in the recirculation zones. In order to solve the problem, experimental and numerical models of a 27U data rack fitted with 10 server models with a total input of 10 kW were created. Different configurations of the layout of elements affecting the flow in the inlet area of the data rack were compared. Based on the results achieved, design solutions improving the existing arrangement were adopted and verified by numerical simulations.

  11. Exploring the feasibility of multi-site flow cytometric processing of gut associated lymphoid tissue with centralized data analysis for multi-site clinical trials.

    Directory of Open Access Journals (Sweden)

    Ian McGowan

    The purpose of this study was to determine whether the development of a standardized approach to the collection of intestinal tissue from healthy volunteers, isolation of gut associated lymphoid tissue mucosal mononuclear cells (MMC), and characterization of mucosal T cell phenotypes by flow cytometry was sufficient to minimize differences in the normative ranges of flow parameters generated at two trial sites. Forty healthy male study participants were enrolled in Pittsburgh and Los Angeles. MMC were isolated from rectal biopsies using the same biopsy acquisition and enzymatic digestion protocols. As an additional comparator, peripheral blood mononuclear cells (PBMC) were collected from the study participants. For quality control, cryopreserved PBMC from a single donor were supplied to both sites from a central repository (qPBMC). Using a jointly optimized standard operating procedure, cells were isolated from tissue and blood and stained with monoclonal antibodies targeted to T cell phenotypic markers. Site-specific flow data were analyzed by an independent center which analyzed all data from both sites. Ranges for frequencies for overall CD4+ and CD8+ T cells, derived from the qPBMC samples, were equivalent at both UCLA and MWRI. However, there were significant differences across sites for the majority of T cell activation and memory subsets in qPBMC as well as PBMC and MMC. Standardized protocols to collect, stain, and analyze MMC and PBMC, including centralized analysis, can reduce but not exclude variability in reporting flow data within multi-site studies. Based on these data, centralized processing, flow cytometry, and analysis of samples may provide more robust data across multi-site studies. Centralized processing requires either shipping of fresh samples or cryopreservation and the decision to perform centralized versus site processing needs to take into account the drawbacks and restrictions associated with each method.

  12. Exploring the feasibility of multi-site flow cytometric processing of gut associated lymphoid tissue with centralized data analysis for multi-site clinical trials.

    Science.gov (United States)

    McGowan, Ian; Anton, Peter A; Elliott, Julie; Cranston, Ross D; Duffill, Kathryn; Althouse, Andrew D; Hawkins, Kevin L; De Rosa, Stephen C

    2015-01-01

    The purpose of this study was to determine whether the development of a standardized approach to the collection of intestinal tissue from healthy volunteers, isolation of gut associated lymphoid tissue mucosal mononuclear cells (MMC), and characterization of mucosal T cell phenotypes by flow cytometry was sufficient to minimize differences in the normative ranges of flow parameters generated at two trial sites. Forty healthy male study participants were enrolled in Pittsburgh and Los Angeles. MMC were isolated from rectal biopsies using the same biopsy acquisition and enzymatic digestion protocols. As an additional comparator, peripheral blood mononuclear cells (PBMC) were collected from the study participants. For quality control, cryopreserved PBMC from a single donor were supplied to both sites from a central repository (qPBMC). Using a jointly optimized standard operating procedure, cells were isolated from tissue and blood and stained with monoclonal antibodies targeted to T cell phenotypic markers. Site-specific flow data were analyzed by an independent center which analyzed all data from both sites. Ranges for frequencies for overall CD4+ and CD8+ T cells, derived from the qPBMC samples, were equivalent at both UCLA and MWRI. However, there were significant differences across sites for the majority of T cell activation and memory subsets in qPBMC as well as PBMC and MMC. Standardized protocols to collect, stain, and analyze MMC and PBMC, including centralized analysis, can reduce but not exclude variability in reporting flow data within multi-site studies. Based on these data, centralized processing, flow cytometry, and analysis of samples may provide more robust data across multi-site studies. Centralized processing requires either shipping of fresh samples or cryopreservation and the decision to perform centralized versus site processing needs to take into account the drawbacks and restrictions associated with each method.

  13. File-based data flow in the CMS Filter Farm

    Science.gov (United States)

    Andre, J.-M.; Andronidis, A.; Bawej, T.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small “documents” using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These “files” can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.
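
    The small JSON bookkeeping "documents" lend themselves to simple aggregation. A hedged sketch (the field names and the .jsn extension are assumptions for illustration, not the actual CMS schema):

      import json
      from pathlib import Path

      def aggregate_rate_documents(directory):
          # Sum event counts from per-process JSON documents in one directory.
          # Each document is assumed, for illustration, to look like
          # {"stream": "A", "events": 1234}.
          totals = {}
          for doc_path in Path(directory).glob("*.jsn"):
              doc = json.loads(doc_path.read_text())
              totals[doc["stream"]] = totals.get(doc["stream"], 0) + doc["events"]
          return totals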

  14. File-Based Data Flow in the CMS Filter Farm

    Energy Technology Data Exchange (ETDEWEB)

    Andre, J.M.; et al.

    2015-12-23

    During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small “documents” using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These “files” can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.

  15. 4D flow MRI post-processing strategies for neuropathologies

    Science.gov (United States)

    Schrauben, Eric Mathew

    4D flow MRI allows for the measurement of a dynamic 3D velocity vector field. Blood flow velocities in large vascular territories can be qualitatively visualized with the added benefit of quantitative probing. Within cranial pathologies theorized to have vascular-based contributions or effects, 4D flow MRI provides a unique platform for comprehensive assessment of hemodynamic parameters. Targeted blood flow derived measurements, such as flow rate, pulsatility, retrograde flow, or wall shear stress may provide insight into the onset or characterization of more complex neuropathologies. Therefore, the thorough assessment of each parameter within the context of a given disease has important medical implications. Not surprisingly, the last decade has seen rapid growth in the use of 4D flow MRI. Data acquisition sequences are available to researchers on all major scanner platforms. However, the use has been limited mostly to small research trials. One major reason that has hindered the more widespread use and application in larger clinical trials is the complexity of the post-processing tasks and the lack of adequate tools for these tasks. Post-processing of 4D flow MRI must be semi-automated, fast, user-independent, robust, and reliably consistent for use in a clinical setting, within large patient studies, or across a multicenter trial. Development of proper post-processing methods coupled with systematic investigation in normal and patient populations pushes 4D flow MRI closer to clinical realization while elucidating potential underlying neuropathological origins. Within this framework, the work in this thesis assesses venous flow reproducibility and internal consistency in a healthy population. A preliminary analysis of venous flow parameters in healthy controls and multiple sclerosis patients is performed in a large study employing 4D flow MRI. These studies are performed in the context of the chronic cerebrospinal venous insufficiency hypothesis. Additionally, a

  16. Validation of CFD predictions using process data obtained from flow through an industrial control valve

    International Nuclear Information System (INIS)

    Green, J; Mishra, R; Charlton, M; Owen, R

    2012-01-01

    This study uses experimental flow test data to validate CFD simulations of a complex control valve trim. In both the simulation and the experimental flow test, the capacity of the trim (Cv) is calculated in order to test the ability of the CFD software to serve as a design tool for these trims. While the CFD simulations produced capacity results that were consistent across a series of five different simulations, the predicted capacity differed from the experimental flow data by nearly 25%. This indicates that CFD simulations need to be properly calibrated before being used to design complex valve trims.
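
    For reference, liquid valve capacity is conventionally computed from the standard sizing relation Cv = Q * sqrt(SG / dP), with Q in US gal/min and dP in psi. A one-line check (the numbers are illustrative, not the paper's test conditions):

      from math import sqrt

      def valve_cv(q_gpm, dp_psi, specific_gravity=1.0):
          # Standard liquid valve sizing: Cv = Q * sqrt(SG / dP)
          return q_gpm * sqrt(specific_gravity / dp_psi)

      print(valve_cv(q_gpm=150.0, dp_psi=25.0))  # -> 30.0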

  17. Towards an optimized flow-sheet for a SANEX demonstration process using centrifugal contactors

    International Nuclear Information System (INIS)

    Magnusson, D.; Christiansen, B.; Glatz, J.P.; Malmbeck, R.; Serrano-Purroy, D.; Modolo, G.; Sorel, C.

    2008-01-01

    The design of an efficient process flow-sheet requires accurate extraction data for the experimental set-up used. Often these data are provided as equilibrium data. Due to the small hold-up volume compared to the flow rate in centrifugal contactors, the extraction time is often too short to reach the equilibrium D-ratios. In this work, single-stage kinetics experiments have been carried out to investigate the dependence of the D-ratio on the flow rate and also to compare with equilibrium batch experiments for CyMe4-BTBP. The first centrifuge experiment was run with spiked solutions, while in the second a genuine actinide/lanthanide fraction from a TODGA process was used. Three different flow rates were tested with each set-up. The results show that even with low flow rates, only around 8% of the equilibrium D-ratio (Am) was reached for the extraction in the spiked test and around 16% in the hot test (the difference is due to the size of the centrifuges). The general conclusion is that the development of a process flow sheet needs investigation of the kinetic behaviour in the actual equipment used. (authors)

  18. A framework about flow measurements by LDA–PDA as a spatio-temporal average: application to data post-processing

    International Nuclear Information System (INIS)

    Calvo, Esteban; García, Juan A; García, Ignacio; Aísa, Luis; Santolaya, José Luis

    2012-01-01

    Phase Doppler anemometry (PDA) is a well-established technique to study two-phase flows and its principles are also used in laser Doppler anemometry (LDA) for measurements of fluid velocity. Raw measurements of individual particle data require post-processing to obtain useful and consistent information (moments of velocity, particle concentration and flux, velocity autocorrelation, etc). In this paper, this is called the reconstruction of statistical information. In the 1970s, several basic algorithms to perform the statistical reconstruction were developed for LDA measurements (such as the transit time method, the inverse velocity method, etc). With the advent of PDA, the scientific community developed reconstruction algorithms to obtain mean variables of the dispersed phase. All these basic algorithms were expounded as unconnected methods, following independent threads not integrated into a general framework. Assuming that the PDA works under ideal conditions (all particles that cross the probe volume are validated), this paper provides a general formulation and fully systematizes a large set of previous statistical reconstruction methods. In this paper, the statistical reconstruction of both the dispersed and the continuous phase is unified: the continuous phase post-processing emerges as the same reconstruction method of the dispersed phase. The general framework proposed offers many advantages. First, some previous calculation methods of particle concentration turn out to be particular cases of this general formulation. Second, it provides an easy way to deduce unbiased estimators of any statistical parameter of the flow. Third, a wide set of new post-processing methods are proposed to be tested by any member of the scientific community. Fourth, the generalized integral method to compute the particle concentration also gives information about the probe volume geometry and two new auto-calibration algorithms are proposed: the integral calibration
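
    The transit time method named above corrects LDA velocity bias by weighting each particle sample with its residence time in the probe volume. A minimal sketch of the weighted statistics (the sample values are invented):

      import numpy as np

      def transit_time_stats(velocities, transit_times):
          # Transit-time-weighted mean and variance of LDA velocity samples.
          # Weighting by transit time compensates for faster particles
          # crossing the probe volume more frequently (velocity bias).
          u = np.asarray(velocities, dtype=float)
          w = np.asarray(transit_times, dtype=float)
          mean = np.sum(w * u) / np.sum(w)
          var = np.sum(w * (u - mean) ** 2) / np.sum(w)
          return mean, var

      mean_u, var_u = transit_time_stats([2.1, 2.5, 1.9], [0.8e-3, 0.6e-3, 0.9e-3])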

  19. A framework about flow measurements by LDA-PDA as a spatio-temporal average: application to data post-processing

    Science.gov (United States)

    Calvo, Esteban; García, Juan A.; Santolaya, José Luis; García, Ignacio; Aísa, Luis

    2012-05-01

    Phase Doppler anemometry (PDA) is a well-established technique to study two-phase flows and its principles are also used in laser Doppler anemometry (LDA) for measurements of fluid velocity. Raw measurements of individual particle data require post-processing to obtain useful and consistent information (moments of velocity, particle concentration and flux, velocity autocorrelation, etc). In this paper, this is called the reconstruction of statistical information. In the 1970s, several basic algorithms to perform the statistical reconstruction were developed for LDA measurements (such as the transit time method, the inverse velocity method, etc). With the advent of PDA, the scientific community developed reconstruction algorithms to obtain mean variables of the dispersed phase. All these basic algorithms were expounded as unconnected methods, following independent threads not integrated into a general framework. Assuming that the PDA works under ideal conditions (all particles that cross the probe volume are validated), this paper provides a general formulation and fully systematizes a large set of previous statistical reconstruction methods. In this paper, the statistical reconstruction of both the dispersed and the continuous phase is unified: the continuous phase post-processing emerges as the same reconstruction method of the dispersed phase. The general framework proposed offers many advantages. First, some previous calculation methods of particle concentration turn out to be particular cases of this general formulation. Second, it provides an easy way to deduce unbiased estimators of any statistical parameter of the flow. Third, a wide set of new post-processing methods are proposed to be tested by any member of the scientific community. Fourth, the generalized integral method to compute the particle concentration also gives information about the probe volume geometry and two new auto-calibration algorithms are proposed: the integral calibration

  20. CLARA: A Contemporary Approach to Physics Data Processing

    Energy Technology Data Exchange (ETDEWEB)

    V Gyurjyan, D Abbott, J Carbonneau, G Gilfoyle, D Heddle, G Heyes, S Paul, C Timmer, D Weygand, E Wolin

    2011-12-01

    In traditional physics data processing (PDP) systems, data location is static and data is accessed by analysis applications. In comparison, CLARA (CLAS12 Reconstruction and Analysis framework) is an environment where data processing algorithms filter continuously flowing data. In CLARA's domain of loosely coupled services, data is not stored, but rather flows from one service to another, mutating constantly along the way. Agents, performing event processing, can then subscribe to particular data/events at any stage of the data transformation, and make intricate decisions (e.g. particle ID) by correlating events from multiple, parallel data streams and/or services. This paper presents a PDP application development framework based on service-oriented and event-driven architectures. This system allows users to design (Java, C++, and Python languages are supported) and deploy data processing services, as well as dynamically compose PDP applications using available services. The PDP service bus provides a layer on top of a distributed pub-sub middleware implementation, which allows complex service composition and integration without writing code. Examples of service creation and deployment, along with the CLAS12 track reconstruction application design, will be presented.

  1. Infrared Tomography: Data Distribution System for Real-time Mass Flow Rate Measurement

    Directory of Open Access Journals (Sweden)

    Ruzairi Abdul Rahim

    2007-06-01

    The system developed in this research has the objective of measuring mass flow rate in an online mode. If a single computer is used as the data processing unit, a longer time is needed to produce a measurement result. Research carried out previously showed that about 11.2 seconds are needed to obtain one mass flow rate result in offline mode (using offline data). Such a result is insufficient for real time and will cause problems in a feedback control process when applying the system in industrial plants. To increase the refresh rate of the measurement result, an investigation of a data distribution system is performed to replace the existing data processing unit.

  2. Fast interactive exploration of 4D MRI flow data

    Science.gov (United States)

    Hennemuth, A.; Friman, O.; Schumann, C.; Bock, J.; Drexl, J.; Huellebrand, M.; Markl, M.; Peitgen, H.-O.

    2011-03-01

    1- or 2-directional MRI blood flow mapping sequences are an integral part of standard MR protocols for diagnosis and therapy control in heart diseases. Recent progress in rapid MRI has made it possible to acquire volumetric, 3-directional cine images in reasonable scan time. In addition to flow and velocity measurements relative to arbitrarily oriented image planes, the analysis of 3-dimensional trajectories enables the visualization of flow patterns, local features of flow trajectories or possible paths into specific regions. The anatomical and functional information allows for advanced hemodynamic analysis in different application areas like stroke risk assessment, congenital and acquired heart disease, aneurysms or abdominal collaterals and cranial blood flow. The complexity of the 4D MRI flow datasets and the flow related image analysis tasks makes the development of fast comprehensive data exploration software for advanced flow analysis a challenging task. Most existing tools address only individual aspects of the analysis pipeline such as pre-processing, quantification or visualization, or are difficult to use for clinicians. The goal of the presented work is to provide a software solution that supports the whole image analysis pipeline and enables data exploration with fast intuitive interaction and visualization methods. The implemented methods facilitate the segmentation and inspection of different vascular systems. Arbitrary 2- or 3-dimensional regions for quantitative analysis and particle tracing can be defined interactively. Synchronized views of animated 3D path lines, 2D velocity or flow overlays and flow curves offer a detailed insight into local hemodynamics. The application of the analysis pipeline is shown for 6 cases from clinical practice, illustrating the usefulness for different clinical questions. Initial user tests show that the software is intuitive to learn and even inexperienced users achieve good results within reasonable processing

  3. An integrated methodology for characterizing flow and transport processes in fractured rock

    International Nuclear Information System (INIS)

    Wu, Yu-Shu

    2007-01-01

    To investigate the coupled processes involved in fluid and heat flow and chemical transport in the highly heterogeneous, unsaturated-zone (UZ) fractured rock of Yucca Mountain, we present an integrated modeling methodology. This approach integrates a wide variety of moisture, pneumatic, thermal, and geochemical isotopic field data into a comprehensive three-dimensional numerical model for modeling analyses. The results of field applications of the methodology show that moisture data, such as water potential and liquid saturation, are not sufficient to determine in situ percolation flux, whereas temperature and geochemical isotopic data provide better constraints on net infiltration rates and flow patterns. In addition, pneumatic data are found to be extremely valuable in estimating large-scale fracture permeability. The integration of hydrologic, pneumatic, temperature, and geochemical data into modeling analyses is thereby demonstrated to provide a practical modeling approach for characterizing flow and transport processes in complex fractured formations.

  4. Cloud-processed 4D CMR flow imaging for pulmonary flow quantification

    Energy Technology Data Exchange (ETDEWEB)

    Chelu, Raluca G., E-mail: ralucachelu@hotmail.com [Department of Radiology, Erasmus MC, Rotterdam (Netherlands); Department of Cardiology, Erasmus MC, Rotterdam (Netherlands); Wanambiro, Kevin W. [Department of Radiology, Erasmus MC, Rotterdam (Netherlands); Department of Radiology, Aga Khan University Hospital, Nairobi (Kenya); Hsiao, Albert [Department of Radiology, University of California, San Diego, CA (United States); Swart, Laurens E. [Department of Radiology, Erasmus MC, Rotterdam (Netherlands); Department of Cardiology, Erasmus MC, Rotterdam (Netherlands); Voogd, Teun [Department of Radiology, Erasmus MC, Rotterdam (Netherlands); Hoven, Allard T. van den; Kranenburg, Matthijs van [Department of Cardiology, Erasmus MC, Rotterdam (Netherlands); Coenen, Adriaan [Department of Radiology, Erasmus MC, Rotterdam (Netherlands); Department of Cardiology, Erasmus MC, Rotterdam (Netherlands); Boccalini, Sara [Department of Radiology, Erasmus MC, Rotterdam (Netherlands); Department of Radiology, University Hospital, Genoa (Italy); Wielopolski, Piotr A. [Department of Radiology, Erasmus MC, Rotterdam (Netherlands); Vogel, Mika W. [MR Applications and Workflow – Europe, GE Healthcare B.V. Hoevelaken (Netherlands); Krestin, Gabriel P. [Department of Radiology, Erasmus MC, Rotterdam (Netherlands); Vasanawala, Shreyas S. [Department of Radiology, Stanford University, Stanford, CA (United States); Budde, Ricardo P.J. [Department of Radiology, Erasmus MC, Rotterdam (Netherlands); Department of Cardiology, Erasmus MC, Rotterdam (Netherlands); Roos-Hesselink, Jolien W. [Department of Cardiology, Erasmus MC, Rotterdam (Netherlands); Nieman, Koen [Department of Radiology, Erasmus MC, Rotterdam (Netherlands); Department of Cardiology, Erasmus MC, Rotterdam (Netherlands)

    2016-10-15

    Highlights: • With 4D flow, any plane of interest can be interactively chosen for quantitative measurements. • Anatomical and flow data are obtained during an approximately 10-min free-breathing scan. • 4D CMR flow measurements correlated well with the 2D PC ones. • Eddy current correction is important for good results with 4D flow. - Abstract: Objectives: In this study, we evaluated a cloud-based platform for cardiac magnetic resonance (CMR) four-dimensional (4D) flow imaging, with fully integrated correction for eddy currents, Maxwell phase effects, and gradient field non-linearity, to quantify forward flow, regurgitation, and peak systolic velocity over the pulmonary artery. Methods: We prospectively recruited 52 adult patients during a one-year period from July 2014. The 4D flow and planar (2D) phase-contrast (PC) sequences were acquired during the same scanning session, but 4D flow was scanned after injection of a gadolinium-based contrast agent. Eddy currents were semi-automatically corrected using the web-based software. Flow over the pulmonary valve was measured and the 4D flow values were compared against the 2D PC ones. Results: The mean forward flow was 92 (±30) ml/cycle measured with 4D flow and 86 (±29) ml/cycle measured with 2D PC, with a correlation of 0.82 and a mean difference of −6 ml/cycle (−41 to 29). For the regurgitant fraction the correlation was 0.85 with a mean difference of −0.95% (−17 to 15). Mean peak systolic velocity measured with 4D flow was 92 (±49) cm/s and 108 (±56) cm/s with 2D PC, with a correlation of 0.93 and a mean difference of 16 cm/s (−24 to 55). Conclusion: 4D flow imaging post-processed with an integrated cloud-based application accurately quantifies pulmonary flow. However, it may underestimate the peak systolic velocity.
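
    The agreement statistics quoted in the abstract (correlation, mean difference, and limits) can be computed in a few lines. The sketch below uses made-up paired measurements, not the study's data, purely to show the calculation.

    import numpy as np

    # Hypothetical paired forward-flow measurements (ml/cycle); the study
    # itself reports r = 0.82 and a mean difference of -6 ml/cycle.
    flow_4d = np.array([80., 95., 110., 70., 100., 88.])
    flow_2dpc = np.array([85., 100., 112., 78., 104., 90.])

    r = np.corrcoef(flow_4d, flow_2dpc)[0, 1]
    diff = flow_4d - flow_2dpc
    bias = diff.mean()
    lo = bias - 1.96 * diff.std(ddof=1)   # Bland-Altman style limits
    hi = bias + 1.96 * diff.std(ddof=1)
    print(f"r = {r:.2f}, bias = {bias:.1f} ml/cycle, limits = ({lo:.1f}, {hi:.1f})")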

  5. Cloud-processed 4D CMR flow imaging for pulmonary flow quantification

    International Nuclear Information System (INIS)

    Chelu, Raluca G.; Wanambiro, Kevin W.; Hsiao, Albert; Swart, Laurens E.; Voogd, Teun; Hoven, Allard T. van den; Kranenburg, Matthijs van; Coenen, Adriaan; Boccalini, Sara; Wielopolski, Piotr A.; Vogel, Mika W.; Krestin, Gabriel P.; Vasanawala, Shreyas S.; Budde, Ricardo P.J.; Roos-Hesselink, Jolien W.; Nieman, Koen

    2016-01-01

    Highlights: • With 4D flow, any plane of interest can be interactively chosen for quantitative measurements. • Anatomical and flow data are obtained during an approximately 10-min free-breathing scan. • 4D CMR flow measurements correlated well with the 2D PC ones. • Eddy current correction is important for good results with 4D flow. - Abstract: Objectives: In this study, we evaluated a cloud-based platform for cardiac magnetic resonance (CMR) four-dimensional (4D) flow imaging, with fully integrated correction for eddy currents, Maxwell phase effects, and gradient field non-linearity, to quantify forward flow, regurgitation, and peak systolic velocity over the pulmonary artery. Methods: We prospectively recruited 52 adult patients during a one-year period from July 2014. The 4D flow and planar (2D) phase-contrast (PC) sequences were acquired during the same scanning session, but 4D flow was scanned after injection of a gadolinium-based contrast agent. Eddy currents were semi-automatically corrected using the web-based software. Flow over the pulmonary valve was measured and the 4D flow values were compared against the 2D PC ones. Results: The mean forward flow was 92 (±30) ml/cycle measured with 4D flow and 86 (±29) ml/cycle measured with 2D PC, with a correlation of 0.82 and a mean difference of −6 ml/cycle (−41 to 29). For the regurgitant fraction the correlation was 0.85 with a mean difference of −0.95% (−17 to 15). Mean peak systolic velocity measured with 4D flow was 92 (±49) cm/s and 108 (±56) cm/s with 2D PC, with a correlation of 0.93 and a mean difference of 16 cm/s (−24 to 55). Conclusion: 4D flow imaging post-processed with an integrated cloud-based application accurately quantifies pulmonary flow. However, it may underestimate the peak systolic velocity.

  6. Process flows for cyber forensic training and operations

    CSIR Research Space (South Africa)

    Venter, JP

    2006-02-01

    In this paper the development and testing of Cyber First Responder Process Flows are discussed. A generic process flow framework is presented, together with design principles, layout characteristics, and important points within the process flows...

  7. ATLAS DataFlow Infrastructure: Recent results from ATLAS cosmic and first-beam data-taking

    Energy Technology Data Exchange (ETDEWEB)

    Vandelli, Wainer, E-mail: wainer.vandelli@cern.c

    2010-04-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to mass storage. Several optimized and multi-threaded applications fulfill this purpose, operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to transport event data efficiently and with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. Routing and streaming capabilities, as well as monitoring and data accounting functionalities, are also fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented test-bed for evaluating the performance of the ATLAS DataFlow in terms of functionality, robustness and stability. In addition, operating the system far from its design specifications helped in exercising its flexibility and contributed to understanding its limitations. Moreover, the integration with the detector and the interfacing with the off-line data processing and management were able to take advantage of this extended data-taking period as well. In this paper we report on the usage of the DataFlow infrastructure during ATLAS data-taking. These results, backed up by complementary performance tests, validate the architecture of the ATLAS DataFlow and prove that the system is robust, flexible and scalable enough to cope with the final requirements of the ATLAS experiment.

  8. ATLAS DataFlow Infrastructure: Recent results from ATLAS cosmic and first-beam data-taking

    International Nuclear Information System (INIS)

    Vandelli, Wainer

    2010-01-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to mass storage. Several optimized and multi-threaded applications fulfill this purpose, operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to transport event data efficiently and with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. Routing and streaming capabilities, as well as monitoring and data accounting functionalities, are also fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented test-bed for evaluating the performance of the ATLAS DataFlow in terms of functionality, robustness and stability. In addition, operating the system far from its design specifications helped in exercising its flexibility and contributed to understanding its limitations. Moreover, the integration with the detector and the interfacing with the off-line data processing and management were able to take advantage of this extended data-taking period as well. In this paper we report on the usage of the DataFlow infrastructure during ATLAS data-taking. These results, backed up by complementary performance tests, validate the architecture of the ATLAS DataFlow and prove that the system is robust, flexible and scalable enough to cope with the final requirements of the ATLAS experiment.

  9. Hedging Cash Flows from Commodity Processing

    OpenAIRE

    Dahlgran, Roger A.

    2005-01-01

    Agribusinesses make long-term plant-investment decisions based on discounted cash flow. It is therefore incongruous for an agribusiness firm to use cash flow as a plant-investment criterion and then to completely discard cash flow in favor of batch profits as an operating objective. This paper assumes that cash flow and its stability are important to commodity processors and examines methods for hedging cash flows under continuous processing. Its objectives are (a) to determine how standard he...

  10. Network Transfer of Control Data: An Application of the NIST SMART DATA FLOW

    Directory of Open Access Journals (Sweden)

    Vincent Stanford

    2004-12-01

    Pervasive computing environments range from basic mobile point-of-sale terminal systems to rich Smart Spaces with many devices and sensors, such as lapel microphones, audio and video sensor arrays, and multiple interactive PDAs acting as electronic briefcases, providing authentication and user preference data to the environment. These systems present new challenges in distributed human-computer interfaces, such as how best to use sensor streams, distribute interfaces across multiple devices, and manage the network dynamically as users come and go and as devices are added or fail. The National Institute of Standards and Technology (NIST) SMART DATA FLOW system is a low-overhead, high-bandwidth transport mechanism for standardized multi-modal data streams. It is designed to allow the integration of multiple sensors with the distributed processing needed for the sense-recognize-respond cycle of multi-modal user interfaces. Its core is a server/client architecture, allowing clients to produce or subscribe to data flows and supporting steps toward scalable processing by distributing the computing requirements among many network-connected computers and pervasive devices. This article introduces the communication broker and provides an example of effective real-time sensor fusion to track a speaker with a video camera using data captured from a multi-channel microphone array.

  11. TEP process flow diagram

    Energy Technology Data Exchange (ETDEWEB)

    Wilms, R Scott [Los Alamos National Laboratory; Carlson, Bryan [Los Alamos National Laboratory; Coons, James [Los Alamos National Laboratory; Kubic, William [Los Alamos National Laboratory

    2008-01-01

    This presentation describes the development of the proposed Process Flow Diagram (PFD) for the Tokamak Exhaust Processing System (TEP) of ITER. A brief review of design efforts leading up to the PFD is followed by a description of the hydrogen-like, air-like, and water-like processes. Two new design values are described: the most-common and most-demanding design values. The proposed PFD is shown to meet specifications under the most-common and most-demanding design values.

  12. Plant uprooting by flow as a fatigue mechanical process

    Science.gov (United States)

    Perona, Paolo; Edmaier, Katharina; Crouzy, Benoît

    2015-04-01

    In river corridors, plant uprooting by flow mostly occurs as a delayed process in which flow erosion first causes root exposure until the residual anchoring balances the hydrodynamic forces on the part of the plant exposed to the stream. Because the plant must be exposed to the action of the stream for a given time before uprooting occurs (the time-to-uprooting), this uprooting mechanism has been denominated Type II, in contrast to Type I, which mostly affects early-stage seedlings and is rather instantaneous. In this work, we propose a stochastic framework that describes a (deterministic) mechanical fatigue process perturbed by (stochastic) process noise, where collapse occurs after a given exposure time. We test the model using the experimental data of Edmaier (2014) and Edmaier et al. (submitted), who investigated vegetation uprooting by flow in the limit of low plant stem-to-sediment size ratio by inducing parallel riverbed erosion within an experimental flume. We first identify the proper timescale and lengthscale for rescaling the model. We then show that it describes well all the empirical cumulative distribution functions (cdf) of time-to-uprooting obtained under a constant riverbed erosion rate and assuming additive Gaussian process noise. By this means, we explore the level of determinism and stochasticity affecting the time-to-uprooting for Avena sativa in relation to root anchoring and flow drag forces. We eventually ascribe the overall dynamics of the Type II uprooting mechanism to the memory of the plant-soil system stored by root anchoring, and discuss related implications thereof. References: Edmaier, K., Uprooting mechanisms of juvenile vegetation by flow erosion, Ph.D. thesis, EPFL, 2014. Edmaier, K., Crouzy, B. and P. Perona, Experimental characterization of vegetation uprooting by flow, J. Geophys. Res. - Biogeosci., submitted.
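
    As a rough illustration of the framework described above (a deterministic fatigue drift perturbed by additive Gaussian noise, with collapse at a threshold), the following Python sketch simulates a distribution of times-to-uprooting. The drift, noise level, and threshold are invented parameters, not the authors' calibrated values or exact equations.

    import numpy as np

    def time_to_uprooting(erosion_rate, root_depth, sigma, dt=0.01, rng=None):
        """Toy Type II model: anchoring depth decays deterministically at
        erosion_rate and is perturbed by Gaussian noise; collapse (uprooting)
        occurs when the remaining anchoring depth reaches zero."""
        rng = rng or np.random.default_rng()
        depth, t = root_depth, 0.0
        while depth > 0.0:
            depth -= erosion_rate * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        return t

    rng = np.random.default_rng(0)
    samples = [time_to_uprooting(1.0, 5.0, 0.5, rng=rng) for _ in range(1000)]
    print("mean time-to-uprooting:", np.mean(samples))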

  13. Design of Flow Big Data System Based on Smart Pipeline Theory

    Directory of Open Access Journals (Sweden)

    Zhang Jianqing

    2017-01-01

    As telecom operators build more and more intelligent pipes, using big data analysis and processing technology to handle the huge amounts of data generated by intelligent pipelines has become an inevitable trend. The intelligent pipe describes operational and sales data; the operator's pipe flow data creates value for e-commerce business forms and business models in the mobile e-business environment. The intelligent pipe is the third dimension of the 3D-pipeline mobile electronic commerce system, and the intelligent operation dimension makes mobile e-business three-dimensional. This paper discusses smart pipeline theory and the smart pipeline flow big data system, including its system framework and core technologies.

  14. Distributed Wireless Data Acquisition System with Synchronized Data Flow

    CERN Document Server

    Astakhova, N V; Dikoussar, N D; Eremin, G I; Gerasimov, A V; Ivanov, A I; Kryukov, Yu S; Mazny, N G; Ryabchun, O V; Salamatin, I M

    2006-01-01

    New methods are devised to provide succession of computer codes under changes of the class of problems and to integrate the drivers of special-purpose devices into applications. The worked-out scheme and methods for constructing automation systems are used to elaborate a distributed wireless system intended for registration of the characteristics of pulse processes with synchronized data flow, transmitted over a radio channel. The equipment, with a sampling frequency of 20 kHz, allowed us to achieve a synchronization accuracy of up to ±50 μs. Modification of part of the equipment (sampling frequency) permits one to improve the accuracy up to 0.1 μs. The obtained results can be applied to develop systems for monitoring various objects, as well as automation systems for experiments and automated process control systems.

  15. Fractal-Markovian scaling of turbulent bursting process in open channel flow

    International Nuclear Information System (INIS)

    Keshavarzi, Ali Reza; Ziaei, Ali Naghi; Homayoun, Emdad; Shirvani, Amin

    2005-01-01

    The turbulent coherent structure of flow in an open channel is a chaotic and stochastic process in nature. The coherent structure of the flow, or bursting process, consists of a series of eddies with a variety of different length scales and is very important for the entrainment of sediment particles from the bed. In this study, a fractal-Markovian process is applied to turbulent data measured in an open channel. The turbulent data were measured in an experimental flume using a three-dimensional acoustic Doppler velocimeter (ADV). A fractal interpolation function (FIF) algorithm was used to simulate more than 500,000 time series data of measured instantaneous velocity fluctuations and Reynolds shear stress. The fractal interpolation functions enable the simulation and construction of time series of u', v', and u'v' for any particular movement and state in the Markov process. The fractal dimension of the bursting events is calculated for 16 particular movements, with the transition probability of the events based on a first-order Markov process. It was found that the average fractal dimensions of the streamwise flow velocity (u') are 1.73, 1.74, 1.71 and 1.74, with transition probabilities of 60.82%, 63.77%, 59.23% and 62.09% for the 1-1, 2-2, 3-3 and 4-4 movements, respectively. It was also found that the fractal dimensions of the Reynolds stress u'v' for quadrants 1, 2, 3 and 4 are 1.623, 1.623, 1.625 and 1.618, respectively.
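
    The transition probabilities quoted above come from treating successive bursting events as a first-order Markov chain over the four quadrants of the (u', v') plane. A sketch of that estimation on synthetic fluctuations (standing in for ADV measurements, and assuming the usual quadrant numbering) follows.

    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic velocity fluctuations standing in for ADV measurements.
    u, v = rng.normal(size=10000), rng.normal(size=10000)

    # Quadrant of each event: 1: (+,+), 2: (-,+), 3: (-,-), 4: (+,-).
    quad = np.select([(u > 0) & (v > 0), (u < 0) & (v > 0),
                      (u < 0) & (v < 0), (u > 0) & (v < 0)], [1, 2, 3, 4])

    # First-order Markov transition matrix P[i, j] = P(next = j+1 | now = i+1).
    counts = np.zeros((4, 4))
    for a, b in zip(quad[:-1], quad[1:]):
        counts[a - 1, b - 1] += 1
    P = counts / counts.sum(axis=1, keepdims=True)
    print(np.round(P, 3))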

  16. Alternatives to current flow cytometry data analysis for clinical and research studies.

    Science.gov (United States)

    Gondhalekar, Carmen; Rajwa, Bartek; Patsekin, Valery; Ragheb, Kathy; Sturgis, Jennifer; Robinson, J Paul

    2018-02-01

    Flow cytometry has well-established methods for data analysis based on traditional data collection techniques. These techniques typically involved manual insertion of tube samples into an instrument that, historically, could only measure 1-3 colors. The field has since evolved to incorporate new technologies for faster and highly automated sample preparation and data collection. For example, the use of microwell plates on benchtop instruments is now standard on virtually every new instrument, so users can easily accumulate multiple data sets quickly. Further, because the user must carefully define the layout of the plate, this information is already defined when considering the analytical process, expanding the opportunities for automated analysis. Advances in multi-parametric data collection, as demonstrated by the development of hyperspectral flow cytometry, 20-40 color polychromatic flow cytometry, and mass cytometry (CyTOF), are game-changing. As data and assay complexity increase, so too does the complexity of data analysis. Complex data analysis is already a challenge to traditional flow cytometry software. New methods for reviewing large and complex data sets can provide rapid insight into processes difficult to define without more advanced analytical tools. In settings such as clinical labs, where rapid and accurate data analysis is a priority, efficient and intuitive software is needed. This paper outlines opportunities for analysis of complex data sets using examples of multiplexed bead-based assays, drug screens and cell cycle analysis - any of which could become integrated into the clinical environment. Copyright © 2017. Published by Elsevier Inc.

  17. VLT Data Flow System Begins Operation

    Science.gov (United States)

    1999-06-01

    conceived as a complex digital facility to explore the Universe. In order for astronomers to be able to use this marvellous research tool in the most efficient manner possible, the VLT computer software and hardware systems must guarantee a smooth flow of scientific information through the entire system. This process starts when the astronomers submit well-considered proposals for observing time and it ends with large volumes of valuable astronomical data being distributed to the international astronomical community. For this, ESO has produced an integrated collection of software and hardware, known as the VLT Data Flow System (DFS), that manages and facilitates the flow of scientific information within the VLT Observatory. Early information about this new concept was published as ESO Press Release 12/96, and extensive tests were first carried out at ESO's 3.5-m New Technology Telescope (NTT) at La Silla, cf. ESO Press Release 03/97 [1]. The VLT DFS is a complete (end-to-end) system that guarantees the highest data quality by optimization of the observing process and repeated checks that identify and eliminate any problems. It also introduces automatic calibration of the data, i.e. the removal of external effects introduced by the atmospheric conditions at the time of the observations, as well as the momentary state of the telescope and the instruments. From Proposals to Observations: In order to obtain observing time with ESO telescopes, including the VLT, astronomers must submit a detailed observing proposal to the ESO Observing Programmes Committee (OPC). It meets twice a year and ranks the proposals according to scientific merit. More than 1000 proposals are submitted each year, mostly by astronomers from the ESO member states and Chile; the competition is fierce and only a fraction of the total demand for observing time can be fulfilled. During the submission of observing proposals, DFS software tools available over the World Wide Web enable the astronomers to simulate

  18. Data processing for the fluid flow tomography method; Ryutai ryudo den`iho no data kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Ushijima, K; Mizunaga, H; Tanaka, T [Kyushu University, Fukuoka (Japan). Faculty of Engineering; Hashimoto, K [Kyushu Electric Power Co. Inc., Fukuoka (Japan)

    1997-05-27

    An automatic measurement system based on conductive-potential and self-potential methods (the fluid flow tomography method) has been developed to measure the change of geothermal steam fluid during production and injection. For the fluid flow tomography method, the four-electrode configuration of the conductive potential method is adopted, using the casing pipe of the well as a current source. A large number of potential-receiving electrodes are connected to the ground in advance. The surface potential profile formed during the injection and production of fluid through the well is then measured. Artificial and spontaneous potential profiles were continuously measured using this system during hydraulic fracturing tests at the hot dry rock power generation test field at Ogachi-machi, Akita Prefecture. As a result of inversion analysis of self-potential data using a four-layer resistivity model, it was observed that the fluid injected at a depth of 711 m in the borehole permeated to depths between 700 and 770 m in the south-eastern part of the well, and that the fractures propagated gradually into deeper parts as the hydraulic fracturing test progressed. 3 figs.

  19. Fastr: a workflow engine for advanced data flows in medical image analysis

    Directory of Open Access Journals (Sweden)

    Hakim Christiaan Achterberg

    2016-08-01

    With the increasing number of datasets encountered in imaging studies, the increasing complexity of processing workflows, and a growing awareness for data stewardship, there is a need for managed, automated workflows. In this paper we introduce Fastr, an automated workflow engine with support for advanced data flows. Fastr has built-in data provenance for recording processing trails and ensuring reproducible results. The extensible plugin-based design allows the system to interface with virtually any image archive and processing infrastructure. This workflow engine is designed to consolidate quantitative imaging biomarker pipelines in order to enable easy application to new data.
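
    The two ingredients named in the abstract, a workflow of connected processing nodes and a recorded provenance trail, can be illustrated with a tiny engine. The sketch below is a toy that conveys the idea only; it is not Fastr's actual API, and the node names are invented.

    import hashlib, json

    class Workflow:
        """Tiny workflow-engine sketch: nodes are functions wired by name,
        and every run records a provenance trail for reproducibility."""
        def __init__(self):
            self.nodes, self.provenance = {}, []

        def add(self, name, func, inputs=()):
            self.nodes[name] = (func, tuple(inputs))

        def run(self, **sources):
            results = dict(sources)
            # Assumes nodes were added in topological order (dicts keep
            # insertion order in Python 3.7+).
            for name, (func, inputs) in self.nodes.items():
                results[name] = func(*[results[i] for i in inputs])
                self.provenance.append({
                    "node": name, "inputs": list(inputs),
                    "hash": hashlib.sha1(repr(results[name]).encode()).hexdigest()[:8],
                })
            return results

    wf = Workflow()
    wf.add("smooth", lambda img: [v * 0.5 for v in img], inputs=["image"])
    wf.add("measure", lambda img: sum(img), inputs=["smooth"])
    out = wf.run(image=[1.0, 2.0, 3.0])
    print(out["measure"], json.dumps(wf.provenance, indent=1))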

  20. System identification by experimental data processing, application to turbulent transport of a tracer in pipe flow

    International Nuclear Information System (INIS)

    Burgos, Manuel; Getto, Daniel; Berne, Philippe

    2005-01-01

    System identification is the first, and probably the most important, step in detecting abnormal behavior, designing a control system, or improving performance. Data analysis is performed for studying the plant behavior, the sensitivity of operation procedures, and several other goals. In all these cases, the observed data is the convolution of an input function and the system's impulse response. Practical discrete-time convolutions may be performed by multiplying a matrix built from the impulse response by the input vector, but deconvolution requires inverting the matrix, which is singular in a causal system. Another method for deconvolution is by means of Fourier transforms. Actual readings are usually corrupted by noise, and their transform shows strong low-frequency components as well as high-frequency ones mainly due to additive noise. Subjective decisions, such as the choice of cut-off frequency, must be taken as well. This paper proposes a deconvolution method based on fitting the parameters of suitable models, where they exist, and estimating values where analytical forms are not available. It is based on global, nonlinear fitting with a maximum-likelihood criterion. An application of the method is shown using data from two fluid flow experiments. The experimental test rigs basically consist of a long section of straight pipe in which fluid is flowing. A pulse of tracer is injected at the entrance and detected at various locations along the pipe. Deconvolution of signals from successive probes is attempted using a classical model describing the flow of tracer as a plug moving with the average fluid velocity, plus some axial dispersion. The parameters are, for instance, the velocity of the plug and a dispersion coefficient. After parameter fitting, the model is found to reproduce the experimental data well. The flow rates deduced from the adjusted travel times are in very good agreement with the actual values. In addition, the flow dispersion coefficient is obtained.
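
    The proposed approach replaces direct deconvolution with global nonlinear fitting of a parametric transfer function. A sketch of that idea for a plug-flow-plus-dispersion model is given below, using a Gaussian transfer kernel and synthetic probe signals (under Gaussian noise, least squares coincides with the maximum-likelihood criterion); the kernel form and numbers are illustrative assumptions, not the paper's exact model.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.signal import fftconvolve

    # Toy tracer experiment: the signal at probe 2 is the probe-1 signal
    # convolved with a transfer function of delay tau and spread s.
    t = np.linspace(0, 20, 400)
    dt = t[1] - t[0]
    probe1 = np.exp(-0.5 * ((t - 3.0) / 0.5) ** 2)

    def kernel(t, tau, s):
        return np.exp(-0.5 * ((t - tau) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    def model(t, tau, s):
        return fftconvolve(probe1, kernel(t, tau, s), mode="full")[: t.size] * dt

    probe2 = model(t, 5.0, 1.0) + 0.01 * np.random.default_rng(2).normal(size=t.size)

    # Global nonlinear fit of the transfer-function parameters.
    (tau_hat, s_hat), _ = curve_fit(model, t, probe2, p0=(4.0, 0.5))
    print(f"travel time ~ {tau_hat:.2f}, dispersion spread ~ {s_hat:.2f}")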

  1. A Fresh Look at Spatio-Temporal Remote Sensing Data: Data Formats, Processing Flow, and Visualization

    Science.gov (United States)

    Gens, R.

    2017-12-01

    With the increasing number of experimental and operational satellites in orbit, remote sensing based mapping and monitoring of the dynamic Earth has entered the realm of ‘big data’. The Landsat series of satellites alone provides a near-continuous archive of 45 years of data. The availability of such spatio-temporal datasets has created opportunities for long-term monitoring of diverse features and processes operating in the Earth's terrestrial and aquatic systems. Processes such as erosion, deposition, subsidence, uplift, evapotranspiration, urbanization, and land-cover regime shifts can not only be monitored, but the change can also be quantified using time-series data analysis. This unique opportunity comes with new challenges in the management, analysis, and visualization of spatio-temporal datasets. Data need to be stored in a user-friendly format, and relevant metadata need to be recorded, to allow maximum flexibility for data exchange and use. Specific data processing workflows need to be defined to support time-series analysis for specific applications. Value-added data products need to be generated keeping in mind the needs of the end-users, using best practices in complex data visualization. This presentation systematically highlights the various steps for preparing spatio-temporal remote sensing data for time-series analysis. It showcases a prototype workflow for remote sensing based change detection that can be applied generically while preserving the application-specific fidelity of the datasets. The prototype includes strategies for visualizing change over time. This has been exemplified using a time series of optical and SAR images for visualizing the changing glacial, coastal, and wetland landscapes in parts of Alaska.

  2. Research on key technologies of data processing in internet of things

    Science.gov (United States)

    Zhu, Yangqing; Liang, Peiying

    2017-08-01

    Data in the Internet of Things (IoT) are characterized by polymorphism, heterogeneity, large volume, and real-time processing requirements. Traditional structured, static batch processing methods cannot meet the data processing requirements of the IoT. This paper studies a middleware that can integrate heterogeneous IoT data, converting different data formats into a unified format. It designs an IoT data processing model based on the Storm stream computing architecture and integrates existing Internet security technology to build a security system for IoT data processing, providing a reference for the efficient transmission and processing of IoT data.

  3. The ATLAS Data Flow System for Run 2

    CERN Document Server

    Kazarov, Andrei; The ATLAS collaboration

    2015-01-01

    After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The Data Flow (DF) element of the TDAQ is a distributed hardware and software system responsible for buffering and transporting event data from the readout system to the High Level Trigger (HLT) and to the event storage. The DF has been reshaped in order to profit from the technological progress and to maximize the flexibility and efficiency of the data selection process. The updated DF is radically different from the previous implementation both in terms of architecture and expected performance. The pre-existing two level software filtering, known as L2 and the Event Filter, and the Event Building are now merged into a single process, performing incremental data collection and analysis. This design has many advantages, among which are: the radical simplification of the architecture, ...

  4. Constraining a compositional flow model with flow-chemical data using an ensemble-based Kalman filter

    KAUST Repository

    Gharamti, M. E.; Kadoura, A.; Valstar, J.; Sun, S.; Hoteit, Ibrahim

    2014-01-01

    Isothermal compositional flow models require coupling transient compressible flows and advective transport systems of various chemical species in subsurface porous media. Building such numerical models is quite challenging and may be subject to many sources of uncertainty because of the possibly incomplete representation of some geological parameters that characterize the system's processes. Advanced data assimilation methods, such as the ensemble Kalman filter (EnKF), can be used to calibrate these models by incorporating available data. In this work, we consider the problem of estimating reservoir permeability using information about phase pressure as well as the chemical properties of fluid components. We carry out state-parameter estimation experiments using joint and dual updating schemes in the context of the EnKF with a two-dimensional single-phase compositional flow model (CFM). Quantitative and statistical analyses are performed to evaluate and compare the performance of the assimilation schemes. Our results indicate that including chemical composition data significantly enhances the accuracy of the permeability estimates. In addition, composition data provide more information to estimate system states and parameters than do standard pressure data. The dual state-parameter estimation scheme provides about 10% more accurate permeability estimates on average than the joint scheme when implemented with the same ensemble members, at the cost of twice as many forward model integrations. At similar computational cost, the dual approach only becomes beneficial after using large enough ensembles.
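
    The core of the assimilation step is the EnKF analysis update. The following sketch implements a generic stochastic (perturbed-observation) EnKF update for a joint state-parameter ensemble; it is a textbook formulation with invented dimensions, not the authors' reservoir code.

    import numpy as np

    def enkf_update(ensemble, obs, obs_op, obs_err_std, rng):
        """One stochastic EnKF analysis step.

        ensemble: (n_members, n_state) joint state-parameter vectors.
        obs_op:   linear observation operator H, shape (n_obs, n_state).
        """
        n = ensemble.shape[0]
        Hx = ensemble @ obs_op.T                   # forecasted observations
        A = ensemble - ensemble.mean(axis=0)       # state anomalies
        HA = Hx - Hx.mean(axis=0)                  # observation anomalies
        R = (obs_err_std ** 2) * np.eye(obs_op.shape[0])
        K = (A.T @ HA / (n - 1)) @ np.linalg.inv(HA.T @ HA / (n - 1) + R)
        perturbed = obs + obs_err_std * rng.normal(size=Hx.shape)
        return ensemble + (perturbed - Hx) @ K.T

    rng = np.random.default_rng(3)
    # Hypothetical joint vector: [pressure, composition, log-permeability].
    ens = rng.normal(size=(100, 3))
    H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # observe the first two
    ens = enkf_update(ens, obs=np.array([0.5, -0.2]), obs_op=H,
                      obs_err_std=0.1, rng=rng)
    print(ens.mean(axis=0))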

  5. Constraining a compositional flow model with flow-chemical data using an ensemble-based Kalman filter

    KAUST Repository

    Gharamti, M. E.

    2014-03-01

    Isothermal compositional flow models require coupling transient compressible flows and advective transport systems of various chemical species in subsurface porous media. Building such numerical models is quite challenging and may be subject to many sources of uncertainty because of the possibly incomplete representation of some geological parameters that characterize the system's processes. Advanced data assimilation methods, such as the ensemble Kalman filter (EnKF), can be used to calibrate these models by incorporating available data. In this work, we consider the problem of estimating reservoir permeability using information about phase pressure as well as the chemical properties of fluid components. We carry out state-parameter estimation experiments using joint and dual updating schemes in the context of the EnKF with a two-dimensional single-phase compositional flow model (CFM). Quantitative and statistical analyses are performed to evaluate and compare the performance of the assimilation schemes. Our results indicate that including chemical composition data significantly enhances the accuracy of the permeability estimates. In addition, composition data provide more information to estimate system states and parameters than do standard pressure data. The dual state-parameter estimation scheme provides about 10% more accurate permeability estimates on average than the joint scheme when implemented with the same ensemble members, at the cost of twice as many forward model integrations. At similar computational cost, the dual approach only becomes beneficial after using large enough ensembles.

  6. A Study on Data Base for the Pyroprocessing Material Flow and MUF Uncertainty Simulation

    International Nuclear Information System (INIS)

    Sitompul, Yos Panagaman; Shin, Heesung; Han, Boyoung; Kim, Hodong

    2011-01-01

    The database for the pyroprocessing material flow and MUF uncertainty simulation has been implemented well. There are no errors in the database processing, and it is relatively fast using OLEDB and MySQL. The important issue is the database size: in OLEDB the database size is limited to 2 GB. To reduce the database size, we give users an option to filter the input nuclides based on their masses and activities. A simulation program called PYMUS has been developed to study the pyroprocessing material flow and MUF. In the program, there is a database system that controls the data processing in the simulation. The database system consists of an input database, data processing, and an output database. The database system has been designed to be efficient; one example is the use of OLEDB and MySQL. The database system is explained in detail in this paper. The results show that the database system works well in the simulation.

  7. A formal definition of data flow graph models

    Science.gov (United States)

    Kavi, Krishna M.; Buckles, Bill P.; Bhat, U. Narayan

    1986-01-01

    In this paper, a new model for parallel computations and parallel computer systems that is based on data flow principles is presented. Uninterpreted data flow graphs can be used to model computer systems including data driven and parallel processors. A data flow graph is defined to be a bipartite graph with actors and links as the two vertex classes. Actors can be considered similar to transitions in Petri nets, and links similar to places. The nondeterministic nature of uninterpreted data flow graphs necessitates the derivation of liveness conditions.
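
    The bipartite actor/link structure, with its Petri-net-like firing rule, can be made concrete in a few lines. The sketch below assumes the usual token semantics (firing consumes one token per input link and produces one per output link); the paper's formal definition is more general.

    class DataFlowGraph:
        """Bipartite data flow graph: actors (cf. Petri-net transitions)
        connected through links (cf. places) that hold tokens."""
        def __init__(self):
            self.tokens = {}    # link name -> token count
            self.actors = {}    # actor name -> (input links, output links)

        def add_link(self, name, tokens=0):
            self.tokens[name] = tokens

        def add_actor(self, name, inputs, outputs):
            self.actors[name] = (inputs, outputs)

        def enabled(self, actor):
            inputs, _ = self.actors[actor]
            return all(self.tokens[l] > 0 for l in inputs)

        def fire(self, actor):
            """Consume one token per input link, produce one per output link."""
            if not self.enabled(actor):
                raise RuntimeError(f"{actor} is not enabled")
            inputs, outputs = self.actors[actor]
            for l in inputs:
                self.tokens[l] -= 1
            for l in outputs:
                self.tokens[l] += 1

    g = DataFlowGraph()
    g.add_link("in", tokens=1); g.add_link("mid"); g.add_link("out")
    g.add_actor("f", ["in"], ["mid"]); g.add_actor("g", ["mid"], ["out"])
    g.fire("f"); g.fire("g")
    print(g.tokens)   # {'in': 0, 'mid': 0, 'out': 1}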

  8. Research on fracture analysis, groundwater flow and sorption processes in fractured rocks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dae-Ha; Kim, Won-Young; Lee, Seung-Gu [Korea Institute of Geology Mining and Materials, Taejon (KR)] (and others)

    1999-12-01

    Due to increasing demand for numerous industrial facilities, including nuclear power plants and waste repositories, the feasibility of rock masses as sites for such facilities has been a geological issue of concern. Rock masses, in general, comprise systems of fractures which can provide pathways for groundwater flow and may also affect the stability of engineered structures. For the study of groundwater flow and sorption processes in fractured rocks, five boreholes were drilled. A stepwise and careful integration of various data obtained from field work and laboratory experiments was carried out to analyze groundwater flow in fractured rocks as follows: (1) investigation of the geological features of the site, (2) identification and characterization of fracture systems using core and televiewer logs, (3) determination of the hydrogeological properties of fractured aquifers using geophysical borehole logging, pumping and slug tests, and continuous monitoring of groundwater level and quality, and (4) evaluation of groundwater flow patterns using fluid flow modeling. The results obtained from these processes allow a qualitative interpretation of fractured aquifers in the study area. Column experiments on some reactive radionuclides were also performed to examine sorption processes of the radionuclides, including retardation coefficients. In addition, analyses of fracture systems covered (1) reconstruction of the Cenozoic tectonic movements and estimation of frequency indices for the Holocene tectonic movements, (2) determination of distributions and block movements of the Quaternary marine terraces, (3) investigation of the lithologic and geotechnical nature of the study area, and (4) examination of the Cenozoic volcanic activities and determination of the age of the dike swarms. Using data obtained from the above-mentioned analyses along with data related to earthquakes and active faults, a probabilistic approach was performed to determine various potential hazards which may result from the

  9. Research on fracture analysis, groundwater flow and sorption processes in fractured rocks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dae Ha [Korea Institute of Geology Mining and Materials, Taejon (Korea)

    1998-12-01

    Due to increasing demand for numerous industrial facilities, including nuclear power plants and waste repositories, the feasibility of rock masses as sites for such facilities has been a geological issue of concern. Rock masses, in general, comprise systems of fractures which can provide pathways for groundwater flow and may also affect the stability of engineered structures. Such properties of fractures stimulate a synthetic study on (1) analyses of fracture systems, and (2) characterization of groundwater flow and sorption processes in fractured rocks, to establish a preliminary model for assessing suitable sites for industrial facilities. The analyses of fracture systems cover (1) reconstruction of the Cenozoic tectonic movements and estimation of frequency indices for the Holocene tectonic movements, (2) determination of distributions and block movements of the Quaternary marine terraces, (3) investigation of the lithologic and geotechnical nature of the study area, and (4) examination of the Cenozoic volcanic activities and determination of the age of the dike swarms. Using data obtained from the above-mentioned analyses along with data related to earthquakes and active faults, a probabilistic approach is performed to determine various potential hazards which may result from the Quaternary or the Holocene tectonic movements. In addition, a stepwise and careful integration of various data obtained from field work and laboratory experiments is carried out to analyze groundwater flow in fractured rocks as follows: (1) investigation of the geological features of the site, (2) identification and characterization of fracture systems using core and televiewer logs, (3) determination of conductive fractures using electrical conductivity, temperature, and flow logs, and (4) identification of hydraulic connections between fractures using televiewer logs with tracer tests within specific zones. The results obtained from these processes allow a qualitative interpretation of groundwater flow patterns.

  10. SSDA code to apply data assimilation in soil water flow modeling: Documentation and user manual

    Science.gov (United States)

    Soil water flow models are based on simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Data assimilation (DA) with the ensemble Kalman filter (EnKF) corrects modeling results based on measured s...

  11. Quantitative investigation of the transition process in Taylor-Couette flow

    International Nuclear Information System (INIS)

    Tu, Xin Cheng; Kim, Hyoung Bum; Liu, Dong

    2013-01-01

    The transition process from circular Couette flow to the Taylor vortex flow regime was experimentally investigated by measuring the instantaneous velocity vector fields in the annular gap flow region between two concentric cylinders. The proper orthogonal decomposition method, vorticity calculation, and frequency analysis were applied to the instantaneous velocity fields in order to identify the flow characteristics during the transition process. From the results, the kinetic energy and the corresponding reconstructed velocity fields were able to detect the onset of the transition process and the alternation of the flow structure. The intermittency and oscillation of the vortex flows during the transition process were also revealed by the analysis of the instantaneous velocity fields. The results provide a means of identifying the critical Reynolds number of Taylor-Couette flow from a velocity measurement method.
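
    Snapshot proper orthogonal decomposition of measured velocity fields reduces, in practice, to a singular value decomposition of the mean-subtracted snapshot matrix. The sketch below uses random matrices in place of the measured vector fields to show the mechanics only.

    import numpy as np

    # Snapshot POD: rows are velocity fields at successive instants,
    # columns are grid points (toy data standing in for measurements).
    rng = np.random.default_rng(4)
    n_snapshots, n_points = 200, 500
    snapshots = rng.normal(size=(n_snapshots, n_points))

    mean_flow = snapshots.mean(axis=0)
    fluct = snapshots - mean_flow            # velocity fluctuations

    # SVD of the fluctuation matrix gives POD modes and modal energies.
    U, s, modes = np.linalg.svd(fluct, full_matrices=False)
    energy = s**2 / (s**2).sum()
    print("kinetic-energy fraction of first 3 modes:", np.round(energy[:3], 3))

    # Reconstruct the flow from the first k modes (low-order model).
    k = 3
    recon = mean_flow + (U[:, :k] * s[:k]) @ modes[:k]
    print("rms reconstruction error:", np.sqrt(((snapshots - recon) ** 2).mean()))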

  12. Data flow in LCG Data Challenge 3

    CERN Multimedia

    2005-01-01

    This map shows the real data transfer from CERN to selected nodes during the Large Hadron Collider Computing Grid (LCG) Data Challenge 3. The goal of this activity was to achieve an average data flow out of CERN of 400 Mbytes/sec, equivalent to 100 million words every second, for one week. At this rate, the complete works of Shakespeare could be sent every second.

  13. Understanding the ‘Intensive’ in ‘Data Intensive Research’: Data Flows in Next Generation Sequencing and Environmental Networked Sensors

    Directory of Open Access Journals (Sweden)

    Ruth McNally

    2012-03-01

    Genomic and environmental sciences represent two poles of scientific data. In the first, highly parallel sequencing facilities generate large quantities of sequence data. In the latter, loosely networked remote and field sensors produce intermittent streams of different data types. Yet both genomic and environmental sciences are said to be moving to data intensive research. This paper explores and contrasts data flow in these two domains in order to better understand how data intensive research is being done. Our case studies are next generation sequencing for genomics and environmental networked sensors. Our objective was to enrich understanding of the ‘intensive’ processes and properties of data intensive research through a ‘sociology’ of data, using methods that capture the relational properties of data flows. Our key methodological innovation was the staging of events for practitioners with different kinds of expertise in data intensive research to participate in the collective annotation of visual forms. Through such events we built a substantial digital data archive of our own that we then analysed in terms of three traits of data flow: durability, replicability and metrology. Our findings are that analysing data flow with respect to these three traits provides better insight into how doing data intensive research involves people, infrastructures, practices, things, knowledge and institutions. Collectively, these elements shape the topography of data and condition how it flows. We argue that although much attention is given to phenomena such as the scale, volume and speed of data in data intensive research, these are measures of what we call ‘extensive’ properties rather than intensive ones. Our thesis is that extensive changes, that is to say those that result in non-linear changes in metrics, can be seen to result from intensive changes that bring multiple, disparate flows into confluence. If extensive shifts in the modalities of

  14. Electronic device, system on chip and method for monitoring a data flow

    NARCIS (Netherlands)

    2012-01-01

    An electronic device is provided which comprises a plurality of processing units (IP1-IP6), a network-based inter-connect (N) coupled to the processing units (IP1-IP6) and at least one monitoring unit (P1, P2) for monitoring a data flow of at least one first communication path between the processing

  15. Representation and display of vector field topology in fluid flow data sets

    Science.gov (United States)

    Helman, James; Hesselink, Lambertus

    1989-01-01

    The visualization of physical processes in general and of vector fields in particular is discussed. An approach to visualizing flow topology that is based on the physics and mathematics underlying the physical phenomenon is presented. It involves determining critical points in the flow where the velocity vector vanishes. The critical points, connected by principal lines or planes, determine the topology of the flow. The complexity of the data is reduced without sacrificing the quantitative nature of the data set. By reducing the original vector field to a set of critical points and their connections, a representation of the topology of a two-dimensional vector field that is much smaller than the original data set but retains with full precision the information pertinent to the flow topology is obtained. This representation can be displayed as a set of points and tangent curves or as a graph. Analysis (including algorithms), display, interaction, and implementation aspects are discussed.
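
    A minimal version of the critical-point extraction described above is to flag grid cells where both velocity components change sign and then classify each candidate by the eigenvalues of the local Jacobian. The sketch below does this for an invented analytic 2D field; real implementations work on measured or simulated grids and use more robust root finding.

    import numpy as np

    # Sample a toy 2D field on a grid chosen so zeros avoid grid points.
    x = y = np.linspace(-2, 2, 80)
    X, Y = np.meshgrid(x, y, indexing="ij")
    U, V = Y, -X + 0.5 * X**3    # critical points at (0, 0) and (±sqrt(2), 0)

    def sign_change(a):
        """True for cells where the component changes sign across the cell."""
        return (a[:-1, :-1] * a[1:, 1:] < 0) | (a[:-1, 1:] * a[1:, :-1] < 0)

    h = x[1] - x[0]
    candidates = sign_change(U) & sign_change(V)
    for i, j in zip(*np.nonzero(candidates)):
        # Classify by the eigenvalues of the finite-difference Jacobian.
        J = np.array([[(U[i+1, j] - U[i, j]) / h, (U[i, j+1] - U[i, j]) / h],
                      [(V[i+1, j] - V[i, j]) / h, (V[i, j+1] - V[i, j]) / h]])
        ev = np.linalg.eigvals(J)
        if np.any(ev.imag != 0):
            kind = "centre or focus"
        elif ev.real.prod() < 0:
            kind = "saddle"
        else:
            kind = "node"
        print(f"critical point near ({x[i]:.2f}, {y[j]:.2f}): {kind}")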

  16. Hydrothermal Processing of Macroalgal Feedstocks in Continuous-Flow Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, Douglas C.; Hart, Todd R.; Neuenschwander, Gary G.; Rotness, Leslie J.; Roesijadi, Guri; Zacher, Alan H.; Magnuson, Jon K.

    2014-02-03

    Wet macroalgal slurries have been converted into a biocrude by hydrothermal liquefaction (HTL) in a bench-scale continuous-flow reactor system. Carbon conversion to a gravity-separable oil product of 58.8% was accomplished at relatively low temperature (350 °C) in a pressurized (subcritical liquid water) environment (20 MPa) when using feedstock slurries with a 21.7% concentration of dry solids. As opposed to earlier work in batch reactors reported by others, direct oil recovery was achieved without the use of a solvent, and biomass trace mineral components were removed by processing steps so that they did not cause processing difficulties. In addition, catalytic hydrothermal gasification (CHG) was effectively applied for HTL byproduct water cleanup and fuel gas production from water-soluble organics. Conversion of 99.2% of the carbon left in the aqueous phase was demonstrated. As a result, high conversion of macroalgae to liquid and gas fuel products was achieved, with low levels of residual organic contamination in the byproduct water. Both process steps were accomplished in continuous-flow reactor systems, such that design data for process scale-up were generated.

  17. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    Science.gov (United States)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

    Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It permits the definition, using BIT, of the behaviour of several components assembled to process a flow of data. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.

  18. Acquisition and Processing of Cerebral Blood Flow Data with a M ...

    African Journals Online (AJOL)

    1974-12-07

    Only fragments of the abstract are available: an anaesthetic agent is described; cerebral blood flow under anaesthetic agents is measured; A10 = area under the clearance curve after 10 min; the weighted flow was 0.54, and the percentage standard...

  19. Measurement plans for process flow improvement in services and health care

    NARCIS (Netherlands)

    Kemper, B.; de Mast, J.

    2013-01-01

    The discussion of performance measurement is often on a conceptual, not operational, level; advice on the operational and practical matters of obtaining data for process flow improvement is scarce. We define a measurement plan and study four measurement study designs and corresponding methods and

  20. Design of Flow Big Data System Based on Smart Pipeline Theory

    OpenAIRE

    Zhang Jianqing; Li Shuai; Liu Lilan

    2017-01-01

    As telecom operators build more and more intelligent pipes, using big data analysis and processing technology to handle the huge amounts of data generated by intelligent pipelines has become an inevitable trend. The intelligent pipe describes operational and sales data; the operator's pipe flow data creates value for e-commerce business forms and business models in the mobile e-business environment. The intelligent pipe is the third dimension of the 3D-pipeline mobile electronic commerce system. Intelligent operation...

  1. The nuclear safeguards data flow for the item facilities

    International Nuclear Information System (INIS)

    Wang Hongjun; Chen Desheng

    1994-04-01

    The constitution of the nuclear safeguards data flow for item facilities is introduced; the main content is the data flow of nuclear safeguards. If the data flow moves forward, i.e. from source data → supporting documents → accounting records → accounting reports, the system of records and reports is constituted. If the data flow moves backward, a way to trace and inspect the quality of nuclear material accounting is constituted.

  2. Wildfire impacts on the processes that generate debris flows in burned watersheds

    Science.gov (United States)

    Parise, M.; Cannon, S.H.

    2012-01-01

    Every year, and in many countries worldwide, wildfires cause significant damage and economic losses due to both the direct effects of the fires and the subsequent accelerated runoff, erosion, and debris flow. Wildfires can have profound effects on the hydrologic response of watersheds by changing the infiltration characteristics and erodibility of the soil, which leads to decreased rainfall infiltration, significantly increased overland flow and runoff in channels, and movement of soil. Debris-flow activity is among the most destructive consequences of these changes, often causing extensive damage to human infrastructure. Data from the Mediterranean area and Western United States of America help identify the primary processes that result in debris flows in recently burned areas. Two primary processes for the initiation of fire-related debris flows have been so far identified: (1) runoff-dominated erosion by surface overland flow; and (2) infiltration-triggered failure and mobilization of a discrete landslide mass. The first process is frequently documented immediately post-fire and leads to the generation of debris flows through progressive bulking of storm runoff with sediment eroded from the hillslopes and channels. As sediment is incorporated into water, runoff can convert to debris flow. The conversion to debris flow may be observed at a position within a drainage network that appears to be controlled by threshold values of upslope contributing area and its gradient. At these locations, sufficient eroded material has been incorporated, relative to the volume of contributing surface runoff, to generate debris flows. Debris flows have also been generated from burned basins in response to increased runoff by water cascading over a steep, bedrock cliff, and incorporating material from readily erodible colluvium or channel bed. Post-fire debris flows have also been generated by infiltration-triggered landslide failures which then mobilize into debris flows. However

  3. Flow Kinematics and Particle Orientations during Composite Processing

    International Nuclear Information System (INIS)

    Chiba, Kunji

    2007-01-01

    The mechanism of orientation of fibers or thin micro-particles in the various flows involved in the processing of composite materials has not been fully understood, although such knowledge is highly significant for the processing operations of particle-reinforced composites as well as for improving the properties of advanced composites. The objective of this paper is to introduce and explain the evolution of particle orientation in a suspension flow, and the flow kinematics induced by suspended particles, by means of two of our research works.

  4. Standard services for the capture, processing, and distribution of packetized telemetry data

    Science.gov (United States)

    Stallings, William H.

    1989-01-01

    Standard functional services for the capture, processing, and distribution of packetized data are discussed, with particular reference to the future implementation of packet processing systems such as those for Space Station Freedom. The major functions are listed under the following categories: input processing, packet processing, and output processing. A functional block diagram of a packet data processing facility is presented, showing the distribution of the various processing functions as well as the primary data flow through the facility.

  5. Effects of Coating Materials and Processing Conditions on Flow Enhancement of Cohesive Acetaminophen Powders by High-Shear Processing With Pharmaceutical Lubricants.

    Science.gov (United States)

    Wei, Guoguang; Mangal, Sharad; Denman, John; Gengenbach, Thomas; Lee Bonar, Kevin; Khan, Rubayat I; Qu, Li; Li, Tonglei; Zhou, Qi Tony

    2017-10-01

    This study investigated the surface coating efficiency and powder flow improvement of a model cohesive acetaminophen powder by high-shear processing with pharmaceutical lubricants using two common pieces of equipment, a conical comil and a high-shear mixer. Effects of coating materials and processing parameters on powder flow and surface coating coverage were evaluated. Both Carr's index and shear cell data indicated that processing with the lubricants using the comil or the high-shear mixer substantially improved the flow of the cohesive acetaminophen powder. Flow improvement was most pronounced for those processed with 1% wt/wt magnesium stearate, from "cohesive" for the V-blended sample to "easy flowing" for the optimally coated sample. Qualitative and quantitative characterizations demonstrated a greater degree of surface coverage for high-shear mixing compared with comilling; nevertheless, flow properties of the samples at the corresponding optimized conditions were comparable between the two techniques. Scanning electron microscopy images demonstrated different coating mechanisms with magnesium stearate and l-leucine (magnesium stearate forms a coating layer, whereas leucine coating increases surface roughness). Furthermore, surface coating with hydrophobic magnesium stearate did not retard the dissolution kinetics of acetaminophen. Future studies are warranted to evaluate the tableting behavior of such dry-coated pharmaceutical powders. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
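
    Carr's index, used above as a flowability measure, is a simple function of bulk and tapped density. The following is a minimal sketch of the calculation together with the common USP-style interpretation bands; the density values in the example are invented, not measurements from this study.

    ```python
    def carrs_index(bulk_density: float, tapped_density: float) -> float:
        """Carr's (compressibility) index in percent from bulk and tapped density."""
        return 100.0 * (tapped_density - bulk_density) / tapped_density

    def flow_character(ci: float) -> str:
        """Map a Carr's index to a qualitative flow description (USP-style bands)."""
        if ci <= 10:
            return "excellent"
        if ci <= 15:
            return "good"
        if ci <= 20:
            return "fair"
        if ci <= 25:
            return "passable"
        if ci <= 31:
            return "poor"
        if ci <= 37:
            return "very poor"
        return "very, very poor (cohesive)"

    # Illustrative numbers: a cohesive powder before and after lubricant coating.
    print(flow_character(carrs_index(0.30, 0.46)))  # ~34.8 % -> "very poor"
    print(flow_character(carrs_index(0.38, 0.45)))  # ~15.6 % -> "fair"
    ```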

  6. Automation of process accountability flow diagrams at Los Alamos National Laboratory's Plutonium Facility

    International Nuclear Information System (INIS)

    Knepper, P.; Whiteson, R.; Strittmatter, R.; Mousseau, K.

    1999-01-01

    Many industrial processes (including reprocessing activities; nuclear fuel fabrication; and material storage, measurement and transfer) make use of process flow diagrams. These diagrams can be used for material accountancy and for data analysis. At Los Alamos National Laboratory (LANL), the Technical Area (TA)-55 Plutonium Facility is home to various research and development activities involving the use of special nuclear material (SNM). A facility conducting research and development (R and D) activities using SNM must satisfy material accountability guidelines. At LANL, all processes involving SNM or tritium processing require a process accountability flow diagram (PAFD). A technique was developed at LANL to generate PAFDs that can be coupled to a relational database for use in material accountancy. These techniques could also be used for propagation of variance, measurement control, and inventory difference analysis. The PAFD is a graphical representation of the material flow during a specific process. PAFDs are currently stored as PowerPoint files. In the PowerPoint format, the data captured by the PAFD are not easily accessible. Converting the PAFDs to an accessible electronic format is desirable for several reasons. Any program will be able to access the data contained in the PAFD. For the PAFD data to be useful in applications such as an expert system for data checking, SNM accountability, inventory difference evaluation, measurement control, and other kinds of analysis, it is necessary to interface directly with the information contained within the PAFD. The PAFDs can be approved and distributed electronically, eliminating the paper copies of the PAFDs and ensuring that material handlers have the current PAFDs. Modifications to the PAFDs are often global. Storing the data in an accessible format would eliminate the need to manually update each of the PAFDs when a global change has occurred. The goal was to determine a software package that would store the

  7. The ATLAS Data Flow System for LHC Run II

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00305920; The ATLAS collaboration

    2016-01-01

    After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The Data Flow (DF) element of the TDAQ is a distributed hardware and software system responsible for buffering and transporting event data from the readout system to the High Level Trigger (HLT) and to the event storage. The DF has been reshaped in order to profit from the technological progress and to maximize the flexibility and efficiency of the data selection process. The updated DF is radically different from the previous implementation both in terms of architecture and expected performance. The pre-existing two level software filtering, known as L2 and the Event Filter, and the Event Building are now merged into a single process, performing incremental data collection and analysis. This design has many advantages, among which are: the radical simplification of the architecture, ...

  8. Improving Process Quality by Means of Accurate and Traceable Calibration of Flow Devices with Process-oriented Liquids.

    Science.gov (United States)

    Bissig, Hugo; Tschannen, Martin; de Huu, Marc

    2018-03-30

    Calibration of flow devices is important in several areas of pharmaceutical, flow chemistry, and health care applications where volumetric dosage or delivery at a given flow rate is crucial for the process. Although most flow devices measure flow rates of process-oriented liquids, their calibrations are often performed with water as the calibration liquid. It is recommended to perform the calibrations of flow devices with process-oriented liquids, as the liquid itself might influence the performance of the device. Therefore, METAS has developed facilities with METAS flow generators to address the issue of measuring with process-oriented liquids for flow rates from 400 ml/min down to 50 nl/min, with uncertainties of 0.07-0.9%. Traceability is guaranteed through the calibration of the generated flow rates of the METAS flow generators by means of the dynamic gravimetric method, where a liquid of well-known density and a well-controlled evaporation rate is used. The design of the milli-flow facility is discussed, as well as first measurement results of the METAS flow generators in the range of micro-flow and milli-flow using water and other liquids.
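
    In essence, the dynamic gravimetric method differentiates the collected mass over time, adds back the well-controlled evaporation rate, and divides by the liquid density. The sketch below is a minimal illustration of that reduction; the variable names and readings are invented, and the actual METAS data reduction involves considerably more metrology.

    ```python
    import numpy as np

    def gravimetric_flow(t_s, mass_g, density_g_per_ml, evap_g_per_s=0.0):
        """Volumetric flow rate (ml/min) from balance readings by the dynamic
        gravimetric method: differentiate the collected mass over time, add back
        the controlled evaporation rate, and divide by the liquid density."""
        dm_dt = np.gradient(np.asarray(mass_g, float), np.asarray(t_s, float))  # g/s
        q_ml_per_s = (dm_dt + evap_g_per_s) / density_g_per_ml
        return q_ml_per_s * 60.0  # ml/min

    t = np.arange(0, 60, 1.0)           # one balance reading per second
    m = 0.9982 * (0.1 / 60.0) * t       # water collected at ~0.1 ml/min
    print(gravimetric_flow(t, m, 0.9982).mean())  # ~0.1 ml/min
    ```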

  9. Big Bicycle Data Processing: from Personal Data to Urban Applications

    Science.gov (United States)

    Pettit, C. J.; Lieske, S. N.; Leao, S. Z.

    2016-06-01

    Understanding the flows of people moving through the built environment is a vital source of information for the planners and policy makers who shape our cities. Smart phone applications enable people to trace themselves through the city and these data can potentially be then aggregated and visualised to show hot spots and trajectories of macro urban movement. In this paper our aim is to develop procedures for cleaning, aggregating and visualising human movement data and translating this into policy relevant information. In conducting this research we explore using bicycle data collected from a smart phone application known as RiderLog. We focus on the RiderLog application initially in the context of Sydney, Australia and discuss the procedures and challenges in processing and cleaning this data before any analysis can be made. We then present some preliminary map results using the CartoDB online mapping platform where data are aggregated and visualised to show hot spots and trajectories of macro urban movement. We conclude the paper by highlighting some of the key challenges in working with such data and outline some next steps in processing the data and conducting higher volume and more extensive analysis.

  10. Evaluation of alternative flow sheets for upgrade of the Process Waste Treatment Plant

    International Nuclear Information System (INIS)

    Robinson, S.M.

    1991-04-01

    Improved chemical precipitation and/or ion-exchange (IX) methods are being developed at the Oak Ridge National Laboratory (ORNL) in an effort to reduce waste generation at the Process Waste Treatment Plant (PWTP). A wide variety of screening tests were performed on potential precipitation techniques and IX materials on a laboratory scale. Two of the more promising flow sheets have been tested on pilot and full scales. The data were modeled to determine the operating conditions and waste generation at plant scale and used to develop potential flow sheets for use at the PWTP. Each flow sheet was evaluated using future-value economic analysis and performance ratings (where numerical values were assigned to costs, process flexibility and simplicity, stage of development, waste reduction, environmental and occupational safety, post-processing requirements, and final waste form). The results of this study indicated that several potential flow sheets should be considered for further development, and more detailed cost estimates should be made before a final selection is made for upgrade of the PWTP. 19 refs., 52 figs., 22 tabs

  11. Design of data acquisition system ZOH production process equipment of ZBS (zircon based sulfate)

    International Nuclear Information System (INIS)

    Moch Rosyid; Tunjung Indrati Y

    2013-01-01

    A data acquisition system for the unit producing ZOH from ZBS has been designed. The design was carried out as a follow-up to the design of a continuous stirred tank reactor (RATB) for making ZOH from ZBS, which is to be equipped with a data acquisition system. The designed system works on the basis of constants and parameters that were calculated beforehand. The design method begins with understanding the process description of continuous ZOH production from ZBS and identifying the parameters to be observed. The process description is needed to select the actuators, while the parameters determine the sensors to be used. The parameters are the levels of NH4OH and ZBS in the feeder tanks, the flow rates of ZBS and NH4OH, and the temperature and pH inside the RATB, as well as the flow rate of the produced ZOH. Based on the calculation, to obtain the required ZOH, ZBS flows into the reactor at a rate of 10 ml/min simultaneously with NH4OH at a flow rate of 6.1 ml/min into the 3-litre RATB. When the tank volume reaches half of the RATB, heating is turned on while the feed flows are kept constant. The pH and temperature in the RATB are continuously monitored, with the pH setting point at 10 and the temperature setting point at 90°C. Monitoring these parameters requires particular gauges, transducers, or sensors. The study resulted in a flow chart design of the controller for controlling the process, reading and transmitting data, and displaying the acquired process parameters on the screen according to the parameters that were planned. (author)
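
    A minimal sketch of the supervisory logic described above (start heating at half volume, hold the feeds constant, watch the pH and temperature setpoints); all sensor and actuator callables are hypothetical placeholders, not part of the paper.

    ```python
    # Minimal sketch of the supervisory pass; sensor/actuator callables are stubs.
    TANK_VOLUME_L = 3.0        # RATB volume from the design
    PH_SETPOINT = 10.0         # pH setting point
    TEMP_SETPOINT_C = 90.0     # temperature setting point

    def supervise(read_level_l, read_ph, read_temp_c, heater_on, record):
        """One pass of the acquisition loop: start heating at half volume and
        record deviations of pH and temperature from their setpoints."""
        if read_level_l() >= TANK_VOLUME_L / 2.0:
            heater_on()                    # feeds stay constant meanwhile
        record("pH_dev", read_ph() - PH_SETPOINT)
        record("T_dev", read_temp_c() - TEMP_SETPOINT_C)

    # Example with stub sensors:
    supervise(lambda: 1.6, lambda: 9.8, lambda: 88.5, lambda: None,
              lambda k, v: print(k, round(v, 2)))
    ```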

  12. Abnormal traffic flow data detection based on wavelet analysis

    Directory of Open Access Journals (Sweden)

    Xiao Qian

    2016-01-01

    Full Text Available In view of the non-stationary nature of traffic flow data, abnormal data detection is difficult. This paper proposes an abnormal traffic flow data detection method based on wavelet analysis and the least squares method. First, wavelet analysis is used to separate the traffic flow data into high-frequency and low-frequency components; then the least squares method is applied to find abnormal points in the reconstructed signal data. The simulation results show that abnormal traffic flow data detection using wavelet analysis combined with least squares effectively reduces the misjudgment rate and the false negative rate of the detection results.
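
    A minimal sketch of this two-step scheme, assuming PyWavelets for the decomposition and a polynomial least-squares trend fit; the wavelet, decomposition level, and threshold below are assumptions, not the paper's settings.

    ```python
    import numpy as np
    import pywt

    def detect_abnormal(flow, wavelet="db4", level=3, k=3.0):
        """Flag abnormal traffic-flow samples: separate low- and high-frequency
        content with a wavelet decomposition, fit the low-frequency trend by
        least squares, and mark points whose residual exceeds k sigma."""
        coeffs = pywt.wavedec(flow, wavelet, level=level)
        # Zero the detail (high-frequency) coefficients to reconstruct the trend.
        smooth = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]],
                              wavelet)[: len(flow)]
        t = np.arange(len(flow))
        trend = np.polyval(np.polyfit(t, smooth, deg=3), t)   # least-squares fit
        resid = flow - trend
        return np.flatnonzero(np.abs(resid) > k * resid.std())

    rng = np.random.default_rng(0)
    x = 100 + 0.2 * np.arange(288) + rng.normal(0, 2, 288)  # 5-min counts, one day
    x[50] += 40; x[200] -= 35                               # injected anomalies
    print(detect_abnormal(x))                               # -> [ 50 200]
    ```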

  13. CoreFlow: A computational platform for integration, analysis and modeling of complex biological data

    DEFF Research Database (Denmark)

    Pasculescu, Adrian; Schoof, Erwin; Creixell, Pau

    2014-01-01

    A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that is produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts... between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-sourced software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion...

  14. CyNC - a method for Real Time Analysis of Systems with Cyclic Data Flows

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Nielsen, Jens F. Dalsgaard; Larsen, Kim Guldstrand

    2005-01-01

    The paper addresses a novel method for realtime analysis of systems with cyclic data flows. The presented method is based on Network Calculus principles, where upper and lower flow and service constraints are used to bound data flows and processing resources. In acyclic systems flow constraints ma... in a space of constraint functions. In this paper a method denoted CyNC for obtaining a well defined solution to that problem is presented, along with a theoretical justification of the method as well as comparative results for CyNC and alternative methods on a relevant example. The method is implemented in a prototype tool, also denoted CyNC, providing a graphical user interface for model specification based on the MATLAB/SimuLink framework.

  15. Data processing

    CERN Document Server

    Fry, T F

    2013-01-01

    Data Processing discusses the principles, practices, and associated tools in data processing. The book is comprised of 17 chapters that are organized into three parts. The first part covers the characteristics, systems, and methods of data processing. Part 2 deals with the data processing practice; this part discusses the data input, output, and storage. The last part discusses topics related to systems and software in data processing, which include checks and controls, computer language and programs, and program elements and structures. The text will be useful to practitioners of computer-rel

  16. Flow field mapping in data rack model

    Directory of Open Access Journals (Sweden)

    Matěcha J.

    2013-04-01

    Full Text Available The main objective of this study was to map the flow field inside a data rack model fitted with three 1U server models. The server model is based on a common four-processor 1U server. The main dimensions of the data rack model geometry are taken fully from the real geometry; the model was simplified only as far as the experimental measurements allowed. The flow field mapping was carried out both experimentally and numerically. The PIV (Particle Image Velocimetry) method was used for the experimental flow field mapping, where the flow field was mapped for defined regions within the 2D/3D data rack model. Ansys CFX and OpenFOAM software were used for the numerical solution. Boundary conditions for the numerical model were based on data obtained from experimental measurement of the velocity profile at the output of the server mockup. This velocity profile was used as the input boundary condition in the calculation. In order to achieve greater consistency of the numerical model with experimental data, the numerical model was modified with regard to the results of the experimental measurements. Results from the experimental and numerical measurements were compared and areas of disagreement were identified. In further steps, the validated numerical model will be utilized for the real geometry of data racks and data.

  17. Air flow management in raised floor data centers

    CERN Document Server

    Arghode, Vaibhav K

    2016-01-01

    The Brief discusses primarily two aspects of air flow management in raised floor data centers. First, cooling air delivery through perforated tiles is examined, and the influence of the tile geometry on flow field development and hot air entrainment above perforated tiles is discussed. Second, the use of cold aisle containment to physically separate hot and cold regions and minimize hot and cold air mixing is presented. Both experimental investigations and computational efforts are discussed, and the development of computational fluid dynamics (CFD) based models for simulating air flow in data centers is included. In addition, metrology tools for facility-scale air velocity and temperature measurement, and for air flow rate measurement through perforated floor tiles and server racks, are examined, and the authors present thermodynamics-based models to gauge the effectiveness and importance of air flow management schemes in data centers.

  18. Modeling study on the flow patterns of gas-liquid flow for fast decarburization during the RH process

    Science.gov (United States)

    Li, Yi-hong; Bao, Yan-ping; Wang, Rui; Ma, Li-feng; Liu, Jian-sheng

    2018-02-01

    A water model and a high-speed video camera were utilized with the 300-t RH equipment to study the effect of steel flow patterns in the vacuum chamber on fast decarburization, and a superior flow-pattern map for the practical RH process was obtained. There are three flow patterns with different bubbling characteristics and steel surface states in the vacuum chamber: boiling pattern (BP), transition pattern (TP), and wave pattern (WP). The effects of the liquid-steel level and of the residence time of the steel in the chamber on the flow patterns and the decarburization reaction were investigated. The liquid-steel level significantly affected the flow-pattern transition from BP to WP, and the residence time and reaction area, rather than the circulation flow rate and mixing time, were crucial for evaluating the whole decarburization process. The superior flow-pattern map for the practical RH process showed that the steel flow pattern changed from BP to TP quickly, and then remained TP until the end of decarburization.

  19. Collecting and Storing Data Flow Monitoring in Elasticsearch

    CERN Document Server

    Hashim, Fatin Hazwani

    2014-01-01

    A very large amount of data is produced by the online data-flow monitoring of the CMS data acquisition system. However, only a small portion of the data is stored permanently in the relational database. This is because of the high cost of relying on dedicated infrastructure, as well as performance issues. A new approach needs to be found in order to confront such a big volume of data, known as "Big Data". Big Data [1] is the term given to very large and complex data sets that cannot be handled by traditional data processing applications [2] in terms of capturing, storing, managing, and analyzing. The sheer size of the data [3] in the CMS data acquisition system is one of the major challenges, and one of the most easily recognized. New technology needs to be evaluated as an alternative to traditional databases to handle this problem, as more data need to be stored permanently and easily retrieved. This report consists of the intro...
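
    As a rough illustration of the alternative approach, monitoring documents can be written to and queried from Elasticsearch with the official Python client. The index name and document fields below are invented for the example and do not reflect the CMS monitoring schema.

    ```python
    from datetime import datetime, timezone
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # assumed local test instance

    # One data-flow monitoring sample; field names are illustrative only.
    doc = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": "readout-unit-07",
        "throughput_mb_s": 412.6,
        "backpressure": False,
    }
    es.index(index="dataflow-monitoring", document=doc)

    # Retrieve recent samples for inspection.
    resp = es.search(index="dataflow-monitoring",
                     query={"match_all": {}}, size=10)
    print(resp["hits"]["total"])
    ```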

  20. Laser velocimeter data acquisition, processing, and control system

    International Nuclear Information System (INIS)

    Croll, R.H. Jr.; Peterson, C.W.

    1975-01-01

    The use of a mini-computer for data acquisition, processing, and control of a two-velocity-component dual beam laser velocimeter in a low-speed wind tunnel is described in detail. Digital stepping motors were programmed to map the mean-flow and turbulent fluctuating velocities in the test section boundary layer and free stream. The mini-computer interface controlled the operation of the LV processor and the high-speed selection of the photomultiplier tube whose output was to be processed. A statistical analysis of the large amount of data from the LV processor was performed by the computer while the experiment was in progress. The resulting velocities are in good agreement with hot-wire survey data obtained in the same facility

  1. Data uncertainties in material flow analysis: Municipal solid waste management system in Maputo City, Mozambique.

    Science.gov (United States)

    Dos Muchangos, Leticia Sarmento; Tokai, Akihiro; Hanashima, Atsuko

    2017-01-01

    Material flow analysis can effectively trace and quantify the flows and stocks of materials, such as solid wastes, in urban environments. However, the integrity of material flow analysis results is compromised by data uncertainties, an occurrence that is particularly acute in low- and middle-income study contexts. This article investigates the uncertainties in the input data and their effects in a material flow analysis study of municipal solid waste management in Maputo City, the capital of Mozambique. The analysis is based on data collected in 2007 and 2014. Initially, the uncertainties and their ranges were identified using the data classification model of Hedbrant and Sörme, followed by the application of sensitivity analysis. The average lower and upper bounds were 29% and 71%, respectively, in 2007, increasing to 41% and 96%, respectively, in 2014. This indicates higher data quality in 2007 than in 2014. Results also show that data are not only partially missing from established flows, such as waste generation to final disposal, but are also limited and inconsistent for emerging flows and processes, such as waste generation to material recovery (hence the wider variation in the 2014 parameters). The sensitivity analysis further clarified the most influential parameter, the degree of influence of each parameter on the waste flows, and the interrelations among the parameters. The findings highlight the need for an integrated municipal solid waste management approach to avoid transferring or worsening the negative impacts among the parameters and flows.
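
    A minimal sketch of the kind of one-at-a-time sensitivity screening described, run on a toy waste-flow model; the model structure and numbers are invented placeholders, not the Maputo data.

    ```python
    def waste_to_disposal(p):
        """Toy material flow model: generated waste minus the recovered fraction,
        scaled by the collection rate."""
        return p["generation_t"] * (1.0 - p["recovery_rate"]) * p["collection_rate"]

    base = {"generation_t": 1000.0, "recovery_rate": 0.05, "collection_rate": 0.6}

    # One-at-a-time sensitivity: perturb each parameter by +10 % and record
    # the relative change in the output flow.
    y0 = waste_to_disposal(base)
    for name in base:
        perturbed = dict(base, **{name: base[name] * 1.1})
        dy = (waste_to_disposal(perturbed) - y0) / y0
        print(f"{name:15s} +10% -> output {dy:+.1%}")
    ```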

  2. A new data-processing approach to study particle motion using ultrafast X-ray tomography scanner: case study of gravitational mass flow

    Science.gov (United States)

    Waktola, Selam; Bieberle, Andre; Barthel, Frank; Bieberle, Martina; Hampel, Uwe; Grudzień, Krzysztof; Babout, Laurent

    2018-04-01

    In most industrial processes, granular materials are required to flow under gravity in various kinds of silo shapes, usually through an outlet in the bottom. Several interrelated parameters affect the flow, such as internal friction, bulk and packing density, hopper geometry, and material type. Due to the low spatial resolution of electrical capacitance tomography and the scanning speed limitation of standard X-ray CT systems, it is extremely challenging to measure the flow velocity and possible centrifugal effects of granular material flow effectively. However, ROFEX (ROssendorf Fast Electron beam X-ray tomography) opens new avenues of granular flow investigation due to its very high temporal resolution. This paper aims to track particle movements and evaluate local grain velocities during the silo discharge process in the case of mass flow. The study considered the use of the Seramis material, which, due to its porous nature, can also serve as a type of tracer particle after impregnation. The presented novel image processing and analysis approach allows not only satisfactory measurement of individual particle velocities but also tracking of their lateral movement and three-dimensional rotations.

  3. Baseliner: An open-source, interactive tool for processing sap flux data from thermal dissipation probes

    Directory of Open Access Journals (Sweden)

    A. Christopher Oishi

    2016-01-01

    Full Text Available Estimating transpiration from woody plants using thermal dissipation sap flux sensors requires careful data processing. Currently, researchers accomplish this using spreadsheets, or by personally writing scripts for statistical software programs (e.g., R, SAS). We developed the Baseliner software to help establish a standardized protocol for processing sap flux data. Baseliner enables users to QA/QC data and process data using a combination of automated steps, visualization, and manual editing. Data processing requires establishing a zero-flow reference value, or "baseline", which varies among sensors and with time. Since no set of algorithms currently exists to reliably QA/QC and estimate the zero-flow baseline, Baseliner provides a graphical user interface to allow visual inspection and manipulation of data. Data are first automatically processed using a set of user defined parameters. The user can then view the data for additional, manual QA/QC and baseline identification using mouse and keyboard commands. The open-source software allows for user customization of data processing algorithms as improved methods are developed.
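
    Once the zero-flow baseline has been identified, converting the probe signal to sap flux density is a short calculation. The sketch below assumes the widely used Granier calibration; Baseliner's internal processing may differ, and the readings are illustrative.

    ```python
    import numpy as np

    def sap_flux_density(dT, dT_max):
        """Granier-style sap flux density (m3 m-2 s-1) from thermal dissipation
        probe temperature differences; dT_max is the zero-flow baseline."""
        K = (dT_max - dT) / dT          # dimensionless flow index
        return 118.99e-6 * K ** 1.231   # Granier empirical calibration

    dT = np.array([8.0, 7.2, 6.5, 7.9])      # daytime readings (deg C), illustrative
    print(sap_flux_density(dT, dT_max=8.0))  # zero at the baseline, rising with flow
    ```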

  4. The ATLAS Data Flow system for the Second LHC Run

    CERN Document Server

    Hauser, Reiner; The ATLAS collaboration

    2015-01-01

    After its first shutdown, LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The Data Flow (DF) element of the TDAQ is a distributed hardware and software system responsible for buffering and transporting event data from the Readout system to the High Level Trigger (HLT) and to the event storage. The DF has been reshaped in order to profit from the technological progress and to maximize the flexibility and efficiency of the data selection process. The updated DF is radically different from the previous implementation both in terms of architecture and expected performance. The pre-existing two level software filtering, known as L2 and the Event Filter, and the Event Building are now merged into a single process, performing incremental data collection and analysis. This design has many advantages, among which are: the radical simplification of the architecture, the f...

  5. flowClust: a Bioconductor package for automated gating of flow cytometry data

    Directory of Open Access Journals (Sweden)

    Lo Kenneth

    2009-05-01

    Full Text Available Background: As a high-throughput technology that offers rapid quantification of multidimensional characteristics for millions of cells, flow cytometry (FCM) is widely used in health research, medical diagnosis and treatment, and vaccine development. Nevertheless, there is an increasing concern about the lack of appropriate software tools to provide an automated analysis platform to parallelize the high-throughput data-generation platform. Currently, to a large extent, FCM data analysis relies on the manual selection of sequential regions in 2-D graphical projections to extract the cell populations of interest. This is a time-consuming task that ignores the high-dimensionality of FCM data. Results: In view of the aforementioned issues, we have developed an R package called flowClust to automate FCM analysis. flowClust implements a robust model-based clustering approach based on multivariate t mixture models with the Box-Cox transformation. The package provides the functionality to identify cell populations whilst simultaneously handling the commonly encountered issues of outlier identification and data transformation. It offers various tools to summarize and visualize a wealth of features of the clustering results. In addition, to ensure its convenience of use, flowClust has been adapted for the current FCM data format, and integrated with existing Bioconductor packages dedicated to FCM analysis. Conclusion: flowClust addresses the issue of a dearth of software that helps automate FCM analysis with a sound theoretical foundation. It tends to give reproducible results, and helps reduce the significant subjectivity and human time cost encountered in FCM analysis. The package contributes to the cytometry community by offering an efficient, automated analysis platform which facilitates the active, ongoing technological advancement.

  6. Study of a three-phase flow metering process for oil-water-gas flows; Etude d`un procede de mesure des debits d`un ecoulement triphasique de type eau-huile-gaz

    Energy Technology Data Exchange (ETDEWEB)

    Boyer, Ch.

    1996-11-01

    We propose a theoretical and experimental study of a three-phase flow metering process for oil-water-gas flows. The selected process is based on a combination of a mixer, a Venturi and ultrasonic methods. To perform an experimental validation of this process, an instrumented set-up for three-phase air-oil-water flows has been designed, built and tuned. An original theoretical model has been built to predict three-phase dispersed flows across a contraction. Once validated against two-phase air-water and oil-water and three-phase air-oil-water flow data, this model has been used to solve the Venturi metering problem. After a critical review of the available techniques, the ultrasonic propagation velocity has been selected to determine the two-phase liquid-liquid flow composition. Two original models have been developed to describe the ultrasonic propagation as a function of the dispersed phase fraction. The comparison with experimental data in oil-water flows shows the superiority of one of the two models, the scattering model. For the void fraction determination in air-water flows, the work of Bensler (1990), based on the ultrasonic attenuation measurement, has been extended to take into account multiple scattering effects. Finally these techniques have been combined to determine the different flow rates in air-water and oil-water flows. For two-phase air-water and oil-water flows the problem is solved and the flow rates are measured with a very good accuracy (± 3%). The quality of the results obtained with three-phase oil-water-gas flows and the sound theoretical bases allowing their interpretation give us the opportunity to strongly recommend the development of an industrial prototype based on the process we studied. (author) 183 refs.
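
    For the Venturi element, the classical incompressible metering relation recovers the volumetric flow from the measured pressure drop. The following is a minimal sketch; the discharge coefficient and geometry are illustrative, and the paper's three-phase mixture-density treatment is not reproduced.

    ```python
    import math

    def venturi_flow(dp_pa, rho_kg_m3, d_throat_m, d_pipe_m, cd=0.98):
        """Volumetric flow (m3/s) through a Venturi from the throat pressure drop,
        using Q = Cd * A2 * sqrt(2*dp / (rho * (1 - beta**4)))."""
        beta = d_throat_m / d_pipe_m
        a2 = math.pi * d_throat_m ** 2 / 4.0
        return cd * a2 * math.sqrt(2.0 * dp_pa / (rho_kg_m3 * (1.0 - beta ** 4)))

    # Example: water-like mixture, 50 mm pipe with a 30 mm throat, 20 kPa drop.
    print(venturi_flow(20e3, 998.0, 0.030, 0.050))   # ~0.0047 m3/s
    ```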

  7. Documentation of a Conduit Flow Process (CFP) for MODFLOW-2005

    Science.gov (United States)

    Shoemaker, W. Barclay; Kuniansky, Eve L.; Birk, Steffen; Bauer, Sebastian; Swain, Eric D.

    2007-01-01

    This report documents the Conduit Flow Process (CFP) for the modular finite-difference ground-water flow model, MODFLOW-2005. The CFP has the ability to simulate turbulent ground-water flow conditions by: (1) coupling the traditional ground-water flow equation with formulations for a discrete network of cylindrical pipes (Mode 1), (2) inserting a high-conductivity flow layer that can switch between laminar and turbulent flow (Mode 2), or (3) simultaneously coupling a discrete pipe network while inserting a high-conductivity flow layer that can switch between laminar and turbulent flow (Mode 3). Conduit flow pipes (Mode 1) may represent dissolution or biological burrowing features in carbonate aquifers, voids in fractured rock, and (or) lava tubes in basaltic aquifers and can be fully or partially saturated under laminar or turbulent flow conditions. Preferential flow layers (Mode 2) may represent: (1) a porous media where turbulent flow is suspected to occur under the observed hydraulic gradients; (2) a single secondary porosity subsurface feature, such as a well-defined laterally extensive underground cave; or (3) a horizontal preferential flow layer consisting of many interconnected voids. In this second case, the input data are effective parameters, such as a very high hydraulic conductivity, representing multiple features. Data preparation is more complex for CFP Mode 1 (CFPM1) than for CFP Mode 2 (CFPM2). Specifically for CFPM1, conduit pipe locations, lengths, diameters, tortuosity, internal roughness, critical Reynolds numbers (NRe), and exchange conductances are required. CFPM1, however, solves the pipe network equations in a matrix that is independent of the porous media equation matrix, which may mitigate numerical instability associated with solution of dual flow components within the same matrix. CFPM2 requires less hydraulic information and knowledge about the specific location and hydraulic properties of conduits, and turbulent flow is approximated by
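
    The laminar/turbulent switching that CFP performs can be illustrated with a simple critical-Reynolds-number check for a conduit. This is a minimal sketch, not the actual MODFLOW-2005 source; the critical value is user-specified in CFP, and 2000 here is only a conventional default.

    ```python
    import math

    def pipe_flow_regime(velocity_m_s, diameter_m, kin_viscosity_m2_s=1.0e-6,
                         n_re_critical=2000.0):
        """Classify conduit flow as laminar or turbulent from the Reynolds number,
        mirroring the critical-NRe switch described for the CFP."""
        n_re = velocity_m_s * diameter_m / kin_viscosity_m2_s
        return n_re, ("laminar" if n_re < n_re_critical else "turbulent")

    print(pipe_flow_regime(0.005, 0.2))   # Re = 1000  -> laminar
    print(pipe_flow_regime(0.05, 0.2))    # Re = 10000 -> turbulent
    ```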

  8. Evolutionary analysis of groundwater flow: Application of multivariate statistical analysis to hydrochemical data in the Densu Basin, Ghana

    Science.gov (United States)

    Yidana, Sandow Mark; Bawoyobie, Patrick; Sakyi, Patrick; Fynn, Obed Fiifi

    2018-02-01

    An evolutionary trend has been postulated through the analysis of hydrochemical data from a crystalline rock aquifer system in the Densu Basin, Southern Ghana. Hydrochemical data from 63 groundwater samples, taken from the two main groundwater outlets (boreholes and hand-dug wells), were used to postulate an evolutionary theory for the basin. Sequential factor and hierarchical cluster analyses were used to partition the data into three factors and five clusters (spatial associations). These were used to characterize the controls on groundwater hydrochemistry and its evolution in the terrain. The dissolution of soluble salts and cation exchange processes are the dominant processes controlling groundwater hydrochemistry in the terrain. The trend of evolution of this set of processes follows the pattern of groundwater flow predicted by a calibrated transient groundwater model of the area. The data suggest that anthropogenic activities represent the second most important process in the hydrochemistry. Silicate mineral weathering is the third most important set of processes. Groundwater associations resulting from Q-mode hierarchical cluster analysis indicate an evolutionary pattern consistent with the general groundwater flow pattern in the basin. These key findings are at variance with results of previous investigations and indicate that, when carefully done, groundwater hydrochemical data can be very useful for conceptualizing groundwater flow in basins.
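
    The factor-plus-Q-mode-cluster workflow can be approximated with standard Python tooling. In the minimal sketch below, PCA stands in for the factor extraction and random data stand in for the Densu Basin samples; only the shape (63 samples, a handful of hydrochemical variables, three factors, five clusters) follows the abstract.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X = rng.normal(size=(63, 8))            # 63 samples x 8 hydrochemical variables

    Z = StandardScaler().fit_transform(X)   # standardize ions before analysis
    scores = PCA(n_components=3).fit_transform(Z)   # three "factors"

    # Q-mode hierarchical clustering of the samples on their factor scores.
    labels = fcluster(linkage(scores, method="ward"), t=5, criterion="maxclust")
    print(np.bincount(labels)[1:])          # sizes of the five spatial associations
    ```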

  9. On-line sample processing methods in flow analysis

    DEFF Research Database (Denmark)

    Miró, Manuel; Hansen, Elo Harald

    2008-01-01

    In this chapter, the state of the art of flow injection and related approaches for the automation and miniaturization of sample processing, regardless of the aggregate state of the sample medium, is overviewed. The potential of the various generations of flow injection for implementation of in...

  10. Coded Ultrasound for Blood Flow Estimation Using Subband Processing

    DEFF Research Database (Denmark)

    Gran, Fredrik; Udesen, Jesper; Nielsen, Michael Bachmann

    2008-01-01

    This paper investigates the use of coded excitation for blood flow estimation in medical ultrasound. Traditional autocorrelation estimators use narrow-band excitation signals to provide sufficient signal-to-noise-ratio (SNR) and velocity estimation performance. In this paper, broadband coded signals are used to increase SNR, followed by subband processing. The received broadband signal is filtered using a set of narrow-band filters. Estimating the velocity in each of the bands and averaging the results yields better performance compared with what would be possible when transmitting a narrow... the excitation signal is broadband and has good spatial resolution after pulse compression. This means that time can be saved by using the same data for B-mode imaging and blood flow estimation. Two different coding schemes are used in this paper, Barker codes and Golay codes. The performance of the codes...
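
    Pulse compression with a Barker code amounts to cross-correlating the received line with the transmitted code. The following is a minimal sketch with the length-13 Barker code; it is illustrative only, and the paper's Golay pairs and subband filtering are not shown.

    ```python
    import numpy as np

    BARKER_13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)

    rng = np.random.default_rng(0)
    rx = rng.normal(0, 0.5, 200)           # noisy received line
    rx[80:80 + 13] += 1.0 * BARKER_13      # echo of the coded excitation at lag 80

    compressed = np.correlate(rx, BARKER_13, mode="valid")  # matched filter
    print(int(np.argmax(np.abs(compressed))))               # ~80: echo position
    print(round(compressed.max(), 1))                       # peak ~13 = code gain
    ```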

  11. Digital image processing based mass flow rate measurement of gas/solid two-phase flow

    Energy Technology Data Exchange (ETDEWEB)

    Song Ding; Peng Lihui; Lu Geng; Yang Shiyuan [Tsinghua National Laboratory for Information Science and Technology, Department of Automation, Tsinghua University, Beijing, 100084 (China); Yan Yong, E-mail: lihuipeng@tsinghua.edu.c [University of Kent, Canterbury, Kent CT2 7NT (United Kingdom)

    2009-02-01

    With the rapid growth of the process industry, pneumatic conveying as a tool for the transportation of a wide variety of pulverized and granular materials has become widespread. In order to improve plant control and operational efficiency, it is essential to know the parameters of the particle flow. This paper presents a digital imaging based method which is capable of measuring multiple flow parameters, including volumetric concentration, velocity and mass flow rate of particles in gas/solid two-phase flow. The measurement system consists of a solid state laser for illumination, a low-cost CCD camera for particle image acquisition and a microcomputer with bespoke software for particle image processing. The measurements of particle velocity and volumetric concentration share the same sensing hardware but use different exposure times and different image processing methods. By controlling the exposure time of the camera, a clear image and a motion-blurred image are obtained respectively. The clear image is thresholded by the Otsu method to identify the particles against the dark background, so that the volumetric concentration is determined by calculating the ratio between the particle area and the total area. Particle velocity is derived from the motion blur length, which is estimated from the motion-blurred images by using the travelling wave equation method. The mass flow rate of particles is calculated by combining the particle velocity and volumetric concentration. Simulation and experiment results indicate that the proposed method is promising for the measurement of multiple parameters of gas/solid two-phase flow.

  12. Digital image processing based mass flow rate measurement of gas/solid two-phase flow

    International Nuclear Information System (INIS)

    Song Ding; Peng Lihui; Lu Geng; Yang Shiyuan; Yan Yong

    2009-01-01

    With the rapid growth of the process industry, pneumatic conveying as a tool for the transportation of a wide variety of pulverized and granular materials has become widespread. In order to improve plant control and operational efficiency, it is essential to know the parameters of the particle flow. This paper presents a digital imaging based method which is capable of measuring multiple flow parameters, including volumetric concentration, velocity and mass flow rate of particles in gas/solid two-phase flow. The measurement system consists of a solid state laser for illumination, a low-cost CCD camera for particle image acquisition and a microcomputer with bespoke software for particle image processing. The measurements of particle velocity and volumetric concentration share the same sensing hardware but use different exposure times and different image processing methods. By controlling the exposure time of the camera, a clear image and a motion-blurred image are obtained respectively. The clear image is thresholded by the Otsu method to identify the particles against the dark background, so that the volumetric concentration is determined by calculating the ratio between the particle area and the total area. Particle velocity is derived from the motion blur length, which is estimated from the motion-blurred images by using the travelling wave equation method. The mass flow rate of particles is calculated by combining the particle velocity and volumetric concentration. Simulation and experiment results indicate that the proposed method is promising for the measurement of multiple parameters of gas/solid two-phase flow.
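
    A minimal sketch of the concentration step described in both records (Otsu thresholding followed by the particle-area ratio), with a synthetic frame standing in for a real camera image; the motion-blur velocity estimation is not shown.

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu

    rng = np.random.default_rng(2)
    frame = rng.normal(0.15, 0.05, (240, 320))      # dark background
    frame[60:90, 100:140] += 0.6                    # bright particle patch

    t = threshold_otsu(frame)                       # global Otsu threshold
    particles = frame > t                           # particles vs. background
    concentration = particles.mean()                # particle area / total area
    print(f"threshold={t:.3f}, concentration proxy={concentration:.3%}")
    ```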

  13. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    Science.gov (United States)

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
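
    Aggregating pose sequences into a tree of shared prefixes is the core of the flow visualization described above. The following is a minimal sketch under that reading, with invented pose labels; MotionFlow's clustering and interaction layers are not modeled.

    ```python
    def build_pattern_tree(sequences):
        """Aggregate pose-state sequences into a prefix tree; node counts show
        how many sequences share each motion-pattern prefix."""
        root = {"count": len(sequences), "children": {}}
        for seq in sequences:
            node = root
            for pose in seq:
                child = node["children"].setdefault(pose,
                                                    {"count": 0, "children": {}})
                child["count"] += 1
                node = child
        return root

    # Three tracked gestures sharing a common 'stand -> raise' prefix.
    tree = build_pattern_tree([
        ["stand", "raise", "wave"],
        ["stand", "raise", "point"],
        ["stand", "crouch"],
    ])
    print(tree["children"]["stand"]["count"])                       # 3
    print(tree["children"]["stand"]["children"]["raise"]["count"])  # 2
    ```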

  14. Vadose zone processes that control landslide initiation and debris flow propagation

    Science.gov (United States)

    Sidle, Roy C.

    2015-04-01

    Advances in the areas of geotechnical engineering, hydrology, mineralogy, geomorphology, geology, and biology have individually advanced our understanding of factors affecting slope stability; however, the interactions among these processes and attributes as they affect the initiation and propagation of landslides and debris flows are not well understood. Here the importance of interactive vadose zone processes is emphasized in relation to the mechanisms, initiation, mode, and timing of rainfall-initiated landslides that are triggered by positive pore water accretion, loss of soil suction and increase in overburden weight, and long-term cumulative rain water infiltration. Large- and small-scale preferential flow pathways can both contribute to and mitigate instability, by respectively concentrating and dispersing subsurface flow. These mechanisms are influenced by soil structure, lithology, landforms, and biota. Conditions conducive to landslide initiation by infiltration versus exfiltration are discussed relative to bedrock structure and joints. The effects of rhizosphere processes on slope stability are examined, including root reinforcement of soil mantles, evapotranspiration, and how root structures affect preferential flow paths. At a larger scale, the nexus between hillslope landslides and in-channel debris flows is examined with emphasis on understanding the timing of debris flows relative to chronic and episodic infilling processes, as well as the episodic nature of large rainfall and related stormflow generation in headwater streams. The hydrogeomorphic processes and conditions that determine whether or not landslides immediately mobilize into debris flows are important for predicting the timing and extent of devastating debris flow runout in steep terrain. Given the spatial footprint of individual landslides, it is necessary to assess vadose zone processes at appropriate scales to ascertain impacts on mass wasting phenomena. Articulating the appropriate

  15. In-situ Condition Monitoring of Components in Small Modular Reactors Using Process and Electrical Signature Analysis. Final report, volume 1. Development of experimental flow control loop, data analysis and plant monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Upadhyaya, Belle [Univ. of Tennessee, Knoxville, TN (United States); Hines, J. Wesley [Univ. of Tennessee, Knoxville, TN (United States); Damiano, Brian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mehta, Chaitanya [Univ. of Tennessee, Knoxville, TN (United States); Collins, Price [Univ. of Tennessee, Knoxville, TN (United States); Lish, Matthew [Univ. of Tennessee, Knoxville, TN (United States); Cady, Brian [Univ. of Tennessee, Knoxville, TN (United States); Lollar, Victor [Univ. of Tennessee, Knoxville, TN (United States); de Wet, Dane [Univ. of Tennessee, Knoxville, TN (United States); Bayram, Duygu [Univ. of Tennessee, Knoxville, TN (United States)

    2015-12-15

    The research and development under this project was focused on the following three major objectives: Objective 1: Identification of critical in-vessel SMR components for remote monitoring and development of their low-order dynamic models, along with a simulation model of an integral pressurized water reactor (iPWR). Objective 2: Development of an experimental flow control loop with motor-driven valves and pumps, incorporating data acquisition and on-line monitoring interface. Objective 3: Development of stationary and transient signal processing methods for electrical signatures, machinery vibration, and for characterizing process variables for equipment monitoring. This objective includes the development of a data analysis toolbox. The following is a summary of the technical accomplishments under this project: - A detailed literature review of various SMR types and electrical signature analysis of motor-driven systems was completed. A bibliography of literature is provided at the end of this report. Assistance was provided by ORNL in identifying some key references. - A review of literature on pump-motor modeling and digital signal processing methods was performed. - An existing flow control loop was upgraded with new instrumentation, data acquisition hardware and software. The upgrading of the experimental loop included the installation of a new submersible pump driven by a three-phase induction motor. All the sensors were calibrated before full-scale experimental runs were performed. - MATLAB-Simulink model of a three-phase induction motor and pump system was completed. The model was used to simulate normal operation and fault conditions in the motor-pump system, and to identify changes in the electrical signatures. - A simulation model of an integral PWR (iPWR) was updated and the MATLAB-Simulink model was validated for known transients. The pump-motor model was interfaced with the iPWR model for testing the impact of primary flow perturbations (upsets) on

  16. In-situ Condition Monitoring of Components in Small Modular Reactors Using Process and Electrical Signature Analysis. Final report, volume 1. Development of experimental flow control loop, data analysis and plant monitoring

    International Nuclear Information System (INIS)

    Upadhyaya, Belle; Hines, J. Wesley; Damiano, Brian; Mehta, Chaitanya; Collins, Price; Lish, Matthew; Cady, Brian; Lollar, Victor; De Wet, Dane; Bayram, Duygu

    2015-01-01

    The research and development under this project was focused on the following three major objectives: Objective 1: Identification of critical in-vessel SMR components for remote monitoring and development of their low-order dynamic models, along with a simulation model of an integral pressurized water reactor (iPWR). Objective 2: Development of an experimental flow control loop with motor-driven valves and pumps, incorporating data acquisition and on-line monitoring interface. Objective 3: Development of stationary and transient signal processing methods for electrical signatures, machinery vibration, and for characterizing process variables for equipment monitoring. This objective includes the development of a data analysis toolbox. The following is a summary of the technical accomplishments under this project: - A detailed literature review of various SMR types and electrical signature analysis of motor-driven systems was completed. A bibliography of literature is provided at the end of this report. Assistance was provided by ORNL in identifying some key references. - A review of literature on pump-motor modeling and digital signal processing methods was performed. - An existing flow control loop was upgraded with new instrumentation, data acquisition hardware and software. The upgrading of the experimental loop included the installation of a new submersible pump driven by a three-phase induction motor. All the sensors were calibrated before full-scale experimental runs were performed. - MATLAB-Simulink model of a three-phase induction motor and pump system was completed. The model was used to simulate normal operation and fault conditions in the motor-pump system, and to identify changes in the electrical signatures. - A simulation model of an integral PWR (iPWR) was updated and the MATLAB-Simulink model was validated for known transients. The pump-motor model was interfaced with the iPWR model for testing the impact of primary flow perturbations (upsets) on

  17. In-line monitoring and optimization of powder flow in a simulated continuous process using transmission near infrared spectroscopy.

    Science.gov (United States)

    Alam, Md Anik; Shi, Zhenqi; Drennen, James K; Anderson, Carl A

    2017-06-30

    In-line monitoring of continuous powder flow is an integral part of the continuous manufacturing process of solid oral dosage forms in the pharmaceutical industry. Specifically, monitoring downstream from loss-in-weight (LIW) feeders and/or continuous mixers provides important data about the state of the process. Such measurements support control of the process and thereby enhance product quality. Near Infrared Spectroscopy (NIRS) is a potential PAT tool to monitor the homogeneity of a continuous powder flow stream in pharmaceutical manufacturing. However, associating analytical results from NIR sampling of the powder stream with the homogeneity (content uniformity) of the resulting tablets presents several challenges; appropriate sampling strategies, adequately robust modeling techniques and poor sensitivities (for low-dose APIs) are amongst them. Information from reflectance-based NIRS sampling is limited: the region of the powder bed that is interrogated is confined to the surface where the measurement is made. This potential bias in sampling may, in turn, limit the ability to predict the homogeneity of the finished dosage form. Further, changes to the processing parameters (e.g., rate of powder flow) often have a significant effect on the resulting data. Sample representation and the interdependence between process parameters and their effects on powder flow behavior are critical factors for NIRS monitoring of a continuous powder flow system. A transmission NIR method was developed as an alternative technique to monitor continuous powder flow and quantify API in the powder stream. Transmission NIRS was used to determine the thickness of the powder stream flowing from a loss-in-weight feeder. The thickness measurement of the powder stream provided an in-depth understanding of the effects of process parameters such as tube angles and powder flow rates on powder flow behavior. This knowledge-based approach helped to define an analytical design space that was

  18. Laser Doppler Blood Flow Imaging Using a CMOS Imaging Sensor with On-Chip Signal Processing

    Directory of Open Access Journals (Sweden)

    Cally Gill

    2013-09-01

    Full Text Available The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. To obtain a space efficient design over 64 × 64 pixels means that standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.

  19. Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.

    Science.gov (United States)

    He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P

    2013-09-18

    The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. To obtain a space efficient design over 64 × 64 pixels means that standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.
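
    Off-chip, the conventional laser Doppler perfusion estimate is the normalized first moment of the photocurrent power spectrum. The following is a minimal sketch of that computation; the band limits and the synthetic signal are illustrative, and the chip's analog pixel pipeline is not modeled.

    ```python
    import numpy as np

    def ldbf_perfusion(signal, fs, f_lo=20.0, f_hi=20000.0):
        """Laser Doppler perfusion estimate: first moment of the power spectrum
        over the Doppler band, normalized by the squared DC level."""
        sig = np.asarray(signal, float)
        dc = sig.mean()
        spec = np.abs(np.fft.rfft(sig - dc)) ** 2
        f = np.fft.rfftfreq(len(sig), d=1.0 / fs)
        band = (f >= f_lo) & (f <= f_hi)
        return np.sum(f[band] * spec[band]) / dc ** 2

    fs = 40000.0
    t = np.arange(0, 0.1, 1.0 / fs)
    rng = np.random.default_rng(3)
    photocurrent = 1.0 + 0.05 * rng.normal(size=t.size)  # DC light + Doppler "noise"
    print(ldbf_perfusion(photocurrent, fs))
    ```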

  20. A review of the heat flow data of NE Morocco

    Science.gov (United States)

    Chiozzi, Paolo; Barkaoui, Alae-Eddine; Rimi, Abdelkrim; Verdoya, Massimo; Zarhloule, Yassine

    2016-04-01

    background terrestrial heat flow and losing heat by conduction through the overlying cover. The results slightly modify the heat-flow picture proposed in previous investigations and point to negligible effects of advection. The heat flow ranges from 64 to 112 mW m-2, showing a variation in relation to the different tectonic units, and increases with the decrease of crustal thickness. Heat-flow data do not satisfactorily track the volcanism of the northeastern sector. The largest values (86-112 mW m-2) are found in the Oujda region, at the easternmost edge of the investigated area. The mantle origin of this thermal anomaly can be neither ruled out nor proved using only heat flow data, because ~15 Ma or less is too short a time to enhance the surface heat flow by pure conduction through a ~100 km-thick lithosphere. We speculate that the heat flow in the Oujda region might be related to subduction and rifting processes that occurred during the opening of the western Mediterranean basins.

  1. Flow measurement and control in the defense waste process

    International Nuclear Information System (INIS)

    Heckendorn, F.M. II.

    1985-01-01

    The Defense Waste Processing Facility (DWPF) for immobilizing Savannah River Plant (SRP) high-level radioactive waste is now under construction. Previously stored waste is retrieved and processed into a glass matrix for permanent storage. The equipment operates in an entirely remote environment for both processing and maintenance due to the highly radioactive nature of the waste. A fine powdered glass frit is mixed with the waste prior to its introduction as a slurry into an electric glass furnace. The slurry is Bingham plastic in nature and of high viscosity. This combination of factors has created significant problems in flow measurement and control. Specialized pieces of equipment have been demonstrated that will function properly in a highly abrasive environment while receiving no maintenance during their lifetime. Included are flow meters, flow control technology, flow switching, and remote connections. No plastics or elastomers are allowed in contact with fluids and all electronic components are mounted remotely. Both two- and three-way valves are used. Maintenance is by crane replacement of process sections, utilizing specialized connectors. All portions of the above are now operating full scale (radioactively cold) at the test facility at SRP. 4 references, 8 figures

  2. State Space Reduction of Linear Processes using Control Flow Reconstruction

    NARCIS (Netherlands)

    van de Pol, Jan Cornelis; Timmer, Mark

    2009-01-01

    We present a new method for fighting the state space explosion of process algebraic specifications, by performing static analysis on an intermediate format: linear process equations (LPEs). Our method consists of two steps: (1) we reconstruct the LPE's control flow, detecting control flow parameters

  3. State Space Reduction of Linear Processes Using Control Flow Reconstruction

    NARCIS (Netherlands)

    van de Pol, Jan Cornelis; Timmer, Mark; Liu, Zhiming; Ravn, Anders P.

    2009-01-01

    We present a new method for fighting the state space explosion of process algebraic specifications, by performing static analysis on an intermediate format: linear process equations (LPEs). Our method consists of two steps: (1) we reconstruct the LPE's control flow, detecting control flow parameters

  4. Processing of the WLCG monitoring data using NoSQL

    Science.gov (United States)

    Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.
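
    The record does not name the specific NoSQL technology evaluated, so the following pure-Python sketch only illustrates the document-oriented, map/reduce-style grouping that such stores apply to job-monitoring records; all field names are invented.

    ```python
    from collections import defaultdict

    # Invented job-monitoring "documents"; real records carry many more fields.
    jobs = [
        {"site": "SITE-A", "status": "completed", "wall_s": 3600},
        {"site": "SITE-A", "status": "failed",    "wall_s": 120},
        {"site": "SITE-B", "status": "completed", "wall_s": 5400},
        {"site": "SITE-B", "status": "completed", "wall_s": 4800},
    ]

    # Map step: emit one bucket per (site, status) pair
    buckets = defaultdict(list)
    for job in jobs:
        buckets[(job["site"], job["status"])].append(job["wall_s"])

    # Reduce step: per-bucket job counts and total wall-clock time
    for (site, status), walls in sorted(buckets.items()):
        print(f"{site} {status}: jobs={len(walls)}, wall={sum(walls)} s")
    ```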

  5. Processing of the WLCG monitoring data using NoSQL

    International Nuclear Information System (INIS)

    Andreeva, J; Beche, A; Karavakis, E; Saiz, P; Tuckett, D; Belov, S; Kadochnikov, I; Schovancova, J; Dzhunov, I

    2014-01-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.

  6. An improved algorithm of image processing technique for film thickness measurement in a horizontal stratified gas-liquid two-phase flow

    Energy Technology Data Exchange (ETDEWEB)

    Kuntoro, Hadiyan Yusuf, E-mail: hadiyan.y.kuntoro@mail.ugm.ac.id; Majid, Akmal Irfan; Deendarlianto, E-mail: deendarlianto@ugm.ac.id [Center for Energy Studies, Gadjah Mada University, Sekip K-1A Kampus UGM, Yogyakarta 55281 (Indonesia); Department of Mechanical and Industrial Engineering, Faculty of Engineering, Gadjah Mada University, Jalan Grafika 2, Yogyakarta 55281 (Indonesia); Hudaya, Akhmad Zidni; Dinaryanto, Okto [Department of Mechanical and Industrial Engineering, Faculty of Engineering, Gadjah Mada University, Jalan Grafika 2, Yogyakarta 55281 (Indonesia)

    2016-06-03

    Because two-phase flow research is important for industrial safety analysis, many researchers have developed various methods and techniques to study two-phase flow phenomena in industrial settings, such as the chemical, petroleum and nuclear industries. One such developing technique is image processing. This technique is widely used in two-phase flow research because of its non-intrusive capability to process large amounts of visualization data that contain many complexities. Moreover, this technique can capture direct visual information about the flow that is difficult to obtain with other methods and techniques. The main objective of this paper is to present an improved image-processing algorithm, building on a preceding algorithm, for stratified flow cases. The present algorithm can measure the film thickness (h_L) of stratified flow as well as the geometrical properties of the interfacial waves, with lower processing time and random-access memory (RAM) usage than the preceding algorithm. The measurement results are also intended to build a high-quality database of stratified flow, for which data are scarce. The present measurement results show satisfactory agreement with previous works.
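
    As a rough illustration of the kind of per-column interface detection such an algorithm performs (not the authors' actual code; the threshold, the calibration factor, and the assumption that the liquid phase images darker than the gas are all invented for the sketch):

    ```python
    import numpy as np

    def film_thickness_profile(frame, bottom_row, mm_per_px, threshold=128):
        """Per-column film thickness h_L from one grayscale frame.
        Assumes the liquid images darker than the gas and that every column
        contains some liquid; threshold and calibration are illustrative."""
        liquid = frame < threshold
        interface_row = np.argmax(liquid, axis=0)  # first liquid pixel per column
        return (bottom_row - interface_row) * mm_per_px

    # Synthetic frame: gas (bright) above a wavy liquid film (dark)
    frame = np.full((100, 200), 255, dtype=np.uint8)
    cols = np.linspace(0, 2 * np.pi, 200)
    interface = (80 - 10 * np.sin(cols)).astype(int)
    frame[np.arange(100)[:, None] >= interface] = 0
    h_L = film_thickness_profile(frame, bottom_row=99, mm_per_px=0.1)
    print(f"mean film thickness: {h_L.mean():.2f} mm")
    ```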

  7. Web-based execution of graphical work-flows: a modular platform for multifunctional scientific process automation

    International Nuclear Information System (INIS)

    De Ley, E.; Jacobs, D.; Ounsy, M.

    2012-01-01

    The Passerelle process automation suite offers a fundamentally modular solution platform, based on a layered integration of several best-of-breed technologies. It has been successfully applied by Synchrotron Soleil as the sequencer for data acquisition and control processes on its beamlines, integrated with TANGO as a control bus and GlobalScreen(TM) as the SCADA package. Since last year it has been used as the graphical work-flow component for the development of an Eclipse-based Data Analysis Work Bench at ESRF. The top layer of Passerelle exposes an actor-based development paradigm, based on the Ptolemy framework (UC Berkeley). Actors provide explicit reusability and strong decoupling, combined with an inherently concurrent execution model. Actor libraries exist for TANGO integration, web services, database operations, flow control, rules-based analysis, mathematical calculations, launching external scripts, etc. Passerelle's internal architecture is based on OSGi, the major Java framework for modular service-based applications. A large set of modules exists that can be recombined as desired to obtain different features and deployment models. Besides desktop versions of the Passerelle work-flow workbench, there is also the Passerelle Manager. It is a secured web application, including a graphical editor, for centralized design, execution, management and monitoring of process flows, integrating standard Java Enterprise services with OSGi. We present the internal technical architecture, some interesting application cases and the lessons learnt. (authors)

  8. Information systems for material flow management in construction processes

    Science.gov (United States)

    Mesároš, P.; Mandičák, T.

    2015-01-01

    The article describes the options for the management of material flows in the construction process. Management and resource planning is one of the key factors influencing the effectiveness of a construction project. It is very difficult to set these flows correctly, but several options and tools are now available to do so. Information systems and their modules can be used for the management of materials in the construction process.

  9. Experimental study of bubbly flow using image processing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Yucheng, E-mail: ycfu@vt.edu; Liu, Yang, E-mail: liu130@vt.edu

    2016-12-15

    This paper presents an experimental study of bubbly flows at relatively high void fractions using an advanced image processing method. Bubble overlapping is a common problem in such flows, and past studies often treat the overlapping bubbles as a whole, which introduces considerable measurement uncertainties. In this study, a hybrid method combining intersection point detection and watershed segmentation is used to separate the overlapping bubbles. In order to reconstruct bubbles from separated segments, a systematic procedure is developed which can preserve more features captured in the raw image than the simple ellipse fitting method. The distributions of void fraction, interfacial area concentration, number density and velocity are obtained from the extracted bubble information. High-speed images of air-water bubbly flows are acquired and processed for eight test runs conducted in a 30 mm × 10 mm rectangular channel. The developed image processing scheme can effectively separate overlapping bubbles, and the results compare well with the measurements by the gas flow meter and double-sensor conductivity probe. The development of the flows in the transverse and mainstream directions is analyzed and compared with the predictions made by the one-dimensional interfacial area transport equation (IATE) and the bubble number density transport equation.
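
    The intersection-point step of the hybrid method is specific to the paper, but the watershed half is a standard technique; a minimal distance-transform watershed in Python/scikit-image might look like this (the marker spacing and the toy mask are assumptions):

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def separate_bubbles(binary):
        """Split touching bubbles in a binary mask via distance-transform watershed."""
        distance = ndi.distance_transform_edt(binary)
        # One marker per local maximum of the distance map (minimum spacing assumed)
        peaks = peak_local_max(distance, min_distance=5, labels=binary.astype(int))
        markers = np.zeros(binary.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        return watershed(-distance, markers, mask=binary)

    # Toy mask: two overlapping discs standing in for an overlapping bubble pair
    yy, xx = np.mgrid[0:60, 0:80]
    mask = ((yy - 30) ** 2 + (xx - 30) ** 2 < 15 ** 2) | \
           ((yy - 30) ** 2 + (xx - 50) ** 2 < 15 ** 2)
    print(np.unique(separate_bubbles(mask)))  # 0 = background, 1 and 2 = bubbles
    ```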

  10. Transient flow analysis of integrated valve opening process

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Xinming; Qin, Benke; Bo, Hanliang, E-mail: bohl@tsinghua.edu.cn; Xu, Xingxing

    2017-03-15

    Highlights: • The control rod hydraulic driving system (CRHDS) is a new type of built-in control rod drive technology, and the integrated valve (IV) is its key control component. • The transient flow experiment induced by the IV is conducted and the test results are analyzed to establish its working mechanism. • A theoretical model of the IV opening process is established and applied to obtain the variation of the transient flow characteristic parameters. — Abstract: The control rod hydraulic driving system (CRHDS) is a new type of built-in control rod drive technology, and the IV is its key control component. The working principle of the integrated valve (IV) is analyzed and the IV hydraulic experiment is conducted. A transient flow phenomenon occurs in the valve opening process. The theoretical model of the IV opening process is established from the loop system control equations and boundary conditions. The valve opening boundary condition equation is established based on the IV three-dimensional flow field analysis results and the dynamic analysis of the valve core movement. The model calculation results are in good agreement with the experimental results. On this basis, the model is used to analyze the transient flow under high-temperature conditions. The peak pressure head is consistent with that at room temperature, and the pressure fluctuation period is longer than at room temperature. Furthermore, the variation of the pressure transients with the fluid and loop structure parameters is analyzed. The peak pressure increases with the flow rate and decreases as the valve opening time increases. The pressure fluctuation period increases with the loop pipe length, and the fluctuation amplitude remains largely unchanged under different equilibrium pressure conditions. The research results lay the basis for the vibration reduction analysis of the CRHDS.

  11. Interactive Data Exploration for High-Performance Fluid Flow Computations through Porous Media

    KAUST Repository

    Perovic, Nevena

    2014-09-01

    The advent of huge data in high-performance computing (HPC) applications such as fluid flow simulations usually hinders the interactive processing and exploration of simulation results. Such interactive data exploration not only allows scientists to 'play' with their data but also to visualise huge (distributed) data sets in both an efficient and easy way. Therefore, we propose an HPC data exploration service based on a sliding-window concept that enables researchers to access remote data (available on a supercomputer or cluster) during simulation runtime without exceeding any bandwidth limitations between the HPC back-end and the user front-end.

  12. The management and realizing of image data flow in PACS

    International Nuclear Information System (INIS)

    Tao Yonghao; Miao Jingtao

    2002-01-01

    Objective: To explore the management model and realization of PACS image data flow. Methods: Based on the implementation environment and management model of PACS image data flow after the full digital reengineering of the radiology department at Shanghai First Hospital was completed, the image data flow types, procedures, and implementation patterns were analyzed. Results: Two kinds of image data-flow management were set up for the PACS of Shanghai First Hospital: an image archiving procedure and an image forwarding procedure. The former was implemented with a central management model, while the latter was achieved with a program that functionally acted as a workflow manager running on the central server. Conclusion: The image data-flow management pattern, as a key factor for PACS, has to be designed and implemented functionally and effectively depending on the operating environment and on the tasks and requirements specific to particular users

  13. Exploring 4D Flow Data in an Immersive Virtual Environment

    Science.gov (United States)

    Stevens, A. H.; Butkiewicz, T.

    2017-12-01

    Ocean models help us to understand and predict a wide range of intricate physical processes which comprise the atmospheric and oceanic systems of the Earth. Because these models output an abundance of complex time-varying three-dimensional (i.e., 4D) data, effectively conveying the myriad information from a given model poses a significant visualization challenge. The majority of the research effort into this problem has concentrated around synthesizing and examining methods for representing the data itself; by comparison, relatively few studies have looked into the potential merits of various viewing conditions and virtual environments. We seek to improve our understanding of the benefits offered by current consumer-grade virtual reality (VR) systems through an immersive, interactive 4D flow visualization system. Our dataset is a Regional Ocean Modeling System (ROMS) model representing a 12-hour tidal cycle of the currents within New Hampshire's Great Bay estuary. The model data was loaded into a custom VR particle system application using the OpenVR software library and the HTC Vive hardware, which tracks a headset and two six-degree-of-freedom (6DOF) controllers within a 5m-by-5m area. The resulting visualization system allows the user to coexist in the same virtual space as the data, enabling rapid and intuitive analysis of the flow model through natural interactions with the dataset and within the virtual environment. Whereas a traditional computer screen typically requires the user to reposition a virtual camera in the scene to obtain the desired view of the data, in virtual reality the user can simply move their head to the desired viewpoint, completely eliminating the mental context switches from data exploration/analysis to view adjustment and back. The tracked controllers become tools to quickly manipulate (reposition, reorient, and rescale) the dataset and to interrogate it by, e.g., releasing dye particles into the flow field, probing scalar velocities

  14. New Techniques for Deep Learning with Geospatial Data using TensorFlow, Earth Engine, and Google Cloud Platform

    Science.gov (United States)

    Hancher, M.

    2017-12-01

    Recent years have seen promising results from many research teams applying deep learning techniques to geospatial data processing. In that same timeframe, TensorFlow has emerged as the most popular framework for deep learning in general, and Google has assembled petabytes of Earth observation data from a wide variety of sources and made them available in analysis-ready form in the cloud through Google Earth Engine. Nevertheless, developing and applying deep learning to geospatial data at scale has been somewhat cumbersome to date. We present a new set of tools and techniques that simplify this process. Our approach combines the strengths of several underlying tools: TensorFlow for its expressive deep learning framework; Earth Engine for data management, preprocessing, postprocessing, and visualization; and other tools in Google Cloud Platform to train TensorFlow models at scale, perform additional custom parallel data processing, and drive the entire process from a single familiar Python development environment. These tools can be used to easily apply standard deep neural networks, convolutional neural networks, and other custom model architectures to a variety of geospatial data structures. We discuss our experiences applying these and related tools to a range of machine learning problems, including classic problems like cloud detection, building detection, land cover classification, as well as more novel problems like illegal fishing detection. Our improved tools will make it easier for geospatial data scientists to apply modern deep learning techniques to their own problems, and will also make it easier for machine learning researchers to advance the state of the art of those techniques.
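
    As a sketch of the kind of model such a workflow trains (the patch size, band count and class count are placeholders, and the Earth Engine export/ingest steps are omitted), a small Keras CNN on image patches:

    ```python
    import tensorflow as tf

    # A real workflow would stream patches exported from Earth Engine
    # (e.g. as TFRecord) instead of the random tensors used here.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(64, 64, 6)),                 # 64x64 patches, 6 bands
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(4, activation="softmax"),    # 4 land cover classes
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    x = tf.random.uniform((8, 64, 64, 6))                  # fake training batch
    y = tf.random.uniform((8,), maxval=4, dtype=tf.int32)  # fake labels
    model.fit(x, y, epochs=1, verbose=0)
    ```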

  15. Data flow manager for DART

    International Nuclear Information System (INIS)

    Berg, D.; Black, D.; Slimmer, D.; Engelfried, J.; O'Dell, V.

    1994-04-01

    The DART Data Flow Manager (dfm) integrates a buffer manager with a requester/provider model for scheduling work on buffers. Buffer lists, representing built events or other data, are queued by service requesters to service providers. Buffers may be either internal (reside on the local node), or external (located elsewhere, e.g., dual ported memory). Internal buffers are managed locally. Wherever possible, dfm moves only addresses of buffers rather than buffers themselves
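
    A toy rendering of the requester/provider pattern described, with buffer lists queued to providers and only addresses moving (the names and structure are illustrative, not the DART implementation):

    ```python
    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class Buffer:
        address: int           # only the address travels, not the data itself
        internal: bool = True  # internal = local node; external = e.g. dual-ported memory

    @dataclass
    class Provider:
        name: str
        queue: deque = field(default_factory=deque)  # queued buffer lists

    class DataFlowManager:
        """Requesters queue buffer lists to named service providers."""
        def __init__(self):
            self.providers = {}

        def register(self, name):
            self.providers[name] = Provider(name)

        def request(self, provider, buffer_list):
            self.providers[provider].queue.append(buffer_list)

        def next_work(self, provider):
            return self.providers[provider].queue.popleft()

    dfm = DataFlowManager()
    dfm.register("event_builder")
    dfm.request("event_builder", [Buffer(0x1000), Buffer(0x2000, internal=False)])
    print(dfm.next_work("event_builder"))
    ```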

  16. Pre-processing data using wavelet transform and PCA based on support vector regression and gene expression programming for river flow simulation

    Indian Academy of Sciences (India)

    Abazar Solgi

    2017-07-14

    Jul 14, 2017. Pre-processing data using wavelet transform and PCA based on support vector regression and gene expression programming for river flow simulation. Abazar Solgi, Amir Pourhaghi, Ramin Bahmani and Heidar Zarei. Department of Water Resources Engineering, Shahid Chamran University of ...

  17. Flow modelling of plant processes for fault diagnosis

    International Nuclear Information System (INIS)

    Praetorius, N.; Duncan, K.D.

    1989-01-01

    Flow and its interruption or degradation is seen by many people in industry as the essential problem in fault diagnosis. This observation motivated the representation, presented here, of a complex simulation of a process plant. The display system we have developed represents the mass and energy flow functions of the plant and the relationships between such flow functions. In this report we shall mainly discuss how such a representation provides opportunities to design alarm systems as an integral part of the flow function representation itself, and to address two of the most intricate problems in diagnosis, namely symptom referral and confusable faults. (author)

  18. Development of system for product tracking and data acquisition of data irradiation process in large gamma irradiators

    International Nuclear Information System (INIS)

    Soares, Jose Roberto

    2010-01-01

    The sterilization of medical care products using ionizing radiation is a consolidated technique. In Brazil, gamma irradiators with capacities between 0.37 PBq (10 kCi) and 185 PBq (5 MCi), using the radioisotope 60Co as the radiation source, are in operation. The developed work provides accurate control and data acquisition for the application of Good Manufacturing Practices during all phases of an irradiation process, as required by ANVISA standards, ISO standards and IAEA technical recommendations for the treatment of foods and medical products. All the steps involved in the irradiation treatment are mapped into a process flow (work flow), where each agent (participant) has its systematized tasks. Data acquisition, monitoring and control are based on a set of tools (free software licenses) integrated by an efficient communication network, including the use of Web resources. Using the Multipurpose Gamma Irradiator of IPEN/CNEN/USP, all the development was performed so as to be applicable to irradiator facilities operating at industrial scale. The system enables complete traceability of the process, in real time, for any participant, as well as storage of the corresponding records to be audited. (author)

  19. Development of system for product tracking and data acquisition of data irradiation process in large gamma irradiators

    International Nuclear Information System (INIS)

    Soares, Jose R.; Rela, Paulo R.; Costa, Fabio E.

    2011-01-01

    The sterilization of medical care products using ionizing radiation is a consolidated technique. In Brazil, gamma irradiators with capacities between 0.37 PBq (10 kCi) and 185 PBq (5 MCi), using the radioisotope 60Co as the radiation source, are in operation. The developed work provides accurate control and data acquisition for the application of good manufacturing practices during all phases of an irradiation process, as required by ANVISA standards, ISO standards and IAEA technical recommendations for the treatment of foods and medical products. All the steps involved in the irradiation treatment are mapped into a process flow (work flow), where each agent (participant) has its systematized tasks. Automatic data acquisition, monitoring and control, using wireless ZigBee technology, are based on a set of tools (free software licenses) integrated by an efficient communication network, including the use of Web resources. Using the Multipurpose Gamma Irradiator at IPEN/CNEN-SP, all the development was performed so as to be applicable to irradiator facilities operating at industrial scale. The system enables complete traceability of the process, in real time, for any participant, as well as storage of the corresponding records to be audited. (author)

  20. Development of system for product tracking and data acquisition of data irradiation process in large gamma irradiators

    Energy Technology Data Exchange (ETDEWEB)

    Soares, Jose R., E-mail: joseroberto.soares@mackenzie.br [Universidade Presbiteriana Mackenzie. Escola de Engenharia. Sao Paulo, SP (Brazil); Rela, Paulo R.; Costa, Fabio E., E-mail: prela@ipen.br, E-mail: fecosta@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The sterilization of medical care products using ionizing radiation is a consolidated technique. In Brazil, gamma irradiators with capacities between 0.37 PBq (10 kCi) and 185 PBq (5 MCi), using the radioisotope 60Co as the radiation source, are in operation. The developed work provides accurate control and data acquisition for the application of good manufacturing practices during all phases of an irradiation process, as required by ANVISA standards, ISO standards and IAEA technical recommendations for the treatment of foods and medical products. All the steps involved in the irradiation treatment are mapped into a process flow (work flow), where each agent (participant) has its systematized tasks. Automatic data acquisition, monitoring and control, using wireless ZigBee technology, are based on a set of tools (free software licenses) integrated by an efficient communication network, including the use of Web resources. Using the Multipurpose Gamma Irradiator at IPEN/CNEN-SP, all the development was performed so as to be applicable to irradiator facilities operating at industrial scale. The system enables complete traceability of the process, in real time, for any participant, as well as storage of the corresponding records to be audited. (author)

  1. Making Data Flow Diagrams Accessible for Visually Impaired Students Using Excel Tables

    Science.gov (United States)

    Sauter, Vicki L.

    2015-01-01

    This paper addresses the use of Excel tables to convey information to blind students that would otherwise be presented using graphical tools, such as Data Flow Diagrams. These tables can supplement diagrams in the classroom when introducing their use to understand the scope of a system and its main sub-processes, on exams when answering questions…

  2. Tracer measurements compared to process data reconciliation in accordance with VDI 2048

    International Nuclear Information System (INIS)

    Hungerbuehler, Thomas; Langenstein, Magnus

    2007-01-01

    The feed water mass flow is the key measured variable used to determine the thermal reactor output in a nuclear power plant. Usually this parameter is recorded via venturi nozzles or orifice plates. The problem with both principles of measurement, however, is that an accuracy below 1% cannot be reached. For nuclear power plants, depending on the size of the plant, this corresponds to an electrical output of 4 MWel to 16 MWel. In order to make more accurate statements about the feed water amounts recirculated in the water-steam circuit, tracer measurements that offer an accuracy of up to 0.2% are used. A drawback of this method is that it provides only an instantaneous picture, not continuous operating information about the feed water mass flow. Process data reconciliation based on VDI 2048 is a mathematical-statistical procedure that makes use of redundant process information. The uncertainty of reconciled feed water flow rates and of the thermal reactor output calculated on this basis can be reduced to 0.4%. The overall process monitored continuously in this manner therefore provides hourly process information of a quality equal to that obtained with acceptance measurements. In the NPP Beznau both methods were used in parallel to determine the feed water flow rates in 2004 (unit 1) and 2005 (unit 2). Comparison of the results shows a high level of agreement between the reconciliation and the tracer measurements. For this reason it was decided that no further tracer measurements will be conducted. As a result of the findings of this comparison, a high level of acceptance of process data reconciliation based on VDI 2048 was achieved. (author)
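
    The core of such a reconciliation is a constrained weighted least-squares adjustment; the following is a minimal linear sketch with invented numbers and a single mass balance, rather than the full nonlinear heat balance VDI 2048 prescribes:

    ```python
    import numpy as np

    def reconcile(x, cov, A):
        """Least-squares reconciliation of measurements x with covariance cov,
        subject to linear balance constraints A @ x_hat = 0."""
        AVAT = A @ cov @ A.T
        x_hat = x - cov @ A.T @ np.linalg.solve(AVAT, A @ x)
        cov_hat = cov - cov @ A.T @ np.linalg.solve(AVAT, A @ cov)
        return x_hat, cov_hat

    # Toy splitter balance m1 = m2 + m3, measured with 1-sigma uncertainties
    x = np.array([100.0, 60.5, 41.0])      # kg/s (invented)
    cov = np.diag([1.0**2, 0.8**2, 0.8**2])
    A = np.array([[1.0, -1.0, -1.0]])      # m1 - m2 - m3 = 0
    x_hat, cov_hat = reconcile(x, cov, A)
    print(x_hat)                            # reconciled flows satisfy the balance
    print(np.sqrt(np.diag(cov_hat)))        # reduced uncertainties
    ```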

  3. Developing the technique of image processing for the study of bubble dynamics in subcooled flow boiling

    International Nuclear Information System (INIS)

    Donevski, Bozin; Saga, Tetsuo; Kobayashi, Toshio; Segawa, Shigeki

    1998-01-01

    This study presents the development of an image processing technique for studying the dynamic behavior of vapor bubbles in a two-phase bubbly flow. It focuses on the quantitative assessment of basic parameters such as local bubble size and size distribution in the void fraction range 0.03 < α < 0.07. The image processing methodology is based upon computer evaluation of high-speed motion pictures obtained from the flow field in the region of underdeveloped subcooled flow boiling for a variety of experimental conditions. This technique has the advantage of providing computer measurements and of extracting the bubbles of the two-phase bubbly flow. The method appears promising for determining the governing mechanisms in subcooled flow boiling, particularly near the point of net vapor generation. The data collected by the image analysis software can be incorporated into the new models and computer codes currently under development, which aim to incorporate the effects of vapor generation and condensation separately. (author)

  4. flowPeaks: a fast unsupervised clustering for flow cytometry data via K-means and density peak finding.

    Science.gov (United States)

    Ge, Yongchao; Sealfon, Stuart C

    2012-08-01

    For flow cytometry data, there are two common approaches to the unsupervised clustering problem: one is based on the finite mixture model and the other on spatial exploration of the histograms. The former is computationally slow and has difficulty identifying clusters of irregular shape. The latter approach cannot be applied directly to high-dimensional data, as the computational time and memory become unmanageable and the estimated histogram is unreliable. An algorithm without these two problems would be very useful. In this article, we combine ideas from the finite mixture model and histogram spatial exploration. This new algorithm, which we call flowPeaks, can be applied directly to high-dimensional data and can identify irregular-shape clusters. The algorithm first uses the K-means algorithm with a large K to partition the cell population into many small clusters. These partitioned data allow the generation of a smoothed density function using the finite mixture model. All local peaks are exhaustively searched by exploring the density function, and the cells are clustered by the associated local peak. The algorithm flowPeaks is automatic, fast, reliable, and robust to cluster shape and outliers. It has been applied to flow cytometry data and compared with state-of-the-art algorithms, including Misty Mountain, FLOCK, flowMeans, flowMerge and FLAME. The R package flowPeaks is available at https://github.com/yongchao/flowPeaks. yongchao.ge@mssm.edu Supplementary data are available at Bioinformatics online.
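
    A toy two-stage sketch in Python of the idea behind flowPeaks, not the published algorithm itself (the real method's bandwidth selection and peak search are more sophisticated; the radius below is an assumption):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def flowpeaks_like(data, k=50, radius=1.0):
        """Stage 1: K-means with large K. Stage 2: merge clusters whose centers
        hill-climb to the same local peak of a smoothed mixture density."""
        km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(data)
        centers, labels = km.cluster_centers_, km.labels_
        weights = np.bincount(labels, minlength=k) / len(data)

        def density(p):  # isotropic Gaussian mixture; bandwidth = radius (assumed)
            d2 = ((centers - p) ** 2).sum(axis=1)
            return float(np.sum(weights * np.exp(-d2 / (2 * radius ** 2))))

        dens = np.array([density(c) for c in centers])
        attractor = np.empty(k, dtype=int)
        for i in range(k):
            cur = i
            while True:  # move to the densest center in the neighbourhood
                d2 = ((centers - centers[cur]) ** 2).sum(axis=1)
                nearby = np.where(d2 < (3 * radius) ** 2)[0]
                best = nearby[np.argmax(dens[nearby])]
                if best == cur:
                    break
                cur = best
            attractor[i] = cur
        _, merged = np.unique(attractor[labels], return_inverse=True)
        return merged

    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(5, 1, (300, 2))])
    print(np.unique(flowpeaks_like(data)))  # a handful of merged clusters
    ```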

  5. FuGEFlow: data model and markup language for flow cytometry

    Directory of Open Access Journals (Sweden)

    Manion Frank J

    2009-06-01

    Full Text Available Abstract Background Flow cytometry technology is widely used in both health care and research. The rapid expansion of flow cytometry applications has outpaced the development of data storage and analysis tools. Collaborative efforts being taken to eliminate this gap include building common vocabularies and ontologies, designing generic data models, and defining data exchange formats. The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt) standard was recently adopted by the International Society for Advancement of Cytometry. This standard guides researchers on the information that should be included in peer reviewed publications, but it is insufficient for data exchange and integration between computational systems. The Functional Genomics Experiment (FuGE) formalizes common aspects of comprehensive and high throughput experiments across different biological technologies. We have extended the FuGE object model to accommodate flow cytometry data and metadata. Methods We used the MagicDraw modelling tool to design a UML model (Flow-OM) according to the FuGE extension guidelines and the AndroMDA toolkit to transform the model to a markup language (Flow-ML). We mapped each MIFlowCyt term to either an existing FuGE class or to a new FuGEFlow class. The development environment was validated by comparing the official FuGE XSD to the schema we generated from the FuGE object model using our configuration. After the Flow-OM model was completed, the final version of the Flow-ML was generated and validated against an example MIFlowCyt compliant experiment description. Results The extension of FuGE for flow cytometry has resulted in a generic FuGE-compliant data model (FuGEFlow), which accommodates and links together all information required by MIFlowCyt. The FuGEFlow model can be used to build software and databases using FuGE software toolkits to facilitate automated exchange and manipulation of potentially large flow cytometry experimental data sets

  6. FuGEFlow: data model and markup language for flow cytometry.

    Science.gov (United States)

    Qian, Yu; Tchuvatkina, Olga; Spidlen, Josef; Wilkinson, Peter; Gasparetto, Maura; Jones, Andrew R; Manion, Frank J; Scheuermann, Richard H; Sekaly, Rafick-Pierre; Brinkman, Ryan R

    2009-06-16

    Flow cytometry technology is widely used in both health care and research. The rapid expansion of flow cytometry applications has outpaced the development of data storage and analysis tools. Collaborative efforts being taken to eliminate this gap include building common vocabularies and ontologies, designing generic data models, and defining data exchange formats. The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt) standard was recently adopted by the International Society for Advancement of Cytometry. This standard guides researchers on the information that should be included in peer reviewed publications, but it is insufficient for data exchange and integration between computational systems. The Functional Genomics Experiment (FuGE) formalizes common aspects of comprehensive and high throughput experiments across different biological technologies. We have extended the FuGE object model to accommodate flow cytometry data and metadata. We used the MagicDraw modelling tool to design a UML model (Flow-OM) according to the FuGE extension guidelines and the AndroMDA toolkit to transform the model to a markup language (Flow-ML). We mapped each MIFlowCyt term to either an existing FuGE class or to a new FuGEFlow class. The development environment was validated by comparing the official FuGE XSD to the schema we generated from the FuGE object model using our configuration. After the Flow-OM model was completed, the final version of the Flow-ML was generated and validated against an example MIFlowCyt compliant experiment description. The extension of FuGE for flow cytometry has resulted in a generic FuGE-compliant data model (FuGEFlow), which accommodates and links together all information required by MIFlowCyt. The FuGEFlow model can be used to build software and databases using FuGE software toolkits to facilitate automated exchange and manipulation of potentially large flow cytometry experimental data sets. Additional project documentation, including

  7. Modeling erosion and sedimentation coupled with hydrological and overland flow processes at the watershed scale

    Science.gov (United States)

    Kim, Jongho; Ivanov, Valeriy Y.; Katopodes, Nikolaos D.

    2013-09-01

    A novel two-dimensional, physically based model of soil erosion and sediment transport coupled to models of hydrological and overland flow processes has been developed. The Hairsine-Rose formulation of erosion and deposition processes is used to account for size-selective sediment transport and differentiate bed material into original and deposited soil layers. The formulation is integrated within the framework of the hydrologic and hydrodynamic model tRIBS-OFM, Triangulated irregular network-based, Real-time Integrated Basin Simulator-Overland Flow Model. The integrated model explicitly couples the hydrodynamic formulation with the advection-dominated transport equations for sediment of multiple particle sizes. To solve the system of equations including both the Saint-Venant and the Hairsine-Rose equations, the finite volume method is employed based on Roe's approximate Riemann solver on an unstructured grid. The formulation yields space-time dynamics of flow, erosion, and sediment transport at fine scale. The integrated model has been successfully verified with analytical solutions and empirical data for two benchmark cases. Sensitivity tests to grid resolution and to the number of particle sizes used have been carried out. The model has been validated at the catchment scale for the Lucky Hills watershed located in southeastern Arizona, USA, using 10 events for which catchment-scale streamflow and sediment yield data were available. Since the model is based on physical laws and explicitly uses multiple types of watershed information, satisfactory results were obtained. The spatial output has been analyzed and the driving role of topography in erosion processes has been discussed. The integrated formulation of the model is expected to reduce uncertainties associated with typical parameterizations of flow and erosion processes. A potential for more credible modeling of earth-surface processes is thus anticipated.

  8. Data Workflow - A Workflow Model for Continuous Data Processing

    NARCIS (Netherlands)

    Wombacher, Andreas

    2010-01-01

    Online data or streaming data are getting more and more important for enterprise information systems, e.g. by integrating sensor data and workflows. The continuous flow of data provided e.g. by sensors requires new workflow models addressing the data perspective of these applications, since

  9. A perspective for biomedical data integration: Design of databases for flow cytometry

    Directory of Open Access Journals (Sweden)

    Lakoumentas John

    2008-02-01

    Full Text Available Abstract Background The integration of biomedical information is essential for tackling medical problems. We describe a data model in the domain of flow cytometry (FC) allowing for massive management, analysis and integration with other laboratory and clinical information. The paper is concerned with the proper translation of the Flow Cytometry Standard (FCS) into a relational database schema, in a way that facilitates end users either in doing research on FC or in studying specific cases of patients who have undergone FC analysis. Results The proposed database schema provides integration of data originating from diverse acquisition settings, organized in a way that allows syntactically simple queries that provide results significantly faster than conventional implementations of the FCS standard. The proposed schema can potentially achieve up to 8 orders of magnitude reduction in query complexity and up to 2 orders of magnitude reduction in response time for data originating from flow cytometers that record 256 colours. This is mainly achieved by maintaining an almost constant number of data-mining procedures regardless of the size and complexity of the stored information. Conclusion It is evident that using single-file data storage standards for the design of databases without any structural transformations significantly limits the flexibility of databases. Analysis of the requirements of a specific domain for integration and massive data processing can provide the necessary schema modifications that will unlock the additional functionality of a relational database.
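
    A minimal sketch of what such an FCS-to-relational translation could look like (this schema is invented for illustration and is not the paper's actual design):

    ```python
    import sqlite3

    # Hypothetical minimal schema: samples, their parameters (channels/stains),
    # and one row per event value, instead of one flat FCS file per sample.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE sample    (sample_id INTEGER PRIMARY KEY, patient_id TEXT,
                            acquired_at TEXT, cytometer TEXT);
    CREATE TABLE parameter (param_id INTEGER PRIMARY KEY,
                            sample_id INTEGER REFERENCES sample,
                            short_name TEXT, stain TEXT);
    CREATE TABLE event_value (sample_id INTEGER, event_no INTEGER,
                              param_id INTEGER REFERENCES parameter, value REAL);
    CREATE INDEX idx_ev ON event_value (param_id, value);
    """)
    conn.execute("INSERT INTO sample VALUES (1, 'P-001', '2008-01-15', 'CYT-1')")
    conn.execute("INSERT INTO parameter VALUES (1, 1, 'FL1-H', 'CD4-FITC')")
    conn.executemany("INSERT INTO event_value VALUES (1, ?, 1, ?)",
                     [(i, 100.0 + i) for i in range(5)])

    # A syntactically simple query that a flat single-file store cannot index:
    n = conn.execute("""SELECT COUNT(*) FROM event_value
                        WHERE param_id = 1 AND value > 102""").fetchone()[0]
    print(n)
    ```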

  10. Automatized material and radioactivity flow control tool in decommissioning process

    International Nuclear Information System (INIS)

    Rehak, I.; Vasko, M.; Daniska, V.; Schultz, O.

    2009-01-01

    In this presentation the automatized material and radioactivity flow control tool in the decommissioning process is discussed. It is concluded that: computer simulation of the decommissioning process is one of the important attributes of the computer code Omega; the integral material and radioactivity flow tools are among the basic tools for computer optimisation of decommissioning waste processing; all calculated material parameters are stored at each point of the calculation process and can be viewed; the computer code Omega is an open modular system that can be improved; and the module for optimisation of decommissioning waste processing will be improved in the frame of the improvement of material procedures and scenarios.

  11. Krohne Flow Indicator and High Flow Alarm - Local Indicator and High Flow Alarm of Helium Flow from the SCHe Purge Lines C and D to the Process Vent

    International Nuclear Information System (INIS)

    MISKA, C.R.

    2000-01-01

    Flow Indicators/alarms FI/FSH-5*52 and -5*72 are located in the process vent lines connected to the 2 psig SCHe purge lines C and D. They monitor the flow from the 2 psig SCHe purge going to the process vent. The switch/alarm is non-safety class GS

  12. Sulfur flows and biosolids processing: Using Material Flux Analysis (MFA) principles at wastewater treatment plants.

    Science.gov (United States)

    Fisher, R M; Alvarez-Gaitan, J P; Stuetz, R M; Moore, S J

    2017-08-01

    High flows of sulfur through wastewater treatment plants (WWTPs) may cause noxious gaseous emissions and corrosion of infrastructure, inhibit wastewater microbial communities, or contribute to acid rain if the biosolids or biogas are combusted. Yet sulfur is an important agricultural nutrient, and the direct application of biosolids to soils enables its beneficial re-use. Flows of sulfur throughout the biosolids processing of six WWTPs were investigated to identify how they were affected by biosolids processing configurations. The process of tracking sulfur flows through the sites also identified limitations in data availability and quality, highlighting future requirements for tracking substance flows. One site was investigated in more detail, showing sulfur speciation throughout the plant and tracking sulfur flows in odour control systems in order to quantify outflows to air, land and ocean sinks. While the majority of sulfur from WWTPs is removed as sulfate in the secondary effluent, the sulfur content of biosolids is valuable as it can be directly returned to soils to combat potential sulfur deficiencies. Biosolids processing configurations that focus on maximising solids recovery, through high-efficiency separation techniques in primary sedimentation tanks, thickeners and dewatering centrifuges, retain more sulfur in the biosolids. However, variations in sulfur loads and concentrations entering the WWTPs affect sulfur recovery in the biosolids, suggesting that industrial emitters and chemical dosing of iron salts are responsible for differences in recovery between sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
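
    The substance-flow bookkeeping behind such an analysis reduces to transfer coefficients applied at each unit process; a minimal sketch with invented numbers:

    ```python
    # Minimal mass-balance sketch of tracking one substance (sulfur) through
    # a treatment plant; all transfer coefficients below are invented.
    def propagate(load_in, transfer):
        """Split an input load (kg/day) among output flows.
        'transfer' maps flow name -> fraction; fractions must sum to 1."""
        assert abs(sum(transfer.values()) - 1.0) < 1e-9
        return {flow: load_in * f for flow, f in transfer.items()}

    influent_s = 1200.0  # kg S/day entering the plant (hypothetical)
    primary = propagate(influent_s,
                        {"primary_sludge": 0.15, "to_secondary": 0.85})
    secondary = propagate(primary["to_secondary"],
                          {"effluent_sulfate": 0.80, "waste_sludge": 0.18,
                           "gaseous_h2s": 0.02})
    biosolids = primary["primary_sludge"] + secondary["waste_sludge"]
    print(f"biosolids S: {biosolids:.0f} kg/day, "
          f"effluent S: {secondary['effluent_sulfate']:.0f} kg/day")
    ```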

  13. The Impact of Rhizosphere Processes on Water Flow and Root Water Uptake

    Science.gov (United States)

    Schwartz, Nimrod; Kroener, Eva; Carminati, Andrea; Javaux, Mathieu

    2015-04-01

    For many years, the rhizosphere, the zone of soil in the vicinity of the roots and influenced by them, has been known as a unique soil environment with physical, biological and chemical properties different from those of the bulk soil. Indeed, recent studies have shown that root exudates, and especially mucilage, alter the hydraulic properties of the soil, and that drying and wetting cycles of mucilage result in non-equilibrium water dynamics in the rhizosphere. While there is experimental evidence and a simplified 1D model for these concepts, an integrated model that considers rhizosphere processes together with a detailed model of water and root flow is absent. Therefore, the objective of this work is to develop a 3D physical model of water flow in the soil-plant continuum that takes into account root architecture and rhizosphere-specific properties. Ultimately, this model will enhance our understanding of the impact of processes occurring in the rhizosphere on water flow and root water uptake. To achieve this objective, we coupled R-SWMS, a detailed 3D model of water flow in the soil and root system (Javaux et al 2008), with the rhizosphere model developed by Kroener et al (2014). In the new Rhizo-RSWMS model the rhizosphere hydraulic properties differ from those of the bulk soil, and non-equilibrium dynamics between the rhizosphere water content and pressure head are also considered. We simulated a wetting scenario. The soil was initially dry and was wetted from the top at a constant flow rate. The model predicts that, after infiltration, the water content in the rhizosphere remained lower than in the bulk soil (non-equilibrium), but over time water infiltrated into the rhizosphere and eventually the water content in the rhizosphere became higher than in the bulk soil. These results are in qualitative agreement with the available experimental data on water dynamics in the rhizosphere. Additionally, the results show that rhizosphere processes

  14. Dynamic modeling of Shell entrained flow gasifier in an integrated gasification combined cycle process

    International Nuclear Information System (INIS)

    Lee, Hyeon-Hui; Lee, Jae-Chul; Joo, Yong-Jin; Oh, Min; Lee, Chang-Ha

    2014-01-01

    Highlights: • A detailed dynamic model of the Shell entrained flow gasifier was developed. • The model included sub-models of the reactor, membrane wall, gas quench and slag flow. • The dynamics of each zone of the gasifier, including the membrane wall, were analyzed. • The cold gas efficiency (81.82%), gas fractions and temperature agreed with Shell data. • The model could be used as part of an overall IGCC simulation. — Abstract: The Shell coal gasification system is a single-stage, up-flow, oxygen-blown gasifier which utilizes dry pulverized coal with an entrained flow mechanism. Moreover, it has a membrane wall structure and operates in the slagging mode. This work provides a detailed dynamic model of the 300 MW Shell gasifier developed for use as part of an overall IGCC (integrated gasification combined cycle) process simulation. The model consists of several sub-models: a volatilization zone, reaction zone, quench zone, slag zone, and membrane wall zone, including the heat transfer between the wall layers and steam generation. The dynamic results were illustrated, and the gasifier model was validated by comparing the steady-state results with the reference data. The product gases (H2 and CO) began to come out of the exit of the reaction zone within 0.5 s, and nucleate boiling heat transfer was dominant in the water zone of the membrane wall due to high heat fluxes. The steady state of the process was reached at nearly t = 500 s, and our simulation data for the steady state, such as the temperature and composition of the syngas, the cold gas efficiency (81.82%), and carbon conversion (near 1.0), were in good agreement with the reference data.

  15. Automated acquisition and processing of data from measurements on aerodynamic models

    International Nuclear Information System (INIS)

    Mantlik, F.; Pilat, M.; Schmid, J.

    1981-01-01

    Hardware and software are described for processing data measured in model research on local hydrodynamic conditions in fluid flow through channels with complex cross-sectional geometry, obtained using aerodynamic models of parts of fast reactor fuel assemblies of the HEM-1 and HEM-2 type. A system for automatic control of the experiments and acquisition of the measured data was proposed and is being implemented. Basic information is given on the programs for processing and storing the results using a GIER computer. A CAMAC system is primarily used as part of the hardware. (B.S.)

  16. Use of soil moisture dynamics and patterns at different spatio-temporal scales for the investigation of subsurface flow processes

    Directory of Open Access Journals (Sweden)

    T. Blume

    2009-07-01

    Full Text Available Spatial patterns as well as temporal dynamics of soil moisture have a major influence on runoff generation. The investigation of these dynamics and patterns can thus yield valuable information on hydrological processes, especially in data-scarce or previously ungauged catchments. The combination of spatially scarce but temporally high-resolution soil moisture profiles with episodic and thus temporally scarce moisture profiles at additional locations provides information on spatial as well as temporal patterns of soil moisture at the hillslope transect scale. This approach is better suited to difficult terrain (dense forest, steep slopes) than geophysical techniques and at the same time less cost-intensive than a high-resolution grid of continuously measuring sensors. Rainfall simulation experiments with dye tracers, while continuously monitoring the soil moisture response, allow visualization of flow processes in the unsaturated zone at these locations. Data were analyzed at different spatio-temporal scales using various graphical methods, such as space-time colour maps (for the event and plot scale) and binary indicator maps (for the long-term and hillslope scale). Annual dynamics of soil moisture and decimeter-scale variability were also investigated. The proposed approach proved successful in the investigation of flow processes in the unsaturated zone and showed the importance of preferential flow in the Malalcahuello Catchment, a data-scarce catchment in the Andes of Southern Chile. Fast response times of stream flow indicate that preferential flow observed at the plot scale might also be of importance at the hillslope or catchment scale. Flow patterns were highly variable in space but persistent in time. The most likely explanation for preferential flow in this catchment is a combination of hydrophobicity, small-scale heterogeneity in rainfall due to redistribution in the canopy and strong gradients in unsaturated conductivities leading to

  17. CoreFlow: a computational platform for integration, analysis and modeling of complex biological data.

    Science.gov (United States)

    Pasculescu, Adrian; Schoof, Erwin M; Creixell, Pau; Zheng, Yong; Olhovsky, Marina; Tian, Ruijun; So, Jonathan; Vanderlaan, Rachel D; Pawson, Tony; Linding, Rune; Colwill, Karen

    2014-04-04

    A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that is produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts into project-specific pipelines, tracks interdependencies between related tasks, and enables the generation of summary reports as well as publication-quality images. As a result, the gap between experimental and computational components of a typical large-scale biology project is reduced, decreasing the time between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-sourced software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion, and modeling of multiple/selected reaction monitoring (MRM/SRM) results. CoreFlow was purposely designed as an environment for programmers to rapidly perform data analysis. These analyses are assembled into project-specific workflows that are readily shared with biologists to guide the next stages of experimentation. Its simple yet powerful interface provides a structure where scripts can be written and tested virtually simultaneously to shorten the life cycle of code development for a particular task. The scripts are exposed at every step so that a user can quickly see the relationships between the data, the assumptions that have been made, and the manipulations that have been performed. Since the scripts use commonly available programming languages, they can easily be
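
    A minimal sketch of the pipeline pattern described, with tasks whose interdependencies are tracked and executed in dependency order (the task names and the use of Python's stdlib graphlib are illustrative assumptions, not CoreFlow's implementation):

    ```python
    from graphlib import TopologicalSorter

    def load_raw(ctx):
        ctx["raw"] = [1.0, 2.0, 3.0]            # stand-in for uploaded MS data

    def correct_labeling(ctx):
        # e.g. a correction step such as incomplete SILAC labeling
        ctx["corrected"] = [x * 0.98 for x in ctx["raw"]]

    def summarize(ctx):
        print("summary:", sum(ctx["corrected"]))

    # Each task lists its prerequisites; the sorter yields a valid run order.
    deps = {
        "load_raw": set(),
        "correct_labeling": {"load_raw"},
        "summarize": {"correct_labeling"},
    }
    steps = {"load_raw": load_raw, "correct_labeling": correct_labeling,
             "summarize": summarize}

    ctx = {}
    for name in TopologicalSorter(deps).static_order():
        steps[name](ctx)                         # dependencies always run first
    ```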

  18. Preface "Nonlinear processes in oceanic and atmospheric flows"

    Directory of Open Access Journals (Sweden)

    E. García-Ladona

    2010-05-01

    Full Text Available Nonlinear phenomena are essential ingredients in many oceanic and atmospheric processes, and successful understanding of them benefits from multidisciplinary collaboration between oceanographers, meteorologists, physicists and mathematicians. The present Special Issue on "Nonlinear Processes in Oceanic and Atmospheric Flows" contains selected contributions from attendees of the workshop which, in the above spirit, was held in Castro Urdiales, Spain, in July 2008. Here we summarize the Special Issue contributions, which include papers on the characterization of ocean transport in the Lagrangian and in the Eulerian frameworks, generation and variability of jets and waves, interactions of fluid flow with plankton dynamics or heavy drops, scaling in meteorological fields, and statistical properties of El Niño Southern Oscillation.

  19. Application of analogue computers to radiotracer data processing

    International Nuclear Information System (INIS)

    Chmielewski, A.G.

    1979-01-01

    Some applications of analogue computers for processing flow-system radiotracer-investigation data are presented. On the basis of an estimated transfer function, the impulse response can be analysed to obtain the frequency response of the system under consideration. Furthermore, simulation of the system behaviour for other excitation functions is discussed. A simple approach is presented for estimating the model parameters in situations where the input signal is not well approximated by the unit impulse function. (author)
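
    A digital stand-in for the analogue computation described: given a fitted transfer function, obtain the frequency response and simulate the response to a non-impulse input (the tanks-in-series model and all parameter values are assumptions for the sketch):

    ```python
    import numpy as np
    from scipy import signal

    # Two perfectly mixed tanks in series with time constant tau, a common
    # residence-time-distribution model: G(s) = 1 / (tau*s + 1)^2
    tau = 5.0
    system = signal.TransferFunction([1.0], [tau**2, 2 * tau, 1.0])

    w, mag, phase = signal.bode(system)            # frequency response

    t = np.linspace(0, 60, 601)
    u = np.exp(-0.5 * ((t - 10) / 2.0) ** 2)       # finite-width tracer injection
    tout, y, _ = signal.lsim(system, U=u, T=t)     # response to non-impulse input
    print(f"peak outlet response: {y.max():.3f}")
    ```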

  20. Development of a Computational Framework for Big Data-Driven Prediction of Long-Term Bridge Performance and Traffic Flow

    Science.gov (United States)

    2018-04-01

    Consistent efforts with dense sensor deployment and data gathering processes for bridge big data have accumulated profound information regarding bridge performance, associated environments, and traffic flows. However, direct applications of bridge bi...

  1. The R package EchoviewR for automated processing of active acoustic data using Echoview

    Directory of Open Access Journals (Sweden)

    Lisa-Marie Katarina Harrison

    2015-02-01

    Full Text Available Acoustic data are time-consuming to process due to the large data size and the requirement to undertake some data processing steps manually. Manual processing may introduce subjective, irreproducible decisions into the data processing work flow, reducing consistency in processing between surveys. We introduce the R package EchoviewR as an interface between R and Echoview, a commercially available acoustic processing software package. EchoviewR allows for automation of Echoview using scripting, which can drastically reduce the manual work required when processing acoustic surveys. This package plays an important role in reducing subjectivity in acoustic data processing by allowing exactly the same process to be applied automatically to multiple surveys and by documenting where subjective decisions have been made. Using data from a survey of Antarctic krill, we provide two examples of using EchoviewR: krill estimation and swarm detection.

  2. GenFlow: generic flow for integration, management and analysis of molecular biology data

    Directory of Open Access Journals (Sweden)

    Marcio Katsumi Oikawa

    2004-01-01

    Full Text Available A large number of DNA sequencing projects all over the world have yielded a fantastic amount of data, whose analysis is currently a big challenge for computational biology. The limiting step in this task is the integration of large volumes of data stored in highly heterogeneous repositories of genomic and cDNA sequences, as well as gene expression results. Solving this problem requires automated analytical tools to optimize operations and efficiently generate knowledge. This paper presents an information flow model, called GenFlow, that can tackle this analytical task.

  3. Computational Flow Dynamic Simulation of Micro Flow Field Characteristics Drainage Device Used in the Process of Oil-Water Separation

    Directory of Open Access Journals (Sweden)

    Guangya Jin

    2017-01-01

    Full Text Available Aqueous crude oil often contains large amounts of produced water and heavy sediment, which seriously threatens the safety of crude oil storage and transportation. Therefore, the proper design of a crude oil tank drainage device is a prerequisite for efficient purification of aqueous crude oil. In this work, the composition and physicochemical properties of crude oil samples were tested under the actual conditions encountered. Based on these data, an appropriate crude oil tank drainage device was developed using the floating-ball principle and multiphase flow. In addition, the flow field characteristics in the device were simulated, and the contours and streamtraces of velocity magnitude at nine different moments were obtained. Meanwhile, the improvement of the flow field characteristics after the addition of grids in the crude oil tank drainage device was validated. These findings provide insights into the development of effective selection methods and serve as important references for the oil-water separation process.

  4. CFD modeling of condensation process of water vapor in supersonic flows

    DEFF Research Database (Denmark)

    Yang, Yan; Walther, Jens Honore; Yan, Yuying

    2017-01-01

    The condensation phenomenon of vapor plays an important role in various industries, such as the steam flow in turbines and refrigeration systems. A mathematical model is developed to predict the spontaneous condensation phenomenon in supersonic flows using nucleation and droplet growth theories. The numerical approach is validated against experimental data, with good agreement between them. The condensation characteristics of water vapor in the Laval nozzle are described in detail. The results show that the condensation process is a rapid vapor-liquid phase change in both space and time. The spontaneous condensation of water vapor does not appear immediately when the steam reaches the saturation state. Instead, it occurs further downstream of the nozzle throat, where the steam is in a state of supersaturation.
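
    For context, wet-steam CFD models of this kind typically couple a classical nucleation rate with a droplet growth law. A generic classical-nucleation form (an illustration of the approach, not necessarily the exact closure used by these authors) is

    $$ J \;=\; \frac{q_c}{1+\phi}\,\frac{\rho_v^{2}}{\rho_l}\sqrt{\frac{2\sigma}{\pi m^{3}}}\;\exp\!\left(-\frac{4\pi r_c^{2}\,\sigma}{3 k_B T}\right), \qquad r_c \;=\; \frac{2\sigma}{\rho_l R_v T \ln S}, $$

    where $S$ is the supersaturation ratio, $\sigma$ the surface tension, $\rho_v$ and $\rho_l$ the vapor and liquid densities, $m$ the molecular mass, $q_c$ a condensation coefficient, and $\phi$ a non-isothermal correction. Nucleation only switches on once $S > 1$, which is consistent with condensation appearing downstream of the throat rather than at saturation.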

  5. Flows of engineered nanomaterials through the recycling process in Switzerland

    International Nuclear Information System (INIS)

    Caballero-Guzman, Alejandro; Sun, Tianyin; Nowack, Bernd

    2015-01-01

    Highlights: • Recycling is one of the likely end-of-life fates of nanoproducts. • We assessed the material flows of four nanomaterials in the Swiss recycling system. • After recycling, most nanomaterials will flow to landfills or incineration plants. • Recycled construction waste, plastics and textiles may contain nanomaterials. - Abstract: The use of engineered nanomaterials (ENMs) in diverse applications has increased in recent years and this will likely continue in the near future. As the number of applications increases, more and more waste containing nanomaterials will be generated. A portion of this waste will enter the recycling system, for example, in electronic products, textiles and construction materials. The fate of these materials during and after the waste management and recycling operations is poorly understood. The aim of this work is to model the flows of nano-TiO2, nano-ZnO, nano-Ag and CNT in the recycling system in Switzerland. The basis for this study is published information on the ENM flows in the Swiss system. We developed a method to assess their flows after recycling. To incorporate the uncertainties inherent in the limited information available, we applied a probabilistic material flow analysis approach. The results show that the recycling process does not result in significant further propagation of nanomaterials into new products. Instead, the largest proportion will flow as waste that can subsequently be properly handled in incineration plants or landfills. Smaller fractions of ENMs will be eliminated or end up in materials that are sent abroad to undergo further recovery processes. Only a reduced amount of ENMs will flow back into the productive process of the economy, in a limited number of sectors. Overall, the results suggest that risk assessment during recycling should focus on occupational exposure, release of ENMs in landfills and incineration plants, and toxicity assessment of a small number of recycled inputs.
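
    A probabilistic material flow analysis of this kind can be sketched in a few lines: transfer coefficients are treated as random variables, sampled many times, and renormalised so that each realization conserves mass. The compartments, coefficient ranges, and masses below are illustrative placeholders, not the published Swiss values.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical fate compartments and transfer-coefficient ranges for ENM
    # mass leaving the recycling step (illustrative, not the published figures).
    compartments = ["landfill", "incineration", "export", "recycled products"]
    low  = np.array([0.35, 0.30, 0.02, 0.01])
    high = np.array([0.55, 0.50, 0.10, 0.08])

    n_runs = 10_000
    input_mass = rng.triangular(40.0, 50.0, 60.0, size=n_runs)  # t/a, invented

    # Sample coefficients and renormalise each run so that mass is conserved.
    raw = rng.uniform(low, high, size=(n_runs, len(compartments)))
    tc = raw / raw.sum(axis=1, keepdims=True)

    flows = tc * input_mass[:, None]
    for name, f in zip(compartments, flows.T):
        print(f"{name:18s} median {np.median(f):5.1f} t/a "
              f"(P15-P85: {np.percentile(f, 15):.1f}-{np.percentile(f, 85):.1f})")
    ```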

  6. Improved Low Power FPGA Binding of Datapaths from Data Flow Graphs with NSGA II-based Schedule Selection

    Directory of Open Access Journals (Sweden)

    BHUVANESWARI, M. C.

    2013-11-01

    Full Text Available FPGAs are increasingly being used to implement datapath-intensive algorithms for signal processing and image processing applications. In high-level synthesis of data flow graphs targeted at FPGAs, the effect of interconnect resources such as multiplexers must be considered, since they contribute significantly to the area and switching power. We propose a binding framework for behavioral synthesis of data flow graphs (DFGs) onto FPGA targets with power reduction as the main criterion. The technique uses a multi-objective GA, NSGA II, for design space exploration to identify schedules that have the potential to yield low-power bindings from a population of non-dominated solutions. A greedy constructive binding technique reported in the literature is adapted for interconnect minimization. The binding is further subjected to a perturbation process by altering the register and multiplexer assignments. Results obtained on standard DFG benchmarks indicate that our technique yields better power-aware bindings than the constructive binding approach, with little or no area overhead.
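
    Design-space exploration with NSGA II rests on Pareto dominance and fast non-dominated sorting of candidate schedules. The following sketch shows that core step; the objective vectors are hypothetical (switching power, area) pairs, not benchmark results.

    ```python
    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimisation)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def fast_nondominated_sort(objs):
        """Group solution indices into fronts; front 0 is the non-dominated set."""
        n = len(objs)
        dominated_by_me = [[] for _ in range(n)]
        dom_count = [0] * n       # number of solutions dominating each i
        fronts = [[]]
        for i in range(n):
            for j in range(n):
                if i != j and dominates(objs[i], objs[j]):
                    dominated_by_me[i].append(j)
                elif i != j and dominates(objs[j], objs[i]):
                    dom_count[i] += 1
            if dom_count[i] == 0:
                fronts[0].append(i)
        while fronts[-1]:
            nxt = []
            for i in fronts[-1]:
                for j in dominated_by_me[i]:
                    dom_count[j] -= 1
                    if dom_count[j] == 0:
                        nxt.append(j)
            fronts.append(nxt)
        return fronts[:-1]

    # Hypothetical (power, area) objectives for five candidate schedules.
    print(fast_nondominated_sort([(3, 9), (4, 7), (5, 5), (6, 6), (4, 8)]))
    # -> [[0, 1, 2], [3, 4]]
    ```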

  7. Foundations of Total Functional Data-Flow Programming

    Directory of Open Access Journals (Sweden)

    Baltasar Trancón y Widemann

    2014-06-01

    Full Text Available The field of declarative stream programming (discrete time, clocked synchronous, modular, data-centric) is divided between the data-flow graph paradigm favored by domain experts, and the functional reactive paradigm favored by academics. In this paper, we describe the foundations of a framework for unifying functional and data-flow styles that differs from FRP proper in significant ways: It is based on set theory to match the expectations of domain experts, and the two paradigms are reduced symmetrically to a low-level middle ground, with strongly compositional semantics. The design of the framework is derived from mathematical first principles, in particular coalgebraic coinduction and a standard relational model of stateful computation. The abstract syntax and semantics introduced here constitute the full core of a novel stream programming language.
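
    As a loose illustration of the low-level middle ground described above (clocked, stateful stream functions), a synchronous stream transformer can be modelled as a Mealy-style step function driven over an input stream; Python stands in here for the paper's own set-theoretic formalism.

    ```python
    from typing import Callable, Iterable, Iterator, Tuple, TypeVar

    S = TypeVar("S")  # state
    A = TypeVar("A")  # input alphabet
    B = TypeVar("B")  # output alphabet

    def unfold(step: Callable[[S, A], Tuple[S, B]], s0: S,
               xs: Iterable[A]) -> Iterator[B]:
        """Run a Mealy-style transition function over a (possibly infinite) stream."""
        s = s0
        for x in xs:
            s, y = step(s, x)
            yield y

    # A one-place integrator: the output is the running sum of the inputs.
    integrate = lambda acc, x: (acc + x, acc + x)
    print(list(unfold(integrate, 0, [1, 2, 3, 4])))  # [1, 3, 6, 10]
    ```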

  8. Synthesis CNTs Particle Based Abrasive Media for Abrasive Flow Machining Process

    International Nuclear Information System (INIS)

    Kumar, Sonu; Walia, R.S; Dhull, S.; Murtaza, Q.; Tyagi, P. K.

    2016-01-01

    Abrasive flow machining (AFM) is a modern fine-finishing process used for the intricate and internal finishing of components or parts. It is based on the flow of a viscoelastic abrasive media over the surface to be finished. The abrasive media is an important parameter in the AFM process because of its ability to accurately abrade a predefined area along its flow path. In this study, an attempt is made to develop a new abrasive: alumina with carbon nanotubes (CNTs) in a viscoelastic medium. The CNTs were produced in house through the chemical vapour deposition technique and characterized through TEM. Performance evaluation of the new abrasive media is carried out with increasing CNT content at fixed extrusion pressure, media viscosity and media flow rate as process parameters, and with surface finish improvement and material removal as process responses in the AFM setup. Significant improvement has been observed in material removal, and a maximum improvement of 100% has been observed in the surface finish on the inner cylindrical surface of the cast iron workpiece. (paper)

  9. Modeling and flow analysis of pure nylon polymer for injection molding process

    International Nuclear Information System (INIS)

    Nuruzzaman, D M; Kusaseh, N; Basri, S; Hamedon, Z; Oumer, A N

    2016-01-01

    In the production of complex plastic parts, injection molding is one of the most popular industrial processes. This paper addresses the modeling and analysis of the flow process of nylon (polyamide) polymer for the injection molding process. To determine the best molding conditions, a series of simulations were carried out using Autodesk Moldflow Insight software, and the processing parameters were adjusted. This commercial mold-filling software simulates the cavity filling pattern along with the temperature and pressure distributions in the mold cavity. In the modeling, different flow parameters during the plastic's flow inside the mold cavity, such as fill time, pressure, temperature, shear rate and warp at different locations in the cavity, are analyzed. Overall, Moldflow is able to perform a relatively sophisticated analysis of the flow process of pure nylon. The prediction of the filling of a mold cavity is thus very important and useful before a nylon plastic part is manufactured. (paper)

  10. Modeling and flow analysis of pure nylon polymer for injection molding process

    Science.gov (United States)

    Nuruzzaman, D. M.; Kusaseh, N.; Basri, S.; Oumer, A. N.; Hamedon, Z.

    2016-02-01

    In the production of complex plastic parts, injection molding is one of the most popular industrial processes. This paper addresses the modeling and analysis of the flow process of nylon (polyamide) polymer for the injection molding process. To determine the best molding conditions, a series of simulations were carried out using Autodesk Moldflow Insight software, and the processing parameters were adjusted. This commercial mold-filling software simulates the cavity filling pattern along with the temperature and pressure distributions in the mold cavity. In the modeling, different flow parameters during the plastic's flow inside the mold cavity, such as fill time, pressure, temperature, shear rate and warp at different locations in the cavity, are analyzed. Overall, Moldflow is able to perform a relatively sophisticated analysis of the flow process of pure nylon. The prediction of the filling of a mold cavity is thus very important and useful before a nylon plastic part is manufactured.

  11. Hydrodynamic modelling and global datasets: Flow connectivity and SRTM data, a Bangkok case study.

    Science.gov (United States)

    Trigg, M. A.; Bates, P. B.; Michaelides, K.

    2012-04-01

    The rise of globally interconnected manufacturing supply chains requires an understanding and consistent quantification of flood risk at a global scale. Flood risk is often better quantified (or at least more precisely defined) in regions where there has been investment in comprehensive topographical data collection, such as LiDAR, coupled with detailed hydrodynamic modelling. Yet in regions where these data and models are unavailable, the implications of flooding and the knock-on effects for global industries can be dramatic, as evidenced by the recent floods in Bangkok, Thailand. There is growing momentum in global modelling initiatives to address this lack of a consistent understanding of flood risk, and they will rely heavily on the application of available global datasets relevant to hydrodynamic modelling, such as Shuttle Radar Topography Mission (SRTM) data and its derivatives. These global datasets bring opportunities to apply consistent methodologies on an automated basis in all regions, while the use of coarser-scale datasets also brings many challenges, such as sub-grid process representation and downscaled hydrology data from global climate models. There are significant opportunities for hydrological science in helping define new, realistic and physically based methodologies that can be applied globally, as well as the possibility of gaining new insights into flood risk through analysis of the many large datasets that will be derived from this work. We use Bangkok as a case study to explore some of the issues related to using these available global datasets for hydrodynamic modelling, with particular focus on using SRTM data to represent topography. Research has shown that flow connectivity on the floodplain is an important component in the dynamics of flood flows onto and off the floodplain, and indeed within different areas of the floodplain. A lack of representation of flow connectivity, often due to data resolution limitations, means
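
    As a small illustration of the kind of terrain-based connectivity analysis involved, the following sketch computes steepest-descent (D8) flow directions on a toy DEM grid. It is a generic method applicable to SRTM-like rasters, not the authors' model.

    ```python
    import numpy as np

    def d8_flow_directions(dem: np.ndarray) -> np.ndarray:
        """Steepest-descent (D8) neighbour step for each interior cell.

        Returns an array of (drow, dcol) steps; (0, 0) marks a pit or flat.
        """
        nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                (0, 1), (1, -1), (1, 0), (1, 1)]
        out = np.zeros(dem.shape + (2,), dtype=int)
        nrow, ncol = dem.shape
        for r in range(1, nrow - 1):
            for c in range(1, ncol - 1):
                drops = []
                for dr, dc in nbrs:
                    dist = (dr * dr + dc * dc) ** 0.5   # diagonal steps are longer
                    drops.append(((dem[r, c] - dem[r + dr, c + dc]) / dist, (dr, dc)))
                best_drop, step = max(drops)
                out[r, c] = step if best_drop > 0 else (0, 0)
        return out

    dem = np.array([[5., 5., 5., 5.],
                    [5., 4., 3., 5.],
                    [5., 3., 2., 5.],
                    [5., 5., 1., 5.]])
    print(d8_flow_directions(dem)[1, 1])  # [1 1]: cell drains towards the low corner
    ```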

  12. An experimental study of fluidization behavior using flow visualization and image processing

    International Nuclear Information System (INIS)

    Laan, Flavio T. van der; Sefidvash, Farhang; Cornelius, Vanderli

    2000-01-01

    A program of experimental study of the fluidization of heavy spherical pellets with water using an image processing technique has been started in the Nuclear Engineering Department of the Federal University of Rio Grande do Sul. Fluidization for application in nuclear reactors requires very detailed knowledge of its behavior, as the reactivity is closely dependent on the porosity of the fluidized bed. A small modular nuclear reactor concept with a suspended core is under study. A modified version of the reactor makes the shape of the reactor core conical in order to produce a non-fluctuating bed and consequently guarantee the dynamic stability of the reactor. Steel balls of 5 mm diameter are fluidized with water in a conical Plexiglass tube. A pump circulates the water in a loop, feeding room-temperature water from the tank into the fluidization system and returning it to the tank. A controllable valve controls the flow velocity. A high-velocity digital CCD camera captures images of the pellets moving in the fluidization tube. At different flow velocities, the individual pellets can be tracked by processing the sequential frames. A DVT digital tape recorder stores the images, which are acquired through an interface board into a microcomputer, where a special program processes the data later on. Different image-processing algorithms determine the velocity fields of the pellets. The behavior of the pellets under different flow velocities and porosities is carefully studied. (author)
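
    The frame-by-frame pellet tracking described above can be sketched with background subtraction and connected-component labelling. This is a generic illustration (the threshold and the centroid-matching step are assumptions), not the study's actual algorithm.

    ```python
    import numpy as np
    from scipy import ndimage

    def pellet_centroids(frame: np.ndarray, background: np.ndarray,
                         thresh: float = 30.0):
        """Background subtraction + connected-component labelling: returns the
        (row, col) centroids of moving pellets in a single frame."""
        mask = np.abs(frame.astype(float) - background.astype(float)) > thresh
        labels, n = ndimage.label(mask)
        return ndimage.center_of_mass(mask, labels, range(1, n + 1))

    # Velocity fields follow from matching centroids between consecutive frames
    # and dividing the displacements by the camera's inter-frame time.
    ```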

  13. Influence of Processing Parameters on the Flow Path in Friction Stir Welding

    Science.gov (United States)

    Schneider, J. A.; Nunes, A. C., Jr.

    2006-01-01

    Friction stir welding (FSW) is a solid-phase welding process that unites thermal and mechanical aspects to produce a high-quality joint. The process variables are rpm, translational weld speed, and downward plunge force. The strain-temperature history of a metal element at each point on the cross-section of the weld is determined by the individual flow path taken by the particular filament of metal flowing around the tool, as influenced by the process variables. The resulting properties of the weld are determined by this strain-temperature history. Thus, to control FSW properties, an improved understanding of the influence of the processing parameters on the metal flow path is necessary.

  14. Material flow-based economic assessment of landfill mining processes.

    Science.gov (United States)

    Kieckhäfer, Karsten; Breitenstein, Anna; Spengler, Thomas S

    2017-02-01

    This paper provides an economic assessment of alternative processes for landfill mining compared to landfill aftercare, with the goal of assisting landfill operators in choosing between the two alternatives. A material flow-based assessment approach is developed and applied to a landfill in Germany. In addition to landfill aftercare, six alternative landfill mining processes are considered. These range from simple approaches, where most of the material is incinerated or landfilled again, to sophisticated technology combinations that allow highly differentiated products such as metals, plastics, glass, recycling sand, and gravel to be recovered. For each alternative, the net present value of all relevant cash flows associated with plant installation and operation, supply, recycling, and disposal of material flows, recovery of land and landfill airspace, as well as landfill closure and aftercare, is computed, together with extensive sensitivity analyses. The economic performance of landfill mining processes is found to be significantly influenced by the prices of thermal treatment (waste incineration as well as refuse-derived fuel incineration plants) and of recovered land or airspace. The results indicate that the simple process alternatives have the highest economic potential, which contradicts the aim of recovering most of the resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
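
    The core of such an assessment is the net present value of each alternative's yearly cash flows. A minimal sketch, with purely illustrative figures rather than the paper's data:

    ```python
    def npv(rate: float, cash_flows) -> float:
        """Net present value of yearly cash flows; the t = 0 flow comes first."""
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    # Illustrative only: up-front plant installation, yearly operating costs,
    # and a final year with revenues from recovered materials and airspace.
    flows = [-2_000_000] + [-150_000] * 9 + [900_000]   # EUR, 11 years
    print(f"NPV at 5%: {npv(0.05, flows):,.0f} EUR")
    ```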

  15. Laminar flow and convective transport processes scaling principles and asymptotic analysis

    CERN Document Server

    Brenner, Howard

    1992-01-01

    Laminar Flow and Convective Transport Processes: Scaling Principles and Asymptotic Analysis presents analytic methods for the solution of fluid mechanics and convective transport processes, all in the laminar flow regime. This book brings together the results of almost 30 years of research on the use of nondimensionalization, scaling principles, and asymptotic analysis into a comprehensive form suitable for presentation in a core graduate-level course on fluid mechanics and the convective transport of heat. A considerable amount of material on viscous-dominated flows is covered.A unique feat

  16. TRAQ I, a CAMAC system for multichannel data acquisition, storage and processing

    International Nuclear Information System (INIS)

    Broad, A.S.; Jordan, C.L.; Kojola, P.H.; Miller, M.

    1983-01-01

    Multichannel, high-speed signal sources generate large amounts of data which cannot be handled in real time on the CAMAC dataway. TRAQ I is a modular CAMAC system designed to buffer and process data of this type. The system can acquire data from up to 256 sources (ADCs etc.) and store them in local memory (4 Mbytes). Many different signal sources can be controlled, working in either a histogramming or a sequential mode. The system's data transfer bus is designed to accommodate other modules which can pre- or post-process the data. Pre-processors can either intercept the data flow to memory for data compaction or passively monitor it, looking for signal excursions, etc. Post-processors access memory to process and rewrite the data or transmit them to other devices.

  17. Development process of muzzle flows including a gun-launched missile

    Directory of Open Access Journals (Sweden)

    Zhuo Changfei

    2015-04-01

    Full Text Available Numerical investigations of the launch process of a gun-launched missile, from the muzzle of a cannon to the free-flight stage, have been performed in this paper. The dynamic overlapped-grid approach is applied to deal with the problem of a moving gun-launched missile. The high-resolution upwind scheme (AUSMPW+) and a detailed reaction kinetics model are adopted to solve the chemical non-equilibrium Euler equations on dynamic grids. The development process and flow field structure of muzzle flows including a gun-launched missile are discussed in detail. The present numerical study confirms that complicated transient phenomena exist in the brief launching stages when the gun-launched missile moves from the muzzle of the cannon to the free-flight stage. The propellant gas flow, the initial ambient air flow and the moving missile mutually couple and interact. A complete flow field structure is formed during the launching stages, including the blast wave, base shock, reflected shock, incident shock, shear layer, primary vortex ring and triple point.

  18. Data-flow Analysis of Programs with Associative Arrays

    Directory of Open Access Journals (Sweden)

    David Hauzar

    2014-05-01

    Full Text Available Dynamic programming languages, such as PHP, JavaScript, and Python, provide built-in data structures including associative arrays and objects with similar semantics—object properties can be created at run-time and accessed via arbitrary expressions. While a high level of security and safety of applications written in these languages can be of particular importance (consider a web application storing sensitive data and providing its functionality worldwide), dynamic data structures pose significant challenges for data-flow analysis, making traditional static verification methods both unsound and imprecise. In this paper, we propose a sound and precise approach for value and points-to analysis of programs with associative-array-like data structures, upon which data-flow analyses can be built. We implemented our approach in the web-application domain—in an analyzer of PHP code.
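
    The key difficulty such an analysis faces is that a write through an unknown key may affect any entry of an associative array, forcing a weak update. A toy value-analysis store illustrating strong versus weak updates (a sketch of the general idea, not the authors' abstract domain):

    ```python
    from collections import defaultdict

    # Minimal flavour of a value analysis for associative arrays: an abstract
    # store maps (variable, key) pairs to sets of possible values; an unknown
    # key (written None here) folds over every existing entry of the array.
    store = defaultdict(set)

    def write(var, key, values):
        if key is None:                      # $a[$i] = v with statically unknown $i
            for (v, k) in list(store):
                if v == var:
                    store[(v, k)] |= values  # weak update: the write may hit any key
        else:
            store[(var, key)] = set(values)  # strong update: the key is known

    write("a", "x", {1})
    write("a", "y", {2})
    write("a", None, {3})
    print(store[("a", "x")])  # {1, 3}: after the unknown write, either value is possible
    ```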

  19. Coupling of Processes and Data in PennState Integrated Hydrologic Modeling (PIHM) System

    Science.gov (United States)

    Kumar, M.; Duffy, C.

    2007-12-01

    Full physical coupling, "natural" numerical coupling, and parsimonious but accurate data coupling are needed to comprehensively and accurately capture the interaction between the different components of a hydrologic continuum. Here we present a physically based, spatially distributed hydrologic model that incorporates all three coupling strategies. Physical coupling is performed for interception, snow melt, transpiration, overland flow, subsurface flow, river flow, macropore-based infiltration and stormflow, flow through and over hydraulic structures like weirs and dams, and evaporation from interception, the ground, and overland flow. All the physically coupled components are numerically coupled through the semi-discrete form of the ordinary differential equations that define each hydrologic process, using a finite-volume approach. The fully implicit solution methodology, using the CVODE solver, solves for all state variables simultaneously at each adaptive time step, providing robustness, stability and accuracy. Accurate data coupling is aided by the use of constrained unstructured meshes, a flexible data model, and the use of PIHMgis. The spatial adaptivity of the decomposed domain and the temporal adaptivity of the numerical solver facilitate capture of the varied spatio-temporal scales inherent in hydrologic process interactions. The model has been applied to the meso-scale Little Juniata Watershed. Model results are validated by comparison of streamflow at multiple locations. We discuss some of the interesting hydrologic interactions between surface, subsurface and atmosphere witnessed during the year-long simulation, such as (a) the inverse relationship between evaporation from interception storage and transpiration, (b) the relative influence of forcing (precipitation, temperature and radiation) and source (soil moisture and overland flow) on evaporation, and (c) the influence of local topography on the gaining, losing or "flow-through" behavior of river-aquifer interactions.
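
    The "solve all state variables simultaneously" strategy can be illustrated with a toy pair of coupled storage ODEs integrated by an implicit stiff solver (SciPy's BDF method standing in for CVODE; all rates are invented):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    P = 1.0e-3                                # rainfall, m/h (invented)
    k_inf, k_base, k_off = 0.05, 0.01, 0.02   # first-order rates, 1/h (invented)

    def rhs(t, y):
        surface, subsurface = y
        infiltration = k_inf * surface
        return [P - infiltration - k_off * surface,   # rain minus infiltration and runoff
                infiltration - k_base * subsurface]   # recharge minus baseflow

    sol = solve_ivp(rhs, (0.0, 240.0), [0.0, 0.05], method="BDF", rtol=1e-8)
    print(sol.y[:, -1])   # surface and subsurface storage (m) after 240 h
    ```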

  20. Experimental and Numerical Modeling of Fluid Flow Processes in Continuous Casting: Results from the LIMMCAST-Project

    Science.gov (United States)

    Timmel, K.; Kratzsch, C.; Asad, A.; Schurmann, D.; Schwarze, R.; Eckert, S.

    2017-07-01

    The present paper reports on numerical simulations and model experiments concerned with the fluid flow in the continuous casting process of steel. This work was carried out in the LIMMCAST project in the framework of the Helmholtz alliance LIMTECH. A brief description of the LIMMCAST facilities used for the experimental modeling at HZDR is given here. Ultrasonic and inductive techniques and X-ray radioscopy were employed for flow measurements or visualizations of the two-phase flow regimes occurring in the submerged entry nozzle and the mold. Corresponding numerical simulations were performed at TUBAF, taking into account the dimensions and properties of the model experiments. The numerical models were successfully validated against the experimental data base. The reasonable, and in many cases excellent, agreement of numerical with experimental data allows the models to be extrapolated to real casting configurations. Exemplary results are presented here showing the effect of electromagnetic brakes or electromagnetic stirrers on the flow in the mold, and illustrating the properties of two-phase flows resulting from Ar injection through the stopper rod.

  1. Rotating thermal flows in natural and industrial processes

    CERN Document Server

    Lappa, Marcello

    2012-01-01

    Rotating Thermal Flows in Natural and Industrial Processes provides the reader with a systematic description of the different types of thermal convection and flow instabilities in rotating systems, as present in materials, crystal growth, thermal engineering, meteorology, oceanography, geophysics and astrophysics. It expressly shows how the isomorphism between small and large scale phenomena becomes beneficial to the definition and ensuing development of an integrated comprehensive framework.  This allows the reader to understand and assimilate the underlying, quintessential mechanisms withou

  2. Data processing

    International Nuclear Information System (INIS)

    Cousot, P.

    1988-01-01

    The 1988 progress report of the Data Processing Laboratory (Polytechnic School, France) is presented. The laboratory's research fields are: semantics, the testing and semantic analysis of codes, formal calculus, software applications, algorithms, neural networks and VLSI (Very Large Scale Integration). The investigations concerning polynomial rings are performed by means of the standard basis approach. Among the research topics are Pascal codes, parallel processing, the combinatorial, statistical and asymptotic properties of fundamental data processing tools, signal processing and pattern recognition. The published papers, the congress communications and the theses are also included. [fr]

  3. flowAI: automatic and interactive anomaly discerning tools for flow cytometry data.

    Science.gov (United States)

    Monaco, Gianni; Chen, Hao; Poidinger, Michael; Chen, Jinmiao; de Magalhães, João Pedro; Larbi, Anis

    2016-08-15

    Flow cytometry (FCM) is widely used in both clinical and basic research to characterize cell phenotypes and functions. The latest FCM instruments analyze up to 20 markers of individual cells, producing high-dimensional data. This requires the use of the latest clustering and dimensionality reduction techniques to automatically segregate cell sub-populations in an unbiased manner. However, automated analyses may lead to false discoveries due to inter-sample differences in quality and properties. We present an R package, flowAI, containing two methods to clean FCM files of unwanted events: (i) an automatic method that adopts algorithms for the detection of anomalies and (ii) an interactive method with a graphical user interface implemented as an R shiny application. The general approach behind the two methods consists of three key steps to check for and remove suspected anomalies that derive from (i) abrupt changes in the flow rate, (ii) instability of signal acquisition and (iii) outliers below the lower limit and margin events at the upper limit of the dynamic range. For each file analyzed, our software generates a summary of the quality assessment from the aforementioned steps. The software presented is an intuitive solution seeking to improve the results not only of manual but also, and in particular, of automatic analysis of FCM data. R source code available through Bioconductor: http://bioconductor.org/packages/flowAI/. Contacts: mongianni1@gmail.com or Anis_Larbi@immunol.a-star.edu.sg. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
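
    The first of the three steps, detecting abrupt changes in the flow rate, amounts to flagging time bins whose event rate is anomalous. A simplified sketch of that idea (a robust median/MAD rule, not flowAI's actual algorithm):

    ```python
    import numpy as np

    def flag_flow_rate_anomalies(event_times, bin_s=0.1, n_mad=5.0):
        """Flag time bins whose event count deviates from the median count by
        more than n_mad median absolute deviations."""
        edges = np.arange(event_times.min(), event_times.max() + bin_s, bin_s)
        counts, _ = np.histogram(event_times, bins=edges)
        med = np.median(counts)
        mad = np.median(np.abs(counts - med)) or 1.0
        return np.abs(counts - med) > n_mad * mad

    rng = np.random.default_rng(0)
    t = np.sort(np.concatenate([rng.uniform(0.0, 10.0, 20_000),
                                np.full(3_000, 4.05)]))   # simulated clog/burst
    print(np.where(flag_flow_rate_anomalies(t))[0])       # flags the bin near t = 4
    ```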

  4. Extensible packet processing architecture

    Science.gov (United States)

    Robertson, Perry J.; Hamlet, Jason R.; Pierson, Lyndon G.; Olsberg, Ronald R.; Chun, Guy D.

    2013-08-20

    A technique for distributed packet processing includes sequentially passing packets associated with packet flows between a plurality of processing engines along a flow-through data bus linking the plurality of processing engines in series. At least one packet within a given packet flow is marked by a given processing engine to signify to the other processing engines that the given processing engine has claimed the given packet flow for processing. A processing function is applied to each of the packet flows within the processing engines and the processed packets are output on a time-shared, arbitrated data bus coupled to the plurality of processing engines.
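
    The claiming scheme requires every engine to agree on which engine owns a flow. One simple way to sketch that idea is deterministic hashing of the flow's 5-tuple (an illustration of flow ownership, not the patented marking protocol itself):

    ```python
    import hashlib
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Flow:
        src: str
        dst: str
        sport: int
        dport: int
        proto: str

    def claiming_engine(flow: Flow, n_engines: int) -> int:
        """Deterministically map a flow to the engine that claims it, so every
        engine on the shared bus reaches the same conclusion without extra
        coordination."""
        key = f"{flow.src}|{flow.dst}|{flow.sport}|{flow.dport}|{flow.proto}"
        return int(hashlib.sha256(key.encode()).hexdigest(), 16) % n_engines

    f = Flow("10.0.0.1", "10.0.0.2", 4432, 443, "tcp")
    print(claiming_engine(f, 4))  # the same flow always lands on the same engine
    ```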

  5. Influence of lateral groundwater flow in a shallow aquifer on eco-hydrological process in a shrub-grass coexistence semiarid area

    Science.gov (United States)

    Wang, Siru; Sun, Jinhua; Lei, Huimin; Zhu, Qiande; Jiang, Sanyuan

    2017-04-01

    Topography has a considerable influence on eco-hydrological processes through the patterns of solar radiation distribution and lateral water flow. However, little quantitative information is available on the contribution of lateral groundwater flow to ecological processes such as vegetation growth and evapotranspiration. To fill this gap, we used a simple eco-hydrological model based on water balance, with a 3D groundwater module that uses Darcy's law. The model was applied to a non-contributing area of 50 km² dominated by grassland and shrubland with an underlying shallow aquifer. It was calibrated using manually and remotely sensed vegetation data and water flux data observed by the eddy covariance systems of two flux towers, as well as water table data obtained from HOBO recorders in 40 wells. The results demonstrate that the maximum hydraulic gradient and the maximum flux of lateral groundwater flow reached 0.156 m m⁻¹ and 0.093 m³ s⁻¹, respectively. The average annual maximum LAI in grassland, predominantly in low-lying areas, improved by about 5.9%, while that in shrubland, predominantly in high-lying areas, remained the same when lateral groundwater flow was adequately considered, compared with the case without lateral groundwater flow. The results also show that LAI is positively and nonlinearly related to evapotranspiration, and that the greater the magnitude of evapotranspiration, the smaller the rate of increase of LAI. These findings suggest that lateral groundwater flow should not be neglected when simulating eco-hydrological processes in areas with a shallow aquifer.
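
    The lateral groundwater term follows Darcy's law, q = -K ∇h. A minimal grid sketch of that computation (conductivity, spacing, and heads are invented values):

    ```python
    import numpy as np

    K = 1.2e-4                       # hydraulic conductivity, m/s (invented)
    dx = dy = 100.0                  # grid spacing, m
    h = np.array([[10.0, 9.8, 9.5],
                  [10.1, 9.9, 9.4],
                  [10.3, 9.9, 9.2]])  # water-table elevation, m (invented)

    # np.gradient returns derivatives along axis 0 (rows, y) then axis 1 (cols, x).
    dh_dy, dh_dx = np.gradient(h, dy, dx)
    qx, qy = -K * dh_dx, -K * dh_dy   # specific discharge components, m/s
    print(np.hypot(qx, qy).max())     # peak lateral flux magnitude
    ```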

  6. Migrant nurses in Brazil: demographic characteristics, migration flow and relationship with the training process

    Science.gov (United States)

    Silva, Kênia Lara; de Sena, Roseni Rosângela; Tavares, Tatiana Silva; Belga, Stephanie Marques Moura Franco; Maas, Lucas Wan Der

    2016-01-01

    Objective to analyze the migration of nurses in Brazil, describe the demographic characteristics of migrant nurses, the main migration flows, and establish relationships with the training process. Method a descriptive, exploratory study, based on 2010 Census data. The data were analyzed using descriptive statistics. Result there were 355,383 nurses in Brazil in 2010. Of these, 36,479 (10.3%) reported having moved compared to the year 2005: 18,073 (5.1%) for intrastate migration, 17,525 (4.8%) interstate migration, and 871 (0.2%) international migration. Females (86.3%), Caucasians (65.2%), and unmarried (48.3%) nurses prevailed in the population, without considerable variation between groups according to migration situation. The findings indicate that the migration flows are driven by the training process for states that concentrate a greater number of courses and positions in undergraduate and graduate studies, and the motivation of employment opportunity in regions of economic expansion in the country. Conclusion it is necessary to deepen the discussion on the movement of nurses in Brazil, their motivations, and international migration. PMID:27027681

  7. Migrant nurses in Brazil: demographic characteristics, migration flow and relationship with the training process

    Directory of Open Access Journals (Sweden)

    Kênia Lara Silva

    2016-01-01

    Full Text Available Objective to analyze the migration of nurses in Brazil, describe the demographic characteristics of migrant nurses, the main migration flows, and establish relationships with the training process. Method a descriptive, exploratory study, based on 2010 Census data. The data were analyzed using descriptive statistics. Result there were 355,383 nurses in Brazil in 2010. Of these, 36,479 (10.3%) reported having moved compared to the year 2005: 18,073 (5.1%) for intrastate migration, 17,525 (4.8%) interstate migration, and 871 (0.2%) international migration. Females (86.3%), Caucasians (65.2%), and unmarried (48.3%) nurses prevailed in the population, without considerable variation between groups according to migration situation. The findings indicate that the migration flows are driven by the training process for states that concentrate a greater number of courses and positions in undergraduate and graduate studies, and the motivation of employment opportunity in regions of economic expansion in the country. Conclusion it is necessary to deepen the discussion on the movement of nurses in Brazil, their motivations, and international migration.

  8. Data adaptive estimation of transversal blood flow velocities

    DEFF Research Database (Denmark)

    Pirnia, E.; Jakobsson, A.; Gudmundson, E.

    2014-01-01

    The examination of blood flow inside the body may yield important information about vascular anomalies, such as possible indications of, for example, stenosis. Current medical ultrasound systems suffer from only allowing measurement of the blood flow velocity along the direction of irradiation, posing natural difficulties due to the complex behaviour of blood flow and the natural orientation of most blood vessels. Recently, a transversal modulation scheme was introduced to induce an oscillation along the transversal direction as well, thereby also allowing measurement of the transversal blood flow. In this paper, we propose a novel data-adaptive blood flow estimator exploiting this modulation scheme. Using realistic Field II simulations, the proposed estimator is shown to achieve a notable performance improvement as compared to current state-of-the-art techniques.

  9. Communicating Processes with Data for Supervisory Coordination

    Directory of Open Access Journals (Sweden)

    Jasen Markovski

    2012-08-01

    Full Text Available We employ supervisory controllers to safely coordinate high-level discrete(-event) behavior of distributed components of complex systems. Supervisory controllers observe discrete-event system behavior, make a decision on allowed activities, and communicate the control signals to the involved parties. Models of the supervisory controllers can be automatically synthesized based on formal models of the system components and a formalization of the safe coordination (control) requirements. Based on the obtained models, code generation can be used to implement the supervisory controllers in software, on a PLC, or on an embedded (micro)processor. In this article, we develop a process theory with data that supports a model-based systems engineering framework for supervisory coordination. We employ communication to distinguish between the different flows of information, i.e., observation and supervision, whereas we employ data to specify the coordination requirements more compactly and to increase the expressivity of the framework. To illustrate the framework, we remodel an industrial case study involving coordination of maintenance procedures of a printing process of a high-tech Océ printer.

  10. Heat transfer and fluid flow in biological processes advances and applications

    CERN Document Server

    Becker, Sid

    2015-01-01

    Heat Transfer and Fluid Flow in Biological Processes covers emerging areas in fluid flow and heat transfer relevant to biosystems and medical technology. This book uses an interdisciplinary approach to provide a comprehensive prospective on biofluid mechanics and heat transfer advances and includes reviews of the most recent methods in modeling of flows in biological media, such as CFD. Written by internationally recognized researchers in the field, each chapter provides a strong introductory section that is useful to both readers currently in the field and readers interested in learning more about these areas. Heat Transfer and Fluid Flow in Biological Processes is an indispensable reference for professors, graduate students, professionals, and clinical researchers in the fields of biology, biomedical engineering, chemistry and medicine working on applications of fluid flow, heat transfer, and transport phenomena in biomedical technology. Provides a wide range of biological and clinical applications of fluid...

  11. A novel methodology for in-process monitoring of flow forming

    Science.gov (United States)

    Appleby, Andrew; Conway, Alastair; Ion, William

    2017-10-01

    Flow forming (FF) is an incremental cold working process with near-net-shape forming capability. Failures by fracture due to high deformation can be unexpected and sometimes catastrophic, causing tool damage. If process failures can be identified in real time, an automatic cut-out could prevent costly tool damage. Sound and vibration monitoring is well established and commercially viable in the machining sector to detect current and incipient process failures, but not for FF. A broad-frequency microphone was used to record the sound signature of the manufacturing cycle for a series of FF parts. Parts were flow formed using single and multiple passes, and flaws were introduced into some of the parts to simulate the presence of spontaneously initiated cracks. The results show that this methodology is capable of identifying both introduced defects and spontaneous failures during flow forming. Further investigation is needed to categorise and identify different modes of failure and identify further potential applications in rotary forming.

  12. New FORTRAN computer programs to acquire and process isotopic mass-spectrometric data

    International Nuclear Information System (INIS)

    Smith, D.H.

    1982-08-01

    The computer programs described in New Computer Programs to Acquire and Process Isotopic Mass Spectrometric Data have been revised. This report describes in some detail the operation of these programs, which acquire and process isotopic mass spectrometric data. Both functional and overall design aspects are addressed. The three basic program units - file manipulation, data acquisition, and data processing - are discussed in turn. Step-by-step instructions are included where appropriate, and each subsection is described in enough detail to give a clear picture of its function. Organization of file structure, which is central to the entire concept, is extensively discussed with the help of numerous tables. Appendices contain flow charts and outline file structure to help a programmer unfamiliar with the programs to alter them with a minimum of lost time

  13. Flows of engineered nanomaterials through the recycling process in Switzerland

    Energy Technology Data Exchange (ETDEWEB)

    Caballero-Guzman, Alejandro; Sun, Tianyin; Nowack, Bernd, E-mail: nowack@empa.ch

    2015-02-15

    Highlights: • Recycling is one of the likely end-of-life fates of nanoproducts. • We assessed the material flows of four nanomaterials in the Swiss recycling system. • After recycling, most nanomaterials will flow to landfills or incineration plants. • Recycled construction waste, plastics and textiles may contain nanomaterials. - Abstract: The use of engineered nanomaterials (ENMs) in diverse applications has increased in recent years and this will likely continue in the near future. As the number of applications increases, more and more waste containing nanomaterials will be generated. A portion of this waste will enter the recycling system, for example, in electronic products, textiles and construction materials. The fate of these materials during and after the waste management and recycling operations is poorly understood. The aim of this work is to model the flows of nano-TiO2, nano-ZnO, nano-Ag and CNT in the recycling system in Switzerland. The basis for this study is published information on the ENM flows in the Swiss system. We developed a method to assess their flows after recycling. To incorporate the uncertainties inherent in the limited information available, we applied a probabilistic material flow analysis approach. The results show that the recycling process does not result in significant further propagation of nanomaterials into new products. Instead, the largest proportion will flow as waste that can subsequently be properly handled in incineration plants or landfills. Smaller fractions of ENMs will be eliminated or end up in materials that are sent abroad to undergo further recovery processes. Only a reduced amount of ENMs will flow back into the productive process of the economy, in a limited number of sectors. Overall, the results suggest that risk assessment during recycling should focus on occupational exposure, release of ENMs in landfills and incineration plants, and toxicity assessment of a small number of recycled inputs.

  14. A critical review of the data requirements for fluid flow models through fractured rock

    International Nuclear Information System (INIS)

    Priest, S.D.

    1986-01-01

    The report is a comprehensive critical review of the data requirements for ten models of fluid flow through fractured rock, developed in Europe and North America. The first part of the report contains a detailed review of rock discontinuities and how their important geometrical properties can be quantified. This is followed by a brief summary of the fundamental principles in the analysis of fluid flow through two-dimensional discontinuity networks and an explanation of a new approach to the incorporation of variability and uncertainty into geotechnical models. The report also contains a review of the geological and geotechnical properties of anhydrite and granite. Of the ten fluid flow models reviewed, only three offer a realistic fracture network model for which it is feasible to obtain the input data. Although some of the other models have some valuable or novel features, there is a tendency to concentrate on the simulation of contaminant transport processes, at the expense of providing a realistic fracture network model. Only two of the models reviewed, neither of them developed in Europe, have seriously addressed the problem of analysing fluid flow in three-dimensional networks. (author)

  15. Scenarios for control and data flows in multiprotocol over ATM

    Science.gov (United States)

    Kujoory, Ali

    1997-10-01

    The multiprotocol over ATM (MPOA), specified by the ATM Forum, provides an architecture for the transfer of internetwork-layer packets (Layer 3 datagrams such as IP, IPX) over ATM subnets or across emulated LANs. MPOA provides shortcuts that bypass routers to avoid router bottlenecks. It is a grand union of existing standards: LANE by the ATM Forum, NHRP by the IETF, and Q.2931 by the ITU. The intent of this paper is to clarify the data flows between pairs of source and destination hosts in an MPOA system. It includes scenarios for both intra- and inter-subnet flows between different pairs of MPOA end-systems. Intra-subnet flows simply use LANE for address resolution and data transfer. Inter-subnet flows may use a default path for short-lived flows or a shortcut for long-lived flows. The default path uses LANE and router capabilities. The shortcut path uses LANE plus NHRP for ATM address resolution. An ATM virtual circuit is established before the data transfer; this allows efficient transfer of internetwork-layer packets over ATM for real-time applications.

  16. Development of a scintillator detector set with counter and data acquisition for flow measurements

    CERN Document Server

    Costa, F E D

    2002-01-01

    A portable counter with a data acquisition system for flow measurements was developed, based on the pulse velocity technique. This consists in determining the transit time of a radiotracer mixed homogeneously into the liquid or gas flowing in a pipeline. The counter comprises: (a) two CsI(Tl) crystal solid-state detectors, coupled to Si PIN photodiodes, with sensitivity compatible with the injected radiotracer activities; (b) amplification units; (c) an analogue-to-digital interface, which processes and displays the counts of the two detectors separately and in real time, on a common time axis, via a computer screen; and (d) 30-m coaxial cables for signal transmission from each detector to the processing unit. Experiments were carried out to characterize the detectors and the associated electronics. The equipment proved suitable for flow measurements in an industrial plant under real conditions.
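
    With the two detectors mounted a known distance apart, the transit time is the lag that best aligns their count-rate signals, and cross-correlation is the standard estimator. A sketch with synthetic tracer peaks (the sampling rate and peak shapes are illustrative):

    ```python
    import numpy as np

    def transit_time(sig_a: np.ndarray, sig_b: np.ndarray, fs: float) -> float:
        """Transit time between two detector signals: the lag maximising the
        cross-correlation is the tracer travel time from A to B."""
        a = sig_a - sig_a.mean()
        b = sig_b - sig_b.mean()
        corr = np.correlate(b, a, mode="full")
        lag = np.argmax(corr) - (len(a) - 1)
        return lag / fs

    # Synthetic example: detector B sees the tracer peak 1.5 s after detector A.
    fs, t = 100.0, np.arange(0.0, 20.0, 0.01)
    peak = lambda t0: np.exp(-((t - t0) ** 2) / 0.5)
    dt = transit_time(peak(5.0), peak(6.5), fs)
    print(f"transit time = {dt:.2f} s")   # velocity = detector spacing / dt
    ```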

  17. CyNC: A method for real time analysis of systems with cyclic data flows

    DEFF Research Database (Denmark)

    Jessen, Jan Jacob; Schiøler, Henrik; Nielsen, Jens Frederik Dalsgaard

    2006-01-01

    The paper addresses a novel method for performance analysis of distributed real-time systems with complex, and especially cyclic, data flow graphs. The presented method is based on Network Calculus principles, where flow and service constraint functions are used to bound data flows and processing resources; for cyclic graphs, the constraints are implicitly given by a fixed-point equation in a space of constraint functions. In this paper a method denoted CyNC for obtaining a well-defined solution to that problem is presented, along with a theoretical justification of the method as well as comparative results for CyNC and alternative methods on a relevant example. The method is implemented in a prototype tool, also denoted CyNC, providing a graphical user interface for model specification based on the MATLAB/Simulink framework.
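
    For orientation, Network Calculus bounds traffic with arrival curves and guarantees with service curves, combined through min-plus algebra; cyclic graphs then lead to the fixed-point problem the paper solves. A minimal sketch of the min-plus machinery (curves and rates are illustrative; CyNC's fixed-point iteration itself is not shown):

    ```python
    import numpy as np

    def min_plus_conv(f, g):
        """(f ⊗ g)(t) = min over 0 <= s <= t of f(s) + g(t - s), unit time grid."""
        return np.array([min(f[s] + g[t - s] for s in range(t + 1))
                         for t in range(len(f))])

    t = np.arange(50)
    alpha = 2.0 + 0.5 * t                     # arrival curve: burst 2, rate 0.5
    beta = np.maximum(0.0, 1.0 * (t - 10.0))  # rate-latency service: R=1, T=10

    # Backlog bound: maximum vertical deviation between arrival and service.
    print("backlog bound:", np.max(alpha - beta))

    # Tandem of two servers: the end-to-end service curve is beta ⊗ beta,
    # again a rate-latency curve but with doubled latency.
    print("tandem service at t=49:", min_plus_conv(beta, beta)[-1])
    ```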

  18. A theoretical study of resin flows for thermosetting materials during prepreg processing

    Science.gov (United States)

    Hou, T. H.

    1984-01-01

    A flow model which describes the process of resin consolidation during prepreg lamination was developed, and the salient features of the model predictions were explored. It is assumed that resin flow in all directions originates from the squeezing action between two approaching adjacent fiber/fabric layers. In the horizontal direction, a squeezing flow between two nonporous parallel plates is analyzed, while in the vertical direction a Poiseuille-type pressure flow through porous media is assumed. Proper force and mass balances are established for the whole system, which is composed of these two types of flow. A flow parameter, CF, is shown to be a measure of processibility for the curing resin. For a given external load F, the responses of resin flow during prepreg lamination, as measured by CF, fall into three regions: (1) the low-CF region, where resin flow is inhibited by the high chemoviscosity during the initial curing stages; (2) the median-CF region, where resin flow is properly controllable; and (3) the high-CF region, where resin flow ceases due to fiber/fabric compression effects. Resin losses in both directions are calculated. Potential uses of this model and the quality control of incoming prepreg material are discussed.
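
    The horizontal squeezing-flow element of such a model is classically described by Stefan's law for a Newtonian film squeezed between circular parallel plates (a plausible building block for the model described above, not necessarily the author's exact expression):

    $$ F \;=\; \frac{3\pi\mu R^{4}}{2h^{3}}\left(-\frac{dh}{dt}\right), $$

    where $F$ is the applied load, $\mu$ the resin viscosity, $R$ the plate radius, and $h$ the gap between adjacent layers. The strong $h^{-3}$ dependence is consistent with flow ceasing as the layers compact.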

  19. Ultrasonic flow measurements for irrigation process monitoring

    Science.gov (United States)

    Ziani, Elmostafa; Bennouna, Mustapha; Boissier, Raymond

    2004-02-01

    This paper presents the state of the art of the general principles of liquid flow measurement by ultrasonic methods and the associated measurement problems. We present an ultrasonic flowmeter, designed according to the smart-sensor concept, for the measurement of irrigation water flowing through pipelines or open channels using the ultrasonic transit-time approach. The new flowmeter works on the principle of measuring the time delay difference between sound pulses transmitted upstream and downstream in the flowing liquid. The speed of sound in the flowing medium is eliminated as a variable because the flow rate calculations are based on the reciprocals of the transmission times. The transit time difference is digitally measured by means of suitable, microprocessor-controlled logic. This type of ultrasonic flowmeter, which is becoming widely used in industry and water management, is studied in detail in this work, and some experimental results follow. For pressurized channels, we use one pair of ultrasonic transducers arranged in proper positions and directions on the pipe; in this case, to determine the liquid velocity, a real-time on-line analysis taking into account the geometry of the hydraulic system is applied to the ultrasonic data obtained. In open channels, we use one or two ultrasonic emitter-receiver pairs according to the desired performance. Finally, the goals of this work are to integrate the smart sensor into irrigation system monitoring in order to evaluate potential advantages and demonstrate its performance, and to understand and use the ultrasonic approach for determining flow characteristics and improving flow measurements by reducing errors caused by disturbances of the flow profiles.
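
    For reference, the standard transit-time relation shows why the speed of sound drops out when working with reciprocals of the travel times. For a path of length $L$ inclined at angle $\theta$ to the flow axis (a common geometry, assumed here since the paper's exact configuration varies):

    $$ t_{\mathrm{down}} = \frac{L}{c + v\cos\theta}, \quad t_{\mathrm{up}} = \frac{L}{c - v\cos\theta} \;\;\Longrightarrow\;\; v = \frac{L}{2\cos\theta}\left(\frac{1}{t_{\mathrm{down}}} - \frac{1}{t_{\mathrm{up}}}\right), $$

    in which the sound speed $c$ cancels exactly, leaving the velocity in terms of the two measured transit times alone.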

  20. MTpy - Python Tools for Magnetotelluric Data Processing and Analysis

    Science.gov (United States)

    Krieger, Lars; Peacock, Jared; Thiel, Stephan; Inverarity, Kent; Kirkby, Alison; Robertson, Kate; Soeffky, Paul; Didana, Yohannes

    2014-05-01

    We present the Python package MTpy, which provides functions for the processing, analysis, and handling of magnetotelluric (MT) data sets. MT is a relatively immature and not widely applied geophysical method in comparison to other techniques such as seismology. As a result, data processing within the academic MT community is not thoroughly standardised and is often based on a loose collection of software adapted to the respective local specifications. We have developed MTpy to overcome problems that arise from missing standards and to simplify the general handling of MT data. MTpy is written in Python, and the open-source code is freely available from a GitHub repository. The setup follows the modular approach of successful geoscience software packages such as GMT or ObsPy. It contains sub-packages and modules for the various tasks within the standard work-flow of MT data processing and interpretation. In order to allow the inclusion of already existing and well-established software, MTpy provides not only pure Python classes and functions but also wrapping command-line scripts to run standalone tools, e.g. modelling and inversion codes. Our aim is to provide a flexible framework which is open for future dynamic extensions. MTpy has the potential to promote the standardisation of processing procedures and at the same time be a versatile supplement for existing algorithms. Here, we introduce the concept and structure of MTpy, and we illustrate the workflow of MT data processing, interpretation, and visualisation utilising MTpy on example data sets collected over different regions of Australia and the USA.
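
    A minimal sketch of the wrapping idea described above: driving an external modelling or inversion executable from Python. The tool name, flags, and file names here are hypothetical placeholders, not MTpy's actual interface.

    ```python
    import subprocess
    from pathlib import Path

    def run_inversion(edi_dir: Path, out_dir: Path) -> None:
        """Wrap a hypothetical standalone inversion tool ("mt_invert") so it
        slots into a Python processing workflow."""
        out_dir.mkdir(parents=True, exist_ok=True)
        cmd = ["mt_invert", "--input", str(edi_dir), "--output", str(out_dir)]
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(f"inversion failed:\n{result.stderr}")
        print(result.stdout)

    # run_inversion(Path("survey/edi"), Path("survey/model"))
    ```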

  1. Flows of engineered nanomaterials through the recycling process in Switzerland.

    Science.gov (United States)

    Caballero-Guzman, Alejandro; Sun, Tianyin; Nowack, Bernd

    2015-02-01

    The use of engineered nanomaterials (ENMs) in diverse applications has increased in recent years and this will likely continue in the near future. As the number of applications increases, more and more waste containing nanomaterials will be generated. A portion of this waste will enter the recycling system, for example, in electronic products, textiles and construction materials. The fate of these materials during and after the waste management and recycling operations is poorly understood. The aim of this work is to model the flows of nano-TiO2, nano-ZnO, nano-Ag and CNT in the recycling system in Switzerland. The basis for this study is published information on the ENM flows in the Swiss system. We developed a method to assess their flows after recycling. To incorporate the uncertainties inherent in the limited information available, we applied a probabilistic material flow analysis approach. The results show that the recycling process does not result in significant further propagation of nanomaterials into new products. Instead, the largest proportion will flow as waste that can subsequently be properly handled in incineration plants or landfills. Smaller fractions of ENMs will be eliminated or end up in materials that are sent abroad to undergo further recovery processes. Only a reduced amount of ENMs will flow back into the productive process of the economy, in a limited number of sectors. Overall, the results suggest that risk assessment during recycling should focus on occupational exposure, release of ENMs in landfills and incineration plants, and toxicity assessment of a small number of recycled inputs. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Features, Events, and Processes in UZ Flow and Transport

    Energy Technology Data Exchange (ETDEWEB)

    J.E. Houseworth

    2001-04-10

    Unsaturated zone (UZ) flow and radionuclide transport is a component of the natural barriers that affects potential repository performance. The total system performance assessment (TSPA) model, and underlying process models, of this natural barrier component capture some, but not all, of the associated features, events, and processes (FEPs) as identified in the FEPs Database (Freeze, et al. 2001 [154365]). This analysis and model report (AMR) discusses all FEPs identified as associated with UZ flow and radionuclide transport. The purpose of this analysis is to give a comprehensive summary of all UZ flow and radionuclide transport FEPs and their treatment in, or exclusion from, TSPA models. The scope of this analysis is to provide a summary of the FEPs associated with the UZ flow and radionuclide transport and to provide a reference roadmap to other documentation where detailed discussions of these FEPs, treated explicitly in TSPA models, are offered. Other FEPs may be screened out from treatment in TSPA by direct regulatory exclusion or through arguments concerning low probability and/or low consequence of the FEPs on potential repository performance. Arguments for exclusion of FEPs are presented in this analysis. Exclusion of specific FEPs from the UZ flow and transport models does not necessarily imply that the FEP is excluded from the TSPA. Similarly, in the treatment of included FEPs, only the way in which the FEPs are included in the UZ flow and transport models is discussed in this document. This report has been prepared in accordance with the technical work plan for the unsaturated zone subproduct element (CRWMS M&O 2000 [153447]). The purpose of this report is to document that all FEPs are either included in UZ flow and transport models for TSPA, or can be excluded from UZ flow and transport models for TSPA on the basis of low probability or low consequence. Arguments for exclusion are presented in this analysis. Exclusion of specific FEPs from UZ flow and

  3. Features, Events, and Processes in UZ Flow and Transport

    International Nuclear Information System (INIS)

    Houseworth, J.E.

    2001-01-01

    Unsaturated zone (UZ) flow and radionuclide transport is a component of the natural barriers that affects potential repository performance. The total system performance assessment (TSPA) model, and underlying process models, of this natural barrier component capture some, but not all, of the associated features, events, and processes (FEPs) as identified in the FEPs Database (Freeze, et al. 2001 [154365]). This analysis and model report (AMR) discusses all FEPs identified as associated with UZ flow and radionuclide transport. The purpose of this analysis is to give a comprehensive summary of all UZ flow and radionuclide transport FEPs and their treatment in, or exclusion from, TSPA models. The scope of this analysis is to provide a summary of the FEPs associated with the UZ flow and radionuclide transport and to provide a reference roadmap to other documentation where detailed discussions of these FEPs, treated explicitly in TSPA models, are offered. Other FEPs may be screened out from treatment in TSPA by direct regulatory exclusion or through arguments concerning low probability and/or low consequence of the FEPs on potential repository performance. Arguments for exclusion of FEPs are presented in this analysis. Exclusion of specific FEPs from the UZ flow and transport models does not necessarily imply that the FEP is excluded from the TSPA. Similarly, in the treatment of included FEPs, only the way in which the FEPs are included in the UZ flow and transport models is discussed in this document. This report has been prepared in accordance with the technical work plan for the unsaturated zone subproduct element (CRWMS M&O 2000 [153447]). The purpose of this report is to document that all FEPs are either included in UZ flow and transport models for TSPA, or can be excluded from UZ flow and transport models for TSPA on the basis of low probability or low consequence. Arguments for exclusion are presented in this analysis. Exclusion of specific FEPs from UZ flow

  4. Flow Mode Dependent Partitioning Processes of Preferential Flow Dynamics in Unsaturated Fractures - Findings From Analogue Percolation Experiments

    Science.gov (United States)

    Kordilla, J.; Noffz, T.; Dentz, M.; Sauter, M.

    2017-12-01

    To assess the vulnerability of an aquifer system it is of utmost importance to recognize the high potential for rapid mass transport offered by flow through unsaturated fracture networks. Numerical models have to reproduce complex effects of gravity-driven flow dynamics to generate accurate predictions of flow and transport. However, the non-linear characteristics of free surface flow dynamics and partitioning behaviour at unsaturated fracture intersections often exceed the capacity of classical volume-effective modelling approaches. Laboratory experiments that manage to isolate single aspects of the mass partitioning process can enhance the understanding of underlying dynamics, which ultimately influence travel time distributions on multiple scales. Our analogue fracture network consists of synthetic cubes with dimensions of 20 x 20 x 20 cm creating simple geometries of a single or a cascade of consecutive horizontal fractures. Gravity-driven free surface flow (droplets; rivulets) is established via a high precision multichannel dispenser at flow rates ranging from 1.5 to 4.5 ml/min. Single-inlet experiments show the influence of variable flow rate, atmospheric pressure and temperature on the stability of flow modes and allow us to delineate a droplet and a rivulet regime. The transition between these regimes exhibits mixed flow characteristics. In addition, multi-inlet setups with constant total inflow rates decrease the variance induced by erratic free-surface flow dynamics. We investigate the impacts of variable aperture widths, horizontal offsets of vertical fracture surfaces, and alternating injection methods for both flow regimes. Normalized fracture inflow rates allow us to demonstrate and compare the effects of variable geometric features. Firstly, the fracture filling can be described by plug flow. At later stages it transitions into a Washburn-type flow, which we compare to an analytical solution for the case of rivulet flow. Observations show a considerably

  5. Optimizing transformations for automated, high throughput analysis of flow cytometry data.

    Science.gov (United States)

    Finak, Greg; Perez, Juan-Manuel; Weng, Andrew; Gottardo, Raphael

    2010-11-04

    In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter-optimized transformations improve visualization, reduce
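
    As a rough illustration of the maximum-likelihood idea described above, the sketch below (in Python; the paper's own implementation lives in the R/BioConductor flowCore framework) picks the cofactor of a hyperbolic-arcsine transformation by maximizing a Gaussian likelihood of the transformed intensities, Jacobian included. The data, the optimal_cofactor helper and the normality assumption are illustrative assumptions, not the paper's code.

```python
# Sketch: maximum-likelihood choice of the arcsinh cofactor for one
# fluorescence channel, assuming the transformed intensities should be
# approximately Gaussian (one simple reading of the paper's approach;
# flowCore itself is R/BioConductor, so this Python version is illustrative).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def neg_log_likelihood(log_c, x):
    """Negative log-likelihood of asinh(x / c) under a fitted Gaussian,
    including the Jacobian term d/dx asinh(x/c) = 1 / sqrt(x^2 + c^2)."""
    c = np.exp(log_c)                      # optimize in log-space so c > 0
    y = np.arcsinh(x / c)
    mu, sigma = y.mean(), y.std(ddof=1)
    ll = norm.logpdf(y, mu, sigma).sum() - 0.5 * np.log(x**2 + c**2).sum()
    return -ll

def optimal_cofactor(x):
    res = minimize_scalar(neg_log_likelihood, bounds=(-5, 12),
                          args=(x,), method="bounded")
    return np.exp(res.x)

# toy data: log-normal "positive population" plus additive detector noise
rng = np.random.default_rng(0)
x = rng.lognormal(6, 1, 5000) + rng.normal(0, 50, 5000)
print(f"optimized cofactor: {optimal_cofactor(x):.1f}")
```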

  6. Optimizing transformations for automated, high throughput analysis of flow cytometry data

    Directory of Open Access Journals (Sweden)

    Weng Andrew

    2010-11-01

    Full Text Available Abstract Background In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. Results We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter

  7. Experimental Investigation of Rainfall Impact on Overland Flow Driven Erosion Processes and Flow Hydrodynamics on a Steep Hillslope

    Science.gov (United States)

    Tian, P.; Xu, X.; Pan, C.; Hsu, K. L.; Yang, T.

    2016-12-01

    Few attempts have been made to investigate the quantitative effects of rainfall on overland flow driven erosion processes and flow hydrodynamics on steep hillslopes under field conditions. Field experiments were performed in flows for six inflow rates (q: 6-36 L min-1 m-1) with and without rainfall (60 mm h-1) on a steep slope (26°) to investigate: (1) the quantitative effects of rainfall on runoff and sediment yield processes, and flow hydrodynamics; (2) the effect of interaction between rainfall and overland flow on soil loss. Results showed that the rainfall increased runoff coefficients and the fluctuation of temporal variations in runoff. The rainfall significantly increased soil loss (10.6-68.0%), but this increment declined as q increased. When the interrill erosion dominated (q=6 L min-1 m-1), the increment in the rill erosion was 1.5 times that in the interrill erosion, and the effect of the interaction on soil loss was negative. When the rill erosion dominated (q=6-36 L min-1 m-1), the increment in the interrill erosion was 1.7-8.8 times that in the rill erosion, and the effect of the interaction on soil loss became positive. The rainfall was conducive to the development of rills especially for low inflow rates. The rainfall always decreased interrill flow velocity, decreased rill flow velocity (q=6-24 L min-1 m-1), and enhanced the spatial uniformity of the velocity distribution. Under rainfall disturbance, flow depth, Reynolds number (Re) and resistance were increased but Froude number was reduced, and lower Re was needed to transform a laminar flow to turbulent flow. The rainfall significantly increased flow shear stress (τ) and stream power (φ), with the most sensitive parameters to sediment yield being τ (R2=0.994) and φ (R2=0.993), respectively, for non-rainfall and rainfall conditions. Compared to non-rainfall conditions, there was a reduction in the critical hydrodynamic parameters of mean flow velocity, τ, and φ by the rainfall. These findings

  8. Views on the calculation of flow and dispersion processes in fractured rock

    International Nuclear Information System (INIS)

    Joensson, Lennart

    1990-03-01

    In the report some basic aspects of model types, physical processes, and determination of parameters are discussed in relation to a description of flow and dispersion processes in fractured rocks. As far as model types are concerned, it is shown that Darcy's law and the dispersion equation are not especially applicable. These equations can only describe an average situation of flow and spreading, while in reality very large deviations could exist between an average situation and the flow and concentration distribution for a certain fracture geometry. The reason for this is primarily the relation between the length scales for the repository and the near field and the fracture system, respectively, and the poor connectivity between fractures - or, expressed in another way, the geosphere cannot be treated as a continuous medium. The statistical properties of the fractures and the fracture geometry cause large uncertainties in at least two respects: * boundary conditions as to groundwater flow at the repository and thus the mass flow of radioactive material * distribution of flows and concentrations in planes in the geosphere at different distances from the repository. A realistic evaluation of transport and spreading of radioactive material by the groundwater in the geosphere thus requires that the possible variation or uncertainty of the water-conducting characteristics of the fracture system is considered. A possible approach is then to describe flow in the geosphere on the basis of the flow in single fractures which are hydraulically connected to each other so that a flow in a fracture system is obtained. The discussion of physical processes which might influence the flow description in single fractures is concentrated on three aspects - factors driving the flow besides the ordinary hydraulic gradient, the viscous properties of water in a very small space (such as a fracture), and the influence on the flow of heat release from the repository. (42 figs., 28 refs.)

  9. Evaluating the Financial Flows of Bessel Processes by Using Spectral Analysis

    Directory of Open Access Journals (Sweden)

    Burtnyak Ivan V.

    2017-07-01

    Full Text Available The article solves the two-parameter task of evaluating the intensity of diffusion Bessel processes by the methods of spectral theory. In particular, barriers for the option price, at which the derivative of the financial flow turns to zero, have been considered, and a task for the two-barrier option, which corresponds to a Bessel process, has been solved. A Green's function has been built for the diffusion Bessel process of the two-barrier option, decomposed according to the first-type system of Bessel functions. The barriers are taken in such a way that the derivative of the financial flow with respect to price turns to zero, i.e. at the points where the flow can take extreme values. On the basis of the Green's function, the value of securities has been calculated. Such barriers are convenient to use when monitoring a stock market. The Green's function for this task, which represents the probability of spreading of the option price, is represented through a Fourier series. This provides an opportunity to evaluate the intensity of financial flows in stock markets.

  10. Seston Data from Flow Cytometers and Microscope Environmental Data from Sondes

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Seston data with phytoplankton and size-fractionated non-living particles counted by flow cytometer from Penobscot River, Maine in April, May, and June of 2015. High...

  11. Securing a robust electrical discharge drilling process by means of flow rate control

    Science.gov (United States)

    Risto, Matthias; Munz, Markus; Haas, Ruediger; Abdolahi, Ali

    2017-10-01

    This paper deals with increasing the robustness of the process of drilling cemented carbide using electrical discharge machining (EDM). A demand for high consistency of the resulting diameter is equivalent to a demand for high robustness of the EDM drilling process. Analyses were done to investigate the process robustness (standard deviation of the borehole diameter) when drilling cemented carbide. The investigation has shown that the dielectric flow rate changes over the drilling process. In this case the flow rate decreased with a shorter tool electrode due to uneven wear of the tool electrode's cross section. Using a controlled flow rate during the drilling process has led to a reduced standard deviation of the borehole diameter, and thus to a higher process robustness when drilling cemented carbide.

  12. Process and sensor diagnostic: Data reconciliation for a flue gas channel; Process- och sensordiagnostik: Dataaaterfoerening foer ett roekgastaag

    Energy Technology Data Exchange (ETDEWEB)

    Karlsson, Christer; Dahlquist, Erik [Maelardalen Univ., Vaesteraas (Sweden). Dept. of Public Technology

    2003-10-01

    The project has shown that model-based data reconciliation can be used in heat and power plants, but it needs the support of soft sensors. Generally, power plants are not equipped with more sensors than required by control systems, environment and financial reports. Soft sensors are needed to compensate for the lack of redundancy in mass-flow sensors. Redundancy makes it possible to isolate gross errors. The smallest error that needs to be determined sets the requirement on the process model accuracy. Tools available today from this project are: classification of different process sections with redundancy analysis, and gross error detection. Quantification of the errors with the mass balance model has not been successful and this part needs further development. Theoretical comparison of the three different methods presented resulted in favour of data reconciliation based on a mass balance model. The mass balance model has a structure based on physical reality. The search for gross errors is transparent to the user. It can handle sensor failure. The statistical linear model is preferred for smaller process sections when transparency is not needed and focus is on fast, simple and cheap implementation. Data reconciliation based on a steady-state energy balance has the same origin as the mass balance model. Data reconciliation based on an energy balance is harder to compute and its sensors are difficult to classify. The drawback is complexity, but the strength is that the large number of temperature sensors can be used in the data reconciliation. Large gross errors are detected and quantified for most process mass flows with acceptable accuracy. Performance for small errors is not as good. Performance of the data reconciliation is strongly dependent on the precision of the process models. This conclusion is drawn from comparison with other studies that show good performance for laboratory simulations. There are still many parts to develop further, such as: soft sensors, tests for identification
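
    The core of model-based data reconciliation of the kind described here can be stated compactly: adjust the measured flows as little as possible, in a weighted least-squares sense, so that the mass balances hold exactly, and use a chi-square test on the balance residuals to flag gross errors. A minimal Python sketch under those standard assumptions follows; the single-node example and all numbers are hypothetical, not the plant data of the report.

```python
# Sketch: weighted least-squares data reconciliation for steady-state mass
# balances A @ x = 0, plus a chi-square global test for gross errors.
# Illustrative only -- the report's plant model is far more detailed.
import numpy as np
from scipy.stats import chi2

def reconcile(m, sd, A):
    """m: measured flows, sd: their standard deviations,
    A: incidence matrix of the mass balances (rows sum to zero at nodes)."""
    V = np.diag(sd**2)                         # measurement covariance
    r = A @ m                                  # balance residuals
    S = A @ V @ A.T
    x = m - V @ A.T @ np.linalg.solve(S, r)    # reconciled estimates
    gt = r @ np.linalg.solve(S, r)             # global test statistic
    return x, gt, chi2.sf(gt, df=A.shape[0])

# toy flue-gas node: flow1 + flow2 - flow3 = 0
A = np.array([[1.0, 1.0, -1.0]])
m = np.array([10.2, 5.1, 14.0])                # flow3 reads low -> imbalance
sd = np.array([0.2, 0.1, 0.3])
x, gt, p = reconcile(m, sd, A)
print(x, f"global test p = {p:.3f}")           # small p hints at a gross error
```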

  13. The Akzo-Fina cold flow improvement process

    Energy Technology Data Exchange (ETDEWEB)

    Free, H.W.H.; Schockaert, T.; Sonnemans, J.W.M. (Akzo Chemicals B.V., Amersfoort (Netherlands). Hydroprocessing Catalysts)

    1993-09-01

    The Akzo-Fina CFI process is a very flexible process in which improvement of cold flow properties, desulfurization and hydroconversion are achieved. One of the main characteristics is the dewaxing obtained by the selective hydrocracking of normal paraffins combined with hydro-desulfurization and hydroconversion. Since its introduction in 1988, five licenses have been sold. The units currently run for heavy gasoil upgrading show an excellent performance and reach pour point improvements of over 50°C, long cycle lengths and product sulfur levels well below 0.05 wt%. 2 figs., 2 tabs.

  14. Effects Of Thermal Exchange On Material Flow During Steel Thixoextrusion Process

    International Nuclear Information System (INIS)

    Becker, Eric; Gu Guochao; Langlois, Laurent; Bigot, Regis; Pesci, Raphael

    2011-01-01

    Semisolid processing is an innovative technology for near net-shape production of components, in which metallic alloys are processed in the semisolid state. Taking advantage of the thixotropic behavior of alloys in the semisolid state, significant progress has been made in semisolid processing. However, the consequences of such behavior on the flow during thixoforming are still not completely understood. To explore and better understand the influence of the different parameters on material flow during the thixoextrusion process, thixoextrusion experiments were performed using the low-carbon steel C38. The billet was partially melted at a high solid fraction. Effects of various process parameters, including the initial billet temperature, the temperature of the die, the punch speed and the presence of a Ceraspray layer at the tool-billet interface, were investigated through experiments and simulation. After analyzing the results thus obtained, it was identified that the aforementioned parameters mainly affect thermal exchanges between the die and the part. The Ceraspray layer not only plays a lubricant role, but also acts as a thermal barrier at the tool-billet interface. Furthermore, the thermal effects can affect the material flow, which is composed of various distinct zones.

  15. Modular toolkit for Data Processing (MDP): a Python data processing framework

    Directory of Open Access Journals (Sweden)

    Tiziano Zito

    2009-01-01

    Full Text Available Modular toolkit for Data Processing (MDP) is a data processing framework written in Python. From the user's perspective, MDP is a collection of supervised and unsupervised learning algorithms and other data processing units that can be combined into data processing sequences and more complex feed-forward network architectures. Computations are performed efficiently in terms of speed and memory requirements. From the scientific developer's perspective, MDP is a modular framework, which can easily be expanded. The implementation of new algorithms is easy and intuitive. Newly implemented units are then automatically integrated with the rest of the library. MDP has been written in the context of theoretical research in neuroscience, but it has been designed to be helpful in any context where trainable data processing algorithms are used. Its simplicity on the user's side, the variety of readily available algorithms, and the reusability of the implemented units make it also a useful educational tool.
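
    A minimal usage sketch follows, based on the flow-building example given in the MDP paper itself; the node choice (PCA followed by slow feature analysis) and the random data are illustrative.

```python
# Minimal MDP usage sketch: chain two processing units into a flow,
# train, then execute. PCANode and SFANode are standard MDP nodes;
# check the MDP documentation for current API details.
import mdp
import numpy as np

x = np.random.random((1000, 20))           # 1000 observations, 20 variables

# A feed-forward sequence: reduce with PCA, then slow feature analysis.
flow = mdp.Flow([mdp.nodes.PCANode(output_dim=5),
                 mdp.nodes.SFANode(output_dim=2)])
flow.train(x)                              # each node is trained in turn
y = flow(x)                                # execute the trained flow
print(y.shape)                             # -> (1000, 2)
```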

  16. Modelling of Gas Flow in the Underground Coal Gasification Process and its Interactions with the Rock Environment

    Directory of Open Access Journals (Sweden)

    Tomasz Janoszek

    2013-01-01

    Full Text Available The main goal of this study was the analysis of gas flow in the underground coal gasification process and its interactions with the surrounding rock mass. The article discusses the assumptions for the geometric model and the numerical method for its solution, as well as assumptions for the geochemical model of the gas-rock-water interaction, in terms of equilibrium calculations, chemical modelling and gas flow modelling in porous media. Ansys-Fluent software was used to describe the underground coal gasification (UCG) process. The numerical solution was compared with experimental data. The PHREEQC program was used to describe the chemical reactions between the gaseous products of the UCG process and the rock strata in the presence of reservoir waters.

  17. Batch-processed carbon nanotube wall as pressure and flow sensor

    International Nuclear Information System (INIS)

    Choi, Jungwook; Kim, Jongbaeg

    2010-01-01

    A pressure and flow sensor based on the electrothermal-thermistor effect of a batch-processed carbon nanotube wall (CNT wall) is presented. The negative temperature coefficient of resistance (TCR) of CNTs and the temperature-dependent tunneling rate through the CNT/silicon junction enable vacuum pressure and flow velocity sensing, because the heat transfer rate between CNTs and the surrounding gas molecules differs depending on pressure and flow rate. The CNT walls are synthesized by thermal chemical vapor deposition (CVD) on an array of microelectrodes fabricated on a silicon-on-insulator (SOI) wafer. The CNTs are self-assembled between the microelectrodes and substrate across the thickness of a buried oxide layer during the synthesis process, and the simple batch fabrication results in high throughput and yield. A wide pressure range, down to 3 x 10^-3 from 10^5 Pa, and a nitrogen flow velocity range between 1 and 52.4 mm s^-1, are sensed. Further experimental characterizations of the bias voltage dependent response of the sensor as a vacuum pressure gauge are presented.

  18. Improving urban wind flow predictions through data assimilation

    Science.gov (United States)

    Sousa, Jorge; Gorle, Catherine

    2017-11-01

    Computational fluid dynamics is fundamentally important to several aspects of the design of sustainable and resilient urban environments. The prediction of the flow pattern, for example, can help determine pedestrian wind comfort, air quality, optimal building ventilation strategies, and wind loading on buildings. However, the significant variability and uncertainty in the boundary conditions poses a challenge when interpreting results as a basis for design decisions. To improve our understanding of the uncertainties in the models and develop better predictive tools, we started a pilot field measurement campaign on Stanford University's campus combined with a detailed numerical prediction of the wind flow. The experimental data is being used to investigate the potential use of data assimilation and inverse techniques to better characterize the uncertainty in the results and improve the confidence in current wind flow predictions. We consider the incoming wind direction and magnitude as unknown parameters and perform a set of Reynolds-averaged Navier-Stokes simulations to build a polynomial chaos expansion response surface at each sensor location. We subsequently use an inverse ensemble Kalman filter to retrieve an estimate for the probability density function of the inflow parameters. Once these distributions are obtained, the forward analysis is repeated to obtain predictions for the flow field in the entire urban canopy and the results are compared with the experimental data. We would like to acknowledge high-performance computing support from Yellowstone (ark:/85065/d7wd3xhc) provided by NCAR.
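
    The sketch below illustrates the ensemble Kalman update at the heart of the inverse step described above, with a hypothetical linear forward map standing in for the polynomial chaos response surface; ensemble size, sensor layout and noise levels are all assumptions.

```python
# Toy ensemble Kalman update for the inflow parameters (direction, speed),
# with a synthetic "response surface" standing in for the paper's
# polynomial-chaos surrogate of the RANS model. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_ens, n_obs = 200, 8

def forward(theta):
    """Hypothetical surrogate: sensor readings as a function of
    (direction_deg, speed_m_s). A real study would use the PCE surface."""
    direction, speed = theta
    angles = np.linspace(0, 2 * np.pi, n_obs, endpoint=False)
    return speed * np.cos(angles - np.radians(direction))

truth = np.array([30.0, 5.0])
R = 0.1 * np.eye(n_obs)                       # observation error covariance
y = forward(truth) + rng.multivariate_normal(np.zeros(n_obs), R)

# prior ensemble over (direction, speed)
theta = np.column_stack([rng.uniform(0, 360, n_ens), rng.uniform(1, 10, n_ens)])
H = np.array([forward(t) for t in theta])     # ensemble of predicted obs

# Kalman gain from ensemble covariances
Ct = np.cov(theta.T, H.T)                     # joint covariance, (2+n_obs)^2
K = Ct[:2, 2:] @ np.linalg.inv(Ct[2:, 2:] + R)

# perturbed-observation analysis step
perturbed = y + rng.multivariate_normal(np.zeros(n_obs), R, n_ens)
theta_post = theta + (perturbed - H) @ K.T
print("posterior mean (direction, speed):", theta_post.mean(axis=0))
```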

  19. Fractional Flow Theory Applicable to Non-Newtonian Behavior in EOR Processes

    NARCIS (Netherlands)

    Rossen, W.R.; Venkatraman, A.; Johns, R.T.; Kibodeaux, K.R.; Lai, H.; Moradi Tehrani, N.

    2011-01-01

    The method of characteristics, or fractional-flow theory, is extremely useful in understanding complex Enhanced Oil Recovery (EOR) processes and in calibrating simulators. One limitation has been its restriction to Newtonian rheology except in rectilinear flow. Its inability to deal with

  20. Process Measurement Deviation Analysis for Flow Rate due to Miscalibration

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Eunsuk; Kim, Byung Rae; Jeong, Seog Hwan; Choi, Ji Hye; Shin, Yong Chul; Yun, Jae Hee [KEPCO Engineering and Construction Co., Deajeon (Korea, Republic of)

    2016-10-15

    An analysis was initiated to identify the root cause of a process flow measurement deviation; the exemption of the high static line pressure correction for differential pressure (DP) transmitters was one major deviation factor, and a miscalibrated DP transmitter range was identified as another. This paper presents considerations to be incorporated in the calibration of process flow measurement instrumentation. The analysis identified that the DP flow transmitter electrical output decreased by 3%; thereafter, the flow rate indication decreased by 1.9%, resulting from the high static line pressure correction exemption and the measurement range miscalibration. After re-calibration, the flow rate indication increased by 1.9%, which is consistent with the analysis result. This paper presents the brief calibration procedures for the Rosemount DP flow transmitter and analyzes three possible cases of measurement deviation, including error and cause. Generally, a DP transmitter is required to be calibrated with a precise process input range according to the calibration procedure provided for the specific DP transmitter. Especially in the case of a DP transmitter installed in high static line pressure, it is important to correct for the high static line pressure effect to avoid the inherent systematic error of the Rosemount DP transmitter. Otherwise, failure to apply the correction may lead to the indication deviating from the actual value.
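
    One piece of the arithmetic generalizes beyond this specific plant: when the flow computer takes the square root of the DP signal (Q proportional to sqrt(DP)), a relative DP error propagates to roughly half that relative error in indicated flow. A small sketch, with illustrative numbers rather than the paper's transmitter data:

```python
# Sketch: how a differential-pressure span error propagates to indicated
# flow when the flow computer takes the square root (Q = k * sqrt(DP)).
# Numbers are illustrative, not the paper's plant data.
def indicated_flow_error(dp_output_error):
    """For Q = k*sqrt(DP), a relative DP error e gives a flow error of
    sqrt(1 + e) - 1, i.e. roughly e/2 for small e."""
    return (1.0 + dp_output_error) ** 0.5 - 1.0

for e in (-0.03, -0.019, 0.03):
    print(f"DP error {e:+.1%} -> flow indication error "
          f"{indicated_flow_error(e):+.2%}")
```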

  1. Soil Heat Flow. Physical Processes in Terrestrial and Aquatic Ecosystems, Transport Processes.

    Science.gov (United States)

    Simpson, James R.

    These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. Soil heat flow and the resulting soil temperature distributions have ecological consequences…

  2. Ventilator flow data predict bronchopulmonary dysplasia in extremely premature neonates

    Directory of Open Access Journals (Sweden)

    Mariann H. Bentsen

    2018-03-01

    Full Text Available Early prediction of bronchopulmonary dysplasia (BPD) may facilitate tailored management for neonates at risk. We investigated whether easily accessible flow data from a mechanical ventilator can predict BPD in neonates born extremely premature (EP). In a prospective population-based study of EP-born neonates, flow data were obtained from the ventilator during the first 48 h of life. Data were logged for >10 min and then converted to flow–volume loops using custom-made software. Tidal breathing parameters were calculated and averaged from ≥200 breath cycles, and data were compared between those who later developed moderate/severe and no/mild BPD. Of 33 neonates, 18 developed moderate/severe and 15 no/mild BPD. The groups did not differ in gestational age, surfactant treatment or ventilator settings. The infants who developed moderate/severe BPD had evidence of less airflow obstruction, significantly so for tidal expiratory flow at 50% of tidal expiratory volume (TEF50) expressed as a ratio of peak tidal expiratory flow (PTEF) (p=0.007). A compound model estimated by multiple logistic regression incorporating TEF50/PTEF, birthweight z-score and sex predicted moderate/severe BPD with good accuracy (area under the curve 0.893, 95% CI 0.735–0.973). This study suggests that flow data obtained from ventilators during the first hours of life may predict later BPD in premature neonates. Future and larger studies are needed to validate these findings and to determine their clinical usefulness.
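
    For concreteness, a simplified Python sketch of the TEF50/PTEF computation on a single sampled expiratory flow trace follows; real breath segmentation from ventilator logs is more involved, and the synthetic breath shape is purely illustrative. The ratio would then feed, together with birthweight z-score and sex, into the logistic model the abstract describes.

```python
# Sketch: TEF50/PTEF from one expiratory flow trace (uniform sampling,
# expiration taken as positive flow). Simplified relative to real
# ventilator breath segmentation.
import numpy as np

def tef50_over_ptef(flow, dt):
    exp_flow = flow[flow > 0]                 # expiratory samples only
    volume = np.cumsum(exp_flow) * dt         # expired volume over time
    i50 = np.searchsorted(volume, 0.5 * volume[-1])   # 50% expired here
    return exp_flow[i50] / exp_flow.max()

# synthetic breath: fast rise to peak flow, then slower decay
t = np.arange(0, 1.0, 0.01)
flow = np.where(t < 0.1, t / 0.1, np.exp(-(t - 0.1) / 0.3)) * 40.0
print(f"TEF50/PTEF = {tef50_over_ptef(flow, 0.01):.2f}")
```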

  3. Formation of a Methodological Approach to Evaluating the State of Management of Enterprise Flow Processes

    Directory of Open Access Journals (Sweden)

    Dzobko Iryna P.

    2016-02-01

    Full Text Available The formation of a methodological approach to evaluating the state of management of enterprise flow processes has been considered. Proceeding from theoretical propositions on the organization of management of enterprise flow processes, developed and presented in the literature, the hypothesis of the study is a correlation of quantitative and qualitative evaluations of management effectiveness and the formation of an integral index on their basis. The article presents the stages of implementation of a methodological approach to evaluating the state of management of enterprise flow processes, which implies indicating the components, their characteristics and methods of research. The composition of indicators, on the basis of which it is possible to evaluate the effectiveness of management of enterprise flow processes, has been determined. Grouping of such indicators based on the flow nature of enterprise processes has been performed. The grouping of indicators is justified by a pairwise determination of canonical correlations between the selected groups (the high correlation coefficients obtained confirmed the author's systematization of indicators). It is shown that the specificity of the formation of a methodological approach to evaluating the state of management of enterprise flow processes requires expansion in the direction of aggregation of the results and determination of factors that influence the effectiveness of flow processes management. The article carries out such aggregation using factor analysis. Distribution of a set of objects into different classes according to the results of the cluster analysis has been presented. To obtain an integral estimation of the effectiveness of flow processes management, a taxonomic index of a multidimensional object has been built. A peculiarity of the formed methodological approach to evaluating the state of management of enterprise flow processes is in the matrix correlation of integral indicators calculated on

  4. Fusion environment sensitive flow and fracture processes

    International Nuclear Information System (INIS)

    1980-01-01

    As a planning activity, the objectives of the workshop were to list, prioritize and milestone the activities necessary to understand, interpret and control the mechanical behavior of candidate fusion reactor alloys. Emphasis was placed on flow and fracture processes which are unique to the fusion environment since the national fusion materials program must evaluate these effects without assistance from other reactor programs

  5. Boostream: a dynamic fluid flow process to assemble nanoparticles at liquid interface

    Science.gov (United States)

    Delléa, Olivier; Lebaigue, Olivier

    2017-12-01

    CEA-LITEN develops an original process called Boostream® to manipulate, assemble and connect micro- or nanoparticles of various materials, sizes, shapes and functions to obtain monolayer colloidal crystals (MCCs). This process uses the upper surface of a liquid film flowing down a ramp to assemble particles, in a manner that is close to the horizontal situation of Langmuir-Blodgett film construction. In the presence of particles at the liquid interface, the film down-flow configuration exhibits an unusual hydraulic jump, which results from the fluid flow accommodating the particle monolayer. In order to master our process, the fluid flow has been modeled and experimentally characterized by optical means, such as the moiré technique, which consists of observing the reflection of a succession of periodic black-and-red fringes on the liquid surface mirror. The fringe images are deformed when reflected by the curved liquid surface associated with the hydraulic jump, the fringe deformation being proportional to the local slope of the surface. This original experimental setup allowed us to obtain the surface profile in the jump region and to measure it along with the main process parameters (liquid flow rate, slope angle, temperature-sensitive fluid properties such as dynamic viscosity or surface tension, and particle sizes). This work presents the experimental setup and its simple model, the different experimental characterization techniques used, and focuses on the way the hydraulic jump depends on the process parameters.

  6. Traffic Flow Prediction Using MI Algorithm and Considering Noisy and Data Loss Conditions: An Application to Minnesota Traffic Flow Prediction

    Directory of Open Access Journals (Sweden)

    Seyed Hadi Hosseini

    2014-10-01

    Full Text Available Traffic flow forecasting is useful for controlling traffic flow, traffic lights, and travel times. This study uses a multi-layer perceptron neural network and the mutual information (MI) technique to forecast traffic flow and compares the prediction results with conventional traffic flow forecasting methods. The MI method is used to calculate the interdependency of historical traffic data and future traffic flow. In numerical case studies, the proposed traffic flow forecasting method was tested against data loss, changes in weather conditions, traffic congestion, and accidents. The outcomes were highly acceptable for all cases and showed the robustness of the proposed flow forecasting method.
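
    A plausible reading of this pipeline, sketched in Python with scikit-learn: score candidate lags of the series by mutual information with the future value, keep the most informative ones, and train a multi-layer perceptron on them. The lag window, network size and synthetic series are assumptions, not the study's configuration.

```python
# Sketch: mutual-information lag selection followed by an MLP forecaster.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
series = np.sin(np.arange(2000) * 2 * np.pi / 96) + 0.1 * rng.standard_normal(2000)

max_lag = 24
X = np.column_stack([series[max_lag - k:-k] for k in range(1, max_lag + 1)])
y = series[max_lag:]

mi = mutual_info_regression(X, y, random_state=0)   # MI of each lag with target
best = np.argsort(mi)[-4:]                          # keep the 4 best lags
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0).fit(X[:, best], y)
print("chosen lags:", best + 1, " R^2:", round(model.score(X[:, best], y), 3))
```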

  7. A Labeled Data Set For Flow-based Intrusion Detection

    NARCIS (Netherlands)

    Sperotto, Anna; Sadre, R.; van Vliet, Frank; Pras, Aiko; Nunzi, Giorgio; Scoglio, Caterina; Li, Xing

    2009-01-01

    Flow-based intrusion detection has recently become a promising security mechanism in high speed networks (1-10 Gbps). Despite the richness in contributions in this field, benchmarking of flow-based IDS is still an open issue. In this paper, we propose the first publicly available, labeled data set

  8. Functional language and data flow architectures

    Science.gov (United States)

    Ercegovac, M. D.; Patel, D. R.; Lang, T.

    1983-01-01

    This is a tutorial article about language and architecture approaches for highly concurrent computer systems based on the functional style of programming. The discussion concentrates on the basic aspects of functional languages, and sequencing models such as data-flow, demand-driven and reduction which are essential at the machine organization level. Several examples of highly concurrent machines are described.

  9. Go With the Flow, on Jupiter and Snow. Coherence from Model-Free Video Data Without Trajectories

    Science.gov (United States)

    AlMomani, Abd AlRahman R.; Bollt, Erik

    2018-06-01

    Viewing a data set such as the clouds of Jupiter, coherence is readily apparent to human observers, especially the Great Red Spot, but also other great storms and persistent structures. There are now many different definitions and perspectives mathematically describing coherent structures, but we will take an image processing perspective here. We describe the inference of coherent sets of a fluidic system directly from image data, without attempting to first model underlying flow fields; this is related to a concept in image processing called motion tracking. In contrast to standard spectral methods for image processing, which are generally related to a symmetric affinity matrix and lead to standard spectral graph theory, we need a non-symmetric affinity, which arises naturally from the underlying arrow of time. We develop an anisotropic, directed diffusion operator corresponding to flow on a directed graph, from a directed affinity matrix developed with coherence in mind, and the corresponding spectral graph theory from the graph Laplacian. Our methodology is not offered as more accurate than other traditional methods of finding coherent sets; rather, our approach works with alternative kinds of data sets, in the absence of a vector field. Our examples include partitioning the weather and cloud structures of Jupiter, and a lake-effect snow event local to Potsdam, NY, on Earth, as well as the benchmark test double-gyre system.

  10. Data processing made simple

    CERN Document Server

    Wooldridge, Susan

    2013-01-01

    Data Processing: Made Simple, Second Edition presents discussions of a number of trends and developments in the world of commercial data processing. The book covers the rapid growth of micro- and mini-computers for both home and office use; word processing and the 'automated office'; the advent of distributed data processing; and the continued growth of database-oriented systems. The text also discusses modern digital computers; fundamental computer concepts; information and data processing requirements of commercial organizations; and the historical perspective of the computer industry. The

  11. Bayesian Nonlinear Assimilation of Eulerian and Lagrangian Coastal Flow Data

    Science.gov (United States)

    2015-09-30

    Dr. Pierre F.J. Lermusiaux, Department of Mechanical Engineering, Center for Ocean Science and Engineering, Massachusetts... Develop and apply theory, schemes and computational systems for rigorous Bayesian nonlinear assimilation of Eulerian and Lagrangian coastal flow data... coastal ocean fields, both in Eulerian and Lagrangian forms. - Further develop and implement our GMM-DO schemes for robust Bayesian nonlinear estimation

  12. PACTOLUS, Nuclear Power Plant Cost and Economics by Discounted Cash Flow Method. CLOTHO, Mass Flow Data Calculation for Program PACTOLUS

    International Nuclear Information System (INIS)

    Haffner, D.R.

    1976-01-01

    1 - Description of problem or function: PACTOLUS is a code for computing nuclear power costs using the discounted cash flow method. The cash flows are generated from input unit costs, time schedules and burnup data. CLOTHO calculates and communicates to PACTOLUS mass flow data to match a specified load factor history. 2 - Method of solution: Plant lifetime power costs are calculated using the discounted cash flow method. 3 - Restrictions on the complexity of the problem - Maxima of: 40 annual time periods into which all costs and mass flows are accumulated, 20 isotopic mass flows charged into and discharged from the reactor model
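
    The discounted-cash-flow core of such a code reduces to a few lines: discount every yearly cash flow and every yearly unit of generation back to present value, and take their ratio as the levelized power cost. A sketch with illustrative inputs, not PACTOLUS's actual data model:

```python
# Sketch of the discounted-cash-flow idea behind PACTOLUS: levelized power
# cost as discounted lifetime costs over discounted lifetime generation.
def levelized_cost(costs, energy, rate):
    """costs[t]: cash flow in year t; energy[t]: generation in year t."""
    disc = [(1 + rate) ** -t for t in range(len(costs))]
    return sum(c * d for c, d in zip(costs, disc)) / \
           sum(e * d for e, d in zip(energy, disc))

capital, fuel, om = 3_000_000.0, 120_000.0, 80_000.0
years = 30
costs = [capital] + [fuel + om] * years          # year 0 build, then operations
energy = [0.0] + [450_000.0] * years             # MWh per year
print(f"levelized cost: {levelized_cost(costs, energy, 0.05):.2f} $/MWh")
```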

  13. A Fault Detection Mechanism in a Data-flow Scheduled Multithreaded Processor

    NARCIS (Netherlands)

    Fu, J.; Yang, Q.; Poss, R.; Jesshope, C.R.; Zhang, C.

    2014-01-01

    This paper designs and implements Redundant Multi-Threading (RMT) in a Data-flow scheduled MultiThreaded (DMT) multicore processor, called Data-flow scheduled Redundant Multi-Threading (DRMT). It also presents Asynchronous Output Comparison (AOC) for RMT techniques to avoid fault detection

  14. MetaboLab - advanced NMR data processing and analysis for metabolomics

    Directory of Open Access Journals (Sweden)

    Günther Ulrich L

    2011-09-01

    Full Text Available Abstract Background Despite the wide-spread use of Nuclear Magnetic Resonance (NMR) in metabolomics for the analysis of biological samples, there is a lack of graphically driven, publicly available software to process large one- and two-dimensional NMR data sets for statistical analysis. Results Here we present MetaboLab, a MATLAB-based software package that facilitates NMR data processing by providing automated algorithms for processing series of spectra in a reproducible fashion. A graphical user interface provides easy access to all steps of data processing via a script builder to generate MATLAB scripts, providing an option to alter code manually. The analysis of two-dimensional spectra (1H,13C-HSQC spectra) is facilitated by the use of a spectral library derived from publicly available databases, which can be extended readily. The software allows the display of specific metabolites in small regions of interest where signals can be picked. To facilitate the analysis of series of two-dimensional spectra, different spectra can be overlaid and assignments can be transferred between spectra. The software includes mechanisms to account for overlapping signals by highlighting neighboring and ambiguous assignments. Conclusions The MetaboLab software is an integrated software package for NMR data processing and analysis, closely linked to the previously developed NMRLab software. It includes tools for batch processing and gives access to a wealth of algorithms available in the MATLAB framework. Algorithms within MetaboLab help to optimize the flow of metabolomics data preparation for statistical analysis. The combination of an intuitive graphical user interface along with advanced data processing algorithms facilitates the use of MetaboLab in a broader metabolomics context.
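
    MetaboLab itself is MATLAB, but the kind of reproducible, scripted batch step it automates can be sketched in a few lines of Python: apply the same apodization, zero-filling and Fourier transform to every free-induction decay in a series. All parameters below are illustrative assumptions.

```python
# Sketch of a reproducible batch-processing step of the MetaboLab kind:
# exponential apodization, zero-filling and FFT applied identically to a
# series of 1D FIDs (synthetic data; parameters are assumptions).
import numpy as np

def process_fid(fid, sw_hz, lb_hz=0.3, zf=2):
    """fid: complex free-induction decay; sw_hz: spectral width;
    lb_hz: exponential line broadening; zf: zero-filling factor."""
    t = np.arange(fid.size) / sw_hz
    apodized = fid * np.exp(-np.pi * lb_hz * t)
    return np.fft.fftshift(np.fft.fft(apodized, n=zf * fid.size))

rng = np.random.default_rng(3)
fids = [np.exp(2j * np.pi * 400 * np.arange(4096) / 8000.0 -
               np.arange(4096) / 1000.0) + 0.01 * rng.standard_normal(4096)
        for _ in range(10)]
spectra = [process_fid(f, sw_hz=8000.0) for f in fids]  # same steps every time
print(len(spectra), spectra[0].shape)
```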

  15. Coaching, lean processes and the concept of flow

    DEFF Research Database (Denmark)

    Skytte Gørtz, Kim Erik

    2008-01-01

    The chapter takes us inside Nordea Bank to look at how coaching was used to support leadership development as the bank underwent the implementation of a major change effort. Drawing on the literature on Lean processes, flow and coaching, it demonstrates some of the challenges and opportunities...

  16. A Data Flow Model to Solve the Data Distribution Changing Problem in Machine Learning

    Directory of Open Access Journals (Sweden)

    Shang Bo-Wen

    2016-01-01

    Full Text Available Continuous prediction is widely used in broad communities spreading from social to business applications, and machine learning methods are an important part of this problem. When we use machine learning to predict, we use the data in the training set to fit the model and estimate the distribution of data in the test set. But when we use machine learning for continuous prediction, we get new data as time goes by and use those data to predict future data, and a problem may arise. As the size of the data set increases over time, the distribution changes and there will be much garbage data in the training set. We should remove the garbage data, as they reduce the accuracy of the prediction. The main contribution of this article is using new data to detect the timeliness of historical data and remove the garbage data. We build a data flow model to describe how the data flow among the test set, training set, validation set and the garbage set, and improve the accuracy of prediction. As the data set changes, the best machine learning model will change. We design a hybrid voting algorithm to fit the data set better; it uses seven machine learning models to predict the same problem and uses the validation set to put different weights on the learning models, giving better models more weight. Experimental results show that, when the distribution of the data set changes over time, our time flow model can remove most of the garbage data and get a better result than the traditional method that adds all the data to the data set; our hybrid voting algorithm has a better prediction result than the average accuracy of the other prediction models
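
    The weighting rule behind such a hybrid vote can be sketched simply: fit several models, weight each model's class probabilities by its validation score, and take the argmax of the weighted sum. The sketch below uses three scikit-learn models rather than the paper's seven, and the weighting-by-accuracy rule is an assumption.

```python
# Sketch of a validation-weighted ensemble in the spirit of "hybrid voting".
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5,
                                            random_state=0)

models = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(),
          RandomForestClassifier()]
weights = []
for m in models:
    m.fit(X_tr, y_tr)
    weights.append(m.score(X_val, y_val))      # better models, bigger weights

votes = sum(w * m.predict_proba(X_te) for w, m in zip(weights, models))
accuracy = (votes.argmax(axis=1) == y_te).mean()
print(f"weighted-vote accuracy: {accuracy:.3f}")
```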

  17. Effects of non-homogeneous flow on ADCP data processing in a hydroturbine forebay

    Energy Technology Data Exchange (ETDEWEB)

    Harding, S. F.; Richmond, M. C.; Romero-Gomez, P.; Serkowski, J. A.

    2016-12-01

    Observations of the flow conditions in the forebay of a hydroelectric power station indicate significant regions of non-homogeneous velocities near the intakes and shoreline. The effect of these non-homogeneous regions on the velocity measurement of an acoustic Doppler current profiler (ADCP) is investigated. By using a numerical model of an ADCP operating in a velocity field calculated using computational fluid dynamics (CFD), the errors due to the spatial variation of the flow velocity are identified. The numerical model of the ADCP is referred to herein as a Virtual ADCP (VADCP). Two scenarios are modeled in the numerical analyses presented. Firstly, the measurement error of the VADCP is calculated for a single instrument adjacent to the short converging intake of the powerhouse. Secondly, the flow discharge through the forebay is estimated from a transect of VADCP instruments at different distances from the powerhouse. The influence of instrument location and orientation is investigated for both cases. A velocity error of up to 94% of the reference velocity is calculated for a VADCP modeled adjacent to an operating intake. Qualitative agreement is observed between the calculated VADCP velocities and reference velocities at an offset of one intake height upstream of the powerhouse.

  18. On the self-organizing process of large scale shear flows

    Energy Technology Data Exchange (ETDEWEB)

    Newton, Andrew P. L. [Department of Applied Maths, University of Sheffield, Sheffield, Yorkshire S3 7RH (United Kingdom); Kim, Eun-jin [School of Mathematics and Statistics, University of Sheffield, Sheffield, Yorkshire S3 7RH (United Kingdom); Liu, Han-Li [High Altitude Observatory, National Centre for Atmospheric Research, P. O. BOX 3000, Boulder, Colorado 80303-3000 (United States)

    2013-09-15

    Self-organization is invoked as a paradigm to explore the processes governing the evolution of shear flows. By examining the probability density function (PDF) of the local flow gradient (shear), we show that shear flows reach a quasi-equilibrium state as the growth of shear is balanced by shear relaxation. Specifically, the PDFs of the local shear are calculated numerically and analytically in reduced 1D and 0D models, where the PDFs are shown to converge to a bimodal distribution in the case of finite correlated temporal forcing. This bimodal PDF is then shown to be reproduced in nonlinear simulations of 2D hydrodynamic turbulence. Furthermore, the bimodal PDF is demonstrated to result from a self-organizing shear flow with a linear profile. A similar bimodal structure and linear profile of the shear flow are observed in the Gulf Stream, suggesting self-organization.

  19. Numerical simulations of rarefied gas flows in thin film processes

    NARCIS (Netherlands)

    Dorsman, R.

    2007-01-01

    Many processes exist in which a thin film is deposited from the gas phase, e.g. Chemical Vapor Deposition (CVD). These processes are operated at ever decreasing reactor operating pressures and with ever decreasing wafer feature dimensions, reaching into the rarefied flow regime. As numerical

  20. Experimental measurement of oil–water two-phase flow by data fusion of electrical tomography sensors and venturi tube

    International Nuclear Information System (INIS)

    Liu, Yinyan; Deng, Yuchi; Zhang, Maomao; Yu, Peining; Li, Yi

    2017-01-01

    Oil–water two-phase flows are commonly found in the production processes of the petroleum industry. Accurate online measurement of flow rates is crucial to ensure the safety and efficiency of oil exploration and production. A research team from Tsinghua University has developed an experimental apparatus for multiphase flow measurement based on an electrical capacitance tomography (ECT) sensor, an electrical resistance tomography (ERT) sensor, and a venturi tube. This work presents the phase fraction and flow rate measurements of oil–water two-phase flows based on the developed apparatus. Full-range phase fraction can be obtained by the combination of the ECT sensor and the ERT sensor. By data fusion of differential pressures measured by venturi tube and the phase fraction, the total flow rate and single-phase flow rate can be calculated. Dynamic experiments were conducted on the multiphase flow loop in horizontal and vertical pipelines and at various flow rates. (paper)
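
    The fusion step can be illustrated with the standard venturi relation: the tomography-derived water fraction fixes the mixture density, the differential pressure then gives the total volumetric flow, and (under a homogeneous no-slip assumption) the phase fraction splits it into single-phase rates. Geometry and fluid properties below are illustrative, not the Tsinghua apparatus:

```python
# Sketch: total volumetric flow from a venturi DP reading, with mixture
# density taken from a tomography-derived water fraction.
import math

def venturi_flow(dp_pa, alpha_water, d_pipe=0.05, d_throat=0.025,
                 rho_water=1000.0, rho_oil=850.0, cd=0.98):
    rho_mix = alpha_water * rho_water + (1 - alpha_water) * rho_oil
    beta = d_throat / d_pipe
    a_throat = math.pi * d_throat**2 / 4
    q = cd * a_throat * math.sqrt(2 * dp_pa / (rho_mix * (1 - beta**4)))
    # splitting by phase fraction assumes homogeneous, no-slip flow
    return q, q * alpha_water, q * (1 - alpha_water)

q, qw, qo = venturi_flow(dp_pa=12_000.0, alpha_water=0.6)
print(f"total {q*3600:.2f} m3/h, water {qw*3600:.2f}, oil {qo*3600:.2f}")
```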

  1. ATLAS DataFlow Infrastructure recent results from ATLAS cosmic and first-beam data-taking

    CERN Document Server

    Vandelli, W

    2010-01-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to mass storage. Several optimized and multi-threaded applications fulfill this purpose, operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to efficiently transport event data with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. Routing and streaming capabilities, as well as monitoring and data accounting functionalities, are also fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented testbed for the evaluation of the performance of the ATLAS DataFlow, in terms of functionality, robustness and stability. Besides, operating the system far from its design specifications helped in exercising its fle...

  2. 3D-CFD Simulation of Confined Cross-Flow Injection Process Using Single Piston Pump

    Directory of Open Access Journals (Sweden)

    M. Elashmawy

    2017-12-01

    Full Text Available The injection process into a confined cross-flow is quite important for many applications, including chemical engineering and water desalination technology. The aim of this study is to investigate the performance of the injection process into a confined cross-flow of a round pipe using a single-piston injection pump. A computational fluid dynamics (CFD) analysis has been carried out to investigate the effect of the locations of the maximum velocity and minimum pressure on the confined cross-flow process. The jet trajectory is analyzed and related to the injection pump shaft angle of rotation during the injection duty cycle by focusing on the maximum instant injection flow of the piston action. Results indicate a low effect of the jet trajectory within the range related to the injection pump operational conditions. A constant cross-flow was used and the injection flow was altered to vary the jet-to-line flow ratio (QR). The maximum jet trajectory exhibits low penetration inside the cross-flow. The results showed three regions of flow ratio effect zones with different behaviors. Results also showed that getting closer to the injection port causes a significant decrease in the locations of the maximum velocity and minimum pressure.

  3. Data File Standard for Flow Cytometry, version FCS 3.1.

    Science.gov (United States)

    Spidlen, Josef; Moore, Wayne; Parks, David; Goldberg, Michael; Bray, Chris; Bierre, Pierre; Gorombey, Peter; Hyun, Bill; Hubbard, Mark; Lange, Simon; Lefebvre, Ray; Leif, Robert; Novo, David; Ostruszka, Leo; Treister, Adam; Wood, James; Murphy, Robert F; Roederer, Mario; Sudar, Damir; Zigon, Robert; Brinkman, Ryan R

    2010-01-01

    The flow cytometry data file standard provides the specifications needed to completely describe flow cytometry data sets within the confines of the file containing the experimental data. In 1984, the first Flow Cytometry Standard format for data files was adopted as FCS 1.0. This standard was modified in 1990 as FCS 2.0 and again in 1997 as FCS 3.0. We report here on the next generation flow cytometry standard data file format. FCS 3.1 is a minor revision based on suggested improvements from the community. The unchanged goal of the standard is to provide a uniform file format that allows files created by one type of acquisition hardware and software to be analyzed by any other type. The FCS 3.1 standard retains the basic FCS file structure and most features of previous versions of the standard. Changes included in FCS 3.1 address potential ambiguities in the previous versions and provide a more robust standard. The major changes include simplified support for international characters and improved support for storing compensation. The major additions are support for preferred display scale, a standardized way of capturing the sample volume, information about originality of the data file, and support for plate and well identification in high throughput, plate based experiments. Please see the normative version of the FCS 3.1 specification in Supporting Information for this manuscript (or at http://www.isac-net.org/ in the Current standards section) for a complete list of changes.
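
    For orientation, a minimal reader for the fixed-layout HEADER and delimited TEXT segment is sketched below, following the published FCS byte offsets; escaped delimiters, the supplemental TEXT segment and all error handling are omitted, so treat it as a sketch rather than a conformant parser.

```python
# Sketch: read the HEADER and TEXT segments of an FCS file. Offsets follow
# the published FCS layout (version at bytes 0-5, then six 8-byte ASCII
# offset fields); escaped delimiters and error handling are omitted.
def read_fcs_text(path):
    with open(path, "rb") as f:
        header = f.read(58)
        version = header[0:6].decode("ascii")
        text_start = int(header[10:18].decode("ascii"))
        text_end = int(header[18:26].decode("ascii"))
        f.seek(text_start)
        text = f.read(text_end - text_start + 1).decode("utf-8")
    delim = text[0]                       # first byte of TEXT is the delimiter
    parts = text[1:].split(delim)
    keywords = dict(zip(parts[::2], parts[1::2]))   # key/value pairs
    return version, keywords

# version, kw = read_fcs_text("sample.fcs")
# print(version, kw.get("$TOT"), kw.get("$PAR"))    # events and parameters
```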

  4. Metal droplet erosion and shielding plasma layer under plasma flows typical of transient processes in tokamaks

    Energy Technology Data Exchange (ETDEWEB)

    Martynenko, Yu. V., E-mail: Martynenko-YV@nrcki.ru [National Research Nuclear University “MEPhI” (Russian Federation)

    2017-03-15

    It is shown that the shielding plasma layer and metal droplet erosion in tokamaks are closely interrelated, because shielding plasma forms from the evaporated metal droplets, while droplet erosion is caused by the shielding plasma flow over the melted metal surface. Analysis of experimental data and theoretical models of these processes is presented.

  5. The thermodynamic quantity minimized in steady heat and fluid flow processes: A control volume approach

    International Nuclear Information System (INIS)

    Sahin, Ahmet Z.

    2012-01-01

    Highlights: ► The optimality in both heat and fluid flow systems has been investigated. ► A new thermodynamic property has been introduced. ► The second law of thermodynamics was extended to present the temheat balance that included the temheat destruction. ► The principle of temheat destruction minimization was introduced. ► It is shown that the rate of total temheat destruction is minimized in steady heat conduction and fluid flow problems. - Abstract: Heat transfer and fluid flow processes exhibit similarities as they occur naturally and are governed by the same type of differential equations. Natural phenomena occur always in an optimum way. In this paper, the natural optimality that exists in the heat transfer and fluid flow processes is investigated. In this regard, heat transfer and fluid flow problems are treated as optimization problems. We discovered a thermodynamic quantity that is optimized during the steady heat transfer and fluid flow processes. Consequently, a new thermodynamic property, the so called temheat, is introduced using the second law of thermodynamics and the definition of entropy. It is shown, through several examples, that overall temheat destruction is always minimized in steady heat and fluid flow processes. The principle of temheat destruction minimization that is based on the temheat balance equation provides a better insight to understand how the natural flow processes take place.

  6. Morphometric differences in debris flow and mixed flow fans in eastern Death Valley, CA

    Science.gov (United States)

    Wasklewicz, T. A.; Whitworth, J.

    2004-12-01

    Geomorphological features are best examined through direct measurement and parameterization of accurate topographic data. Fine-scale data are therefore required to produce a complete set of elevation data. Airborne Laser Swath Mapping (ALSM) data provide high-resolution data over large, spatially continuous areas. The National Center for Airborne Laser Mapping (NCALM) collected ALSM data for an area along the eastern side of Death Valley extending from slightly north of Badwater to Mormon Point. The raw ALSM data were post-processed and delivered by NCALM in one-meter grid nodes that we converted to one-meter raster data sets. ALSM data are used to assess variations in the dimensions of surficial features found in 32 alluvial fans (21 debris flow and 11 mixed flow fans). Planimetric curvature of the fan surfaces is used to develop a topographic signature to distinguish debris flow from mixed flow fans. These two groups of fans are identified from field analysis of near-vertical exposures along channels as well as surficial exposures at proximal, medial, and distal fan locations. One group of fans exhibited debris flow characteristics (DF), while the second group contained a mixture of fluid and debris flows (MF). Local planimetric curvature of the alluvial fan surfaces was derived from the one-meter DEM. The local curvature data were reclassified into concave and convex features. This sequence corresponds to two broad classes of fan features: channels and interfluves. Thirty random points were generated inside each fan polygon. The length of the nearest concave-convex (channel-interfluve) couplet was measured at each point, and the percentage of convex and concave pixels in a 10 m box centered on the random point was also recorded. Plots and statistical analyses of the data show a clear indication that local planimetric curvature can be used as a topographic signature to distinguish between the varying formative processes in alluvial fans. Significant differences in the
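
    The concave/convex reclassification step can be sketched with a sign test on a discrete curvature operator; the Laplacian used below is a simple stand-in for the paper's planimetric-curvature calculation, and the synthetic DEM is illustrative.

```python
# Sketch: classify a 1 m DEM into locally convex (interfluve) and concave
# (channel) cells via the sign of the Laplacian, then sample the convex
# fraction in a 10 m window around a random point.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
dem = ndimage.gaussian_filter(rng.standard_normal((200, 200)), sigma=8)

lap = ndimage.laplace(dem)                 # positive: concave up (channels),
convex = lap < 0                           # negative: convex (interfluves)

# fraction of convex cells in a 10 m (10 px) window around a random point
r, c = rng.integers(5, 195, size=2)
window = convex[r - 5:r + 5, c - 5:c + 5]
print(f"convex overall: {convex.mean():.2f}, in window: {window.mean():.2f}")
```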

  7. Modeling post-wildfire hydrological processes with ParFlow

    Science.gov (United States)

    Escobar, I. S.; Lopez, S. R.; Kinoshita, A. M.

    2017-12-01

    Wildfires alter the natural processes within a watershed, such as surface runoff, evapotranspiration rates, and subsurface water storage. Post-fire hydrologic models are typically one-dimensional, empirically-based models or two-dimensional, conceptually-based models with lumped parameter distributions. These models are useful for modeling and predictions at the watershed outlet; however, they do not provide detailed, distributed hydrologic processes at the point scale within the watershed. This research uses ParFlow, a three-dimensional, distributed hydrologic model, to simulate post-fire hydrologic processes by representing the spatial and temporal variability of soil burn severity (via hydrophobicity) and vegetation recovery. Using this approach, we are able to evaluate the change in post-fire water components (surface flow, lateral flow, baseflow, and evapotranspiration). This work builds upon previous field and remote sensing analysis conducted for the 2003 Old Fire Burn in Devil Canyon, located in southern California (USA). This model is initially developed for a hillslope defined by a 500 m by 1000 m lateral extent. The subsurface reaches 12.4 m and is assigned a variable cell thickness to explicitly consider soil burn severity throughout the stages of recovery and vegetation regrowth. We consider four slope and eight hydrophobic layer configurations. Evapotranspiration is used as a proxy for vegetation regrowth and is represented by the satellite-based Simplified Surface Energy Balance (SSEBOP) product. The pre- and post-fire surface runoff, subsurface storage, and surface storage interactions are evaluated at the point scale. Results will be used as a basis for developing and fine-tuning a watershed-scale model. Long-term simulations will advance our understanding of post-fire hydrological partitioning between water balance components and the spatial variability of watershed processes, providing improved guidance for post-fire watershed management. In reference

  8. Fusion of product and process data: Batch-mode and real-time streaming

    Energy Technology Data Exchange (ETDEWEB)

    Vincent De Sapio; Spike Leonard

    1999-12-01

    In today's DP product realization enterprise it is imperative to reduce the design-to-fabrication cycle time and cost while improving the quality of DP parts (reducing defects). Much of this challenge resides in the inherent gap between the product and process worlds. The lack of seamless, bi-directional flow of information prevents true concurrency in the product realization world. This report addresses a framework for product-process data fusion to help achieve next-generation product realization. A fundamental objective is to create an open environment for multichannel observation of process data, and subsequent mapping of that data onto product geometry. In addition to the sensor-based observation of manufacturing processes, model-based process data provide an important complement to empirically acquired data. Two basic groups of manufacturing models are process physics, and machine kinematics and dynamics. Process physics addresses analytical models that describe the physical phenomena of the process itself. Machine kinematic and dynamic models address the mechanical behavior of the processing equipment. As a secondary objective, an attempt has been made in this report to address part of the model-based realm through the development of an open object-oriented library and toolkit for machine kinematics and dynamics. Ultimately, it is desirable to integrate design definition with all types of process data, both sensor-based and model-based. Collectively, the goal is to allow all disciplines within the product realization enterprise to have a centralized medium for the fusion of product and process data.

  9. Experiment 2-B data processing system

    International Nuclear Information System (INIS)

    Price, H.; Svrcek, F.

    1976-08-01

    A new set of programs has been written for the analysis of data from the Fermilab 30-inch bubble chamber--wide gap spark chamber hybrid system. This report describes those programs, provides operating instructions, and indicates how they fit into the overall data flow

  10. Overland flow generation processes in sub-humid Mediterranean forest stands

    Science.gov (United States)

    Ferreira, A. J. D.; Ferreira, C. S. S.; Coelho, C. O. A.; Walsh, R. P. D.; Shakesby, R. A.

    2012-04-01

    Forest soils in north and central Portugal have suffered and continue to suffer major structural changes as a result of forest management techniques such as clear-felling, as a result of wildfire, and as a result of rip-ploughing, which is carried out to prepare the ground for planting tree seedlings. In soils that have undergone these changes, the characteristics tend to differ between coniferous plantations, where the root system tends to die when the trees are cut following fire and may subsequently be consumed by fire to form a macropore network, and other types of tree plantations, where the root system remains alive and allows regrowth from the sawn tree stumps. Overland flow thresholds decrease sharply as a result of rip-ploughing and forest fires and increase following clear-felling. The time taken for trees to reach maturity after wildfire differs markedly between stands of the two main species (Pinus pinaster Aiton and Eucalyptus globulus Labill.). In this paper, overland flow is considered in relation to rainfall, throughfall and throughflow, both in terms of hydrology and hydrochemistry, in an attempt to understand overland flow generation mechanisms for a variety of forest land uses (mature pine and eucalyptus, pine seedling regrowth and eucalyptus regrowth from tree stumps, eucalyptus plantations and burned pine). Overland flow generation processes change sharply, even within a single rainfall event, as reflected in the soil hydrological processes and the hydrochemical fingerprints. These effects result from the different contact times for water and soil, which cause differences in the absorption and exudation processes for the two species.

  11. Data near processing support for climate data analysis

    Science.gov (United States)

    Kindermann, Stephan; Ehbrecht, Carsten; Hempelmann, Nils

    2016-04-01

    Climate data repositories grow exponentially in size. Scalable data-near processing capabilities are required to meet future data analysis requirements and to replace current "data download and process at home" workflows and approaches. On the one hand, these processing capabilities should be accessible via standardized interfaces (e.g. OGC WPS); on the other, a large variety of processing tools, toolboxes and deployment alternatives have to be supported and maintained at the data/processing center. We present a community approach: a modular and flexible system supporting the development, deployment and maintenance of OGC-WPS-based web processing services. This approach is organized in an open-source GitHub project (called "bird-house") supporting individual processing services ("birds", e.g. climate index calculations, model data ensemble calculations), which rely on basic common infrastructural components (e.g. installation and deployment recipes, analysis code dependency management). To support easy deployment at data centers as well as home institutes (e.g. for testing and development), the system supports the management of the often very complex package dependency chains of climate data analysis packages as well as Docker-based packaging and installation. We present a concrete deployment scenario at the German Climate Computing Center (DKRZ). DKRZ on the one hand hosts a multi-petabyte climate archive, which is integrated into, e.g., the European ENES and worldwide ESGF data infrastructures, and on the other hand hosts an HPC center supporting (model) data production and data analysis. The deployment scenario also includes OpenStack-based data cloud services to support data import and data distribution for bird-house-based WPS web processing services. Current challenges for inter-institutional deployments of web processing services supporting the European and international climate modeling community as well as the climate impact community are highlighted
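Because the services expose the standard OGC WPS interface, any HTTP client can talk to them. A minimal sketch using plain key-value-pair requests follows; the endpoint URL and process identifier are placeholders, not actual bird-house service names.

```python
import requests

base = "https://example.org/wps"   # placeholder; a bird-house service exposes this standard interface

# Ask the service what it offers, then describe one process.
caps = requests.get(base, params={"service": "WPS", "version": "1.0.0",
                                  "request": "GetCapabilities"})
desc = requests.get(base, params={"service": "WPS", "version": "1.0.0",
                                  "request": "DescribeProcess",
                                  "identifier": "climate_index_su"})  # identifier is illustrative
print(caps.status_code, desc.status_code)
```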

  12. Modeling studies of multiphase fluid and heat flow processes in nuclear waste isolation

    International Nuclear Information System (INIS)

    Pruess, K.

    1989-01-01

    Multiphase fluid and heat flow plays an important role in many problems relating to the disposal of nuclear wastes in geologic media. Examples include boiling and condensation processes near heat-generating wastes, flow of water and formation gas in partially saturated formations, evolution of a free gas phase from waste package corrosion in initially water-saturated environments, and redistribution (dissolution, transport and precipitation) of rock minerals in non-isothermal flow fields. Such processes may strongly impact upon waste package and repository design considerations and performance. This paper summarizes important physical phenomena occurring in multiphase and nonisothermal flows, as well as techniques for their mathematical modeling and numerical simulation. Illustrative applications are given for a number of specific fluid and heat flow problems, including: thermohydrologic conditions near heat-generating waste packages in the unsaturated zone; repositorywide convection effects in the unsaturated zone; effects of quartz dissolution and precipitation for disposal in the saturated zone; and gas pressurization and flow effects from corrosion of low-level waste packages

  13. The U.S. Geological Survey Peak-Flow File Data Verification Project, 2008–16

    Science.gov (United States)

    Ryberg, Karen R.; Goree, Burl B.; Williams-Sether, Tara; Mason, Robert R.

    2017-11-21

    Annual peak streamflow (peak flow) at a streamgage is defined as the maximum instantaneous flow in a water year. A water year begins on October 1 and continues through September 30 of the following year; for example, water year 2015 extends from October 1, 2014, through September 30, 2015. The accuracy, characterization, and completeness of the peak streamflow data are critical in determining flood-frequency estimates that are used daily to design water and transportation infrastructure, delineate flood-plain boundaries, and regulate development and utilization of lands throughout the United States, and are essential to understanding the implications of climate and land-use change on flooding and high-flow conditions. As of November 14, 2016, peak-flow data existed for 27,240 unique streamgages in the United States and its territories. The data, collectively referred to as the “peak-flow file,” are available as part of the U.S. Geological Survey (USGS) public web interface, the National Water Information System, at https://nwis.waterdata.usgs.gov/usa/nwis/peak. Although the data have been routinely subjected to periodic review by the USGS Office of Surface Water and screening at the USGS Water Science Center level, these data were not reviewed in a national, systematic manner until 2008, when automated scripts were developed and applied to detect potential errors in peak-flow values and their associated dates, gage heights, and peak-flow qualification codes, as well as qualification codes associated with the gage heights. USGS scientists and hydrographers studied the resulting output, accessed basic records and field notes, and corrected observed errors or, more commonly, confirmed existing data as correct. This report summarizes the changes in peak-flow file data at a national level, illustrates their nature and causation, and identifies the streamgages affected by these changes. Specifically, the peak-flow data were compared for streamgages with peak flow
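Screening of this kind reduces to simple rule-based checks over the record. The sketch below is merely in the spirit of the automated scripts described above; the record format, field names, and thresholds are invented for illustration.

```python
from datetime import date

def water_year(d: date) -> int:
    # A water year runs October 1 through September 30 and is labeled by its ending year.
    return d.year + 1 if d.month >= 10 else d.year

def screen_peaks(peaks):
    """Flag potential errors in (stated_water_year, date, flow_cfs) records.
    Checks and field names are illustrative, not the actual USGS scripts."""
    flags = []
    prev_flow = None
    for wy, d, q in peaks:
        if water_year(d) != wy:
            flags.append((wy, "peak date outside stated water year"))
        if q is None or q < 0:
            flags.append((wy, "missing or negative peak flow"))
        elif prev_flow and max(q, prev_flow) / max(min(q, prev_flow), 1e-9) > 100:
            flags.append((wy, "peak differs from prior year by >100x; verify"))
        prev_flow = q
    return flags

print(screen_peaks([(2015, date(2015, 5, 3), 1200.0),
                    (2016, date(2016, 10, 15), 0.9)]))  # second date falls in WY2017
```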

  14. Analysis of Optimal Process Flow Diagrams of Light Naphtha Isomerization Process by Mathematic Modelling Method

    OpenAIRE

    Chuzlov, Vyacheslav Alekseevich; Molotov, Konstantin

    2016-01-01

    An approach to the simulation of catalytic reactors for hydrocarbon refining processes is presented. Kinetic and thermodynamic research on the light naphtha isomerization process was conducted. The kinetic parameters of the chemical conversion of hydrocarbon feedstock on different types of platinum-containing catalysts were established. The efficiency of including different types of isomerization technologies in the oil refinery flow diagram was estimated.

  15. Non-equilibrium reacting gas flows kinetic theory of transport and relaxation processes

    CERN Document Server

    Nagnibeda, Ekaterina; Nagnibeda, Ekaterina

    2009-01-01

    This volume develops the kinetic theory of transport phenomena and relaxation processes in the flows of reacting gas mixtures. The theory is applied to the modeling of non-equilibrium flows behind strong shock waves, in the boundary layer, and in nozzles.

  16. Optimization of mass flow rate in RGTT200K coolant purification for Carbon Monoxide conversion process

    International Nuclear Information System (INIS)

    Sumijanto; Sriyono

    2016-01-01

    Carbon monoxide is difficult to separate from the helium reactor coolant because of its relatively small molecular size, so it must be converted to carbon dioxide. The rate of conversion of carbon monoxide in the purification system is influenced by several parameters, including concentration, temperature and mass flow rate. In this research, the mass flow rate in the RGTT200K coolant purification system was optimized for the carbon monoxide conversion process. The optimization was carried out using the Super Pro Designer software. The depletion rates of the reactant species and the growth rates of the product species at conversion-reaction equilibrium were analyzed to derive the optimal purification mass flow rate for the carbon monoxide conversion process. The purpose of this study is to find the purification mass flow rate in preparation for the basic design of the RGTT200K helium coolant purification system. The analysis showed that a helium mass flow rate of 0.6 kg/second resulted in a suboptimal conversion process. The optimal conversion process was reached at a mass flow rate of 1.2 kg/second. Flow rates of 3.6 kg/second to 12 kg/second resulted in an ineffective process. For supporting the basic design of the RGTT200K helium purification system, a mass flow rate of 1.2 kg/second is suggested for the carbon monoxide conversion process. (author)
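To see one side of why conversion depends on the mass flow rate, consider a toy first-order conversion model in which residence time in the purification bed scales inversely with flow rate. The rate constant and bed constant below are assumptions for illustration, not RGTT200K design data, and this sketch captures only the residence-time effect; the reported optimum at 1.2 kg/second also balances purification throughput.

```python
import numpy as np

# Toy model: first-order CO -> CO2 conversion, X = 1 - exp(-k * tau), with
# residence time tau inversely proportional to the helium mass flow rate.
k = 2.0            # effective rate constant, 1/s (assumed)
bed_const = 1.2    # tau * mdot, kg (assumed)

for mdot in [0.6, 1.2, 3.6, 12.0]:   # kg/s, the flow rates discussed above
    tau = bed_const / mdot
    X = 1.0 - np.exp(-k * tau)
    print(f"mdot = {mdot:5.1f} kg/s  ->  CO conversion X = {X:.2f}")
```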

  17. Effect of river flow fluctuations on riparian vegetation dynamics: Processes and models

    Science.gov (United States)

    Vesipa, Riccardo; Camporeale, Carlo; Ridolfi, Luca

    2017-12-01

    Several decades of field observations, laboratory experiments and mathematical modeling have demonstrated that the riparian environment is a disturbance-driven ecosystem, and that the main source of disturbance is river flow fluctuations. The focus of the present work has been on the key role that flow fluctuations play in determining the abundance, zonation and species composition of patches of riparian vegetation. To this aim, the scientific literature on the subject over the last 20 years has been reviewed. First, the most relevant ecological, morphological and chemical mechanisms induced by river flow fluctuations are described from a process-based perspective. The role of flow variability is discussed for the processes that affect the recruitment of vegetation, the vegetation during its adult life, and the morphological and nutrient dynamics occurring in the riparian habitat. Particular emphasis has been given to studies that were aimed at quantifying the effect of these processes on vegetation, and at linking them to the statistical characteristics of the river hydrology. Second, the advances made from a modeling point of view have been considered and discussed. The main models that have been developed to describe the dynamics of riparian vegetation have been presented. Different modeling approaches have been compared, and the corresponding advantages and drawbacks have been pointed out. Finally, attention has been paid to identifying the processes considered by the models, and these processes have been compared with those that have actually been observed or measured in field/laboratory studies.

  18. Distribution flow: a general process in the top layer of water repellent soils

    NARCIS (Netherlands)

    Ritsema, C.J.; Dekker, L.W.

    1995-01-01

    Distribution flow is the process of water and solute flowing in a lateral direction over and through the very first millimetre or centimetre of the soil profile. A potassium bromide tracer was applied in two water-repellent sandy soils to follow the actual flow paths of water and solutes in the

  19. Modeling a novel glass immobilization waste treatment process using FLOW

    International Nuclear Information System (INIS)

    Ferrada, J.J.; Nehls, J.W. Jr.; Welch, T.D.; Giardina, J.L.

    1996-01-01

    One option for control and disposal of surplus fissile materials is the Glass Material Oxidation and Dissolution System (GMODS), a process developed at ORNL for directly converting Pu-bearing material into a durable high-quality glass waste form. This paper presents a preliminary assessment of the GMODS process flowsheet using FLOW, a chemical process simulator. The simulation showed that the glass chemistry postulated in the models has acceptable levels of risk.

  20. Analysis of pressure-flow data in terms of computer-derived urethral resistance parameters.

    Science.gov (United States)

    van Mastrigt, R; Kranse, M

    1995-01-01

    The simultaneous measurement of detrusor pressure and flow rate during voiding is at present the only way to measure or grade infravesical obstruction objectively. Numerous methods have been introduced to analyze the resulting data. These methods differ in aim (measurement of urethral resistance and/or diagnosis of obstruction), method (manual versus computerized data processing), theory or model used, and resolution (continuously variable parameters or a limited number of classes, the so-called nomogram). In this paper, some aspects of these fundamental differences are discussed and illustrated. Subsequently, the properties and clinical performance of two computer-based methods for deriving continuous urethral resistance parameters are treated.
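One common family of continuous resistance parameters fits a quadratic pressure-flow relation, p_det = p0 + c·Q², to the simultaneous pressure and flow samples. The sketch below performs such a least-squares fit on invented data; it illustrates the general approach, not the two specific computer-based methods evaluated in the paper.

```python
import numpy as np

# Quadratic urethral resistance relation commonly used in urodynamics:
#   p_det = p0 + c * Q**2
# p0 approximates the opening pressure, c the resistance; both are
# continuous parameters derived from the pressure-flow samples.
Q = np.array([2.0, 4.0, 6.0, 8.0, 10.0])       # flow rate, ml/s (example data)
p = np.array([42.0, 48.0, 58.0, 74.0, 93.0])   # detrusor pressure, cmH2O

A = np.column_stack([np.ones_like(Q), Q**2])
(p0, c), *_ = np.linalg.lstsq(A, p, rcond=None)
print(f"p0 = {p0:.1f} cmH2O, c = {c:.2f} cmH2O/(ml/s)^2")
```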

  1. Carolinas Coastal Change Processes Project data report for nearshore observations at Cape Hatteras, North Carolina

    Science.gov (United States)

    Armstrong, Brandy N.; Warner, John C.; Voulgaris, George; List, Jeffrey H.; Thieler, Robert; Martini, Marinna A.; Montgomery, Ellyn T.; McNinch, Jesse E.; Book, Jeffrey W.; Haas, Kevin

    2013-01-01

    An oceanographic field study conducted in February 2010 investigated processes that control nearshore flow and sediment transport dynamics at Cape Hatteras, North Carolina. This report describes the project background, field program, instrumentation setup, and locations of the sensor deployments. The data collected, and supporting meteorological and streamflow observations, are presented as time-series plots for data visualization. Additionally, the data are available as part of this report.

  2. HYDICE postflight data processing

    Science.gov (United States)

    Aldrich, William S.; Kappus, Mary E.; Resmini, Ronald G.; Mitchell, Peter A.

    1996-06-01

    The hyperspectral digital imagery collection experiment (HYDICE) sensor records instrument counts for scene data, in-flight spectral and radiometric calibration sequences, and dark current levels onto an AMPEX DCRsi data tape. Following flight, the HYDICE ground data processing subsystem (GDPS) transforms selected scene data from digital numbers (DN) to calibrated radiance levels at the sensor aperture. This processing includes: dark current correction, spectral and radiometric calibration, conversion to radiance, and replacement of bad detector elements. A description of the algorithms for post-flight data processing is presented. A brief analysis of the original radiometric calibration procedure is given, along with a description of the development of the modified procedure currently used. Example data collected during the 1995 flight season are shown both uncorrected and processed to demonstrate the removal of apparent sensor artifacts (e.g., non-uniformities in detector response over the array) by this transformation.

  3. A nuclear data acquisition system flow control model

    International Nuclear Information System (INIS)

    Hack, S.N.

    1988-01-01

    A general Petri Net representation of a nuclear data acquisition system model is presented. This model provides for the unique requirements of a nuclear data acquisition system including the capabilities of concurrently acquiring asynchronous and synchronous data, of providing multiple priority levels of flow control arbitration, and of permitting multiple input sources to reside at the same priority without the problem of channel lockout caused by a high rate data source. Finally, a previously implemented gamma camera/physiological signal data acquisition system is described using the models presented

  4. The iFlow modelling framework v2.4: a modular idealized process-based model for flow and transport in estuaries

    Science.gov (United States)

    Dijkstra, Yoeri M.; Brouwer, Ronald L.; Schuttelaars, Henk M.; Schramkowski, George P.

    2017-07-01

    The iFlow modelling framework is a width-averaged model for the systematic analysis of the water motion and sediment transport processes in estuaries and tidal rivers. The distinctive solution method, a mathematical perturbation method, used in the model allows for identification of the effect of individual physical processes on the water motion and sediment transport and study of the sensitivity of these processes to model parameters. This distinction between processes provides a unique tool for interpreting and explaining hydrodynamic interactions and sediment trapping. iFlow also includes a large number of options to configure the model geometry and multiple choices of turbulence and salinity models. Additionally, the model contains auxiliary components, including one that facilitates easy and fast sensitivity studies. iFlow has a modular structure, which makes it easy to include, exclude or change individual model components, called modules. Depending on the required functionality for the application at hand, modules can be selected to construct anything from very simple quasi-linear models to rather complex models involving multiple non-linear interactions. This way, the model complexity can be adjusted to the application. Once the modules containing the required functionality are selected, the underlying model structure automatically ensures modules are called in the correct order. The model inserts iteration loops over groups of modules that are mutually dependent. iFlow also ensures a smooth coupling of modules using analytical and numerical solution methods. This way the model combines the speed and accuracy of analytical solutions with the versatility of numerical solution methods. In this paper we present the modular structure, solution method and two examples of the use of iFlow. In the examples we present two case studies, of the Yangtze and Scheldt rivers, demonstrating how iFlow facilitates the analysis of model results, the understanding of the
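The automatic ordering of mutually dependent modules can be pictured as a strongly-connected-components pass over the module dependency graph: singleton components run once, while components of mutually dependent modules become iteration loops. The module and variable names below are invented, and this is a sketch of the idea, not iFlow's actual implementation.

```python
from itertools import count

# Modules declare inputs/outputs; module a depends on module b if b
# produces one of a's inputs.
modules = {
    "hydro_lead":  {"in": {"geometry"}, "out": {"u0"}},
    "hydro_first": {"in": {"u0", "c"},  "out": {"u1"}},   # needs sediment's output...
    "sediment":    {"in": {"u0", "u1"}, "out": {"c"}},    # ...and vice versa
    "output":      {"in": {"u1", "c"},  "out": set()},
}
deps = {a: {b for b in modules if b != a and modules[a]["in"] & modules[b]["out"]}
        for a in modules}

def sccs(graph):
    """Tarjan's algorithm; components come out with dependencies first."""
    idx, low, stack, on, order, c = {}, {}, [], set(), [], count()
    def visit(v):
        idx[v] = low[v] = next(c)
        stack.append(v); on.add(v)
        for w in graph[v]:
            if w not in idx:
                visit(w); low[v] = min(low[v], low[w])
            elif w in on:
                low[v] = min(low[v], idx[w])
        if low[v] == idx[v]:
            comp = []
            while True:
                w = stack.pop(); on.discard(w); comp.append(w)
                if w == v:
                    break
            order.append(comp)
    for v in graph:
        if v not in idx:
            visit(v)
    return order

for group in sccs(deps):
    print("iteration loop over:" if len(group) > 1 else "run:", group)
```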

  5. Computer network prepared to handle massive data flow

    CERN Multimedia

    2006-01-01

    "Massive quantities of data will soon begin flowing from the largest scientific instrument ever built into an internationl network of computer centers, including one operated jointly by the University of Chicago and Indiana University." (2 pages)

  6. Analysis of Optimal Process Flow Diagrams of Light Naphtha Isomerization Process by Mathematic Modelling Method

    Directory of Open Access Journals (Sweden)

    Chuzlov Vjacheslav

    2016-01-01

    Full Text Available An approach to the simulation of catalytic reactors for hydrocarbon refining processes is presented. Kinetic and thermodynamic research on the light naphtha isomerization process was conducted. The kinetic parameters of the chemical conversion of hydrocarbon feedstock on different types of platinum-containing catalysts were established. The efficiency of including different types of isomerization technologies in the oil refinery flow diagram was estimated.

  7. Off-line data processing and analysis for the GERDA experiment

    International Nuclear Information System (INIS)

    Agostini, M; Pandola, L; Zavarise, P

    2012-01-01

    Gerda is an experiment designed to look for the neutrinoless double beta decay of 76Ge. The experiment uses an array of high-purity germanium detectors (enriched in 76Ge) directly immersed in liquid argon. Gerda is presently operating eight enriched coaxial detectors (approximately 15 kg of 76Ge), and about 25 new custom-made enriched BEGe detectors will be deployed in the next phase (an additional 20 kg of 76Ge). The paper describes the Gerda off-line analysis of the high-purity germanium detector data. First, we present the signal processing flow, focusing on the digital filters and on the algorithms used. Second, we discuss the rejection of non-physical events and the data quality monitoring. The analysis is performed completely with the Gerda software framework (Gelatio), designed to support multi-channel processing and to perform modular analysis of digital signals.
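As an illustration of the kind of digital filtering involved, the sketch below applies a standard trapezoidal (moving-window deconvolution) energy filter to a simulated HPGe-like pulse. It is a generic textbook filter, not code from Gelatio, and all pulse parameters are invented.

```python
import numpy as np

def trapezoid(x, k, m, tau):
    """Trapezoidal shaping: pole-zero correct the exponential decay into a
    step, then form a trapezoid with rise time k and flat top m samples."""
    v = x + np.cumsum(x) / tau                   # pole-zero correction: decay -> step
    d = v.copy(); d[k:] -= v[:-k]                # moving difference, width k
    dd = d.copy(); dd[k + m:] -= d[:-(k + m)]    # subtract delayed copy
    return np.cumsum(dd) / k                     # trapezoid; height estimates amplitude

# Simulated HPGe-like pulse: step at t0 with exponential decay plus noise.
rng = np.random.default_rng(1)
n, t0, tau, amp = 4000, 1000, 5000.0, 100.0
t = np.arange(n)
trace = np.where(t >= t0, amp * np.exp(-(t - t0) / tau), 0.0) + rng.normal(0, 1, n)
print(f"estimated amplitude: {trapezoid(trace, k=200, m=100, tau=tau).max():.1f}")
```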

  8. Processes meet big data : connecting data science with process science

    NARCIS (Netherlands)

    van der Aalst, W.; Damiani, E.

    2015-01-01

    As more and more companies are embracing Big data, it has become apparent that the ultimate challenge is to relate massive amounts of event data to processes that are highly dynamic. To unleash the value of event data, events need to be tightly connected to the control and management of operational

  9. Flow behavior of polymers during the roll-to-roll hot embossing process

    International Nuclear Information System (INIS)

    Deng, Yujun; Yi, Peiyun; Peng, Linfa; Lai, Xinmin; Lin, Zhongqin

    2015-01-01

    The roll-to-roll (R2R) hot embossing process is a recent advancement in the micro hot embossing process and is capable of continuously fabricating micro/nano-structures on polymers, with a high efficiency and a high throughput. However, the fast forming of the R2R hot embossing process limits the time for material flow and results in complicated flow behavior in the polymers. This study presents a fundamental investigation into the flow behavior of polymers and aims towards the comprehensive understanding of the R2R hot embossing process. A three-dimensional (3D) finite element (FE) model based on the viscoelastic model of polymers is established and validated for the fabrication of micro-pyramids using the R2R hot embossing process. The deformation and recovery of micro-pyramids on poly(vinyl chloride) (PVC) film are analyzed in the filling stage and the demolding stage, respectively. Firstly, in the analysis of the filling stage, the temperature distribution on the PVC film is discussed. A large temperature gradient is observed along the thickness direction of the PVC film and the temperature of the top surface is found to be higher than that of the bottom surface, due to the poor thermal conductivity of PVC. In addition, creep strains are demonstrated to depend highly on the temperature and are also observed to concentrate on the top layer of the PVC film because of high local temperature. In the demolding stage, the recovery of the embossed micro-pyramids is obvious. The cooling process is shown to be efficient for the reduction of recovery, especially when the mold temperature is high. In conclusion, this research advances the understanding of the flow behavior of polymers in the R2R hot embossing process and might help in the development of the highly accurate and highly efficient fabrication of microstructures on polymers. (paper)

  10. Interactive handling of regional cerebral blood flow data using a macrolanguage

    International Nuclear Information System (INIS)

    Sveinsdottir, E.; Schomacker, T.; Lassen, N.A.

    1976-01-01

    A general image handling software system has been developed for on-line collection, processing and display of gamma camera images (the IMAGE system). The most distinctive feature of the system is the ability for the user to interactively specify sequences, called macros, of basic functions to be performed. Information about a specified sequence is retained in the system, thus enabling new sequences or macros to be defined using already specified sequences. Facilities for parameter setting and parameter transfer between functions, as well as facilities for repetition of a function, are included. Finally, functions, be they basic or macro, can be specified to be iteratively activated using a physiological trigger signal, e.g., the ECG. In addition, a special program system was developed for handling the dynamic data from Xenon-133 studies of regional cerebral blood flow (the CBF system). Parametric or functional images derived from the CBF system and depicting estimates of regional cerebral blood flow, relative weights of grey matter or other parameters can, after computation, be handled in the IMAGE system
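The macro mechanism can be pictured as a registry in which a named sequence of steps may itself appear as a step in another sequence. A toy sketch follows, with invented function names standing in for the system's basic image operations; this illustrates the composition idea only, not the original software.

```python
# Basic functions take data plus parameters; here they just build a trace string.
BASIC = {
    "smooth":    lambda img, width=3: f"smooth({img}, width={width})",
    "normalize": lambda img: f"normalize({img})",
    "display":   lambda img: (print("displaying:", img), img)[1],
}
MACROS = {}

def define(name, steps):
    """Register a macro: a sequence of (function-or-macro name, params)."""
    MACROS[name] = steps

def run(name, data):
    for step, params in MACROS[name]:
        # Macros may nest already-defined macros as steps.
        data = run(step, data) if step in MACROS else BASIC[step](data, **params)
    return data

define("preprocess", [("smooth", {"width": 5}), ("normalize", {})])
define("cbf_view", [("preprocess", {}), ("display", {})])   # macro built from a macro
run("cbf_view", "flow_frame_001")
```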

  11. Combustion Chemistry of Fuels: Quantitative Speciation Data Obtained from an Atmospheric High-temperature Flow Reactor with Coupled Molecular-beam Mass Spectrometer.

    Science.gov (United States)

    Köhler, Markus; Oßwald, Patrick; Krueger, Dominik; Whitside, Ryan

    2018-02-19

    This manuscript describes a high-temperature flow reactor experiment coupled to the powerful molecular beam mass spectrometry (MBMS) technique. This flexible tool offers detailed observation of chemical gas-phase kinetics in reacting flows under well-controlled conditions. The vast range of operating conditions available in a laminar flow reactor enables access to extraordinary combustion applications that are typically not achievable by flame experiments. These include rich conditions at high temperatures relevant for gasification processes, the peroxy chemistry governing the low-temperature oxidation regime, and investigations of complex technical fuels. The presented setup allows measurements of quantitative speciation data for reaction model validation of combustion, gasification and pyrolysis processes, while enabling a systematic general understanding of the reaction chemistry. Validation of kinetic reaction models is generally performed by investigating combustion processes of pure compounds. The flow reactor has been enhanced to be suitable for technical fuels (e.g. multi-component mixtures like Jet A-1) to allow for phenomenological analysis of occurring combustion intermediates like soot precursors or pollutants. The controlled and comparable boundary conditions provided by the experimental design allow for predictions of pollutant formation tendencies. Cold reactants, highly diluted (around 99 vol% in Ar) to suppress self-sustaining combustion reactions, are fed premixed into the reactor. The laminar flowing reactant mixture passes through a known temperature field, while the gas composition is determined at the reactor's exhaust as a function of the oven temperature. The flow reactor is operated at atmospheric pressure with temperatures up to 1,800 K. The measurements themselves are performed by decreasing the temperature monotonically at a rate of -200 K/h. With the sensitive MBMS technique, detailed speciation data is acquired and

  12. Modeling studies for multiphase fluid and heat flow processes in nuclear waste isolation

    International Nuclear Information System (INIS)

    Pruess, K.

    1988-07-01

    Multiphase fluid and heat flow plays an important role in many problems relating to the disposal of nuclear wastes in geologic media. Examples include boiling and condensation processes near heat-generating wastes, flow of water and formation gas in partially saturated formations, evolution of a free gas phase from waste package corrosion in initially water-saturated environments, and redistribution (dissolution, transport, and precipitation) of rock minerals in non-isothermal flow fields. Such processes may strongly impact upon waste package and repository design considerations and performance. This paper summarizes important physical phenomena occurring in multiphase and nonisothermal flows, as well as techniques for their mathematical modeling and numerical simulation. Illustrative applications are given for a number of specific fluid and heat flow problems, including: thermohydrologic conditions near heat-generating waste packages in the unsaturated zone; repository-wide convection effects in the unsaturated zone; effects of quartz dissolution and precipitation for disposal in the saturated zone; and gas pressurization and flow effects from corrosion of low-level waste packages. 34 refs; 7 figs; 2 tabs

  13. Regional variation of flow duration curves in the eastern United States: Process-based analyses of the interaction between climate and landscape properties

    Science.gov (United States)

    Wafa Chouaib; Peter V. Caldwell; Younes Alila

    2018-01-01

    This paper advances the physical understanding of the flow duration curve (FDC) regional variation. It provides a process-based analysis of the interaction between climate and landscape properties to explain disparities in FDC shapes. We used (i) long-term measured flow and precipitation data over 73 catchments from the eastern US. (ii) We calibrated the...

  14. Big Data-Driven Based Real-Time Traffic Flow State Identification and Prediction

    Directory of Open Access Journals (Sweden)

    Hua-pu Lu

    2015-01-01

    Full Text Available With the rapid development of urban informatization, the era of big data is coming. To satisfy the demand of traffic congestion early warning, this paper studies the method of real-time traffic flow state identification and prediction based on big data-driven theory. Traffic big data holds several characteristics, such as temporal correlation, spatial correlation, historical correlation, and multistate. Traffic flow state quantification, the basis of traffic flow state identification, is achieved by a SAGA-FCM (simulated annealing genetic algorithm based fuzzy c-means) traffic clustering model. Considering simple calculation and predictive accuracy, a bilevel optimization model for regional traffic flow correlation analysis is established to predict traffic flow parameters based on temporal-spatial-historical correlation. A two-stage model for correction coefficient optimization is put forward to simplify the bilevel optimization model. The first-stage model is built to calculate the number of temporal-spatial-historical correlation variables. The second-stage model is presented to calculate the basic model formulation of regional traffic flow correlation. A case study based on a real-world road network in Beijing, China, is implemented to test the efficiency and applicability of the proposed modeling and computing methods.
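The clustering core of the quantification step is fuzzy c-means. The sketch below implements plain FCM on invented traffic observations (speed, volume, occupancy); it omits the simulated-annealing/genetic initialization that distinguishes SAGA-FCM from the basic algorithm.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: alternate between center and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))       # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Toy traffic observations: (speed km/h, volume veh/5 min, occupancy %)
X = np.array([[82, 40, 6], [78, 55, 8], [45, 90, 22],
              [40, 95, 25], [12, 60, 48], [15, 55, 45]], float)
centers, U = fcm(X, c=3)   # three flow states: free, congested, jammed
print(np.round(centers, 1))
print(np.round(U, 2))
```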

  15. MODELING COUPLED PROCESSES OF MULTIPHASE FLOW AND HEAT TRANSFER IN UNSATURATED FRACTURED ROCK

    International Nuclear Information System (INIS)

    Y. Wu; S. Mukhopadhyay; K. Zhang; G.S. Bodvarsson

    2006-01-01

    A mountain-scale, thermal-hydrologic (TH) numerical model is developed for investigating unsaturated flow behavior in response to decay heat from the radioactive waste repository at Yucca Mountain, Nevada, USA. The TH model, consisting of three-dimensional (3-D) representations of the unsaturated zone, is based on the current repository design, drift layout, and thermal loading scenario under estimated current and future climate conditions. More specifically, the TH model implements the current geological framework and hydrogeological conceptual models, and incorporates the most up-to-date, best-estimate input parameters. This mountain-scale TH model simulates the coupled TH processes related to mountain-scale multiphase fluid flow, and evaluates the impact of radioactive waste heat on the hydrogeological system, including thermally perturbed liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature elevations, as well as the changes in water flux driven by evaporation/condensation processes and drainage between drifts. For a better description of the ambient geothermal condition of the unsaturated zone system, the TH model is first calibrated against measured borehole temperature data. The ambient temperature calibration provides the necessary surface and water table boundary as well as initial conditions. Then, the TH model is used to obtain scientific understanding of TH processes in the Yucca Mountain unsaturated zone under the designed schedule of repository thermal load

  16. Composable Data Processing in Environmental Science - A Process View

    NARCIS (Netherlands)

    Wombacher, Andreas

    Data processing in environmental science is essential for doing science. The heterogeneity of data sources, data processing operations and infrastructures results in a lot of manual data and process integration work, done by each scientist individually. This is very inefficient and time-consuming.

  17. Multibeam sonar backscatter data processing

    Science.gov (United States)

    Schimel, Alexandre C. G.; Beaudoin, Jonathan; Parnum, Iain M.; Le Bas, Tim; Schmidt, Val; Keith, Gordon; Ierodiaconou, Daniel

    2018-06-01

    Multibeam sonar systems now routinely record seafloor backscatter data, which are processed into backscatter mosaics and angular responses, both of which can assist in identifying seafloor types and morphology. Those data products are obtained from the multibeam sonar raw data files through a sequence of data processing stages that follows a basic plan, but the implementation of which varies greatly between sonar systems and software. In this article, we provide a comprehensive review of this backscatter data processing chain, with a focus on the variability in the possible implementation of each processing stage. Our objective for undertaking this task is twofold: (1) to provide an overview of backscatter data processing for the consideration of the general user and (2) to provide suggestions to multibeam sonar manufacturers, software providers and the operators of these systems and software for eventually reducing the lack of control, uncertainty and variability associated with current data processing implementations and the resulting backscatter data products. One such suggestion is the adoption of a nomenclature for increasingly refined levels of processing, akin to the nomenclature adopted for satellite remote-sensing data deliverables.

  18. Improving the visualization of electron-microscopy data through optical flow interpolation

    KAUST Repository

    Carata, Lucian

    2013-01-01

    Technical developments in neurobiology have reached a point where the acquisition of high resolution images representing individual neurons and synapses becomes possible. For this, the brain tissue samples are sliced using a diamond knife and imaged with electron-microscopy (EM). However, the technique achieves a low resolution in the cutting direction, due to limitations of the mechanical process, making a direct visualization of a dataset difficult. We aim to increase the depth resolution of the volume by adding new image slices interpolated from the existing ones, without requiring modifications to the EM image-capturing method. As classical interpolation methods do not provide satisfactory results on this type of data, the current paper proposes a re-framing of the problem in terms of motion volumes, considering the depth axis as a temporal axis. An optical flow method is adapted to estimate the motion vectors of pixels in the EM images, and this information is used to compute and insert multiple new images at certain depths in the volume. We evaluate the visualization results in comparison with interpolation methods currently used on EM data, transforming the highly anisotropic original dataset into a dataset with a larger depth resolution. The interpolation based on optical flow better reveals neurite structures with realistic undistorted shapes, and helps to easier map neuronal connections. © 2011 ACM.
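The interpolation idea can be sketched with an off-the-shelf dense optical-flow estimator: compute flow between adjacent slices, warp each slice half a step toward the other, and blend. OpenCV's Farneback method stands in here for the adapted optical-flow method used in the paper, the inputs are synthetic stand-ins for EM slices, and resampling of the flow field at the midpoint is ignored for simplicity.

```python
import cv2
import numpy as np

def interpolate_slice(a, b):
    """Synthesize the slice halfway between grayscale slices a and b."""
    flow = cv2.calcOpticalFlowFarneback(a, b, None, pyr_scale=0.5, levels=4,
                                        winsize=21, iterations=3, poly_n=5,
                                        poly_sigma=1.1, flags=0)
    h, w = a.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Sample a half a step back along the flow and b half a step forward.
    half_a = cv2.remap(a, gx - 0.5 * flow[..., 0], gy - 0.5 * flow[..., 1],
                       cv2.INTER_LINEAR)
    half_b = cv2.remap(b, gx + 0.5 * flow[..., 0], gy + 0.5 * flow[..., 1],
                       cv2.INTER_LINEAR)
    return ((half_a.astype(np.float32) + half_b.astype(np.float32)) / 2).astype(a.dtype)

# Stand-ins for two adjacent EM slices: smoothed noise, shifted by 4 px.
rng = np.random.default_rng(0)
a = cv2.GaussianBlur(rng.integers(0, 255, (256, 256)).astype(np.uint8), (9, 9), 0)
b = np.roll(a, 4, axis=1)
mid = interpolate_slice(a, b)   # approximates a shift of 2 px
```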

  19. Advanced multilateration theory, software development, and data processing: The MICRODOT system

    Science.gov (United States)

    Escobal, P. R.; Gallagher, J. F.; Vonroos, O. H.

    1976-01-01

    The process of geometric parameter estimation to accuracies of one centimeter, i.e., multilateration, is defined and applications are listed. A brief functional explanation of the theory is presented. Next, various multilateration systems are described in order of increasing system complexity. Expected systems accuracy is discussed from a general point of view and a summary of the errors is listed. An outline of the design of a software processing system for multilateration, called MICRODOT, is presented next. The links of this software, which can be used for multilateration data simulations or operational data reduction, are examined on an individual basis. Functional flow diagrams are presented to aid in understanding the software capability. MICRODOT capability is described with respect to vehicle configurations, interstation coordinate reduction, geophysical parameter estimation, and orbit determination. Numerical results obtained from MICRODOT via data simulations are displayed both for hypothetical and real world vehicle/station configurations such as used in the GEOS-3 Project. These simulations show the inherent power of the multilateration procedure.

  20. Water-Level Data Analysis for the Saturated Zone Site-Scale Flow and Transport Model

    International Nuclear Information System (INIS)

    K. Rehfeldt

    2004-01-01

    This report is an updated analysis of water-level data performed to provide the "Saturated Zone Site-Scale Flow Model" (BSC 2004 [DIRS 170037]) (referred to as the saturated zone (SZ) site-scale flow model or site-scale SZ flow model in this report) with the configuration of the potentiometric surface, target water-level data, and hydraulic gradients for calibration of groundwater flow models. This report also contains an expanded discussion of uncertainty in the potentiometric-surface map. The analysis of the potentiometric data presented in Revision 00 of this report (USGS 2001 [DIRS 154625]) provides the configuration of the potentiometric surface, target heads, and hydraulic gradients for the calibration of the SZ site-scale flow model (BSC 2004 [DIRS 170037]). Revision 01 of this report (USGS 2004 [DIRS 168473]) used updated water-level data for selected wells through the year 2000 as the basis for estimating water-level altitudes and the potentiometric surface in the SZ site-scale flow and transport model domain based on an alternative interpretation of perched water conditions. That revision developed computer files containing: water-level data within the model area (DTN: GS010908312332.002); a table of known vertical head differences (DTN: GS010908312332.003); and a potentiometric-surface map (DTN: GS010608312332.001) using an alternative concept from that presented by USGS (2001 [DIRS 154625]) for the area north of Yucca Mountain. The updated water-level data presented in USGS (2004 [DIRS 168473]) include data obtained from the Nye County Early Warning Drilling Program (EWDP) Phases I and II and data from Borehole USW WT-24. This document is based on Revision 01 (USGS 2004 [DIRS 168473]) and expands the discussion of uncertainty in the potentiometric-surface map. This uncertainty assessment includes an analysis of the impact of more recent water-level data and the impact of adding data from the EWDP Phases III and IV wells. In addition to being utilized

  1. Transient flow characteristics of nuclear reactor coolant pump in recessive cavitation transition process

    International Nuclear Information System (INIS)

    Wang Xiuli; Yuan Shouqi; Zhu Rongsheng; Yu Zhijun

    2013-01-01

    Numerical simulation of the transient flow in the impeller passage of a nuclear reactor coolant pump during the recessive cavitation transition is conducted with CFX, and the transient flow characteristics during the transition from cavitation inception, as the inlet pressure is reduced, to the NPSHc condition are studied and analyzed. The flow field analysis shows that, during the recessive cavitation transition, the velocity variation at the inlet is related to bubble growth: as the vapor region enlarges, the velocity near the blade inlet increases. Bubble generation and collapse affect the velocity fluctuation near the inlet. The gradual increase of vorticity close to the blade inlet is influenced by the vapor phase, and the collapse of cavitation bubbles reduces the vorticity from the collapse location to the impeller outlet. The asymmetric pump structure causes asymmetry of the flow, velocity, and outlet pressure distributions among the impeller flow passages, which in turn causes asymmetry of the transient radial force. From dimensionless time t/T = 0.6, the vapor phase starts to affect the transient radial force on the impeller, resulting in irregular fluctuations. (authors)

  2. Covariance data processing code. ERRORJ

    International Nuclear Information System (INIS)

    Kosako, Kazuaki

    2001-01-01

    The covariance data processing code ERRORJ was developed to process the covariance data of JENDL-3.2. ERRORJ provides processing functions for the covariance data of cross sections (including resonance parameters), angular distributions, and energy distributions. (author)

  3. Buffer mass test - data acquisition and data processing systems

    International Nuclear Information System (INIS)

    Hagvall, B.

    1982-08-01

    This report describes the data acquisition and data processing systems used for the Buffer Mass Test at Stripa. The data acquisition system in Stripa, designed mainly to provide high reliability, produces raw-data log tapes. Copies of these tapes are mailed to the computer center at the University of Luleå for processing of the raw data. The computer systems in Luleå offer a wide range of processing facilities: large mass storage units, several plotting facilities, programs for processing and monitoring of vast amounts of data, etc. (Author)

  4. An open-source solution for advanced imaging flow cytometry data analysis using machine learning.

    Science.gov (United States)

    Hennig, Holger; Rees, Paul; Blasi, Thomas; Kamentsky, Lee; Hung, Jane; Dao, David; Carpenter, Anne E; Filby, Andrew

    2017-01-01

    Imaging flow cytometry (IFC) enables the high-throughput collection of morphological and spatial information from hundreds of thousands of single cells. This high-content, information-rich image data can in theory resolve important biological differences among complex, often heterogeneous biological samples. However, data analysis is often performed in a highly manual and subjective manner using very limited image analysis techniques in combination with conventional flow cytometry gating strategies. This approach is not scalable to the hundreds of available image-based features per cell and thus makes use of only a fraction of the spatial and morphometric information. As a result, the quality, reproducibility and rigour of results are limited by the skill, experience and ingenuity of the data analyst. Here, we describe a pipeline using open-source software that leverages the rich information in digital imagery using machine learning algorithms. Raw image files (.rif) from an imaging flow cytometer are compensated and corrected into the proprietary .cif file format and imported into the open-source software CellProfiler, where an image processing pipeline identifies cells and subcellular compartments, allowing hundreds of morphological features to be measured. This high-dimensional data can then be analysed using cutting-edge machine learning and clustering approaches using "user-friendly" platforms such as CellProfiler Analyst. Researchers can train an automated cell classifier to recognize different cell types, cell cycle phases, drug treatment/control conditions, etc., using supervised machine learning. This workflow should enable the scientific community to leverage the full analytical power of IFC-derived data sets. It will help to reveal otherwise unappreciated populations of cells based on features that may be hidden to the human eye, including subtle measured differences in label-free detection channels such as bright-field and dark-field imagery
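The supervised step at the end of the pipeline amounts to training a classifier on the per-cell feature table. A hedged sketch with synthetic features and labels follows; the workflow above does this interactively inside CellProfiler Analyst, and the random-forest choice here is merely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for the per-cell feature table exported from CellProfiler:
# 1000 cells x 50 morphological features, with invented ground-truth labels
# (e.g. two cell-cycle phases annotated by the researcher).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```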

  5. Rapid data processing for ultrafast X-ray computed tomography using scalable and modular CUDA based pipelines

    Science.gov (United States)

    Frust, Tobias; Wagner, Michael; Stephan, Jan; Juckeland, Guido; Bieberle, André

    2017-10-01

    Ultrafast X-ray tomography is an advanced imaging technique for the study of dynamic processes based on the principles of electron beam scanning. A typical application case for this technique is e.g. the study of multiphase flows, that is, flows of mixtures of substances such as gas-liquid flows in pipelines or chemical reactors. At Helmholtz-Zentrum Dresden-Rossendorf (HZDR) a number of such tomography scanners are operated. Currently, there are two main points limiting their application in some fields. First, after each CT scan sequence the data of the radiation detector must be downloaded from the scanner to a data processing machine. Second, the current data processing is comparably time-consuming compared to the CT scan sequence interval. To enable online observations or use this technique to control actuators in real time, a modular and scalable data processing tool has been developed, consisting of user-definable stages working independently together in a so-called data processing pipeline, that keeps up with the CT scanner's maximal frame rate of up to 8 kHz. The newly developed data processing stages are freely programmable and combinable. In order to achieve the highest processing performance, all relevant data processing steps, which are required for a standard slice image reconstruction, were individually implemented in separate stages using Graphics Processing Units (GPUs) and NVIDIA's CUDA programming language. Data processing performance tests on different high-end GPUs (Tesla K20c, GeForce GTX 1080, Tesla P100) showed excellent performance. Program Files doi: http://dx.doi.org/10.17632/65sx747rvm.1 Licensing provisions: LGPLv3 Programming language: C++/CUDA Supplementary material: Test data set, used for the performance analysis. Nature of problem: Ultrafast computed tomography is performed with a scan rate of up to 8 kHz. To obtain cross-sectional images from projection data, computer-based image reconstruction algorithms must be applied. The
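The stage-pipeline design can be illustrated in miniature: independent, user-definable stages connected by bounded queues, each running concurrently, with a sentinel propagated to shut the pipeline down. The Python sketch below mirrors only the architecture; the actual system implements its stages in C++/CUDA on GPUs, and the stage functions here are placeholders.

```python
import threading
import queue

STOP = object()   # sentinel propagated downstream to end the pipeline

class Stage(threading.Thread):
    """One pipeline stage: pull from an input queue, apply a function,
    push results to the output queue."""
    def __init__(self, fn, inq, outq):
        super().__init__(daemon=True)
        self.fn, self.inq, self.outq = fn, inq, outq
    def run(self):
        while (item := self.inq.get()) is not STOP:
            self.outq.put(self.fn(item))
        self.outq.put(STOP)

q0, q1, q2 = (queue.Queue(maxsize=64) for _ in range(3))
stages = [Stage(lambda frame: [x * 2.0 for x in frame], q0, q1),   # e.g. normalization
          Stage(lambda frame: sum(frame) / len(frame), q1, q2)]    # e.g. a reconstruction step
for s in stages:
    s.start()
for frame in ([1, 2, 3], [4, 5, 6]):   # detector frames would stream in here
    q0.put(frame)
q0.put(STOP)
while (result := q2.get()) is not STOP:
    print(result)
```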

  6. Multivariate analysis of flow cytometric data using decision trees.

    Science.gov (United States)

    Simon, Svenja; Guthke, Reinhard; Kamradt, Thomas; Frey, Oliver

    2012-01-01

    Characterization of the response of the host immune system is important in understanding the bidirectional interactions between the host and microbial pathogens. For research on the host site, flow cytometry has become one of the major tools in immunology. Advances in technology and reagents allow now the simultaneous assessment of multiple markers on a single cell level generating multidimensional data sets that require multivariate statistical analysis. We explored the explanatory power of the supervised machine learning method called "induction of decision trees" in flow cytometric data. In order to examine whether the production of a certain cytokine is depended on other cytokines, datasets from intracellular staining for six cytokines with complex patterns of co-expression were analyzed by induction of decision trees. After weighting the data according to their class probabilities, we created a total of 13,392 different decision trees for each given cytokine with different parameter settings. For a more realistic estimation of the decision trees' quality, we used stratified fivefold cross validation and chose the "best" tree according to a combination of different quality criteria. While some of the decision trees reflected previously known co-expression patterns, we found that the expression of some cytokines was not only dependent on the co-expression of others per se, but was also dependent on the intensity of expression. Thus, for the first time we successfully used induction of decision trees for the analysis of high dimensional flow cytometric data and demonstrated the feasibility of this method to reveal structural patterns in such data sets.

  7. The RiverFish Approach to Business Process Modeling: Linking Business Steps to Control-Flow Patterns

    Science.gov (United States)

    Zuliane, Devanir; Oikawa, Marcio K.; Malkowski, Simon; Alcazar, José Perez; Ferreira, João Eduardo

    Despite the recent advances in the area of Business Process Management (BPM), today’s business processes have largely been implemented without clearly defined conceptual modeling. This results in growing difficulties for identification, maintenance, and reuse of rules, processes, and control-flow patterns. To mitigate these problems in future implementations, we propose a new approach to business process modeling using conceptual schemas, which represent hierarchies of concepts for rules and processes shared among collaborating information systems. This methodology bridges the gap between conceptual model description and identification of actual control-flow patterns for workflow implementation. We identify modeling guidelines that are characterized by clear phase separation, step-by-step execution, and process building through diagrams and tables. The separation of business process modeling in seven mutually exclusive phases clearly delimits information technology from business expertise. The sequential execution of these phases leads to the step-by-step creation of complex control-flow graphs. The process model is refined through intuitive table and diagram generation in each phase. Not only does the rigorous application of our modeling framework minimize the impact of rule and process changes, but it also facilitates the identification and maintenance of control-flow patterns in BPM-based information system architectures.

  8. Study of cache performance in distributed environment for data processing

    International Nuclear Information System (INIS)

    Makatun, Dzmitry; Lauret, Jérôme; Šumbera, Michal

    2014-01-01

    Processing data in a distributed environment has found application in many fields of science (nuclear and particle physics (NPP), astronomy, and biology, to name only a few). Efficiently transferring data between sites is an essential part of such processing. The implementation of caching strategies in data transfer software and tools, such as the Reasoner for Intelligent File Transfer (RIFT) being developed in the STAR collaboration, can significantly decrease network load and waiting time by reusing the knowledge of data provenance as well as data placed in the transfer cache to further expand the availability of sources for files and data-sets. Although a great variety of caching algorithms is known, a study is needed to evaluate which one can deliver the best performance in data access under realistic demand patterns. Records of access to the complete data-sets of NPP experiments were analyzed and used as input for computer simulations. Series of simulations were done in order to estimate the possible cache hits and cache hits per byte for known caching algorithms. The simulations were done for caches of different sizes within the interval 0.001–90% of the complete data-set and low-watermarks within 0–90%. Records of data access were taken from several experiments and within different time intervals in order to validate the results. In this paper, we discuss the different data caching strategies, from canonical algorithms to hybrid cache strategies, present the results of our simulations for the diverse algorithms, and identify the best algorithm in the context of physics data analysis in NPP. While the results of those studies have been implemented in RIFT, they can also be used when setting up a cache in any other computational workflow (cloud processing, for example) or managing data storage with partial replicas of the entire data-set
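A trace-driven cache simulation of the kind described is compact to express. The sketch below replays an invented (file, size) access trace through an LRU cache and reports the hit rate and bytes served from cache; it is illustrative only, not the RIFT code, and real runs would use recorded experiment traces.

```python
from collections import OrderedDict

def simulate_lru(trace, capacity_bytes):
    """Replay (file_id, size_bytes) accesses through an LRU cache."""
    cache, used = OrderedDict(), 0          # file_id -> size, most recent last
    hits = hit_bytes = 0
    for fid, size in trace:
        if fid in cache:
            cache.move_to_end(fid)
            hits += 1
            hit_bytes += size
        else:
            while used + size > capacity_bytes and cache:
                _, old = cache.popitem(last=False)   # evict least recently used
                used -= old
            if size <= capacity_bytes:
                cache[fid] = size
                used += size
    return hits / len(trace), hit_bytes

trace = [("f1", 5), ("f2", 3), ("f1", 5), ("f3", 6), ("f2", 3), ("f1", 5)]
print(simulate_lru(trace, capacity_bytes=10))   # (hit rate, bytes served from cache)
```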

  9. Comparison of Inflation Processes at the 1859 Mauna Loa Flow, HI, and the McCartys Flow Field, NM

    Science.gov (United States)

    Bleacher, Jacob E.; Garry, W. Brent; Zimbelman, James R.; Crumpler, Larry S.

    2012-01-01

    Basaltic lavas typically form channels or tubes during flow emplacement. However, the importance of sheet flow in the development of basaltic terrains has received recognition over the last 15 years. George Walker's research on the 1859 Mauna Loa Flow was published posthumously in 2009. In this paper he discusses the concept of endogenous growth, or inflation, for the distal portion of this otherwise channel-dominated lava flow. We used this work as a guide when visiting the 1859 flow to help us better interpret the inflation history of the McCartys flow field in NM. Both well-preserved flows display similar clues about the process of inflation. The McCartys lava flow field is among the youngest (approx. 3000 yrs) basaltic lava flows in the continental United States. It was emplaced over slopes of <1 degree, which is similar to the location within the 1859 flow where inflation occurred. Although older than the 1859 flow, the McCartys is located in an arid environment and is among the most pristine examples of sheet flow morphologies. At the meter scale the flow surface typically forms smooth, undulating swales that create a polygonal terrain. The literature for similar features includes multiple explanatory hypotheses: original breakouts from adjacent lobes, or inflation-related upwarping of crust or sagging along fractures that enable gas release. It is not clear which of these processes is responsible for polygonal terrains, and it is possible that one explanation is not the sole cause of this morphology between all inflated flows. Often, these smooth surfaces within an inflated sheet display lineated surfaces and occasional squeeze-ups along swale contacts. We interpret the lineations to preserve original flow direction and have begun mapping these orientations to better interpret the emplacement history. At the scale of 10s to 100s of meters the flow comprises multiple topographic plateaus and depressions. Some depressions display level floors with

  10. Detection and quantification of flow consistency in business process models.

    Science.gov (United States)

    Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel; Soffer, Pnina; Weber, Barbara

    2018-01-01

    Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics addressing these challenges, each following a different view of flow consistency. We then report the results of an empirical evaluation, which indicates which metric is more effective in predicting the human perception of this feature. Moreover, two other automatic evaluations describing the performance and the computational capabilities of our metrics are reported as well.

  11. Regional regression models of percentile flows for the contiguous United States: Expert versus data-driven independent variable selection

    Directory of Open Access Journals (Sweden)

    Geoffrey Fouad

    2018-06-01

    New hydrological insights for the region: A set of three variables selected based on an expert assessment of factors that influence percentile flows performed similarly to larger sets of variables selected using a data-driven method. The expert-assessment variables were mean annual precipitation, potential evapotranspiration, and baseflow index. Larger sets of up to 37 variables contributed little, if any, additional predictive information. Variables used to describe the distribution of basin data (e.g. standard deviation) were not useful, and average values were sufficient to characterize physical and climatic basin conditions. The effectiveness of the expert-assessment variables may be due to the high degree of multicollinearity (i.e. cross-correlation) among the additional variables. A tool is provided in the Supplementary material to predict percentile flows based on the three expert-assessment variables. Future work should develop new variables with a strong understanding of the processes related to percentile flows.
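
    The finding that three expert-selected attributes carry most of the predictive information suggests a very small regression model. A minimal sketch, assuming illustrative basin attributes and synthetic training values (this is not the paper's actual tool or data):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Illustrative predictors per basin: mean annual precipitation (mm),
        # potential evapotranspiration (mm), and baseflow index (-).
        X = np.array([
            [800.0, 1100.0, 0.45],
            [1200.0, 900.0, 0.60],
            [600.0, 1300.0, 0.30],
            [1500.0, 850.0, 0.70],
        ])
        # Log-transformed 50th-percentile flows (m^3/s) for the training basins.
        y = np.log10([1.2, 6.5, 0.4, 11.0])

        model = LinearRegression().fit(X, y)
        # Estimate the median flow of an ungauged basin from the same three variables.
        q50 = 10 ** model.predict([[1000.0, 1000.0, 0.55]])[0]
        print(f"Estimated 50th-percentile flow: {q50:.2f} m^3/s")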

  12. Special Issue: Design and Engineering of Microreactor and Smart-Scaled Flow Processes

    Directory of Open Access Journals (Sweden)

    Volker Hessel

    2014-12-01

    Full Text Available Reaction-oriented research in flow chemistry and microreactors has been covered extensively in special journal issues and books. On a process level, this resembled the "drop-in" (retrofit) concept, with the microreactor replacing a conventional (batch) reactor. Meanwhile, with the introduction of mobile, compact, modular container technology, the focus has shifted to the process side, including an end-to-end vision of intensified process design. Exactly this is the focus of the current special issue "Design and Engineering of Microreactor and Smart-Scaled Flow Processes" of the journal "Processes". This special issue comprises three review papers, five research articles and two communications. [...]

  13. Water-Level Data Analysis for the Saturated Zone Site-Scale Flow and Transport Model

    International Nuclear Information System (INIS)

    Tucci, P.

    2001-01-01

    This Analysis/Model Report (AMR) documents an updated analysis of water-level data performed to provide the saturated-zone, site-scale flow and transport model (CRWMS M and O 2000) with the configuration of the potentiometric surface, target water-level data, and hydraulic gradients for model calibration. The previous analysis was presented in ANL-NBS-HS-000034, Rev 00 ICN 01, Water-Level Data Analysis for the Saturated Zone Site-Scale Flow and Transport Model (USGS 2001). This analysis is designed to use updated water-level data as the basis for estimating water-level altitudes and the potentiometric surface in the SZ site-scale flow and transport model domain. The objectives of this revision are to develop computer files containing (1) water-level data within the model area (DTN: GS010908312332.002), (2) a table of known vertical head differences (DTN: GS010908312332.003), and (3) a potentiometric-surface map (DTN: GS010608312332.001) using an alternate concept from that presented in ANL-NBS-HS-000034, Rev 00 ICN 01 for the area north of Yucca Mountain. The updated water-level data include data obtained from the Nye County Early Warning Drilling Program (EWDP) and data from borehole USW WT-24. In addition to being utilized by the SZ site-scale flow and transport model, the water-level data and potentiometric-surface map contained within this report will be available to other government agencies and water users for ground-water management purposes. The potentiometric surface defines an upper boundary of the site-scale flow model, as well as providing information useful for estimating the magnitude and direction of lateral ground-water flow within the flow system. Therefore, the analysis documented in this revision is important to SZ flow and transport calculations in support of total system performance assessment.

  14. Aerodynamic structures and processes in rotationally augmented flow fields

    DEFF Research Database (Denmark)

    Schreck, S.J.; Sørensen, Niels N.; Robinson, M.C.

    2007-01-01

    Experimental measurements consisted of surface pressure data statistics used to infer sectional boundary layer state and to quantify normal force levels. Computed predictions included high-resolution boundary layer topologies and detailed above-surface flow field structures. This synergy was exploited... to reliably identify and track pertinent features in the rotating blade boundary layer topology as they evolved in response to varying wind speed. Subsequently, boundary layer state was linked to above-surface flow field structure and used to deduce mechanisms underlying augmented aerodynamic forces...

  15. Information flow security for business process models - just one click away

    NARCIS (Netherlands)

    Lehmann, A.; Fahland, D.; Lohmann, N.; Moser, S.

    2012-01-01

    When outsourcing tasks of a business process to a third party, information flow security becomes a critical issue. In particular implicit information leaks are an intriguing problem. Given a business process one could ask whether the execution of a confidential task is kept secret to a third party

  16. Necessary Processing of Personal Data

    DEFF Research Database (Denmark)

    Tranberg, Charlotte Bagger

    2006-01-01

    The Data Protection Directive prohibits processing of sensitive data (racial or ethnic origin, political, religious or philosophical convictions, trade union membership and information on health and sex life). All other personal data may be processed, provided processing is deemed necessary in re...

  17. Editorial: "Business process intelligence : connecting data and processes"

    NARCIS (Netherlands)

    Aalst, van der W.M.P.; Zhao, J.L.; Wang, H.; Wang, Harry Jiannan

    2015-01-01

    This introduction to the special issue on Business Process Intelligence (BPI) discusses the relation between data and processes. The recent attention for Big Data illustrates that organizations are aware of the potential of the torrents of data generated by today's information systems. However, at

  18. The iFlow modelling framework v2.4: a modular idealized process-based model for flow and transport in estuaries

    Directory of Open Access Journals (Sweden)

    Y. M. Dijkstra

    2017-07-01

    Full Text Available The iFlow modelling framework is a width-averaged model for the systematic analysis of the water motion and sediment transport processes in estuaries and tidal rivers. The distinctive solution method, a mathematical perturbation method, used in the model allows for identification of the effect of individual physical processes on the water motion and sediment transport and study of the sensitivity of these processes to model parameters. This distinction between processes provides a unique tool for interpreting and explaining hydrodynamic interactions and sediment trapping. iFlow also includes a large number of options to configure the model geometry and multiple choices of turbulence and salinity models. Additionally, the model contains auxiliary components, including one that facilitates easy and fast sensitivity studies. iFlow has a modular structure, which makes it easy to include, exclude or change individual model components, called modules. Depending on the required functionality for the application at hand, modules can be selected to construct anything from very simple quasi-linear models to rather complex models involving multiple non-linear interactions. This way, the model complexity can be adjusted to the application. Once the modules containing the required functionality are selected, the underlying model structure automatically ensures modules are called in the correct order. The model inserts iteration loops over groups of modules that are mutually dependent. iFlow also ensures a smooth coupling of modules using analytical and numerical solution methods. This way the model combines the speed and accuracy of analytical solutions with the versatility of numerical solution methods. In this paper we present the modular structure, solution method and two examples of the use of iFlow. In the examples we present two case studies, of the Yangtze and Scheldt rivers, demonstrating how iFlow facilitates the analysis of model results, the
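
    The automatic call-ordering behaviour described above amounts to a dependency sort over the selected modules. The sketch below illustrates the idea with Python's standard-library topological sorter; the module names and dependency table are invented, and iFlow's real registry differs. Mutually dependent modules, which a plain topological sort rejects as a cycle, are what the framework instead groups into an iteration loop.

        from graphlib import TopologicalSorter  # Python 3.9+

        # Hypothetical module registry: module -> modules whose output it needs.
        dependencies = {
            "geometry": set(),
            "turbulence": {"geometry"},
            "hydrodynamics_lead": {"geometry", "turbulence"},
            "salinity": {"hydrodynamics_lead"},
            "sediment_transport": {"hydrodynamics_lead", "salinity"},
        }

        # Run every selected module only after its dependencies have run.
        for module in TopologicalSorter(dependencies).static_order():
            print("run", module)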

  19. A REMUS based crate controller for the autonomous processing of multichannel data streams

    International Nuclear Information System (INIS)

    Cittolin, S.; Loefstedt, B.

    1981-01-01

    This paper describes a device designed to perform the autonomous acquisition of considerable quantities of raw data, process them and present results in an easily digestible format for subsequent analysis. It has been primarily created for read-out of complex three dimensional drift chambers, but is of general interest. The unit is based on a dual processor system consisting of a Signetics 8X300 and a Motorola 68B00. The 8X300 section operates as a fast dedicated Data Processor and flow controller that reads the input modules, processes the data and constructs the output blocklets. The 68B00 supervises the activity of the 8X300 and is responsible for the holding and loading of appropriate routines. It also obtains samples of the final data for statistical purposes and executes periodic calibration and diagnostic functions. (orig.)

  20. A REMUS based crate controller for the autonomous processing of multichannel data streams

    CERN Document Server

    Cittolin, S

    1981-01-01

    This paper describes a device designed to perform the autonomous acquisition of considerable quantities of raw data, process them and present results in an easily digestible format for subsequent analysis. It has been primarily created for read-out of complex three dimensional drift chambers, but is of general interest. The unit is based on a dual processor system consisting of a Signetics 8X300 and a Motorola 68B00. The 8X300 section operates as a fast dedicated Data Processor and flow controller that reads the input modules, processes the data and constructs the output blocklets. The 68B00 supervises the activity of the 8X300 and is responsible for the holding and loading of appropriate routines. It also obtains samples of the final data for statistical purposes and executes periodic calibration and diagnostic functions.


  2. Hydrography for the non-Hydrographer: A Paradigm shift in Data Processing

    Science.gov (United States)

    Malzone, C.; Bruce, S.

    2017-12-01

    Advancements in technology have led to systematic improvements across the board, including hardware design, software architecture, and data transmission/telepresence. Historically, utilization of this technology has required a high level of knowledge obtained through many years of experience, training and/or education. High training costs are incurred to achieve and maintain an acceptable level of proficiency within an organization. Recently, engineers have developed off-the-shelf software called Qimera that simplifies the processing of hydrographic data. The core technology centers on isolating tasks within the workflow, capitalizing on advances in computing to automate the mundane, error-prone tasks and reserve human effort for the stages where it adds the most value. Key design features include: guided workflow, transcription automation, processing state management, real-time QA, dynamic workflow for validation, collaborative cleaning and production line processing. Since Qimera is designed to guide the user, it allows expedition leaders to focus on science while providing an educational opportunity for students to quickly learn the hydrographic processing workflow, including ancillary data analysis, trouble-shooting, calibration and cleaning. This paper provides case studies on how Qimera is currently implemented in scientific expeditions, the benefits of implementation, and how it is directing the future of on-board research for the non-hydrographer.

  3. Microparticle Flow Sensor

    Science.gov (United States)

    Morrison, Dennis R.

    2005-01-01

    The microparticle flow sensor (MFS) is a system for identifying and counting microscopic particles entrained in a flowing liquid. The MFS includes a transparent, optoelectronically instrumented laminar-flow chamber (see figure) and a computer for processing instrument-readout data. The MFS could be used to count microparticles (including micro-organisms) in diverse applications -- for example, production of microcapsules, treatment of wastewater, pumping of industrial chemicals, and identification of ownership of liquid products.

  4. Large deviations in stochastic heat-conduction processes provide a gradient-flow structure for heat conduction

    International Nuclear Information System (INIS)

    Peletier, Mark A.; Redig, Frank; Vafayi, Kiamars

    2014-01-01

    We consider three one-dimensional continuous-time Markov processes on a lattice, each of which models the conduction of heat: the family of Brownian Energy Processes with parameter m (BEP(m)), a Generalized Brownian Energy Process, and the Kipnis-Marchioro-Presutti (KMP) process. The hydrodynamic limit of each of these three processes is a parabolic equation, the linear heat equation in the case of the BEP(m) and the KMP, and a nonlinear heat equation for the Generalized Brownian Energy Process with parameter a (GBEP(a)). We prove the hydrodynamic limit rigorously for the BEP(m), and give a formal derivation for the GBEP(a). We then formally derive the pathwise large-deviation rate functional for the empirical measure of the three processes. These rate functionals imply gradient-flow structures for the limiting linear and nonlinear heat equations. We contrast these gradient-flow structures with those for processes describing the diffusion of mass, most importantly the class of Wasserstein gradient-flow systems. The linear and nonlinear heat-equation gradient-flow structures are each driven by entropy terms of the form −log ρ; they involve dissipation or mobility terms of order ρ² for the linear heat equation, and a nonlinear function of ρ for the nonlinear heat equation.
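
    The gradient-flow structure sketched in the abstract for the linear heat equation can be written out explicitly (our notation, consistent with the entropy and mobility described above, not the paper's exact formulation): with

        E(\rho) = -\int \log\rho \,\mathrm{d}x,
        \qquad
        \frac{\delta E}{\delta\rho} = -\frac{1}{\rho},

    a mobility of order ρ² yields

        \partial_t \rho
          = \nabla\cdot\Big(\rho^{2}\,\nabla\frac{\delta E}{\delta\rho}\Big)
          = \nabla\cdot\Big(\rho^{2}\,\frac{\nabla\rho}{\rho^{2}}\Big)
          = \Delta\rho,

    recovering the linear heat equation; for the nonlinear heat equation the mobility ρ² is replaced by a nonlinear function of ρ.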

  5. Effect of flow velocity on the process of air-steam condensation in a vertical tube condenser

    Science.gov (United States)

    Havlík, Jan; Dlouhý, Tomáš

    2018-06-01

    This article describes the influence of flow velocity on the condensation process in a vertical tube. For the case of condensation in a vertical tube condenser, both the pure-steam condensation process and the air-steam mixture condensation process were analyzed theoretically and experimentally. The influence of steam flow velocity on the value of the heat transfer coefficient during the condensation process was evaluated. For the condensation of pure steam, the influence of flow velocity on the heat transfer coefficient begins to be seen at higher speeds; conversely, this effect is negligible at low steam velocities. For the air-steam mixture condensation, on the other hand, the influence of flow velocity must always be taken into account. The flow velocity affects the water vapor diffusion process through the non-condensing air. The presence of air significantly reduces the value of the heat transfer coefficient. This drop in the heat transfer coefficient is significant at low velocities; conversely, the decrease is relatively small at high velocities.

  6. The use of Ethernet in the DataFlow of the ATLAS Trigger & DAQ

    CERN Document Server

    Stancu, Stefan; Dobinson, Bob; Korcyl, Krzysztof; Knezo, Emil; CHEP 2003 Computing in High Energy Physics

    2003-01-01

    The article analyzes a proposed network topology for the ATLAS DAQ DataFlow and identifies the Ethernet features required for proper operation of the network: MAC address table size, switch performance in terms of throughput and latency, and the use of Flow Control, Virtual LANs and Quality of Service. We investigate these features on several Ethernet switches and assess their usefulness for the ATLAS DataFlow network.

  7. Hidden flows and waste processing--an analysis of illustrative futures.

    Science.gov (United States)

    Schiller, F; Raffield, T; Angus, A; Herben, M; Young, P J; Longhurst, P J; Pollard, S J T

    2010-12-14

    An existing materials flow model is adapted (using Excel and AMBER model platforms) to account for waste and hidden material flows within a domestic environment. Supported by national waste data, the implications of legislative change, domestic resource depletion and waste technology advances are explored. The revised methodology offers additional functionality for economic parameters that influence waste generation and disposal. We explore this accounting system under hypothetical future waste and resource management scenarios, illustrating the utility of the model. A sensitivity analysis confirms that imports, domestic extraction and their associated hidden flows impact mostly on waste generation. The model offers enhanced utility for policy and decision makers with regard to economic mass balance and strategic waste flows, and may promote further discussion about waste technology choice in the context of reducing carbon budgets.

  8. Water-Level Data Analysis for the Saturated Zone Site-Scale Flow and Transport Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rehfeldt

    2004-10-08

    This report is an updated analysis of water-level data performed to provide the "Saturated Zone Site-Scale Flow Model" (BSC 2004 [DIRS 170037]) (referred to as the saturated zone (SZ) site-scale flow model or site-scale SZ flow model in this report) with the configuration of the potentiometric surface, target water-level data, and hydraulic gradients for calibration of groundwater flow models. This report also contains an expanded discussion of uncertainty in the potentiometric-surface map. The analysis of the potentiometric data presented in Revision 00 of this report (USGS 2001 [DIRS 154625]) provides the configuration of the potentiometric surface, target heads, and hydraulic gradients for the calibration of the SZ site-scale flow model (BSC 2004 [DIRS 170037]). Revision 01 of this report (USGS 2004 [DIRS 168473]) used updated water-level data for selected wells through the year 2000 as the basis for estimating water-level altitudes and the potentiometric surface in the SZ site-scale flow and transport model domain based on an alternative interpretation of perched water conditions. That revision developed computer files containing: water-level data within the model area (DTN: GS010908312332.002); a table of known vertical head differences (DTN: GS010908312332.003); and a potentiometric-surface map (DTN: GS010608312332.001) using an alternative concept from that presented by USGS (2001 [DIRS 154625]) for the area north of Yucca Mountain. The updated water-level data presented in USGS (2004 [DIRS 168473]) include data obtained from the Nye County Early Warning Drilling Program (EWDP) Phases I and II and data from Borehole USW WT-24. This document is based on Revision 01 (USGS 2004 [DIRS 168473]) and expands the discussion of uncertainty in the potentiometric-surface map. This uncertainty assessment includes an analysis of the impact of more recent water-level data and the impact of adding data from the EWDP Phases III and IV wells. In

  9. Wave data processing toolbox manual

    Science.gov (United States)

    Sullivan, Charlene M.; Warner, John C.; Martini, Marinna A.; Lightsom, Frances S.; Voulgaris, George; Work, Paul

    2006-01-01

    Researchers routinely deploy oceanographic equipment in estuaries, coastal nearshore environments, and shelf settings. These deployments usually include tripod-mounted instruments to measure a suite of physical parameters such as currents, waves, and pressure. Instruments such as the RD Instruments Acoustic Doppler Current Profiler (ADCP™), the Sontek Argonaut, and the Nortek Aquadopp™ Profiler (AP) can measure these parameters. The data from these instruments must be processed using proprietary software unique to each instrument to convert measurements to real physical values. These processed files are then available for dissemination and scientific evaluation. For example, the proprietary program used to process data from the RD Instruments ADCP for wave information is called WavesMon. Depending on the length of the deployment, WavesMon will typically produce thousands of processed data files. These files are difficult to archive, and further analysis of the data becomes cumbersome. More importantly, these files alone do not include sufficient information pertinent to the deployment (metadata), which could hinder future scientific interpretation. This open-file report describes a toolbox developed to compile, archive, and disseminate the processed wave measurement data from an RD Instruments ADCP, a Sontek Argonaut, or a Nortek AP. This toolbox will be referred to as the Wave Data Processing Toolbox. The Wave Data Processing Toolbox consolidates the processed files output by the proprietary software into two NetCDF files: one file contains the statistics of the burst data and the other contains the raw burst data (additional details described below). One important advantage of this toolbox is that it converts the data into NetCDF format. Data in NetCDF format are easy to disseminate, portable to any computer platform, and viewable with freely available public-domain software. Another important advantage is that a metadata
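
    The consolidation step described above, gathering burst statistics plus deployment metadata into a self-describing NetCDF file, can be sketched with the netCDF4 Python library. Variable names, attributes and values here are illustrative assumptions, not the toolbox's actual schema:

        import numpy as np
        from netCDF4 import Dataset

        time = np.arange(0, 48, 1.0)                  # hours since deployment
        hs = 0.5 + 0.2 * np.random.rand(time.size)    # significant wave height (m)

        with Dataset("wave_stats.nc", "w", format="NETCDF4") as nc:
            nc.title = "Burst wave statistics"        # metadata travels with
            nc.instrument = "ADCP"                    # the data file itself
            nc.createDimension("time", time.size)
            t = nc.createVariable("time", "f8", ("time",))
            t.units = "hours since deployment start"
            h = nc.createVariable("Hs", "f4", ("time",))
            h.units = "m"
            t[:] = time
            h[:] = hs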

  10. Exploring the potential of blood flow network data

    NARCIS (Netherlands)

    Poelma, C.

    2015-01-01

    To gain a better understanding of the role of haemodynamic forces during the development of the cardiovascular system, a series of studies have been reported recently that describe flow fields in the vasculature of model systems. Such data sets, in particular those reporting networks at multiple

  11. Front-end data processing the SLD data acquisition system

    International Nuclear Information System (INIS)

    Nielsen, B.S.

    1986-07-01

    The data acquisition system for the SLD detector will make extensive use of parallel processing at the front-end level. Fastbus acquisition modules are being built with powerful processing capabilities for calibration, data reduction and further pre-processing of the large amount of analog data handled by each module. This paper describes the read-out electronics chain and data pre-processing system adapted for most of the detector channels, exemplified by the central drift chamber waveform digitization and processing system.

  12. The shear flow processing of controlled DNA tethering and stretching for organic molecular electronics.

    Science.gov (United States)

    Yu, Guihua; Kushwaha, Amit; Lee, Jungkyu K; Shaqfeh, Eric S G; Bao, Zhenan

    2011-01-25

    DNA has been recently explored as a powerful tool for developing molecular scaffolds for making reproducible and reliable metal contacts to single organic semiconducting molecules. A critical step in the process of exploiting DNA-organic molecule-DNA (DOD) array structures is the controlled tethering and stretching of DNA molecules. Here we report the development of reproducible surface chemistry for tethering DNA molecules at tunable density and demonstrate shear flow processing as a rationally controlled approach for stretching/aligning DNA molecules of various lengths. Through enzymatic cleavage of λ-phage DNA to yield a series of DNA chains of various lengths from 17.3 μm down to 4.2 μm, we have investigated the flow/extension behavior of these tethered DNA molecules under different flow strengths in the flow-gradient plane. We ran Brownian dynamics simulations of the flow dynamics of tethered λ-DNA in shear and found that our flow-gradient-plane experimental results matched our bead-spring simulations well. The shear flow processing demonstrated in our studies represents a controllable approach for tethering and stretching DNA molecules of various lengths. Together with further metallization of DNA chains within DOD structures, this bottom-up approach can potentially enable efficient and reliable fabrication of large-scale nanoelectronic devices based on single organic molecules, therefore opening opportunities both in fundamental understanding of charge transport at the single-molecule level and in many exciting applications for ever-shrinking molecular circuits.

  13. Flow Forecasting in Drainage Systems with Extrapolated Radar Rainfall Data and Auto Calibration on Flow Observations

    DEFF Research Database (Denmark)

    Thorndahl, Søren Liedtke; Grum, M.; Rasmussen, Michael R.

    2011-01-01

    Forecasting of flows, overflow volumes, water levels, etc. in drainage systems can be applied in real-time control of drainage systems in the future climate in order to fully utilize system capacity and thus save possible construction costs. An online system for forecasting flows and water levels in a small urban catchment has been developed. The forecast is based on the application of radar rainfall data, which, by a correlation-based technique, is extrapolated with a lead time of up to two hours. The runoff forecast in the drainage system is based on a fully distributed MOUSE model which is auto-calibrated on flow measurements in order to produce the best possible forecast for the drainage system at all times. The system shows great potential for the implementation of real-time control in drainage systems and for forecasting flows and water levels.
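
    The auto-calibration idea, continuously re-fitting the runoff model against the latest flow observations so the next forecast starts from the best available state, can be illustrated with a one-parameter fit. This toy stand-in (a single multiplicative gain minimising RMSE) is an assumption for illustration, not the MOUSE model's actual calibration scheme:

        import numpy as np
        from scipy.optimize import minimize_scalar

        def calibrate_gain(modelled, observed):
            """Fit a multiplicative gain that minimises the RMSE between
            modelled and observed flows over a recent window."""
            rmse = lambda k: np.sqrt(np.mean((k * modelled - observed) ** 2))
            return minimize_scalar(rmse, bounds=(0.1, 10.0), method="bounded").x

        modelled = np.array([1.1, 1.4, 2.0, 2.6, 2.2])   # model output (m^3/s)
        observed = np.array([1.0, 1.3, 1.9, 2.4, 2.1])   # gauged flows (m^3/s)
        gain = calibrate_gain(modelled, observed)
        forecast = gain * np.array([2.5, 2.8])           # corrected lead-time flows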

  14. GPU applications for data processing

    Energy Technology Data Exchange (ETDEWEB)

    Vladymyrov, Mykhailo, E-mail: mykhailo.vladymyrov@cern.ch [LPI - Lebedev Physical Institute of the Russian Academy of Sciences, RUS-119991 Moscow (Russian Federation); Aleksandrov, Andrey [LPI - Lebedev Physical Institute of the Russian Academy of Sciences, RUS-119991 Moscow (Russian Federation); INFN sezione di Napoli, I-80125 Napoli (Italy); Tioukov, Valeri [INFN sezione di Napoli, I-80125 Napoli (Italy)

    2015-12-31

    Modern experiments that use nuclear photoemulsion require fast and efficient data acquisition from the emulsion. The new approaches in developing scanning systems require real-time processing of large amounts of data. Methods that use Graphical Processing Unit (GPU) computing power for emulsion data processing are presented here. It is shown how GPU-accelerated emulsion processing helped us to raise the scanning speed by a factor of nine.

  15. A tool to increase information-processing capacity for consumer water meter data

    Directory of Open Access Journals (Sweden)

    Heinz E. Jacobs

    2012-06-01

    Objective: The objective of this research article was to describe the development of Swift, a locally developed software tool for analysing water meter data from an information management perspective, which engineers in the water field generally use, and to assess critically the influence of Swift on published research and industry. This article focuses on water usage and the challenge of data interchange and extraction as issues that various industries face. Method: This article presents the first detailed report on Swift. It uses a detailed knowledge review and presents and summarises the findings chronologically. Results: The water meter data flow path used to be quite simple. The risk of breaches in confidentiality was limited. Technological advances over the years have led to additional knowledge coming from the same water meter readings with subsequent research outputs. However, there are also complicated data flow paths and increased risks. Users have used Swift to analyse more than two million consumers’ water meter readings to date. Studies have culminated in 10 peer-reviewed journal articles using the data. Seven of them were in the last five years. Conclusion: Swift-based data was the basis of various research studies in the past decade. Practical guidelines in the civil engineering fraternity for estimating water use in South Africa have incorporated knowledge from these studies. Developments after 1995 have increased the information processing capacity for water meter data.

  16. Flow Asymmetric Propargylation: Development of Continuous Processes for the Preparation of a Chiral β-Amino Alcohol.

    Science.gov (United States)

    Li, Hui; Sheeran, Jillian W; Clausen, Andrew M; Fang, Yuan-Qing; Bio, Matthew M; Bader, Scott

    2017-08-01

    The development of a flow chemistry process for asymmetric propargylation using allene gas as a reagent is reported. The connected continuous process of allene dissolution, lithiation, Li-Zn transmetallation, and asymmetric propargylation provides homopropargyl β-amino alcohol 1 with high regio- and diastereoselectivity in high yield. This flow process enables practical use of an unstable allenyllithium intermediate. The process uses the commercially available and recyclable (1S,2R)-N-pyrrolidinyl norephedrine as a ligand to promote the highly diastereoselective (32:1) propargylation. Judicious selection of mixers based on the chemistry requirement and real-time monitoring of the process using process analytical technology (PAT) enabled stable and scalable flow chemistry runs.

  17. Large scale steam flow test: Pressure drop data and calculated pressure loss coefficients

    International Nuclear Information System (INIS)

    Meadows, J.B.; Spears, J.R.; Feder, A.R.; Moore, B.P.; Young, C.E.

    1993-12-01

    This report presents the results of large-scale steam flow testing (3 million to 7 million lb/hr) conducted at approximate steam qualities of 25, 45, 70 and 100 percent (dry, saturated). It is concluded from the test data that reasonable estimates of piping-component pressure loss coefficients for single-phase flow in complex piping geometries can be calculated using available engineering literature. This includes the effects of nearby upstream and downstream components, compressibility, and internal obstructions, such as splitters and ladder rungs, on individual piping components. Despite expected uncertainties in the data resulting from the complexity of the piping geometry and two-phase flow, the test data support the conclusion that the predicted dry steam K-factors are accurate and provide useful insight into the effect of entrained liquid on the flow resistance. The K-factors calculated from the wet steam test data were compared to two-phase K-factors based on the Martinelli-Nelson pressure drop correlations. This comparison supports the concept of a two-phase multiplier for estimating the resistance of piping with liquid entrained in the flow. The test data in general appear reasonably consistent with the shape of a curve based on the Martinelli-Nelson correlation over the tested range of steam quality.
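
    The two quantities compared above follow standard definitions; as a sketch (textbook forms, not the report's specific correlation constants), the loss coefficient K relates pressure drop to dynamic pressure, and the two-phase multiplier φ² scales the single-phase (liquid-only) drop:

        \Delta p = K \,\frac{\rho v^{2}}{2},
        \qquad
        \Delta p_{\mathrm{tp}} = \phi^{2}\, \Delta p_{\mathrm{lo}},

    where φ² is a function of steam quality in the Martinelli-Nelson correlation.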

  18. Quantifying spatial and temporal patterns of flow intermittency using spatially contiguous runoff data

    Science.gov (United States)

    Yu (于松延), Songyan; Bond, Nick R.; Bunn, Stuart E.; Xu, Zongxue; Kennard, Mark J.

    2018-04-01

    River channel drying caused by intermittent stream flow is a widely recognized factor shaping stream ecosystems. There is a strong need to quantify the distribution of intermittent streams across catchments to inform management. However, observational gauge networks provide only point estimates of streamflow variation. Increasingly, this limitation is being overcome through the use of spatially contiguous estimates of the terrestrial water balance, which can also assist in estimating runoff and streamflow at large spatial scales. Here we propose an approach to quantifying spatial and temporal variation in monthly flow intermittency throughout river networks in eastern Australia. We aggregated gridded (5 × 5 km) monthly water-balance data with a hierarchically nested catchment dataset to simulate catchment runoff accumulation throughout river networks from 1900 to 2016. We also predicted zero-flow duration for the entire river network by developing a robust predictive model relating measured zero-flow duration (% months) to environmental predictor variables (based on 43 stream gauges). We then combined these datasets by using the predicted zero-flow duration from the regression model to determine appropriate 'zero' flow thresholds for the modelled discharge data, which varied spatially across the catchments examined. Finally, based on the modelled discharge data and the identified zero-flow thresholds, we derived summary metrics describing flow intermittency across the catchment (mean flow duration and coefficient of variation in flow permanence from 1900 to 2016). We also classified the relative degree of flow intermittency annually to characterise temporal variation in flow intermittency. Results showed that the degree of flow intermittency varied substantially across streams in eastern Australia, ranging from perennial streams flowing permanently (11-12 months per year) to strongly intermittent streams flowing 4 months or less per year. Results also showed that the
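
    The summary metrics named above are straightforward to compute once a monthly flow series and a site-specific zero-flow threshold are in hand. A hedged pandas sketch (the series, threshold and 20-year span are illustrative assumptions):

        import numpy as np
        import pandas as pd

        idx = pd.date_range("2000-01", periods=240, freq="MS")   # 20 years, monthly
        rng = np.random.default_rng(0)
        flow = pd.Series(rng.gamma(2.0, 1.0, idx.size), index=idx)
        flow[flow < 0.5] = 0.0            # site-specific 'zero flow' threshold

        # Months flowing per year: the basis of the intermittency metrics.
        months_flowing = flow.gt(0).groupby(flow.index.year).sum()

        zero_flow_pct = 100 * (1 - flow.gt(0).mean())   # % of zero-flow months
        mean_duration = months_flowing.mean()           # mean months/yr with flow
        cv_permanence = months_flowing.std() / months_flowing.mean()
        print(zero_flow_pct, mean_duration, cv_permanence)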

  19. Reconstructing Data Flow Diagrams from Structure Charts Based on the Input and Output Relationship

    OpenAIRE

    YAMAMOTO, Shuichiro

    1995-01-01

    The traceability of data flow diagrams against structure charts is very important for large software development. Specifying if there is a relationship between a data flow diagram and a structure chart is a time consuming task. Existing CASE tools provide a way to maintain traceability. If we can extract the input-output relationship of a system from a structure chart, the corresponding data flow diagram can be automatically generated from the relationship. For example, Benedusi et al. propos...

  20. DAFi: A directed recursive data filtering and clustering approach for improving and interpreting data clustering identification of cell populations from polychromatic flow cytometry data.

    Science.gov (United States)

    Lee, Alexandra J; Chang, Ivan; Burel, Julie G; Lindestam Arlehamn, Cecilia S; Mandava, Aishwarya; Weiskopf, Daniela; Peters, Bjoern; Sette, Alessandro; Scheuermann, Richard H; Qian, Yu

    2018-04-17

    Computational methods for identification of cell populations from polychromatic flow cytometry data are changing the paradigm of cytometry bioinformatics. Data clustering is the most common computational approach to unsupervised identification of cell populations from multidimensional cytometry data. However, interpretation of the identified data clusters is labor-intensive. Certain types of user-defined cell populations are also difficult to identify by fully automated data clustering analysis. Both are roadblocks to a cytometry lab adopting the data clustering approach for routine cell population identification. We found that combining recursive data filtering and clustering with constraints converted from the user manual gating strategy can effectively address these two issues. We named this new approach DAFi: Directed Automated Filtering and Identification of cell populations. The design of DAFi preserves the data-driven characteristics of unsupervised clustering for identifying novel cell subsets, but also makes the results interpretable to experimental scientists through mapping and merging the multidimensional data clusters into the user-defined two-dimensional gating hierarchy. The recursive data filtering process in DAFi helped identify small data clusters which are otherwise difficult to resolve by a single run of the data clustering method due to the statistical interference of the irrelevant major clusters. Our experiment results showed that the proportions of the cell populations identified by DAFi, while being consistent with those from expert centralized manual gating, have smaller technical variances across samples than those from individual manual gating analysis and nonrecursive data clustering analysis. Compared with manual gating segregation, DAFi-identified cell populations avoided abrupt cut-offs on the boundaries. DAFi has been implemented to be used with multiple data clustering methods including K-means, FLOCK, FlowSOM, and
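
    The directed filtering loop described above (cluster, keep only the clusters consistent with a user-defined gate, then re-cluster the survivors against the next gate in the hierarchy) can be sketched as follows. This is a simplified stand-in using K-means and hyper-rectangular gates; DAFi itself supports several clustering methods and full gating hierarchies:

        import numpy as np
        from sklearn.cluster import KMeans

        def directed_filter(events, gate_lo, gate_hi, k=10):
            """One recursion step: cluster the events, then keep the clusters
            whose centroids fall inside the user-defined gate."""
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(events)
            kept = []
            for c in range(k):
                centroid = events[labels == c].mean(axis=0)
                if np.all(centroid >= gate_lo) and np.all(centroid <= gate_hi):
                    kept.append(events[labels == c])
            return np.vstack(kept) if kept else np.empty((0, events.shape[1]))

        rng = np.random.default_rng(1)
        events = rng.normal(size=(5000, 4))          # illustrative 4-marker data
        step1 = directed_filter(events, [-1, -1, -3, -3], [3, 3, 3, 3])
        step2 = directed_filter(step1, [0, 0, -3, -3], [3, 3, 3, 3], k=5)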

  1. Quantitative analysis of flow processes in a sand using synchrotron-based X-ray microtomography

    DEFF Research Database (Denmark)

    Wildenschild, Dorthe; Hopmans, J.W.; Rivers, M.L.

    2005-01-01

    Pore-scale multiphase flow experiments were developed to nondestructively visualize water flow in a sample of porous material using X-ray microtomography. The samples were exposed to similar boundary conditions as in a previous investigation, which examined the effect of initial flow rate... Earlier work has been of a mostly qualitative nature, and no experiments have been presented in the existing literature where a truly quantitative approach to investigating the multiphase flow process has been taken, including a thorough image-processing scheme. The tomographic images presented here show, both by qualitative comparison and by quantitative analysis in the form of a nearest-neighbor analysis, that the dynamic effects seen in previous experiments are likely due to the fast and preferential drainage of large pores in the sample. Once a continuous drained path has been established through the sample, further...

  2. Investigation on transient flow of a centrifugal charging pump in the process of high pressure safety injection

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Fan, E-mail: zhangfan4060@gmail.com; Yuan, Shouqi; Fu, Qiang; Tao, Yi

    2015-11-15

    Highlights: • The transient flow characteristics of the charging pump with the first-stage impeller in the HPSI process have been investigated numerically by CFD. • The hydraulic performance of the charging pump during the HPSI is discussed, and the absolute errors between the simulated and measured results are analyzed in the paper. • Pressure fluctuation and flow pattern in the impeller were studied in the HPSI process. Pressure fluctuation is influenced little at the beginning of the HPSI process but becomes strong at the end of the HPSI process. - Abstract: In order to investigate the transient flow characteristics of the centrifugal charging pump during the transient transition process of high pressure safety injection (HPSI) from Q = 148 m³/h to Q = 160 m³/h, numerical simulation and experiment are implemented in this study. The transient flow rate, which is the most important factor, is obtained from the experiment and works as the boundary condition to accurately accomplish the numerical simulation of the transient process. Internal characteristics under the variable operating conditions are analyzed through the transient simulation. The results show that the absolute error between the simulated and measured heads is less than 2.26% and the absolute error between the simulated and measured efficiency is less than 2.04%. Pressure fluctuation in the impeller is little influenced by the variable flow rate in the HPSI process, while the flow pattern in the impeller improves as the flow rate increases. As the flow rate increases, fluid blocks on the tongue of the volute and strikes this area at large flow rates. Correspondingly, the pressure fluctuation is intense and a vortex gradually develops during this period, which noticeably lowers the efficiency of the pump. The contents of the current work can provide references for the design optimization and fluid control of pumps used in transient processes under variable operating conditions.

  3. Investigation on transient flow of a centrifugal charging pump in the process of high pressure safety injection

    International Nuclear Information System (INIS)

    Zhang, Fan; Yuan, Shouqi; Fu, Qiang; Tao, Yi

    2015-01-01

    Highlights: • The transient flow characteristics of the charging pump with the first-stage impeller in the HPSI process have been investigated numerically by CFD. • The hydraulic performance of the charging pump during the HPSI is discussed, and the absolute errors between the simulated and measured results are analyzed in the paper. • Pressure fluctuation and flow pattern in the impeller were studied in the HPSI process. Pressure fluctuation is influenced little at the beginning of the HPSI process but becomes strong at the end of the HPSI process. - Abstract: In order to investigate the transient flow characteristics of the centrifugal charging pump during the transient transition process of high pressure safety injection (HPSI) from Q = 148 m³/h to Q = 160 m³/h, numerical simulation and experiment are implemented in this study. The transient flow rate, which is the most important factor, is obtained from the experiment and works as the boundary condition to accurately accomplish the numerical simulation of the transient process. Internal characteristics under the variable operating conditions are analyzed through the transient simulation. The results show that the absolute error between the simulated and measured heads is less than 2.26% and the absolute error between the simulated and measured efficiency is less than 2.04%. Pressure fluctuation in the impeller is little influenced by the variable flow rate in the HPSI process, while the flow pattern in the impeller improves as the flow rate increases. As the flow rate increases, fluid blocks on the tongue of the volute and strikes this area at large flow rates. Correspondingly, the pressure fluctuation is intense and a vortex gradually develops during this period, which noticeably lowers the efficiency of the pump. The contents of the current work can provide references for the design optimization and fluid control of pumps used in transient processes under variable operating conditions.

  4. Multilevel flow modelling of process plant for diagnosis and control

    International Nuclear Information System (INIS)

    Lind, M.

    1982-08-01

    The paper describes the multilevel flow modelling methodology, which can be used to construct functional models of energy and material processing systems. The models describe mass and energy flow topology on different levels of abstraction and represent the hierarchical functional structure of complex systems. A model of a nuclear power plant (PWR) is presented in the paper for illustration. Due to the consistency of the method, multilevel flow models provide specifications of plant goals and functions and may be used as a basis for the design of computer-based support systems for the plant operator. Plant control requirements can be derived from the models and, due to independence of the actual controller implementation, the method may be used as a basis for the design of control strategies and for the allocation of control tasks to the computer and the plant operator. (author)

  5. Multilevel Flow Modelling of Process Plant for Diagnosis and Control

    DEFF Research Database (Denmark)

    Lind, Morten

    1982-01-01

    The paper describes the multilevel flow modelling methodology which can be used to construct functional models of energy and material processing systems. The models describe mass and energy flow topology on different levels of abstraction and represent the hierarchical functional structure...... of complex systems. A model of a nuclear power plant (PWR) is presented in the paper for illustration. Due to the consistency of the method, multilevel flow models provide specifications of plant goals and functions and may be used as a basis for design of computer-based support systems for the plant...... operator. Plant control requirements can be derived from the models and due to independence of the actual controller implementation the method may be used as a basis for design of control strategies and for the allocation of control tasks to the computer and the plant operator....

  6. Identification of potential groundwater flow paths using geological and geophysical data

    International Nuclear Information System (INIS)

    Pohlmann, K.; Andricevic, R.

    1994-09-01

    This project represents the first phase in the development of a methodology for generating three-dimensional equiprobable maps of hydraulic conductivity for the Nevada Test Site (NTS). In this study, potential groundwater flow paths were investigated for subsurface tuffs at Yucca Flat by studying how these units are connected. The virtual absence of site-specific hydraulic conductivity data dictates that as a first step a surrogate attribute (geophysical logs) be utilized. In this first phase, the connectivity patterns of densely welded ash-flow tuffs were studied because these tuffs are the most likely to form zones of high hydraulic conductivity. Densely welded tuffs were identified based on the response shown on resistivity logs and this information was transformed into binary indicator values. The spatial correlation of the indicator data was estimated through geostatistical methods. Equiprobable three-dimensional maps of the distribution of the densely-welded and nonwelded tuffs (i.e., subsurface heterogeneity) were then produced using a multiple indicator simulation formalism. The simulations demonstrate that resistivity logs are effective as soft data for indicating densely welded tuffs. The simulated welded tuffs reproduce the stratigraphic relationships of the welded tuffs observed in hydrogeologic cross sections, while incorporating the heterogeneity and anisotropy that is expected in this subsurface setting. Three-dimensional connectivity of the densely welded tuffs suggests potential groundwater flow paths with lengths easily over 1 km. The next phase of this investigation should incorporate other geophysical logs (e.g., gamma-gamma logs) and then calibrate the resulting soft data maps with available hard hydraulic conductivity data. The soft data maps can then augment the hard data to produce the final maps of the spatial distribution of hydraulic conductivity that can be used as input for numerical solution of groundwater flow and transport
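
    The first step described above, turning a continuous geophysical log into binary indicator data marking densely welded tuff, is a thresholding operation, sketched here with an invented log and an assumed cutoff (the study's actual indicator coding from resistivity responses is more involved):

        import numpy as np

        depth = np.arange(0.0, 50.0, 0.5)            # borehole depth (m)
        rng = np.random.default_rng(2)
        resistivity = 80 + 60 * np.sin(depth / 5.0) + rng.normal(0, 5, depth.size)

        cutoff = 100.0                               # illustrative ohm-m threshold
        # 1 where the response suggests densely welded tuff, 0 otherwise;
        # these indicators feed variogram estimation and indicator simulation.
        indicator = (resistivity > cutoff).astype(int)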

  7. Flow Orientation Analysis for Major Activity Regions Based on Smart Card Transit Data

    Directory of Open Access Journals (Sweden)

    Parul Singh

    2017-10-01

    Full Text Available Analyzing public movement in the transportation networks of a city is significant for understanding the lives of citizens and making improved city plans for the future. This study focuses on investigating the flow orientation of major activity regions based on smart card transit data. Flow orientation based on real movements, such as transit data, provides a straightforward way of understanding public movement in complicated transportation networks. First, high-inflow regions (HIRs) are identified from transit data for morning and evening peak hours. The morning and evening HIRs are used to represent the major activity regions for daytime activities and residential areas, respectively. Second, the directional orientation of flow is derived from the directional inflow vectors of the HIRs to show the bias in directional orientation and to compare flow orientation among major activity regions. Finally, clustering analysis of the HIRs is applied to capture the main patterns of flow orientation in the city and visualize the patterns on the map. The proposed methodology was illustrated with smart card transit data from the bus and subway transportation networks in Seoul, Korea. Some remarkable patterns in the distribution of movements and orientations were found inside the city. The proposed methodology is useful since it unfolds the complexity and makes it easy to understand the main movement patterns in terms of flow orientation.

  8. Real-time acquisition and display of flow contrast using speckle variance optical coherence tomography in a graphics processing unit.

    Science.gov (United States)

    Xu, Jing; Wong, Kevin; Jian, Yifan; Sarunic, Marinko V

    2014-02-01

    In this report, we describe a graphics processing unit (GPU)-accelerated processing platform for real-time acquisition and display of flow contrast images with Fourier domain optical coherence tomography (FDOCT) in mouse and human eyes in vivo. Motion contrast from blood flow is processed using the speckle variance OCT (svOCT) technique, which relies on the acquisition of multiple B-scan frames at the same location and tracking the change of the speckle pattern. Real-time mouse and human retinal imaging using two different custom-built OCT systems with processing and display performed on GPU are presented with an in-depth analysis of performance metrics. The display output included structural OCT data, en face projections of the intensity data, and the svOCT en face projections of retinal microvasculature; these results compare projections with and without speckle variance in the different retinal layers to reveal significant contrast improvements. As a demonstration, videos of real-time svOCT for in vivo human and mouse retinal imaging are included in our results. The capability of performing real-time svOCT imaging of the retinal vasculature may be a useful tool in a clinical environment for monitoring disease-related pathological changes in the microcirculation such as diabetic retinopathy.
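
    The speckle-variance contrast itself is simply the inter-frame variance: for N repeated B-scans acquired at the same position, pixels over flowing blood change between frames while static tissue does not. A minimal NumPy sketch (frame count and image dimensions are illustrative; the platform described here computes this on the GPU):

        import numpy as np

        def speckle_variance(bscans):
            """bscans: (N, depth, width) array of N repeated B-scan intensity
            frames from one location. Returns a (depth, width) flow-contrast
            image: variance is high where the speckle pattern changes."""
            return np.var(bscans, axis=0)

        frames = np.random.rand(8, 512, 500)   # 8 repeats, illustrative size
        sv_image = speckle_variance(frames)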

  9. Run-Time HW/SW Scheduling of Data Flow Applications on Reconfigurable Architectures

    Directory of Open Access Journals (Sweden)

    Ghaffari Fakhreddine

    2009-01-01

    Full Text Available This paper presents an efficient dynamic run-time Hardware/Software scheduling approach. The scheduling heuristic maps the different tasks of a highly dynamic application online in such a way that the total execution time is minimized. We consider soft real-time, data-flow-graph-oriented applications for which the execution time is a function of the nature of the input data. The target architecture is composed of two processors connected to a dynamically reconfigurable hardware accelerator. Our approach takes advantage of the reconfiguration property of the considered architecture to adapt the treatment to the system dynamics. We compare our heuristic with another similar approach. We present the results of our scheduling method on several image processing applications. Our experiments include simulation and synthesis results on a Virtex-V-based platform. These results show better performance than existing methods.
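
    A run-time mapper of the kind described assigns each arriving task to whichever resource, one of the processors or the reconfigurable accelerator, finishes it soonest. A greedy sketch under invented task costs (the paper's heuristic additionally accounts for data-flow precedence and reconfiguration overheads):

        # Per-task execution cost on each resource (illustrative, in ms).
        costs = {"t1": {"cpu0": 5, "cpu1": 5, "fpga": 2},
                 "t2": {"cpu0": 3, "cpu1": 3, "fpga": 4},
                 "t3": {"cpu0": 6, "cpu1": 6, "fpga": 1}}

        free_at = {"cpu0": 0, "cpu1": 0, "fpga": 0}   # when each resource frees up
        for task, cost in costs.items():              # tasks arrive in this order
            # Map the task to the resource with the earliest finish time.
            res = min(free_at, key=lambda r: free_at[r] + cost[r])
            start = free_at[res]
            free_at[res] = start + cost[res]
            print(f"{task} -> {res} at t={start}, done t={free_at[res]}")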

  10. Flow effects on benthic stream invertebrates and ecological processes

    Science.gov (United States)

    Koprivsek, Maja; Brilly, Mitja

    2010-05-01

    Flow is the main abiotic factor in streams. Flow affects organisms in many direct and indirect ways. Organisms are directly affected by various hydrodynamic forces and mass transfer processes such as drag forces, drift, shear stress, the supply of food and gases, and the washing away of metabolites. Indirect effects on organisms include determining the distribution of particle sizes and the structure of the substrate, and determining the morphology of riverbeds. Flow affects not only individual organisms but also many ecological processes; to name just the most important: dispersal of organisms, habitat use, resource acquisition, competition, and predator-prey interactions. Stream invertebrates are adapted to various flow conditions in many ways. Some avoid high flow by living in the hyporheic zone, while others are adapted to flow through physical adaptations (the mode of feeding, respiration, osmoregulation and resistance to drought), morphological adaptations (a dorsoventrally flattened body, a streamlined body, heterogeneous suckers, silk, claws, swimming hairs, bristles and ballast gravel) or behaviour. As the flow characteristics in a particular stream vary over a broad range of space and time scales, it is necessary to measure velocity accurately in the places where the organisms are actually present in order to determine the impact of flow on aquatic organisms. By measuring the mean flow at individual verticals in a single cross-section, we obtain no information about the velocity conditions close to the bottom of the riverbed where the stream invertebrates live. Measuring the velocity near the bottom is itself a major problem, as technologies for measuring the velocity and flow of natural watercourses are not adapted to measuring so close to the bottom. New research in the last two decades has shown that the thickness of the laminar boundary layer on stones in the stream is only a few hundred micrometers, which

  11. Investigation of Multiscale and Multiphase Flow, Transport and Reaction in Heavy Oil Recovery Processes

    Energy Technology Data Exchange (ETDEWEB)

    Yorstos, Yannis C.

    2003-03-19

    The report describes progress made in the various thrust areas of the project, which include internal drives for oil recovery, vapor-liquid flows, combustion and reaction processes and the flow of fluids with yield stress.

  12. Data inversion in coupled subsurface flow and geomechanics models

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2012-01-01

    We present an inverse modeling approach to estimate petrophysical and elastic properties of the subsurface. The aim is to use the fully coupled geomechanics-flow model of Girault et al (2011 Math. Models Methods Appl. Sci. 21 169–213) to jointly invert surface deformation and pressure data from wells. We use a functional-analytic framework to construct a forward operator (parameter-to-output map) that arises from the geomechanics-flow model of Girault et al. Then, we follow a deterministic approach to pose the inverse problem of finding parameter estimates from measurements of the output of the forward operator. We prove that this inverse problem is ill-posed in the sense of stability. The inverse problem is then regularized with the implementation of the Newton-conjugate gradient (CG) algorithm of Hanke (1997 Numer. Funct. Anal. Optim. 18 971–93). For a consistent application of the Newton-CG scheme, we establish the differentiability of the forward map and characterize the adjoint of its linearization. We provide assumptions under which the theory of Hanke ensures convergence and regularizing properties of the Newton-CG scheme. These properties are verified in our numerical experiments. In addition, our synthetic experiments display the capabilities of the proposed inverse approach to estimate parameters of the subsurface by means of data inversion. In particular, the added value of measurements of surface deformation in the estimation of absolute permeability is quantified with respect to the standard history matching approach of inverting production data with flow models. The proposed methodology can be potentially used to invert satellite geodetic data (e.g. InSAR and GPS) in combination with production data for optimal monitoring and characterization of the subsurface. (paper)
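    A compact sketch of a truncated Newton-CG iteration of the kind the record regularizes with, for a generic differentiable forward operator. The geomechanics-flow forward model and Hanke's exact stopping rules are not reproduced; here the early truncation of the inner CG loop stands in for the regularization, and all names are assumptions.

      import numpy as np

      def newton_cg(forward, jacobian, d_obs, m0, n_newton=10, n_cg=20, tol=1e-6):
          """Gauss-Newton outer loop; inner conjugate-gradient solve of the
          linearized normal equations J^T J p = -J^T r, truncated after at
          most n_cg steps so the iteration count acts as regularization."""
          m = np.asarray(m0, dtype=float).copy()
          for _ in range(n_newton):
              r = forward(m) - d_obs             # data residual
              J = jacobian(m)
              g = J.T @ r                        # gradient of 0.5*||r||^2
              rho0 = rho = float(g @ g)
              if rho0 == 0.0:
                  break
              p, s = np.zeros_like(m), -g
              for _ in range(n_cg):              # truncated CG
                  Js = J @ s
                  alpha = rho / float(Js @ Js)
                  p += alpha * s
                  g = g + alpha * (J.T @ Js)     # CG residual equals -g
                  rho_new = float(g @ g)
                  if rho_new < tol * rho0:
                      break
                  s = -g + (rho_new / rho) * s
                  rho = rho_new
              m = m + p
          return m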

  13. Flow and Stress Field Analysis of Different Fluids and Blades for Fermentation Process

    Directory of Open Access Journals (Sweden)

    Cheng-Chi Wang

    2014-02-01

    Full Text Available Fermentation techniques are applied in biotechnology and are widely used in food manufacturing, materials processing, chemical reactions, and so forth. Different fluids and blade types in a fermentation tank produce distinct flow and stress field distributions on the fluid-blade interface, and various flow reactions appear in the tank. This paper focuses on the analysis of the flow field for different fluid viscosities and also studies the stress field acting on blades of different scales and shapes under a specific rotational speed. The results show that the viscosity of the fluid influences the flow field and the stress distribution on the blades: the maximum stress acting on the blade increases with increasing viscosity. The ratio of blade length to width also influences the stress distribution, and the inclination angle of the blade is a key design parameter; an appropriate inclination angle decreases the maximum stress. The results provide an effective means of gaining insight into the flow and stress distribution of the fermentation process.

  14. Calibration of Yucca Mountain unsaturated zone flow and transport model using porewater chloride data

    International Nuclear Information System (INIS)

    Liu, Jianchun; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.

    2002-01-01

    In this study, porewater chloride data from Yucca Mountain, Nevada, are analyzed and modeled by 3-D chemical transport simulations and analytical methods. The simulation modeling approach is based on a continuum formulation of coupled multiphase fluid flow and tracer transport processes through fractured porous rock, using a dual-continuum concept. Infiltration rates were calibrated using the porewater chloride data, and the resulting modeled chloride distributions better matched the observed data. Statistical analyses of the frequency distributions of overall percolation fluxes and chloride concentrations in the unsaturated zone demonstrate that using the calibrated infiltration rates had an insignificant effect on the distribution of simulated percolation fluxes but significantly changed the predicted distribution of simulated chloride concentrations. An analytical method was also applied to model transient chloride transport; verification against the 3-D simulation results showed that it captures the major transient chemical behavior and trends. Effects of lateral flow in the Paintbrush nonwelded unit on percolation fluxes and chloride distributions were studied by 3-D simulations with increased horizontal permeability. The combined results from these model calibrations furnish important information for the UZ model studies, contributing to performance assessment of the potential repository

  15. Confocal Microscopy and Flow Cytometry System Performance: Assessment of QA Parameters that Affect Data Quantification

    Science.gov (United States)

    Flow and image cytometers can provide useful quantitative fluorescence data. We have devised QA tests to be used on both a flow cytometer and a confocal microscope to assure that the data is accurate, reproducible and precise. Flow Cytometry: We have provided two simple perform...

  16. Traffic flow dynamics data, models and simulation

    CERN Document Server

    Treiber, Martin

    2013-01-01

    This textbook provides a comprehensive and instructive coverage of vehicular traffic flow dynamics and modeling. It makes this fascinating interdisciplinary topic, which to date was only documented in parts by specialized monographs, accessible to a broad readership. Numerous figures and problems with solutions help the reader to quickly understand and practice the presented concepts. This book is targeted at students of physics and traffic engineering and, more generally, also at students and professionals in computer science, mathematics, and interdisciplinary topics. It also offers material for project work in programming and simulation at college and university level. The main part, after presenting different categories of traffic data, is devoted to a mathematical description of the dynamics of traffic flow, covering macroscopic models which describe traffic in terms of density, as well as microscopic many-particle models in which each particle corresponds to a vehicle and its driver. Focus chapters on ...

  17. Estimating Bus Loads and OD Flows Using Location-Stamped Farebox and Wi-Fi Signal Data

    Directory of Open Access Journals (Sweden)

    Yuxiong Ji

    2017-01-01

    Full Text Available Electronic fareboxes integrated with Automatic Vehicle Location (AVL) systems can provide location-stamped records to infer passenger boardings at individual stops. However, bus loads and Origin-Destination (OD) flows, which are useful for route planning, design, and real-time control, cannot be derived directly from farebox data. Recently, Wi-Fi sensors have been used to collect passenger OD flow information, but those data are insufficient to capture the variation of passenger demand across bus trips. In this study, we propose a hierarchical Bayesian model to estimate trip-level OD flow matrices and a period-level OD flow matrix using sampled OD flow data collected by Wi-Fi sensors and boarding data provided by fareboxes. Bus loads on each trip are derived directly from the estimated trip-level OD flow matrices. The proposed method is evaluated empirically on an operational bus route, and the results demonstrate that combining farebox and Wi-Fi signal data provides detailed, high-quality route-level passenger demand information.
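    Once a trip-level OD matrix has been estimated, deriving the on-board loads is a direct summation. A small illustrative sketch (not the paper's hierarchical Bayesian estimator; the matrix values are made up):

      import numpy as np

      def loads_from_od(od):
          """od[i, j]: passengers boarding at stop i and alighting at stop j
          (upper-triangular for a one-directional route). The load on the
          link leaving stop k is everyone who boarded at or before stop k
          and alights after it."""
          n = od.shape[0]
          return np.array([od[:k + 1, k + 1:].sum() for k in range(n - 1)])

      od = np.array([[0, 3, 2, 1],
                     [0, 0, 4, 2],
                     [0, 0, 0, 5],
                     [0, 0, 0, 0]])
      print(loads_from_od(od))  # -> [6 9 8]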

  18. Numerical modelling of river processes: flow and river bed deformation

    NARCIS (Netherlands)

    Tassi, P.A.

    2007-01-01

    The morphology of alluvial river channels is a consequence of complex interactions among a number of constituent physical processes, such as flow, sediment transport and river bed deformation. That is, an alluvial river channel is formed from its own sediment. From time to time, alluvial river

  19. Numerical Modeling of Fluid Flow in the Tape Casting Process

    DEFF Research Database (Denmark)

    Jabbari, Masoud; Hattel, Jesper Henri

    2011-01-01

    The flow behavior of the fluid in the tape casting process is analyzed. A simple geometry is assumed for running the numerical calculations in ANSYS Fluent and the main parameters are expressed in non-dimensional form. The effect of different values for substrate velocity and pressure force...

  20. Flow and Stress Field Analysis of Different Fluids and Blades for Fermentation Process

    OpenAIRE

    Cheng-Chi Wang; Po-Jen Cheng; Kuo-Chi Liu; Ming-Yi Tsai

    2014-01-01

    Fermentation techniques are applied for the biotechnology and are widely used for food manufacturing, materials processing, chemical reaction, and so forth. Different fluids and types of blades in the tank for fermentation cause distinct flow and stress field distributions on the surface between fluid and blade and various flow reactions in the tank appear. This paper is mainly focused on the analysis of flow field with different fluid viscosities and also studied the stress field acting on t...

  1. Understanding the care.data conundrum: New information flows for economic growth

    Directory of Open Access Journals (Sweden)

    Paraskevas Vezyridis

    2017-01-01

    Full Text Available The analysis of data from electronic health records aspires to facilitate healthcare efficiencies and biomedical innovation, but the handling of sensitive patient information also has ethical, legal and social implications. The paper explores the concerns, expectations and implications of the National Health Service (NHS) England care.data programme: a national data sharing initiative of linked electronic health records for healthcare and other research purposes. Using Nissenbaum’s contextual integrity of privacy framework through a critical Science and Technology Studies (STS) lens, it examines the way technologies and policies are developed to promote sustainability, governance and economic growth as the de facto social values, while reducing privacy to an individualistic preference. The state, acting as a new, central data broker, reappropriates public ownership rights and establishes the information flows and transmission principles that facilitate the assetisation of NHS datasets for the knowledge economy. Various actors and processes from other contexts attempt to erode the public healthcare sector and privilege new information recipients. However, such data sharing initiatives in healthcare will be resisted if we continue to focus only on the monetary and scientific values of these datasets and keep ignoring their equally important social and ethical values.

  2. Modeling field scale unsaturated flow and transport processes

    International Nuclear Information System (INIS)

    Gelhar, L.W.; Celia, M.A.; McLaughlin, D.

    1994-08-01

    The scales of concern in subsurface transport of contaminants from low-level radioactive waste disposal facilities are in the range of 1 to 1,000 m. Natural geologic materials generally show very substantial spatial variability in hydraulic properties over this range of scales. Such heterogeneity can significantly influence the migration of contaminants. It is also envisioned that complex earth structures will be constructed to isolate the waste and minimize infiltration of water into the facility. The flow of water and gases through such facilities must also be a concern. A stochastic theory describing unsaturated flow and contamination transport in naturally heterogeneous soils has been enhanced by adopting a more realistic characterization of soil variability. The enhanced theory is used to predict field-scale effective properties and variances of tension and moisture content. Applications illustrate the important effects of small-scale heterogeneity on large-scale anisotropy and hysteresis and demonstrate the feasibility of simulating two-dimensional flow systems at time and space scales of interest in radioactive waste disposal investigations. Numerical algorithms for predicting field scale unsaturated flow and contaminant transport have been improved by requiring them to respect fundamental physical principles such as mass conservation. These algorithms are able to provide realistic simulations of systems with very dry initial conditions and high degrees of heterogeneity. Numerical simulation of the movement of water and air in unsaturated soils has demonstrated the importance of air pathways for contaminant transport. The stochastic flow and transport theory has been used to develop a systematic approach to performance assessment and site characterization. Hypothesis-testing techniques have been used to determine whether model predictions are consistent with observed data

  3. AUTOMATING THE DATA SECURITY PROCESS

    Directory of Open Access Journals (Sweden)

    Florin Ogigau-Neamtiu

    2017-11-01

    Full Text Available Contemporary organizations face big data security challenges in the cyber environment due to modern threats and a business working model that relies heavily on collaboration, data sharing, tool integration, increased mobility, etc. Today's data classification and data obfuscation selection processes (encryption, masking or tokenization) suffer from the degree of human involvement in the process. Organizations need to strengthen the data security domain by classifying information based on its importance, conducting risk assessments and using the most cost-effective data obfuscation technique. The paper proposes a new model for data protection that uses automated machine decision-making procedures to classify data and to select the appropriate data obfuscation technique. The proposed system uses natural language processing capabilities to analyze input data and to select the best course of action. The system has capabilities to learn from previous experience, thus improving itself and reducing the risk of wrong data classification.

  4. Unsaturated flow characterization utilizing water content data collected within the capillary fringe

    Science.gov (United States)

    Baehr, Arthur; Reilly, Timothy J.

    2014-01-01

    An analysis is presented to determine unsaturated zone hydraulic parameters based on detailed water content profiles, which can be readily acquired during hydrological investigations. Core samples taken through the unsaturated zone allow for the acquisition of gravimetrically determined water content data as a function of elevation at 3 inch intervals. This dense spacing of data provides several measurements of the water content within the capillary fringe, which are utilized to determine capillary pressure function parameters via least-squares calibration. The water content data collected above the capillary fringe are used to calculate dimensionless flow as a function of elevation providing a snapshot characterization of flow through the unsaturated zone. The water content at a flow stagnation point provides an in situ estimate of specific yield. In situ determinations of capillary pressure function parameters utilizing this method, together with particle-size distributions, can provide a valuable supplement to data libraries of unsaturated zone hydraulic parameters. The method is illustrated using data collected from plots within an agricultural research facility in Wisconsin.
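    A sketch of the least-squares calibration step, assuming a van Genuchten form for the capillary pressure function (the record does not name the functional form it calibrates) and made-up, illustrative profile data:

      import numpy as np
      from scipy.optimize import curve_fit

      def van_genuchten(h, theta_r, theta_s, alpha, n):
          """Water content vs capillary pressure head h (cm); the van
          Genuchten form is an assumption, not taken from the record."""
          m = 1.0 - 1.0 / n
          return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

      # h: height above the water table for each core sample (a proxy for
      # capillary pressure head under hydrostatic conditions); theta: the
      # water contents measured at 3-inch intervals. Values are illustrative.
      h = np.array([7.5, 15.0, 22.5, 30.0, 37.5, 45.0])        # cm
      theta = np.array([0.39, 0.38, 0.35, 0.28, 0.20, 0.15])

      popt, _ = curve_fit(van_genuchten, h, theta,
                          p0=[0.05, 0.40, 0.02, 2.0], maxfev=5000)
      theta_r, theta_s, alpha, n = popt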

  5. AUTOMATING THE DATA SECURITY PROCESS

    OpenAIRE

    Florin Ogigau-Neamtiu

    2017-01-01

    Contemporary organizations face big data security challenges in the cyber environment due to modern threats and a business working model that relies heavily on collaboration, data sharing, tool integration, increased mobility, etc. Today's data classification and data obfuscation selection processes (encryption, masking or tokenization) suffer from the degree of human involvement in the process. Organizations need to strengthen the data security domain by classifying information based on its...

  6. Effective parameters, effective processes: From porous flow physics to in situ remediation technology

    International Nuclear Information System (INIS)

    Pruess, K.

    1995-06-01

    This paper examines the conceptualization of multiphase flow processes on the macroscale, as needed in field applications. It emphasizes that upscaling from the pore-level will in general not only introduce effective parameters but will also give rise to ''effective processes,'' i.e., the emergence of new physical effects that may not have a microscopic counterpart. ''Phase dispersion'' is discussed as an example of an effective process for the migration and remediation of non-aqueous phase liquid (NAPL) contaminants in heterogeneous media. An approximate space-and-time scaling invariance is derived for gravity-driven liquid flow in unsaturated two-dimensional porous media (fractures). Issues for future experimental and theoretical work are identified

  7. Optimizing Endoscope Reprocessing Resources Via Process Flow Queuing Analysis.

    Science.gov (United States)

    Seelen, Mark T; Friend, Tynan H; Levine, Wilton C

    2018-05-04

    The Massachusetts General Hospital (MGH) is merging its older endoscope processing facilities into a single new facility that will enable high-level disinfection of endoscopes for both the ORs and Endoscopy Suite, leveraging economies of scale for improved patient care and optimal use of resources. Finalized resource planning was necessary for the merging of facilities to optimize staffing and make final equipment selections to support the nearly 33,000 annual endoscopy cases. To accomplish this, we employed operations management methodologies, analyzing the physical process flow of scopes throughout the existing Endoscopy Suite and ORs and mapping the future state capacity of the new reprocessing facility. Further, our analysis required the incorporation of historical case and reprocessing volumes in a multi-server queuing model to identify any potential wait times as a result of the new reprocessing cycle. We also performed sensitivity analysis to understand the impact of future case volume growth. We found that our future-state reprocessing facility, given planned capital expenditures for automated endoscope reprocessors (AERs) and pre-processing sinks, could easily accommodate current scope volume well within the necessary pre-cleaning-to-sink reprocessing time limit recommended by manufacturers. Further, in its current planned state, our model suggested that the future endoscope reprocessing suite at MGH could support an increase in volume of at least 90% over the next several years. Our work suggests that with simple mathematical analysis of historic case data, significant changes to a complex perioperative environment can be made with ease while keeping patient safety as the top priority.
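    For intuition, the expected wait in a multi-server queue of the kind the study analyzes can be sketched with the classic M/M/c (Erlang C) formula; the paper's actual model and parameters are not reproduced, so the rates below are placeholders.

      from math import factorial

      def erlang_c_wait(lam, mu, c):
          """Mean wait in queue (hours) for an M/M/c system: arrival rate
          lam (scopes/hour), service rate mu (scopes per AER per hour),
          c parallel servers (AERs, an integer)."""
          a = lam / mu                      # offered load (Erlangs)
          rho = a / c                       # utilization, must be < 1
          if rho >= 1:
              raise ValueError("unstable queue: utilization >= 1")
          tail = a ** c / (factorial(c) * (1 - rho))
          p_wait = tail / (sum(a ** k / factorial(k) for k in range(c)) + tail)
          return p_wait / (c * mu - lam)    # Erlang C probability / spare rate

      # Example: 12 scopes/hour arriving, 2/hour per AER, 8 AERs.
      print(erlang_c_wait(12.0, 2.0, 8))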

  8. Extracting quantitative three-dimensional unsteady flow direction from tuft flow visualizations

    Energy Technology Data Exchange (ETDEWEB)

    Omata, Noriyasu; Shirayama, Susumu, E-mail: omata@nakl.t.u-tokyo.ac.jp, E-mail: sirayama@sys.t.u-tokyo.ac.jp [Department of Systems Innovation, School of Engineering, The University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo, 113-8656 (Japan)

    2017-10-15

    We focus on the qualitative but widely used method of tuft flow visualization, and propose a method for quantifying it using information technology. By applying stereo image processing and computer vision, the three-dimensional (3D) flow direction in a real environment can be obtained quantitatively. In addition, we show that the flow can be divided temporally by performing appropriate machine learning on the data. Acquisition of flow information in real environments is important for design development, but it is generally considered difficult to apply simulations or quantitative experiments to such environments. Hence, qualitative methods including the tuft method are still in use today. Although attempts have been made previously to quantify such methods, it has not been possible to acquire 3D information. Furthermore, even if quantitative data could be acquired, analysis was often performed empirically or qualitatively. In contrast, we show that our method can acquire 3D information and analyze the measured data quantitatively. (paper)

  9. Extracting quantitative three-dimensional unsteady flow direction from tuft flow visualizations

    International Nuclear Information System (INIS)

    Omata, Noriyasu; Shirayama, Susumu

    2017-01-01

    We focus on the qualitative but widely used method of tuft flow visualization, and propose a method for quantifying it using information technology. By applying stereo image processing and computer vision, the three-dimensional (3D) flow direction in a real environment can be obtained quantitatively. In addition, we show that the flow can be divided temporally by performing appropriate machine learning on the data. Acquisition of flow information in real environments is important for design development, but it is generally considered difficult to apply simulations or quantitative experiments to such environments. Hence, qualitative methods including the tuft method are still in use today. Although attempts have been made previously to quantify such methods, it has not been possible to acquire 3D information. Furthermore, even if quantitative data could be acquired, analysis was often performed empirically or qualitatively. In contrast, we show that our method can acquire 3D information and analyze the measured data quantitatively. (paper)

  10. An analysis of transient flow in upland watersheds: interactions between structure and process

    Science.gov (United States)

    David Lawrence Brown

    1995-01-01

    The physical structure and hydrological processes of upland watersheds interact in response to forcing functions such as rainfall, leading to storm runoff generation and pore pressure evolution. Transient fluid flow through distinct flow paths such as the soil matrix, macropores, saprolite, and bedrock may be viewed as a consequence of such interactions. Field...

  11. Combined Acquisition/Processing For Data Reduction

    Science.gov (United States)

    Kruger, Robert A.

    1982-01-01

    Digital image processing systems necessarily consist of three components: acquisition, storage/retrieval and processing. The acquisition component requires the greatest data handling rates. By coupling the acquisition with some online hardwired processing, data rates and capacities for short-term storage can be reduced. Furthermore, long-term storage requirements can be reduced further by appropriate processing and editing of image data contained in short-term memory. The net result could be reduced performance requirements for mass storage, processing and communication systems. Reduced amounts of data should also speed later data analysis and diagnostic decision making.

  12. Estimation of daily flow rate of photovoltaic water pumping systems using solar radiation data

    Directory of Open Access Journals (Sweden)

    M. Benghanem

    2018-03-01

    Full Text Available This paper presents a simple model that contributes to the sizing of photovoltaic (PV) water pumping systems. The nonlinear relation between water flow rate and solar power was first obtained experimentally and then used for performance prediction. The proposed model enables us to simulate the water flow rate using solar radiation data for different heads (50 m, 60 m, 70 m and 80 m) and for an 8S × 3P PV array configuration. The experimental data were obtained with our pumping test facility located at the Madinah site (Saudi Arabia). The performances are calculated using measured solar radiation data from different locations in Saudi Arabia. Knowing the solar radiation data, we have estimated with good precision the water flow rate Q at five locations (Al-Jouf, Solar Village, AL-Ahsa, Madinah and Gizan) in Saudi Arabia. The flow rate Q increases with increasing pump power for different heads, following the proposed nonlinear model. Keywords: Photovoltaic water pumping system, Solar radiation data, Simulation, Flow rate
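    A hedged sketch of how such a nonlinear Q(P) relation might be calibrated and used for one head; the record does not state its functional form, so the cubic fit and every data point below are assumptions.

      import numpy as np

      # Hypothetical calibration pairs of pump power P (W) and flow rate
      # Q (m3/h) for a single head; not the authors' measurements.
      P = np.array([200., 300., 400., 500., 600., 700.])
      Q = np.array([0.0, 0.6, 1.4, 2.1, 2.6, 2.9])

      coeffs = np.polyfit(P, Q, deg=3)     # assumed cubic Q(P) model

      def flow_rate(p_watts):
          """Estimated flow rate at a given PV array output power."""
          return max(0.0, float(np.polyval(coeffs, p_watts)))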

  13. Development of a methodology to assess future trends in low flows at the watershed scale using solely climate data

    Science.gov (United States)

    Foulon, Étienne; Rousseau, Alain N.; Gagnon, Patrick

    2018-02-01

    Low flow conditions are governed by short-to-medium term weather conditions or long term climate conditions. This prompts the question: given climate scenarios, is it possible to assess future extreme low flow conditions from climate data indices (CDIs)? Or should we rely on the conventional approach of using outputs of climate models as inputs to a hydrological model? Several CDIs were computed using 42 climate scenarios over the years 1961-2100 for two watersheds located in Québec, Canada. The relationship between the CDIs and hydrological data indices (HDIs; 7- and 30-day low flows for two hydrological seasons) was examined through correlation analysis to identify the indices governing low flows. Results of the Mann-Kendall test, with a modification for autocorrelated data, clearly identified trends. A partial correlation analysis allowed attributing the observed trends in HDIs to trends in specific CDIs. Furthermore, results showed that, even during the spatial validation process, the methodological framework was able to assess trends in low flow series from: (i) trends in the effective drought index (EDI) computed from rainfall plus snowmelt minus potential evapotranspiration (PET) amounts over ten to twelve months of the hydrological snow cover season or (ii) the cumulative difference between rainfall and PET over five months of the snow-free season. For 80% of the climate scenarios, trends in HDIs were successfully attributed to trends in CDIs. Overall, this paper introduces an efficient methodological framework to assess future trends in low flows given climate scenarios. The outcome may prove useful to municipalities concerned with source water management under changing climate conditions.
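    A basic Python sketch of the Mann-Kendall test on which the trend detection rests; the study applies a variant modified for autocorrelated series, which this plain version omits.

      import numpy as np
      from scipy.stats import norm

      def mann_kendall(x):
          """Basic Mann-Kendall trend test (no ties, no autocorrelation
          correction). Returns the S statistic and a two-sided p-value."""
          x = np.asarray(x)
          n = len(x)
          s = sum(np.sign(x[j] - x[i])
                  for i in range(n - 1) for j in range(i + 1, n))
          var_s = n * (n - 1) * (2 * n + 5) / 18.0   # no-ties variance
          z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
          p = 2 * (1 - norm.cdf(abs(z)))
          return s, p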

  14. Interpretation of lunar heat flow data

    International Nuclear Information System (INIS)

    Conel, J.E.; Morton, J.B.

    1975-01-01

    Lunar heat flow observations at the Apollo 15 and 17 sites can be interpreted to imply bulk U concentrations for the Moon of 5 to 8 times those of normal chondrites and 2 to 4 times terrestrial values inferred from the Earth's heat flow and the assumption of thermal steady state between surface heat flow and heat production. A simple model of nearsurface structure that takes into account the large difference in (highly insulating) regolith thickness between mare and highland provinces is considered. This model predicts atypically high local values of heat flow near the margins of mare regions--possibly a factor of 10 or so higher than the global average. A test of the proposed model using multifrequency microwave techniques appears possible wherein heat flow traverse measurements are made across mare-highland contacts. The theoretical considerations discussed here urge caution in attributing global significance to point heat-flow measurements on the Moon

  15. Flow Dynamics of green sand in the DISAMATIC moulding process using Discrete element method (DEM)

    International Nuclear Information System (INIS)

    Hovad, E; Walther, J H; Thorborg, J; Hattel, J H; Larsen, P

    2015-01-01

    Production of sand moulds in the DISAMATIC casting process is simulated with the discrete element method (DEM). The main purpose is to simulate the dynamics of the green sand flow during production of the sand mould. The sand shot, the first stage of the DISAMATIC casting process, is simulated. Depending on the actual casting geometry, the mould can be geometrically quite complex, involving e.g. shadowing effects, and this is directly reflected in the sand flow during the moulding process. In the present work a mould chamber with “ribs” at the walls is chosen as a baseline geometry to emulate some of these important conditions found in the real moulding process. The sand flow is simulated with the DEM and compared with corresponding video footage from the interior of the chamber during the moulding process. The effects of the rolling resistance and the static friction coefficient are analysed and discussed in relation to the experimental findings. (paper)

  16. Parallel processing of genomics data

    Science.gov (United States)

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

    The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, so this flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face those issues, efficient, preferably parallel, bioinformatics software is needed to preprocess and analyze the data, for instance to highlight genetic variation associated with complex diseases. In this paper we present an algorithm for the parallel preprocessing and statistical analysis of genomics data, able to handle high-dimensional data with good response times. The proposed system is able to find statistically significant biological markers that discriminate between classes of patients who respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.
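    A schematic illustration of the chunked, embarrassingly parallel preprocessing stage such pipelines rely on; the paper's actual algorithm is not reproduced, and the per-record step here is a toy normalization.

      from multiprocessing import Pool

      def preprocess(record):
          """Toy per-record step: normalize one expression vector
          (placeholder for the real quality-control pipeline)."""
          sample_id, values = record
          m = max(values) or 1.0
          return sample_id, [v / m for v in values]

      if __name__ == "__main__":
          # Hypothetical dataset: (sample_id, raw values) pairs.
          data = [(f"s{i}", [float(i), 2.0 * i, 3.0 * i + 1])
                  for i in range(1000)]
          with Pool() as pool:              # one worker per CPU core
              cleaned = pool.map(preprocess, data, chunksize=64)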

  17. Flow control and routing techniques for integrated voice and data networks

    Science.gov (United States)

    Ibe, O. C.

    1981-10-01

    We consider a model of integrated voice and data networks. In this model the network flow problem is formulated as a convex optimization problem. The objective function comprises two types of cost functions: the congestion cost functions, which limit the average input traffic to values compatible with the network conditions; and the rate limitation cost functions, which ensure that all conversations are fairly treated. A joint flow control and routing algorithm is constructed which determines the routes for each conversation, and effects flow control by setting voice packet lengths and data input rates in a manner that achieves optimal tradeoff between each user's satisfaction and the cost of network congestion. An additional congestion control protocol is specified which could be used in conjunction with the algorithm to make the latter respond more dynamically to network congestion.

  18. Neutron radiography for visualization of liquid metal processes: bubbly flow for CO2 free production of Hydrogen and solidification processes in EM field

    Science.gov (United States)

    Baake, E.; Fehling, T.; Musaeva, D.; Steinberg, T.

    2017-07-01

    The paper describes the results of two experimental investigations aimed at extending the ability of neutron radiography to visualize two-phase processes in electromagnetically (EM) driven melt flows. In the first experiment, an Argon bubbly flow in molten Gallium - a simulation of the CO2-free production of Hydrogen - was investigated and visualized, and the ability of EM stirring to control the residence time of the bubbles in the melt was tested. The second experiment addressed visualization of solidification front formation under the influence of an EM field; on the basis of the neutron shadow images, the shape of the growing ingot, influenced by turbulent flows, was examined. In both cases rotating permanent magnets agitated the melt flow. The experimental results have shown that neutron radiography can be successfully employed for obtaining visual information about the described processes.

  19. Data processing on FPGAs

    CERN Document Server

    Teubner, Jens

    2013-01-01

    Roughly a decade ago, power consumption and heat dissipation concerns forced the semiconductor industry to radically change its course, shifting from sequential to parallel computing. Unfortunately, improving performance of applications has now become much more difficult than in the good old days of frequency scaling. This is also affecting databases and data processing applications in general, and has led to the popularity of so-called data appliances-specialized data processing engines, where software and hardware are sold together in a closed box. Field-programmable gate arrays (FPGAs) incr

  20. Calculation of pressure gradients from MR velocity data in a laminar flow model

    International Nuclear Information System (INIS)

    Adler, R.S.; Chenevert, T.L.; Fowlkes, J.B.; Pipe, J.G.; Rubin, J.M.

    1990-01-01

    This paper reports on the ability of current imaging modalities to provide velocity-distribution data that offer the possibility of noninvasive pressure-gradient determination from an appropriate rheologic model of flow. A simple laminar flow model is considered at low Reynolds number, Re. The calculated and measured pressure gradients satisfied (dp/dz)calc = 0.59 + 1.13 × (dp/dz)meas, R² = .994, in units of dyne/cm²/cm for the range of flows considered. The authors' results indicate the potential usefulness of noninvasive pressure-gradient determinations from quantitative analysis of imaging-derived velocity data

  1. Metal flow of a tailor-welded blank in deep drawing process

    Science.gov (United States)

    Yan, Qi; Guo, Ruiquan

    2005-01-01

    Tailor welded blanks are used in the automotive industry to consolidate parts, reduce weight, and increase safety. In recent years, this technology has been developing rapidly in China, where tailor welded blanks have been applied in many automobile parts such as rails, door inners, bumpers, floor panels, etc. Concern for the properties of tailor welded blanks has become more and more important for the automobile industry. Much research has shown that the strength of the welded seam is higher than that of the base metal, so weld failure is not a critical strength issue. However, the formability of tailor welded blanks in the stamping process is complex. In particular, the metal flow of tailor welded blanks during stamping must be investigated thoroughly in order to reduce the scrap rate in automobile factories. In this paper, the metal flow behavior of tailor welded blanks made by laser welding with two different thickness combinations was studied in the deep drawing process. Simulations and experimental verification of the movement of the weld line are discussed in detail. Results showed that controlling the movement of the weld seam during the stamping process through measures applied to the blank holder was effective.

  2. Hybrid Pluggable Processing Pipeline (HyP3): A cloud-based infrastructure for generic processing of SAR data

    Science.gov (United States)

    Hogenson, K.; Arko, S. A.; Buechler, B.; Hogenson, R.; Herrmann, J.; Geiger, A.

    2016-12-01

    A problem often faced by Earth science researchers is how to scale algorithms that were developed against a few datasets up to regional or global scales. One significant hurdle can be the processing and storage resources available for such a task, not to mention the administration of those resources. As a processing environment, the cloud offers nearly unlimited potential for compute and storage, with limited administration required. The goal of the Hybrid Pluggable Processing Pipeline (HyP3) project was to demonstrate the utility of the Amazon cloud to process large amounts of data quickly and cost effectively, while remaining generic enough to incorporate new algorithms with limited administration time or expense. Principally built by three undergraduate students at the ASF DAAC, the HyP3 system relies on core Amazon services such as Lambda, the Simple Notification Service (SNS), Relational Database Service (RDS), Elastic Compute Cloud (EC2), Simple Storage Service (S3), and Elastic Beanstalk. The HyP3 user interface was written using Elastic Beanstalk, and the system uses SNS and Lambda to handle creating, instantiating, executing, and terminating EC2 instances automatically. Data are sent to S3 for delivery to customers and removed using standard data lifecycle management rules. In HyP3 all data processing is ephemeral; there are no persistent processes consuming compute and storage resources or generating added cost. When complete, HyP3 will leverage the automatic scaling up and down of EC2 compute power to respond to event-driven demand surges correlated with natural disasters or reprocessing efforts. Massive simultaneous processing within EC2 will be able to match such demand spikes in ways conventional physical computing never could, and then tail off, incurring no costs when not needed. This presentation will focus on the development techniques and technologies that were used in developing the HyP3 system. Data and process flow will be shown

  3. Upper Meter Processes: Short Wind Waves, Surface Flow, and Micro-Turbulence

    National Research Council Canada - National Science Library

    Jaehne, Bernd

    2000-01-01

    The primary goal of this project was to advance the knowledge of small-scale air-sea interaction processes at the ocean surface, focussing on the dynamics of short waves, the surface flow field and the micro-turbulence...

  4. Development process of muzzle flows including a gun-launched missile

    OpenAIRE

    Zhuo Changfei; Feng Feng; Wu Xiaosong

    2015-01-01

    Numerical investigations of the launch process of a gun-launched missile, from the muzzle of a cannon to the free-flight stage, have been performed in this paper. The dynamic overlapped grids approach is applied to deal with the problem of a moving gun-launched missile. The high-resolution upwind scheme (AUSMPW+) and a detailed reaction kinetics model are adopted to solve the chemical non-equilibrium Euler equations for dynamic grids. The development process and flow field structure of m...

  5. Nuclear medicine imaging and data processing

    International Nuclear Information System (INIS)

    Bell, P.R.; Dillon, R.S.

    1978-01-01

    The Oak Ridge Imaging System (ORIS) is a software operating system structured around the Digital Equipment Corporation's PDP-8 minicomputer which provides a complete range of image manipulation procedures. Through its modular design it remains open-ended for easy expansion to meet future needs. Already included in the system are image access routines for use with the rectilinear scanner or gamma camera (both static and flow studies); display hardware design and corresponding software; archival storage provisions; and, most important, many image processing techniques. The image processing capabilities include image defect removal, smoothing, nonlinear bounding, preparation of functional images, and transaxial emission tomography reconstruction from a limited number of views

  6. Advancements in Big Data Processing

    CERN Document Server

    Vaniachine, A; The ATLAS collaboration

    2012-01-01

    The ever-increasing volumes of scientific data present new challenges for Distributed Computing and Grid technologies. The emerging Big Data revolution drives new discoveries in scientific fields including nanotechnology, astrophysics, high-energy physics, biology and medicine. New initiatives are transforming data-driven scientific fields by pushing Big Data limits, enabling massive data analysis in new ways. In petascale data processing scientists deal with datasets, not individual files. As a result, a task (comprised of many jobs) became the unit of petascale data processing on the Grid. Splitting a large data processing task into jobs enables fine-granularity checkpointing, analogous to the splitting of a large file into smaller TCP/IP packets during data transfers. Transferring large data in small packets achieves reliability through automatic re-sending of dropped TCP/IP packets. Similarly, transient job failures on the Grid can be recovered by automatic re-tries to achieve reliable Six Sigma produc...

  7. Digital Archiving of People Flow by Recycling Large-Scale Social Survey Data of Developing Cities

    Science.gov (United States)

    Sekimoto, Y.; Watanabe, A.; Nakamura, T.; Horanont, T.

    2012-07-01

    Data on people flow have become increasingly important in the field of business, including marketing and public services. Although mobile phones enable a person's position to be located to a certain degree, it is a challenge to acquire sufficient data from people with mobile phones. In order to grasp people flow in its entirety, it is important to establish a practical method of reconstructing people flow from various kinds of existing fragmentary spatio-temporal data, such as social survey data. For example, although typical Person Trip Survey data collected by the public sector record only fragmentary spatio-temporal positions, the data are attractive given a sample size sufficiently large to estimate the entire flow of people. In this study, we apply our proposed basic method to Japan International Cooperation Agency (JICA) PT data pertaining to developing cities around the world, and we propose some correction methods to resolve the difficulties of applying it stably across many cities and infrastructure datasets.

  8. Use FlowRepository to share your clinical data upon study publication.

    Science.gov (United States)

    Spidlen, Josef; Brinkman, Ryan R

    2018-01-01

    A fundamental tenet of scientific research is that published results, including underlying data, should be open to independent validation and refutation. Data sharing encourages collaboration, facilitates quality and reduces redundancy in data production. Authors submitting manuscripts to several journals have already adopted the habit of sharing their underlying flow cytometry data by deposition to FlowRepository - a data repository that is jointly supported by the International Society for Advancement of Cytometry, the International Clinical Cytometry Society and the European Society for Clinical Cell Analysis. De-identification is required for publishing data from clinical studies, and we discuss ways to satisfy data sharing requirements and patient privacy requirements simultaneously. Scientific communities in the fields of microarray, proteomics, and sequencing have been benefiting from reuse and re-exploration of data in public repositories for over a decade. We believe it is time that clinicians follow suit and that de-identified clinical data also become routinely available along with published cytometry-based findings. © 2016 International Clinical Cytometry Society.

  9. Amination of Aryl Halides and Esters Using Intensified Continuous Flow Processing

    Directory of Open Access Journals (Sweden)

    Thomas M. Kohl

    2015-09-01

    Full Text Available Significant process intensification of the amination reactions of aryl halides and esters has been demonstrated using continuous flow processing. Using this technology, traditionally difficult amination reactions have been performed safely at elevated temperatures. These reactions were successfully conducted on laboratory-scale coil reactor modules with 1 mm internal diameter (ID) and on a preparatory-scale tubular reactor with 6 mm ID containing static mixers.

  10. PGAS in-memory data processing for the Processing Unit of the Upgraded Electronics of the Tile Calorimeter of the ATLAS Detector

    International Nuclear Information System (INIS)

    Ohene-Kwofie, Daniel; Otoo, Ekow

    2015-01-01

    The ATLAS detector, operated at the Large Hadron Collider (LHC), records proton-proton collisions at CERN every 50 ns, resulting in a sustained data flow up to PB/s. The upgraded Tile Calorimeter of the ATLAS experiment will sustain about 5 PB/s of digital throughput. These massive data rates require extremely fast data capture and processing. Although there has been a steady increase in the processing speed of CPUs/GPGPUs assembled for high performance computing, the rate of data input and output, even under parallel I/O, has not kept up with the general increase in computing speeds. The problem then is whether one can implement an I/O subsystem infrastructure capable of meeting the computational speeds of the advanced computing systems at the petascale and exascale level. We propose a system architecture that leverages the Partitioned Global Address Space (PGAS) model of computing to maintain an in-memory data-store for the Processing Unit (PU) of the upgraded electronics of the Tile Calorimeter, which is proposed to be used as a high-throughput general-purpose co-processor to the sROD of the upgraded Tile Calorimeter. The physical memory of the PUs is aggregated into a large global logical address space using RDMA-capable interconnects such as PCI-Express to enhance data processing throughput. (paper)

  11. Detection of flow mixing processes using transmission methods in high-duty heat exchanging apparatus

    International Nuclear Information System (INIS)

    Seiffert, V.

    1981-01-01

    The COBRA-IIIC program modified by MIT has been further improved for verifying the experimental studies described in the thesis. This work has been accompanied by a review and modification of the relevant analytical equations. A mathematical relationship has been set up for the cross-mixing phenomenon of shearing flow in the narrowest cross-section between two heating rods, and this relationship is taken into account in the sub-channel analysis. Despite the very complex, superposed processes involved, the results obtained by the improved sub-channel analysis program using the newly derived cross-mixing approach are quantitatively well confirmed by comparison with experimental data. Applying the improved sub-channel analysis program to the author's two-phase flow experiments (air-water and water-steam) and to rod bundle geometries found in the literature, the cross-mixing approach presented in the thesis is shown to be reliable (orig./GL) [de

  12. Flow processes in electric discharge drivers

    Science.gov (United States)

    Baganoff, D.

    1975-01-01

    The performance of an electric discharge shock tube is discussed from the point of view that the conditions at the sonic station are the primary controlling variables (likewise in comparing designs), and that the analysis of the flow on either side of the sonic station should be done separately. The importance of considering mass-flow rate in matching a given driver design to the downstream flow required for a particular shock-wave speed is stressed. It is shown that a driver based on the principle of liquid injection (of H2) is superior to one based on the Ludwieg tube, because of the greater mass-flow rate and the absence of a massive diaphragm.

  13. A Pythonic Approach for Computational Geosciences and Geo-Data Processing

    Science.gov (United States)

    Morra, G.; Yuen, D. A.; Lee, S. M.

    2016-12-01

    Computational methods and data analysis play a constantly increasing role in the Earth Sciences; however, students and professionals must climb a steep learning curve before reaching the level that allows them to run effective models. Furthermore, the recent arrival of powerful new machine learning tools such as Torch and TensorFlow has opened new possibilities but also created a new realm of complications related to the completely different technology employed. We present a series of examples written entirely in Python, a language that combines the simplicity of Matlab with the power and speed of compiled languages such as C, and apply them to a wide range of geological processes such as porous media flow, multiphase fluid dynamics, creeping flow and many-fault interaction. We also explore ways in which machine learning can be employed in combination with numerical modelling, from immediately interpreting a large number of modeling results to optimizing a set of modeling parameters to obtain a desired simulation. We show that with Python, undergraduate and graduate students can learn advanced numerical technologies with minimal dedicated effort, which in turn encourages them to develop more numerical tools and progress quickly in their computational abilities. We also show how Python allows combining modeling with machine learning like pieces of LEGO, thereby simplifying the transition towards a new kind of scientific geo-modelling. The conclusion is that Python is an ideal tool for creating a geosciences infrastructure that allows users to quickly develop tools, reuse techniques and encourage collaborative efforts to interpret and integrate geo-data in profound new ways.
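    In that spirit, a short, self-contained Python example of one of the listed processes: single-phase porous media flow treated as 1D pressure diffusion with an explicit finite-difference scheme. The parameter values are illustrative only.

      import numpy as np

      def darcy_pressure_1d(k, phi, mu, ct, p0, dx, dt, steps):
          """Explicit finite differences for 1D pressure diffusion,
          dp/dt = (k / (phi * mu * ct)) * d2p/dx2, with fixed-pressure
          boundaries (SI units throughout)."""
          eta = k / (phi * mu * ct)               # hydraulic diffusivity
          assert eta * dt / dx**2 <= 0.5, "explicit scheme unstable"
          p = p0.copy()
          for _ in range(steps):
              p[1:-1] += eta * dt / dx**2 * (p[2:] - 2 * p[1:-1] + p[:-2])
          return p

      # Illustrative run: a pressure pulse relaxing in a uniform medium.
      x = np.linspace(0.0, 1.0, 101)
      p0 = np.where(np.abs(x - 0.5) < 0.05, 2.0e5, 1.0e5)
      p = darcy_pressure_1d(k=1e-12, phi=0.2, mu=1e-3, ct=1e-8,
                            p0=p0, dx=0.01, dt=1e-4, steps=500)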

  14. Data processing system for NBT experiments

    International Nuclear Information System (INIS)

    Takahashi, C.; Hosokawa, M.; Shoji, T.; Fujiwara, M.

    1981-07-01

    A data processing system for the Nagoya Bumpy Torus (NBT) has been developed. Since plasmas are produced and heated in steady state by use of high-power microwaves, data sampling and processing take place over a long time scale, on the order of one minute. The system, which consists of a NOVA 3/12 minicomputer and many data acquisition devices, is designed to sample and process a large amount of data before the next discharge starts. Several features of such a long-time-scale data processing system are described in detail. (author)

  15. Pilot points method for conditioning multiple-point statistical facies simulation on flow data

    Science.gov (United States)

    Ma, Wei; Jafarpour, Behnam

    2018-05-01

    We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, their calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
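    A minimal NumPy sketch of the ES analysis step mentioned above, in its standard perturbed-observation matrix form (updating an ensemble of, e.g., log-permeability fields from production data); the names and shapes are assumptions, not the authors' code.

      import numpy as np

      def ensemble_smoother_update(M, D, d_obs, r_var):
          """One ES analysis step. M: (n_param, n_ens) parameter ensemble;
          D: (n_data, n_ens) simulated data ensemble; d_obs: observed data
          vector; r_var: vector of observation error variances."""
          n_ens = M.shape[1]
          Am = M - M.mean(axis=1, keepdims=True)       # parameter anomalies
          Ad = D - D.mean(axis=1, keepdims=True)       # data anomalies
          Cmd = Am @ Ad.T / (n_ens - 1)                # cross-covariance
          Cdd = Ad @ Ad.T / (n_ens - 1)                # data covariance
          K = Cmd @ np.linalg.inv(Cdd + np.diag(r_var))  # Kalman-type gain
          # Perturb the observations per member, then update each column.
          d_pert = d_obs[:, None] + np.sqrt(r_var)[:, None] * np.random.randn(*D.shape)
          return M + K @ (d_pert - D)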

  16. Mathematical modelling of thermal and flow processes in vertical ground heat exchangers

    Directory of Open Access Journals (Sweden)

    Pater Sebastian

    2017-12-01

    Full Text Available The main task of mathematical modelling of thermal and flow processes in a vertical ground heat exchanger (BHE - Borehole Heat Exchanger) is to determine the heat flux per unit borehole depth that can be obtained or transferred during operation of the installation. This task is indirectly associated with finding the temperature of the circulating fluid flowing out of the U-tube at a given fluid inlet temperature, with respect to the other operational parameters of the installation.

  17. Processing of nuclear data - demonstrations

    International Nuclear Information System (INIS)

    Panini, G.C.

    1995-01-01

    Experimental data are not suitable for direct processing by computer codes. Data must be compiled in order to be compared, normalized, amended; gaps should be filled, differential data supplied. Evaluated data are given in a consistent computer readable format so as to facilitate the checking, the plotting, the comparison with the experimental source of the data. Processing codes have been developed for producing working libraries from the different sources. EXFOR is a complementary and essential tool for evaluators. It consists of a collection of experimental data in computer readable format. (R.P.)

  18. Quality comparison of continuous steam sterilization segmented-flow aseptic processing versus conventional canning of whole and sliced mushrooms.

    Science.gov (United States)

    Anderson, N M; Walker, P N

    2011-08-01

    This study was carried out to investigate segmented-flow aseptic processing of particle foods. A pilot-scale continuous steam sterilization unit capable of producing shelf stable aseptically processed whole and sliced mushrooms was developed. The system utilized pressurized steam as the heating medium to achieve high temperature-short time processing conditions with high and uniform heat transfer that will enable static temperature penetration studies for process development. Segmented-flow technology produced a narrower residence time distribution than pipe-flow aseptic processing; thus, whole and sliced mushrooms were processed only as long as needed to achieve the target F₀  = 7.0 min and were not overcooked. Continuous steam sterilization segmented-flow aseptic processing produced shelf stable aseptically processed mushrooms of superior quality to conventionally canned mushrooms. When compared to conventionally canned mushrooms, aseptically processed yield (weight basis) increased 6.1% (SD = 2.9%) and 6.6% (SD = 2.2%), whiteness (L) improved 3.1% (SD = 1.9%) and 4.7% (SD = 0.7%), color difference (ΔE) improved 6.0% (SD = 1.3%) and 8.5% (SD = 1.5%), and texture improved 3.9% (SD = 1.7%) and 4.6% (SD = 4.2%), for whole and sliced mushrooms, respectively. Segmented-flow aseptic processing eliminated a separate blanching step, eliminated the unnecessary packaging of water and promoted the use of bag-in-box and other versatile aseptic packaging methods. Segmented-flow aseptic processing is capable of producing shelf stable aseptically processed particle foods of superior quality to a conventionally canned product. This unique continuous steam sterilization process eliminates the need for a separate blanching step, reduces or eliminates the need for a liquid carrier, and promotes the use of bag-in-box and other versatile aseptic packaging methods. © 2011 Institute of Food Technologists®

  19. Flow processes at low temperatures in ultrafine-grained aluminum

    International Nuclear Information System (INIS)

    Chinh, Nguyen Q.; Szommer, Peter; Csanadi, Tamas; Langdon, Terence G.

    2006-01-01

    Experiments were conducted to evaluate the flow behavior of pure aluminum at low temperatures. Samples were processed by equal-channel angular pressing (ECAP) to give a grain size of ∼1.2 μm and compression samples were cut from the as-pressed billets and tested over a range of strain rates at temperatures up to 473 K. The results show the occurrence of steady-state flow in these highly deformed samples and a detailed analysis gives a low strain rate sensitivity and an activation energy similar to the value for grain boundary diffusion. By using depth-sensing indentation testing and atomic force microscopy, it is shown that grain boundary sliding occurs in this material at low temperatures. This result is attributed to the presence of high-energy non-equilibrium boundaries in the severely deformed samples

  20. Estimation of daily flow rate of photovoltaic water pumping systems using solar radiation data

    Science.gov (United States)

    Benghanem, M.; Daffallah, K. O.; Almohammedi, A.

    2018-03-01

    This paper presents a simple model which allows us to contribute to studies of photovoltaic (PV) water pumping system sizing. The nonlinear relation between water flow rate and solar power was first obtained experimentally and then used for performance prediction. The proposed model enables us to simulate the water flow rate using solar radiation data for different heads (50 m, 60 m, 70 m and 80 m) and for an 8S × 3P PV array configuration. The experimental data were obtained with our pumping test facility located at the Madinah site (Saudi Arabia). Performance is calculated using measured solar radiation data from different locations in Saudi Arabia. Knowing the solar radiation data, we estimated the water flow rate Q with good precision at five locations (Al-Jouf, Solar Village, Al-Ahsa, Madinah and Gizan) in Saudi Arabia. The flow rate Q increases with pump power for different heads, following the proposed nonlinear model.
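    A minimal sketch of how such a nonlinear flow-rate model might be fitted and used, assuming SciPy is available; the model form, the critical-power term, and all data points below are illustrative placeholders, not the calibrated Madinah relation.

        import numpy as np
        from scipy.optimize import curve_fit

        def flow_model(power_w, a, p_c, b):
            """Assumed nonlinear form: no flow below a critical power p_c,
            then Q grows sublinearly with pump power."""
            excess = np.clip(power_w - p_c, 0.0, None)
            return a * excess ** b

        # Hypothetical measurements for one head (e.g. 50 m): power (W) vs flow (m3/h)
        p_data = np.array([400, 600, 800, 1000, 1200, 1400])
        q_data = np.array([0.0, 1.1, 2.0, 2.6, 3.1, 3.5])

        (a, p_c, b), _ = curve_fit(flow_model, p_data, q_data, p0=[0.1, 400, 0.5])
        print(flow_model(900.0, a, p_c, b))  # predicted flow at 900 W of PV power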

  1. Quantitative image processing in fluid mechanics

    Science.gov (United States)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  2. In Situ Field Testing of Processes

    International Nuclear Information System (INIS)

    Wang, J.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to update and document the data and subsequent analyses from ambient field-testing activities performed in underground drifts of the Yucca Mountain Site Characterization Project (YMP). This revision updates data and analyses presented in the initial issue of this AMR. This AMR was developed in accordance with the ''Technical Work Plan for Unsaturated Zone (UZ) Flow and Transport Process Model Report'' and ''Technical Work Plan for UZ Flow, Transport, and Coupled Processes Process Model Report''. These activities were performed to investigate in situ flow and transport processes. The evaluations provide the necessary framework to: (1) refine and confirm the conceptual model of matrix and fracture processes in the unsaturated zone (UZ) and (2) analyze the impact of excavation (including use of construction water and effect of ventilation) on the UZ flow and transport processes. This AMR is intended to support revisions to ''Conceptual and Numerical Models for UZ Flow and Transport'' and ''Unsaturated Zone Flow and Transport Model Process Model Report''. In general, the results discussed in this AMR are from studies conducted using a combination or a subset of the following three approaches: (1) air-injection tests, (2) liquid-release tests, and (3) moisture monitoring using in-drift sensors or in-borehole sensors, to evaluate the impact of excavation, ventilation, and construction-water usage on the surrounding rocks. The liquid-release tests and air-injection tests provide an evaluation of in situ fracture flow and the competing processes of matrix imbibition. Only the findings from testing and data not covered in the ''Seepage Calibration Model and Seepage Testing Data'' are analyzed in detail in the AMR.

  3. DIGITAL ARCHIVING OF PEOPLE FLOW BY RECYCLING LARGE-SCALE SOCIAL SURVEY DATA OF DEVELOPING CITIES

    Directory of Open Access Journals (Sweden)

    Y. Sekimoto

    2012-07-01

    Data on people flow have become increasingly important in the field of business, including the areas of marketing and public services. Although mobile phones enable a person's position to be located to a certain degree, it is a challenge to acquire sufficient data from people with mobile phones. In order to grasp people flow in its entirety, it is important to establish a practical method of reconstructing people flow from various kinds of existing fragmentary spatio-temporal data, such as social survey data. For example, although typical person trip survey data collected by the public sector record only fragmentary spatio-temporal positions, such data are attractive because the sample size is large enough to estimate the entire flow of people. In this study, we apply our proposed basic method to Japan International Cooperation Agency (JICA) person trip data pertaining to developing cities around the world, and we propose some correction methods to resolve the difficulties in applying it stably to many cities and to infrastructure data.

  4. Groundwater flow and sorption processes in fractured rocks (I)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Won Young; Woo, Nam Chul; Yum, Byoung Woo; Choi, Young Sub; Chae, Byoung Kon; Kim, Jung Yul; Kim, Yoo Sung; Hyun, Hye Ja; Lee, Kil Yong; Lee, Seung Gu; Youn, Youn Yul; Choon, Sang Ki [Korea Institute of Geology Mining and Materials, Taejon (Korea, Republic of)

    1996-12-01

    This study aims to characterize groundwater flow and the sorption processes of contaminants (groundwater solutes) along fractured crystalline rocks in Korea. Given that crystalline rock mass is an essential condition for using underground space, the significance of characterizing fractured crystalline rocks cannot be overemphasized. The behavior of groundwater contaminants is studied in relation to the subsurface structure, and eventually a quantitative technique will be developed to evaluate the impacts of the contaminants on the subsurface environment. The study was carried out at the Samkwang mine area in Chung-Nam Province. The site has Pre-Cambrian crystalline gneiss as bedrock, and the groundwater flow system through the bedrock fractures was expected to be understandable through study of the subsurface geologic structure via the mining tunnels. Borehole tests included core logging, televiewer logging, constant-pressure fixed-interval-length (FIL) tests and tracer tests. The results are summarized as follows: 1) To determine the hydraulic parameters of the fractured rock, transient flow analysis produces better results than steady-state flow analysis. 2) Based on the relationship between fracture distribution and measured transmissivities, the shallow part of the system can be considered a porous, continuous medium owing to well-developed fractures and weathering; however, the deeper part shows flow characteristics of a fracture-dominated system, satisfying the assumptions of the cubic law. 3) Transmissivities from the FIL tests averaged 6.12 × 10⁻⁷ m²/s. 4) Tracer test results indicate that groundwater flow in the study area is controlled by the connection, extension and geometry of fractures in the bedrock. 5) Hydraulic conductivity of the tracer-test interval was at most 7.2 × 10⁻⁶ m/s, with an effective porosity of 1.8%. 6) Composition of the groundwater varies

  5. The assessment of two-fluid models using critical flow data

    International Nuclear Information System (INIS)

    Shome, B.; Lahey, R.T. Jr.

    1992-01-01

    The behavior of two-phase flow is governed by the thermal-hydraulic transfers occurring across phasic interfaces. If correctly formulated, two-fluid models should yield all conceivable evolutions. Moreover, some experiments may be uniquely qualified for model assessment if they can isolate important closure models. This paper is primarily concerned with the possible assessment of the virtual mass force using air-water critical flow data, in which phase-change effects do not take place. The following conclusions can be drawn from this study: (1) The closure parameters, other than those for virtual mass, were found to have an insignificant effect on critical flow. In contrast, the void fraction profile and the slip ratio were observed to be sensitive to the virtual mass model. (2) It appears that air-water critical flow experiments may be effectively used for the assessment of the virtual mass force used in two-fluid models. In fact, such experiments are unique in their ability to isolate the spatial gradients in virtual mass models. It is hoped that this study will help stimulate the conduct of further critical flow experiments for the assessment of two-fluid models

  6. A coupled mechanical-hydrological methodology for modeling flow in jointed rock masses using laboratory data for the joint flow model

    International Nuclear Information System (INIS)

    Voss, C.F.; Bastian, R.J.; Shotwell, L.R.

    1986-01-01

    Pacific Northwest Laboratory (PNL) currently supports the U.S. Department of Energy's Office of Civilian Radioactive Waste Management in developing and evaluating analytical methods for assessing the suitability of sites for geologic disposal of high-level radioactive waste. The research includes consideration of hydrological, geomechanical, geochemical, and waste package components and the evaluation of the degree of coupling that can occur between two or more of these components. The PNL effort and those of other research groups investigating potential waste sites in the U.S. and abroad are producing a suite of computer codes to analyze the long-term performance of the proposed repository sites. This paper summarizes the ongoing research in rock mechanics at PNL involving flow through jointed rock. The objective of this research is to develop a methodology for modeling the coupled mechanical-hydrological process of flow through joints and then attempt to validate a ''simple'' model using small-scale laboratory test data as a basis for judging whether the approach has merit. This paper discusses the laboratory tests being conducted to develop a joint behavioral constitutive model for the numerical method under development and the modeling approach being considered

  7. Prediction of periodically correlated processes by wavelet transform and multivariate methods with applications to climatological data

    Science.gov (United States)

    Ghanbarzadeh, Mitra; Aminghafari, Mina

    2015-05-01

    This article studies the prediction of periodically correlated processes using wavelet transforms and multivariate methods, with applications to climatological data. Periodically correlated processes can be reformulated as multivariate stationary processes. Considering this fact, two new prediction methods are proposed. In the first method, we use stepwise regression between the principal components of the multivariate stationary process and past wavelet coefficients of the process to get a prediction. In the second method, we propose its multivariate version without a priori principal component analysis. Also, we study a generalization of the prediction methods dealing with a deterministic trend using exponential smoothing. Finally, we illustrate the performance of the proposed methods on simulated and real climatological data (ozone amounts, flows of a river, solar radiation, and sea levels) compared with the multivariate autoregressive model. The proposed methods give good results, as we expected.

  8. Continuous-Flow Processes in Heterogeneously Catalyzed Transformations of Biomass Derivatives into Fuels and Chemicals

    Directory of Open Access Journals (Sweden)

    Antonio A. Romero

    2012-07-01

    Continuous-flow chemical processes offer several advantages compared to batch chemistries. These are particularly relevant in the case of heterogeneously catalyzed transformations of biomass-derived platform molecules into valuable chemicals and fuels. This work aims to provide an overview of key continuous-flow processes developed to date for a series of transformations of platform chemicals, including alcohols, furanics, organic acids and polyols, using a wide range of heterogeneous catalysts based on supported metals, solid acids and bifunctional (metal + acidic) materials.

  9. Incorporation of sedimentological data into a calibrated groundwater flow and transport model

    International Nuclear Information System (INIS)

    Williams, N.J.; Young, S.C.; Barton, D.H.; Hurst, B.T.

    1997-01-01

    Analysis suggests that a high hydraulic conductivity (K) zone is associated with a former river channel at the Portsmouth Gaseous Diffusion Plant (PORTS). Two-dimensional (2-D) and three-dimensional (3-D) groundwater flow models were developed based on a sedimentological model to demonstrate the performance of a horizontal well for plume capture. The model produced a flow field with magnitudes and directions consistent with flow paths inferred from historical trichloroethylene (TCE) plume data. The most dominant feature affecting the well's performance was the preferential high- and low-K zones. Based on results from the calibrated flow and transport model, a passive groundwater collection system was designed and built. Initial flow rates and concentrations measured from a gravity-drained horizontal well agree closely with predicted values

  10. Modeling of multiphase flow with solidification and chemical reaction in materials processing

    Science.gov (United States)

    Wei, Jiuan

    Understanding of multiphase flow and the related heat transfer and chemical reactions is key to increasing productivity and efficiency in industrial processes. The objective of this thesis is to use computational approaches to investigate multiphase flow and its application in materials processing, especially in the following two areas: directional solidification, and pyrolysis and synthesis. In this thesis, numerical simulations are performed for crystal growth of several III-V and II-VI compounds. The effects of Prandtl and Grashof numbers on the axial temperature profile, the solidification interface shape, and melt flow are investigated. For a material with high Prandtl and Grashof numbers, the temperature field and growth interface are significantly influenced by melt flow, resulting in a complicated temperature distribution and a curved interface shape, so such growth encounters tremendous difficulty in a traditional Bridgman growth system. A new design is proposed to reduce the melt convection: a geometric configuration with a cold top and a hot bottom in the melt dramatically reduces the melt convection. The new design has been employed to simulate the melt flow and heat transfer in crystal growth with large Prandtl and Grashof numbers, and the design parameters have been adjusted. Over 90% of commercial solar cells are made from silicon, and directional solidification is one of the most important methods of producing multi-crystalline silicon ingots due to its tolerance of feedstock impurities and lower manufacturing cost. A numerical model is developed to simulate the silicon ingot directional solidification process. Temperature distribution and solidification interface location are presented. Heat transfer and solidification analyses are performed to determine the energy efficiency of the silicon production furnace. Possible improvements are identified. The silicon growth process is controlled by adjusting heating power and

  11. Beyond the Black Box: Coupling x-ray tomographic imaging of multi-phase flow processes to numerical models and traditional laboratory measurements

    DEFF Research Database (Denmark)

    Wildenschild, Dorthe; Porter, M.L.; Schaap, M.G.

    Quantitative non-invasive imaging has evolved rapidly in the last decade and is now being used to assess a variety of problems in vadose zone research, including unsaturated flow and transport of water and contaminants, macropore-dominated processes, soil-water-root interactions, more recent work on colloidal processes, and significant work on NAPL-water interactions. We are now able to use non-invasive imaging to probe processes that could not previously be quantified because of lack of opacity, resolution, or accurate techniques for quantitative measurement. This work presents an overview of recent advances in x-ray microtomography techniques that can generate high-resolution image-based data for (1) validation of pore-scale multi-phase flow models such as the lattice-Boltzmann technique and pore network models (with respect to fluid saturations, fluid distribution, and relationships among capillary

  12. Rainfall threshold calculation for debris flow early warning in areas with scarcity of data

    Science.gov (United States)

    Pan, Hua-Li; Jiang, Yuan-Jun; Wang, Jun; Ou, Guo-Qiang

    2018-05-01

    Debris flows are natural disasters that frequently occur in mountainous areas, usually accompanied by serious loss of lives and properties. One of the most commonly used approaches to mitigate the risk associated with debris flows is the implementation of early warning systems based on well-calibrated rainfall thresholds. However, many mountainous areas have little data regarding rainfall and hazards, especially in debris-flow-forming regions. Therefore, the traditional statistical analysis method that determines the empirical relationship between rainstorms and debris flow events cannot be effectively used to calculate reliable rainfall thresholds in these areas. After the severe Wenchuan earthquake, there were plenty of deposits deposited in the gullies, which resulted in several debris flow events. The triggering rainfall threshold has decreased obviously. To get a reliable and accurate rainfall threshold and improve the accuracy of debris flow early warning, this paper developed a quantitative method, which is suitable for debris flow triggering mechanisms in meizoseismal areas, to identify rainfall threshold for debris flow early warning in areas with a scarcity of data based on the initiation mechanism of hydraulic-driven debris flow. First, we studied the characteristics of the study area, including meteorology, hydrology, topography and physical characteristics of the loose solid materials. Then, the rainfall threshold was calculated by the initiation mechanism of the hydraulic debris flow. The comparison with other models and with alternate configurations demonstrates that the proposed rainfall threshold curve is a function of the antecedent precipitation index (API) and 1 h rainfall. To test the proposed method, we selected the Guojuanyan gully, a typical debris flow valley that during the 2008-2013 period experienced several debris flow events, located in the meizoseismal areas of the Wenchuan earthquake, as a case study. The comparison with other
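    A minimal sketch of how a threshold curve in the (API, 1 h rainfall) plane might be applied operationally; the linear form and the coefficients are illustrative assumptions, not the calibrated values from the Guojuanyan case study.

        def debris_flow_warning(api_mm, rain_1h_mm, r0=25.0, k=0.3):
            """Sketch of an API-based warning rule: the allowable 1-h rainfall
            decreases as the antecedent precipitation index (API) rises.
            r0 and k are illustrative placeholders, not calibrated values."""
            threshold = max(r0 - k * api_mm, 0.0)
            return rain_1h_mm >= threshold

        # Wet antecedent conditions lower the bar for a warning:
        print(debris_flow_warning(api_mm=10.0, rain_1h_mm=20.0))  # False
        print(debris_flow_warning(api_mm=60.0, rain_1h_mm=20.0))  # True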

  13. A Neuroeconomics Analysis of Investment Process with Money Flow Information: The Error-Related Negativity

    Directory of Open Access Journals (Sweden)

    Cuicui Wang

    2015-01-01

    This investigation is among the first to analyze the neural basis of an investment process with money flow information from a financial market, using a simplified task in which volunteers had to choose to buy or not to buy stocks based on the display of positive or negative money flow information. After choosing “to buy” or “not to buy,” participants were presented with feedback. At the same time, event-related potentials (ERPs) were used to record investors' brain activity and capture the error-related negativity (ERN) and feedback-related negativity (FRN) components. The ERN results suggested that there might be a higher risk and more conflict when buying stocks with negative net money flow information than with positive net money flow information, and the inverse was also true for the “not to buy” option. The FRN component evoked by the bad outcome of a decision was more negative than that evoked by the good outcome, reflecting the difference between the values of the actual and expected outcomes. From this research, we can further understand how investors perceive money flow information in financial markets and the neural cognitive effects in the investment process.

  14. A Neuroeconomics Analysis of Investment Process with Money Flow Information: The Error-Related Negativity

    Science.gov (United States)

    Wang, Cuicui; Vieito, João Paulo; Ma, Qingguo

    2015-01-01

    This investigation is among the first to analyze the neural basis of an investment process with money flow information from a financial market, using a simplified task in which volunteers had to choose to buy or not to buy stocks based on the display of positive or negative money flow information. After choosing “to buy” or “not to buy,” participants were presented with feedback. At the same time, event-related potentials (ERPs) were used to record investors' brain activity and capture the error-related negativity (ERN) and feedback-related negativity (FRN) components. The ERN results suggested that there might be a higher risk and more conflict when buying stocks with negative net money flow information than with positive net money flow information, and the inverse was also true for the “not to buy” option. The FRN component evoked by the bad outcome of a decision was more negative than that evoked by the good outcome, reflecting the difference between the values of the actual and expected outcomes. From this research, we can further understand how investors perceive money flow information in financial markets and the neural cognitive effects in the investment process. PMID:26557139

  15. Standardization of GPS data processing

    International Nuclear Information System (INIS)

    Park, Pil Ho

    2001-06-01

    A nationwide GPS network of about 60 permanent GPS stations has been constructed in Korea since the late 1990s. To use GPS in a variety of application areas such as crustal deformation, positioning, or monitoring the upper atmosphere, it is necessary to be able to process the data precisely. Korea Astronomy Observatory now holds the precise GPS data processing capability in Korea, since it is difficult to understand the characteristics of the parameters to be estimated, to resolve the integer ambiguities, and to analyze the many error sources. There are three reliable GPS data processing software packages in the world: Bernese (University of Berne), GIPSY-OASIS (JPL), and GAMIT (MIT). These packages allow millimeter accuracy in the horizontal position and about 1 cm accuracy vertically, even for regional networks with a diameter of several thousand kilometers. We established a standard for GPS data processing using Bernese as the main tool and GIPSY-OASIS as a side tool.

  16. Optimization of the ultrasonic processing in a melt flow

    OpenAIRE

    Tzanakis, I; Lebon, GSB; Eskin, DG; Pericleous, K

    2016-01-01

    Ultrasonic cavitation treatment of melt significantly improves the downstream properties and quality of conventional and advanced metallic materials. However, the transfer of this technology to the treatment of large melt volumes has been hindered by a lack of the fundamental knowledge that would allow ultrasonic processing in the melt flow. In this study, we present the results of experimental validation of an advanced numerical model applied to the acoustic cavitation treatment of liquid aluminum duri...

  17. Processing Terrain Point Cloud Data

    KAUST Repository

    DeVore, Ronald

    2013-01-10

    Terrain point cloud data are typically acquired through some form of Light Detection And Ranging sensing. They form a rich resource that is important in a variety of applications including navigation, line of sight, and terrain visualization. Processing terrain data has not received the attention given to other forms of surface reconstruction or to image processing. The goal of terrain data processing is to convert the point cloud into a succinct representation system that is amenable to the various application demands. The present paper presents a platform for terrain processing built on the following principles: (i) measuring distortion in the Hausdorff metric, which we argue is a good match for the application demands, (ii) a multiscale representation based on tree approximation using local polynomial fitting. The basic elements held in the nodes of the tree can be efficiently encoded, transmitted, visualized, and utilized for the various target applications. Several challenges emerge because of the variable resolution of the data, missing data, occlusions, and noise. Techniques for identifying and handling these challenges are developed. © 2013 Society for Industrial and Applied Mathematics.

  18. Digital Data Processing of Stilbene

    International Nuclear Information System (INIS)

    AMIRI, Moslem; MATEJ, Zdenek; MRAVEC, Filip; PRENOSIL, Vaclav; CVACHOVEC, Frantisek

    2013-06-01

    Stilbene is a proven spectrometric detector for mixed fields of neutrons and gamma rays. By digital processing of the output pulse shapes from the detector, it is possible to obtain information about the energy of the interacting neutron/photon and to distinguish which of these two particles interacted in the detector. Further numerical processing of the digital data yields the energy spectrum of each component of the mixed field. The quality of the digitized data depends strongly on the parameters of the hardware used for digitization and on the quality of the software processing. Our results also show how the quality of particle-type identification depends on the sampling rate as well as on the method of processing the sampled data. (authors)
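    A minimal sketch of one common digital discrimination approach for such detectors, the charge-comparison method: neutron (proton-recoil) pulses in stilbene carry a larger slow scintillation component than gamma pulses, so the tail-to-total integral ratio separates the two particle types. The integration windows are assumptions that depend on the digitizer's sampling rate, and the authors' actual processing method may differ.

        import numpy as np

        def psd_ratio(pulse, peak_idx=None, tail_start=20, total_len=200):
            """Charge-comparison figure of merit: tail integral divided by
            total integral, computed from the pulse maximum onward.
            Window lengths (in samples) are illustrative only."""
            pulse = np.asarray(pulse, dtype=float)
            if peak_idx is None:
                peak_idx = int(np.argmax(pulse))
            total = pulse[peak_idx:peak_idx + total_len].sum()
            tail = pulse[peak_idx + tail_start:peak_idx + total_len].sum()
            return tail / total if total > 0 else 0.0

        # Events with high psd_ratio cluster as neutrons, low as gammas;
        # a cut between the two clusters performs the n/gamma separation.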

  19. Investigation of Multiscale and Multiphase Flow, Transport and Reaction in Heavy Oil Recovery Processes

    Energy Technology Data Exchange (ETDEWEB)

    Yortsos, Yanis C.

    2002-10-08

    In this report, the thrust areas include the following: internal drives, vapor-liquid flows, combustion and reaction processes, fluid displacements and the effect of instabilities and heterogeneities, and the flow of fluids with yield stress. These find respective applications in foamy oils and the evolution of dissolved gas; internal steam drives; the mechanics of concurrent and countercurrent vapor-liquid flows associated with thermal methods and steam injection, such as SAGD; in-situ combustion; the upscaling of displacements in heterogeneous media; the flow of foams, Bingham plastics and heavy oils in porous media; and the development of wormholes during cold production.

  20. Development of data acquisition system for tracing fast flow using GPIB

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Bum; Jung, Sung Hee

    2003-12-01

    The radiotracer data acquisition system developed by the radiotracer group at the Korea Atomic Energy Research Institute has been used for field experiments since the late 1990s. It communicates over RS232C and samples 24 channels x 10. It was useful for radiotracer experiments whose periods span from a few seconds to a few tens of days, but the demand for tracing fast flows has been increasing. For example, to trace a fast gas flow in the petrochemical industry, or for applications such as computer-aided radioactive particle tracking, a high-speed data acquisition system is necessary, and current commercial data acquisition systems have a limited number of counting channels. To solve the above-mentioned problems, an upgraded high-speed data acquisition system using GPIB was developed. GPIB is one method of control and communication. The data transmission speed of GPIB is 200 kbyte/s, which is faster than the 20 kbps of RS232C. In addition, GPIB has the advantage of expanding to up to 15 devices, meaning that 24 x 15 channels are possible if each device has 24 channels. This makes it possible to trace very fast flows or to use a large number of channels. This technical report includes details of the design of the high-speed multi-channel radiation detecting system, which consists of a ratemeter module, a display unit, a mainboard with the GPIB system, and a 24-channel counter board. The GPIB system design developed here is also applicable to other measurement systems for various applications.
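    For illustration, a GPIB instrument of this kind could be polled from a PC with the third-party PyVISA package as sketched below; the GPIB address and the command strings are hypothetical, since the report's counter board defines its own command set.

        import pyvisa  # third-party: pip install pyvisa

        rm = pyvisa.ResourceManager()
        # Address and commands below are placeholders for a real counter board.
        counter = rm.open_resource("GPIB0::12::INSTR")
        counter.timeout = 2000  # ms

        counter.write("CONF:COUNTS CH1:CH24")   # hypothetical: arm all 24 channels
        for _ in range(100):                    # fast polling loop
            raw = counter.query("READ?")        # one comma-separated scan
            counts = [int(v) for v in raw.split(",")]
            print(counts)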

  1. Development of data acquisition system for tracing fast flow using GPIB

    International Nuclear Information System (INIS)

    Kim, Jong Bum; Jung, Sung Hee

    2003-12-01

    The radiotracer data acquisition system developed by the radiotracer group at the Korea Atomic Energy Research Institute has been used for field experiments since the late 1990s. It communicates over RS232C and samples 24 channels x 10. It was useful for radiotracer experiments whose periods span from a few seconds to a few tens of days, but the demand for tracing fast flows has been increasing. For example, to trace a fast gas flow in the petrochemical industry, or for applications such as computer-aided radioactive particle tracking, a high-speed data acquisition system is necessary, and current commercial data acquisition systems have a limited number of counting channels. To solve the above-mentioned problems, an upgraded high-speed data acquisition system using GPIB was developed. GPIB is one method of control and communication. The data transmission speed of GPIB is 200 kbyte/s, which is faster than the 20 kbps of RS232C. In addition, GPIB has the advantage of expanding to up to 15 devices, meaning that 24 x 15 channels are possible if each device has 24 channels. This makes it possible to trace very fast flows or to use a large number of channels. This technical report includes details of the design of the high-speed multi-channel radiation detecting system, which consists of a ratemeter module, a display unit, a mainboard with the GPIB system, and a 24-channel counter board. The GPIB system design developed here is also applicable to other measurement systems for various applications.

  2. Autoclave processing for composite material fabrication. 1: An analysis of resin flows and fiber compactions for thin laminate

    Science.gov (United States)

    Hou, T. H.

    1985-01-01

    High-quality long-fiber-reinforced composites, such as those used in aerospace and industrial applications, are commonly processed in autoclaves. An adequate resin flow model for the entire system (laminate/bleeder/breather), which provides a description of the time-dependent laminate consolidation process, is useful in predicting the loss of resin, heat transfer characteristics, fiber volume fraction and part dimension, etc., under a specified set of processing conditions. This could be accomplished by properly analyzing the flow patterns and pressure profiles inside the laminate during processing. A newly formulated resin flow model for the composite prepreg lamination process is reported. This model considers viscous resin flows in both directions perpendicular and parallel to the composite plane. In the horizontal direction, a squeezing flow between two nonporous parallel plates is analyzed, while in the vertical direction, a Poiseuille-type pressure flow through porous media is assumed. Proper force and mass balances have been made and solved for the whole system. The effects of fiber-fiber interactions during lamination are included as well. The unique features of this analysis are: (1) the pressure gradient inside the laminate is assumed to be generated from the squeezing action between two adjacent approaching fiber layers, and (2) the behavior of fiber bundles is simulated by a Finitely Extendable Nonlinear Elastic (FENE) spring.

  3. Digital Data Processing of Images

    African Journals Online (AJOL)

    Digital data processing was investigated to perform image processing. Image smoothing and restoration were explored and promising results obtained. The use of the computer, not only as a data management device, but as an important tool to render quantitative information, was illustrated by lung function determination.

  4. Investigation of column flotation process on sulphide ore using 2-electrode capacitance sensor: The effect of air flow rate and solid percentage

    Science.gov (United States)

    Haryono, Didied; Harjanto, Sri; Wijaya, Rifky; Oediyani, Soesaptri; Nugraha, Harisma; Huda, Mahfudz Al; Taruno, Warsito Purwo

    2018-04-01

    Investigation of the column flotation process on sulphide ore using a 2-electrode capacitance sensor is presented in this paper. The effects of air flow rate and solid percentage on the column flotation process have been experimentally investigated. The purpose of this paper is to understand the capacitance signal characteristics affected by the air flow rate and the solid percentage, which can be used to determine the metallurgical performance. Experiments were performed using a laboratory column flotation cell with a diameter of 5 cm and a total height of 140 cm. The sintered ceramic sparger and the wash water were installed at the bottom of and above the column, respectively. A two-electrode concave-type capacitance sensor was installed at a distance of 50 cm from the sparger. The sensor was attached to the outer wall of the column and connected to a data acquisition system, manufactured by CTECH Labs Edwar Technology, and a personal computer for further data processing. Feed consisting of ZnS and SiO2 in a 3:2 ratio was mixed with some reagents to make 1 litre of slurry. The slurry was fed into the aerated column 100 cm above the sparger at a constant rate, and the capacitance signals were captured during the process. In this paper, solid percentages of 7.5 and 10% and air flow rates of 2-4 L/min at 0.5 L/min intervals were used as independent variables. The results show that the capacitance signal characteristics for 7.5 and 10% solids differ at any given air flow rate, with 10% solids producing higher signals than 7.5%. Metallurgical performance and the capacitance signal exhibit a good correlation.

  5. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    Kuzio, S.

    2001-01-01

    The purpose of this analysis is to develop a probability distribution for flowing interval spacing. A flowing interval is defined as a fractured zone that transmits flow in the Saturated Zone (SZ), as identified through borehole flow meter surveys (Figure 1). This analysis uses the term ''flowing interval spacing'' as opposed to fracture spacing, which is typically used in the literature. The term fracture spacing was not used in this analysis because the data used identify a zone (or a flowing interval) that contains fluid-conducting fractures but do not distinguish how many or which fractures comprise the flowing interval. The flowing interval spacing is measured between the midpoints of each flowing interval. Fracture spacing within the SZ is defined as the spacing between fractures, with no regard to which fractures are carrying flow. The Development Plan associated with this analysis is entitled ''Probability Distribution for Flowing Interval Spacing'' (CRWMS M and O 2000a). The parameter from this analysis may be used in the TSPA SR/LA Saturated Zone Flow and Transport Work Direction and Planning Documents: (1) ''Abstraction of Matrix Diffusion for SZ Flow and Transport Analyses'' (CRWMS M and O 1999a) and (2) ''Incorporation of Heterogeneity in SZ Flow and Transport Analyses'' (CRWMS M and O 1999b). A limitation of this analysis is that the probability distribution of flowing interval spacing may underestimate the effect of incorporating matrix diffusion processes in the SZ transport model because of the possible overestimation of the flowing interval spacing. Larger flowing interval spacing results in a decrease in the matrix diffusion processes. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined from the data. Because each flowing interval probably has more than one fracture contributing to a flowing interval, the true flowing interval spacing could be
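    As a minimal sketch of the spacing definition used here, the midpoint-to-midpoint calculation can be written as follows; the interval depths are hypothetical, not borehole data from the analysis.

        import numpy as np

        def interval_spacings(intervals):
            """Spacing between midpoints of consecutive flowing intervals,
            where each interval is a (top, bottom) depth pair from a
            borehole flow meter survey."""
            mids = np.array([(a + b) / 2.0 for a, b in sorted(intervals)])
            return np.diff(mids)

        # Hypothetical flowing intervals (m below surface) in one borehole:
        print(interval_spacings([(410, 418), (455, 460), (503, 512)]))  # -> 43.5, 50.0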

  6. Sample and data management process description

    International Nuclear Information System (INIS)

    Kessner, J.H.

    2000-01-01

    The sample and data management process was initiated in 1994 as a result of a process improvement workshop. The purpose of the workshop was to develop a sample and data management process that would reduce cycle time and costs, simplify systems and procedures, and improve customer satisfaction for sampling, analytical services, and data management activities

  7. INVESTIGATION OF MULTISCALE AND MULTIPHASE FLOW, TRANSPORT AND REACTION IN HEAVY OIL RECOVERY PROCESSES

    Energy Technology Data Exchange (ETDEWEB)

    Yannis C. Yortsos

    2003-02-01

    This is the final report for contract DE-AC26-99BC15211. The report describes progress made in the various thrust areas of the project, which include internal drives for oil recovery, vapor-liquid flows, combustion and reaction processes and the flow of fluids with yield stress. The report consists mainly of a compilation of various topical reports, technical papers and research reports produced during the three-year project, which ended on May 6, 2002 and was no-cost extended to January 5, 2003. Advances in multiple processes and at various scales are described. In the area of internal drives, significant research accomplishments were made in the modeling of gas-phase growth driven by mass transfer, as in solution-gas drive, and by heat transfer, as in internal steam drives. In the area of vapor-liquid flows, we studied various aspects of concurrent and countercurrent flows, including stability analyses of vapor-liquid counterflow, and the development of novel methods for the pore-network modeling of the mobilization of trapped phases and liquid-vapor phase changes. In the area of combustion, we developed new methods for the modeling of these processes at the continuum and pore-network scales. These models allow us to understand a number of important aspects of in-situ combustion, including steady-state front propagation, multiple steady states, effects of heterogeneity and modes of combustion (forward or reverse). Additional aspects of reactive transport in porous media were also studied. Finally, significant advances were made in the flow and displacement of non-Newtonian fluids with Bingham plastic rheology, which is characteristic of various heavy oil processes. Various accomplishments in generic displacements in porous media and corresponding effects of reservoir heterogeneity are also cited.

  8. Physically consistent data assimilation method based on feedback control for patient-specific blood flow analysis.

    Science.gov (United States)

    Ii, Satoshi; Adib, Mohd Azrul Hisham Mohd; Watanabe, Yoshiyuki; Wada, Shigeo

    2018-01-01

    This paper presents a novel data assimilation method for patient-specific blood flow analysis based on feedback control theory called the physically consistent feedback control-based data assimilation (PFC-DA) method. In the PFC-DA method, the signal, which is the residual error term of the velocity when comparing the numerical and reference measurement data, is cast as a source term in a Poisson equation for the scalar potential field that induces flow in a closed system. The pressure values at the inlet and outlet boundaries are recursively calculated by this scalar potential field. Hence, the flow field is physically consistent because it is driven by the calculated inlet and outlet pressures, without any artificial body forces. As compared with existing variational approaches, although this PFC-DA method does not guarantee the optimal solution, only one additional Poisson equation for the scalar potential field is required, providing a remarkable improvement for such a small additional computational cost at every iteration. Through numerical examples for 2D and 3D exact flow fields, with both noise-free and noisy reference data as well as a blood flow analysis on a cerebral aneurysm using actual patient data, the robustness and accuracy of this approach is shown. Moreover, the feasibility of a patient-specific practical blood flow analysis is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
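    A minimal sketch of the feedback idea on a uniform 2D grid, assuming NumPy; the discretization, boundary handling and solver below are simplifications for illustration and do not reproduce the paper's formulation.

        import numpy as np

        def potential_from_residual(u_err, v_err, h=1.0, iters=500):
            """Sketch of the PFC-DA feedback step: take the divergence of the
            velocity residual (measured minus computed) as the source of a
            Poisson equation for a scalar potential, solved by plain Jacobi
            iteration with homogeneous Dirichlet boundaries."""
            src = np.zeros_like(u_err, dtype=float)
            src[1:-1, 1:-1] = ((u_err[1:-1, 2:] - u_err[1:-1, :-2]) +
                               (v_err[2:, 1:-1] - v_err[:-2, 1:-1])) / (2 * h)
            phi = np.zeros_like(src)
            for _ in range(iters):
                phi[1:-1, 1:-1] = 0.25 * (phi[1:-1, 2:] + phi[1:-1, :-2] +
                                          phi[2:, 1:-1] + phi[:-2, 1:-1] -
                                          h * h * src[1:-1, 1:-1])
            return phi  # its gradient nudges the flow toward the measurements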

  9. Modelling stock order flows with non-homogeneous intensities from high-frequency data

    Science.gov (United States)

    Gorshenin, Andrey K.; Korolev, Victor Yu.; Zeifman, Alexander I.; Shorgin, Sergey Ya.; Chertok, Andrey V.; Evstafyev, Artem I.; Korchagin, Alexander Yu.

    2013-10-01

    A micro-scale model is proposed for the evolution of such information system as the limit order book in financial markets. Within this model, the flows of orders (claims) are described by doubly stochastic Poisson processes taking account of the stochastic character of intensities of buy and sell orders that determine the price discovery mechanism. The proposed multiplicative model of stochastic intensities makes it possible to analyze the characteristics of the order flows as well as the instantaneous proportion of the forces of buyers and sellers, that is, the imbalance process, without modelling the external information background. The proposed model gives the opportunity to link the micro-scale (high-frequency) dynamics of the limit order book with the macro-scale models of stock price processes of the form of subordinated Wiener processes by means of limit theorems of probability theory and hence, to use the normal variance-mean mixture models of the corresponding heavy-tailed distributions. The approach can be useful in different areas with similar properties (e.g., in plasma physics).
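    A minimal sketch of simulating such a doubly stochastic Poisson (Cox) order flow by thinning, assuming NumPy; the intensity path below is an arbitrary illustrative example, not the multiplicative model proposed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_cox_orders(t_end, lam_max, intensity):
            """Draw candidate events at the envelope rate lam_max, then keep
            each with probability intensity(t)/lam_max (thinning). The
            intensity is any pre-sampled stochastic path of time."""
            t, events = 0.0, []
            while True:
                t += rng.exponential(1.0 / lam_max)
                if t > t_end:
                    return np.array(events)
                if rng.random() < intensity(t) / lam_max:
                    events.append(t)

        # Illustrative stochastic intensity: a piecewise-linear random path.
        grid = np.linspace(0, 10, 101)
        path = np.abs(5 + np.cumsum(rng.normal(0, 0.5, grid.size)))
        lam = lambda t: np.interp(t, grid, path)

        buys = simulate_cox_orders(10.0, lam_max=path.max(), intensity=lam)
        print(len(buys), "buy orders in 10 time units")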

  10. Self Cleaning HEPA Filtration without Interrupting Process Flow

    International Nuclear Information System (INIS)

    Wylde, M.

    2009-01-01

    The strategy of protecting the traditional glass fibre HEPA filtration train from blinding contamination, and of recovering dust by means of self-cleaning pre-filtration, is a proven means of reducing ultimate disposal volumes and has been used within the fuel production industry. However, there is an increasing demand in nuclear applications for elevated operating temperatures, fire resistance, moisture resistance and chemical compositions that existing glass fibre HEPA filtration cannot accommodate, which can be remedied by the use of a metallic HEPA filter medium. Previous research (Bergman et al 1997, Moore et al 1992) suggests that the then cost to the DOE, based on a five-year life cycle, was $29.5 million for the installation, testing, removal and disposal of glass fibre HEPA filtration trains. Within these costs, $300 was the value given to the filter and $4,450 was given to the peripheral activity. Development of a low-cost, cleanable, metallic, direct replacement for the traditional filter train would be the clear solution. The Bergman et al work suggested that a 1000 ft³/min cleanable stainless HEPA could be commercially available for $5,000 each, whereas the industry has determined that the truer cost of such an item in isolation would be closer to $15,000. This results in a conflict within the requirement between 'low cost' and 'stainless HEPA'. By proposing a system that combines metallic HEPA filtration with the ability to self-clean without interrupting the process flow, the need for a traditional HEPA filtration train is eliminated, dramatically reducing the resources required for cleaning or disposal and thus presenting a route to reducing ultimate costs. The paper examines the performance characteristics, filtration efficiency, flow versus differential pressure, and cleanability of a self-cleaning HEPA-grade sintered metal filter element, together with data to prove the contention. (authors)

  11. A method for the interpretation of flow cytometry data using genetic algorithms

    Directory of Open Access Journals (Sweden)

    Cesar Angeletti

    2018-01-01

    Background: Flow cytometry analysis is the method of choice for the differential diagnosis of hematologic disorders. It is typically performed by a trained hematopathologist through visual examination of bidimensional plots, making the analysis time-consuming and sometimes too subjective. Here, a pilot study applying genetic algorithms to flow cytometry data from normal and acute myeloid leukemia subjects is described. Subjects and Methods: Initially, Flow Cytometry Standard files from 316 normal and 43 acute myeloid leukemia subjects were transformed into multidimensional FITS image metafiles. Training was performed through introduction of FITS metafiles from 4 normal and 4 acute myeloid leukemia subjects into the artificial intelligence system. Results: Two mathematical algorithms, termed 018330 and 025886, were generated. When tested against a cohort of 312 normal and 39 acute myeloid leukemia subjects, the two algorithms combined showed high discriminatory power, with an area under the receiver operating characteristic (ROC) curve of 0.912. Conclusions: The present results suggest that machine learning systems hold great promise in the interpretation of hematological flow cytometry data.

  12. Two-Phase Flow in Packed Columns and Generation of Bubbly Suspensions for Chemical Processing in Space

    Science.gov (United States)

    Motil, Brian J.; Green, R. D.; Nahra, H. K.; Sridhar, K. R.

    2000-01-01

    For long-duration space missions, the life support and In-Situ Resource Utilization (ISRU) systems necessary to lower the mass and volume of consumables carried from Earth will require more sophisticated chemical processing technologies involving gas-liquid two-phase flows. This paper discusses some preliminary two-phase flow work in packed columns and generation of bubbly suspensions, two types of flow systems that can exist in a number of chemical processing devices. The experimental hardware for a co-current flow, packed column operated in two ground-based low-gravity facilities (two-second drop tower and KC-135 low-gravity aircraft) is described. The preliminary results of this experimental work are discussed. The flow regimes observed and the conditions under which these flow regimes occur are compared with the available co-current packed column experimental work performed in normal gravity. For bubbly suspensions, the experimental hardware for generation of uniformly sized bubbles in Couette flow in microgravity conditions is described. Experimental work was performed on a number of bubbler designs, and the capillary bubble tube was found to produce the most consistently sized bubbles. Low air flow rates and low Couette flow produce consistent 2-3 mm bubbles, the size of interest for the "Behavior of Rapidly Sheared Bubbly Suspension" flight experiment. Finally, the mass transfer implications of these two-phase flows are qualitatively discussed.

  13. Runtime Modifications of Spark Data Processing Pipelines

    NARCIS (Netherlands)

    Lazovik, E.; Medema, M.; Albers, T.; Langius, E.A.F.; Lazovik, A.

    2017-01-01

    Distributed data processing systems are the standard means for large-scale data analysis in the Big Data field. These systems are based on processing pipelines where the processing is done via a composition of multiple elements or steps. In current distributed data processing systems, the code and

  14. In Situ Field Testing of Processes

    Energy Technology Data Exchange (ETDEWEB)

    J. Wang

    2001-12-14

    The purpose of this Analysis/Model Report (AMR) is to update and document the data and subsequent analyses from ambient field-testing activities performed in underground drifts of the Yucca Mountain Site Characterization Project (YMP). This revision updates data and analyses presented in the initial issue of this AMR. This AMR was developed in accordance with the ''Technical Work Plan for Unsaturated Zone (UZ) Flow and Transport Process Model Report'' and ''Technical Work Plan for UZ Flow, Transport, and Coupled Processes Process Model Report''. These activities were performed to investigate in situ flow and transport processes. The evaluations provide the necessary framework to: (1) refine and confirm the conceptual model of matrix and fracture processes in the unsaturated zone (UZ) and (2) analyze the impact of excavation (including use of construction water and effect of ventilation) on the UZ flow and transport processes. This AMR is intended to support revisions to ''Conceptual and Numerical Models for UZ Flow and Transport'' and ''Unsaturated Zone Flow and Transport Model Process Model Report''. In general, the results discussed in this AMR are from studies conducted using a combination or a subset of the following three approaches: (1) air-injection tests, (2) liquid-release tests, and (3) moisture monitoring using in-drift sensors or in-borehole sensors, to evaluate the impact of excavation, ventilation, and construction-water usage on the surrounding rocks. The liquid-release tests and air-injection tests provide an evaluation of in situ fracture flow and the competing processes of matrix imbibition. Only the findings from testing and data not covered in the ''Seepage Calibration Model and Seepage Testing Data'' are analyzed in detail in the AMR.

  15. The `Henry Problem' of `density-driven' groundwater flow versus Tothian `groundwater flow systems' with variable density: A review of the influential Biscayne aquifer data.

    Science.gov (United States)

    Weyer, K. U.

    2017-12-01

    Coastal groundwater flow investigations at Biscayne Bay, south of Miami, Florida, gave rise to the concept of density-driven flow of seawater into coastal aquifers creating a saltwater wedge. Within that wedge, convection-driven return flow of seawater and a dispersion zone were assumed by Cooper et al. (1964) to be the cause of the Biscayne aquifer `sea water wedge'. This conclusion was based on the chloride distribution within the aquifer and on an analytical model concept assuming convection flow within a confined aquifer, without taking non-chemical field data into consideration. This concept was later labelled the `Henry Problem', which any numerical variable-density flow program must be able to simulate to be considered acceptable. Both `density-driven flow' and Tothian `groundwater flow systems' (with or without variable density conditions) are driven by gravitation. The difference between the two is the boundary conditions: `density-driven flow' occurs under hydrostatic boundary conditions, while Tothian `groundwater flow systems' occur under hydrodynamic boundary conditions. Revisiting the Cooper et al. (1964) publication with its record of piezometric field data (heads) showed that the so-called seawater wedge has been caused by discharging deep saline groundwater driven by gravitational flow and not by denser seawater. Density-driven flow of seawater into the aquifer was not reflected in the head measurements for low and high tide conditions, which had been taken contemporaneously with the chloride measurements. These head measurements had not been included in the flow interpretation. The very same head measurements indicated a clear dividing line between shallow local fresh groundwater flow and saline deep groundwater flow, without the existence of a dispersion zone or a convection cell. The Biscayne situation emphasizes the need for any chemical interpretation of flow patterns to be supported by head data as energy indicators of flow fields.

  16. Examples of data processing systems. Data processing system for JT-60

    International Nuclear Information System (INIS)

    Aoyagi, Tetsuo

    1996-01-01

    The JT-60 data processing system is a large computer complex comprising many micro-computers, several mini-computers, and a main-frame computer. As a general introduction to the original system configuration has been published previously, some improvements are described here: a transient mass data storage system, a network database server, a data acquisition system using engineering workstations, and a graphic terminal emulator for X-Window. These new features are realized by utilizing recent progress in computer and network technology and the carefully designed user interface specification of the original system. (author)

  17. Continuous-flow processes for the catalytic partial hydrogenation reaction of alkynes

    Directory of Open Access Journals (Sweden)

    Carmen Moreno-Marrodan

    2017-04-01

    The catalytic partial hydrogenation of substituted alkynes to alkenes is a process of high importance in the manufacture of several market chemicals. The present paper briefly reviews the heterogeneous catalytic systems engineered for this reaction under continuous flow and in the liquid phase. The main contributions that appeared in the literature from 1997 up to August 2016 are discussed in terms of reactor design. A comparison with batch and industrial processes is provided whenever possible.

  18. Empirical Correlations and CFD Simulations of Vertical Two-Phase Gas-Liquid (Newtonian and Non-Newtonian) Slug Flow Compared Against Experimental Data of Void Fraction

    DEFF Research Database (Denmark)

    Ratkovich, Nicolas Rios; Majumder, S.K.; Bentzen, Thomas Ruby

    2013-01-01

    Gas-Newtonian liquid two-phase flows (TPFs) are present in several industrial processes (e.g. the oil-gas industry). In spite of the common occurrence of these TPFs, the understanding of them is limited compared to single-phase flows. Various studies on TPF focus on developing empirical correlations based on large sets of experimental data for void fraction, which have proven accurate only for the specific conditions for which they were developed, limiting their applicability. On the other hand, few studies focus on gas-non-Newtonian liquid TPFs, which are very common in chemical processes. The main reason is the difficulty of characterizing the viscosity, which determines the hydraulic regime and flow behaviour of the system. The focus of this study is the analysis of TPF (slug flow) for Newtonian and non-Newtonian liquids in a vertical pipe in terms of void fraction using computational fluid dynamics

  19. Advisory processes and their descriptive data

    Directory of Open Access Journals (Sweden)

    Milan Mišovič

    2005-01-01

    Processes are regarded as representative of all firm activities, and this holds for Web-based advisory systems, too. Managers and informaticians naturally interpret processes differently, owing to their different scientific platforms and objectives. Managers connect all firm processes with the firm's prosperity and competitiveness; they therefore pursue the understanding, modeling and continual improvement of all processes, which stimulates and encourages the use of process revisions (reengineering). The main role in this understanding of processes thus falls to firm management. Professional computer implementations of processes are the dominant objective of informaticians. In this conception, all processes are understood as real sequences of partial transactions (elementary firm activities) and the data processed by them, regardless of whether a structural or object-oriented approach to process modeling is used. The process and transaction models submitted by informaticians are oriented to process content, which has to be programmed; firm management represents the informaticians' main source of process knowledge. In addition to these two conceptions of processes, there is a different approach based on describing a process with descriptive data. Descriptive data are oriented not to a process's content but to its theoretical conception and real implementation. Processing descriptive data with special algebraic operations can yield many important and easily economically interpreted results.

  20. A Data-Driven Approach to Develop Physically Sound Predictors: Application to Depth-Averaged Velocities and Drag Coefficients on Vegetated Flows

    Science.gov (United States)

    Tinoco, R. O.; Goldstein, E. B.; Coco, G.

    2016-12-01

    We use a machine learning approach to seek accurate, physically sound predictors, to estimate two relevant flow parameters for open-channel vegetated flows: mean velocities and drag coefficients. A genetic programming algorithm is used to find a robust relationship between properties of the vegetation and flow parameters. We use data published from several laboratory experiments covering a broad range of conditions to obtain: a) in the case of mean flow, an equation that matches the accuracy of other predictors from recent literature while showing a less complex structure, and b) for drag coefficients, a predictor that relies on both single element and array parameters. We investigate different criteria for dataset size and data selection to evaluate their impact on the resulting predictor, as well as simple strategies to obtain only dimensionally consistent equations, and avoid the need for dimensional coefficients. The results show that a proper methodology can deliver physically sound models representative of the processes involved, such that genetic programming and machine learning techniques can be used as powerful tools to study complicated phenomena and develop not only purely empirical, but "hybrid" models, coupling results from machine learning methodologies into physics-based models.
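    A minimal sketch of this kind of genetic-programming regression using the third-party gplearn package (an assumption; the authors' own algorithm and data differ). The training table below is synthetic stand-in data, not the pooled flume measurements from the study.

        import numpy as np
        from gplearn.genetic import SymbolicRegressor  # pip install gplearn

        # Hypothetical predictors: stem diameter d, frontal area a, depth h -> mean velocity U.
        rng = np.random.default_rng(1)
        X = rng.uniform(0.1, 1.0, size=(200, 3))
        y = 0.5 * np.sqrt(X[:, 2] / (X[:, 0] * X[:, 1]))  # stand-in target relation

        est = SymbolicRegressor(population_size=2000, generations=20,
                                function_set=('add', 'sub', 'mul', 'div', 'sqrt'),
                                parsimony_coefficient=0.01,  # penalize bloated expressions
                                random_state=0)
        est.fit(X, y)
        print(est._program)  # evolved symbolic predictor for the mean velocity

    The parsimony penalty is one way to steer the search toward the compact, physically interpretable equation structures the study emphasizes; dimensional consistency would require further constraints on the function set.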

  1. Analysis of transient and hysteresis behavior of cross-flow heat exchangers under variable fluid mass flow rate for data center cooling applications

    International Nuclear Information System (INIS)

    Gao, Tianyi; Murray, Bruce; Sammakia, Bahgat

    2015-01-01

    Effective thermal management of data centers is an important aspect of reducing the energy required for the reliable operation of data processing and communications equipment. Liquid and hybrid (air/liquid) cooling approaches are becoming more widely used in today's large and complex data center facilities. Examples of these approaches include rear door heat exchangers, in-row and overhead coolers and direct liquid cooled servers. Heat exchangers are primary components of liquid and hybrid cooling systems, and the effectiveness of a heat exchanger strongly influences the thermal performance of a cooling system. Characterizing and modeling the dynamic behavior of heat exchangers is important for the design of cooling systems, especially for control strategies to improve energy efficiency. In this study, a dynamic thermal model is solved numerically in order to predict the transient response of an unmixed–unmixed crossflow heat exchanger, of the type that is widely used in data center cooling equipment. The transient response to step and ramp changes in the mass flow rate of both the hot and cold fluid is investigated. Five model parameters are varied over specific ranges to characterize the transient performance. The parameter range investigated is based on available heat exchanger data. The thermal response to the magnitude, time period and initial and final conditions of the transient input functions is studied in detail. Also, the hysteresis associated with the fluid mass flow rate variation is investigated. The modeling results and performance data are used to analyze specific dynamic performance of heat exchangers used in practical data center cooling applications. - Highlights: • The transient performance of a crossflow heat exchanger was modeled and studied. • This study provides design information for data center thermal management. • The time constant metric was used to study the impacts of many variable inputs. • The hysteresis behavior
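
    A toy illustration of the time-constant behaviour discussed above is sketched below. It is a lumped first-order model, not the paper's distributed crossflow model; the time constant and temperatures are invented.

```python
# Lumped first-order sketch of an outlet-temperature response to a step
# change in cold-side mass flow rate. Not the paper's crossflow model;
# tau and the steady-state temperatures are invented for illustration.
import numpy as np

def step_response(t, T0, T_inf, tau):
    """First-order response after a step at t = 0."""
    return T_inf + (T0 - T_inf) * np.exp(-t / tau)

tau = 25.0                 # s, assumed effective time constant
T0, T_inf = 45.0, 38.0     # degC, steady states before/after the step

for t in np.arange(0.0, 121.0, 20.0):
    T = step_response(t, T0, T_inf, tau)
    print(f"t = {t:5.1f} s   T_hot_out = {T:5.2f} degC")
```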

  2. The on-line graph processing study on phase separation of two-phase flow in T-tube

    International Nuclear Information System (INIS)

    Qian Yong; Xu Jijun; Yang Zhilin; Chen Yifen

    1997-01-01

    The on-line graph processing measurement system was set up, and an experimental study of phase separation of air-water bubbly flow in a horizontal T-junction was carried out. For the first time, the authors found and defined, through visual experiments, a new type of complete phase separation: under certain conditions, the air flow entering the T-junction flows entirely into the run outlet, which had never been reported in the literature. The pressure-wave feedback effect and the branch bubble-flow reorganization effect were also found and analyzed. The complexity of this phase-separation phenomenon in the T-junction was further revealed via the on-line graph processing technology. In addition, the influences on phase separation of the inlet mass flow rate W1, the inlet mass quality X1, and the mass extraction rate G3/G1 were analyzed

  3. Thermal/chemical degradation of ceramic cross-flow filter materials

    Energy Technology Data Exchange (ETDEWEB)

    Alvin, M.A.; Lane, J.E.; Lippert, T.E.

    1989-11-01

    This report summarizes the 14-month, Phase 1 effort conducted by Westinghouse on the Thermal/Chemical Degradation of Ceramic Cross-Flow Filter Materials program. In Phase 1, expected filter process conditions were identified for fixed-bed, fluid-bed, and entrained-bed gasification, direct coal-fired turbine, and pressurized fluidized-bed combustion systems. Ceramic cross-flow filter materials were also selected, procured, and subjected to chemical and physical characterization. The stability of each of the ceramic cross-flow materials was assessed in terms of potential reactions or phase changes resulting from process temperature and effluent gas compositions containing alkali and fines. In addition, chemical and physical characterization was conducted on cross-flow filters that were exposed to the METC fluid-bed gasifier and the New York University pressurized fluidized-bed combustor. Long-term, high-temperature degradation mechanisms were proposed for each ceramic cross-flow material at process operating conditions. An experimental bench-scale test program is recommended for Phase 2, generating data that support the proposed cross-flow filter material thermal/chemical degradation mechanisms. Papers on the individual subtasks have been processed separately for inclusion in the data base.

  4. Flow Experience in Learning

    Directory of Open Access Journals (Sweden)

    Lucky Purwantini

    2017-08-01

    Full Text Available Flow is a condition in which an individual merges with his or her activity. In a flow state, a person can develop his or her abilities and be more successful in learning. The purpose of the study is to understand the flow experience in learning among undergraduate students. The study used a qualitative case-study approach. The informant of this research was an undergraduate student who had had a flow experience, and data were collected by interview. According to the results, the subject did not experience flow in the learning process in the way he did in meditation, because when he learned something he felt pressured by his tasks. It is important for individuals to relax when they are learning.

  5. The boundary data immersion method for compressible flows with application to aeroacoustics

    Energy Technology Data Exchange (ETDEWEB)

    Schlanderer, Stefan C., E-mail: stefan.schlanderer@unimelb.edu.au [Faculty for Engineering and the Environment, University of Southampton, SO17 1BJ Southampton (United Kingdom); Weymouth, Gabriel D., E-mail: G.D.Weymouth@soton.ac.uk [Faculty for Engineering and the Environment, University of Southampton, SO17 1BJ Southampton (United Kingdom); Sandberg, Richard D., E-mail: richard.sandberg@unimelb.edu.au [Department of Mechanical Engineering, University of Melbourne, Melbourne VIC 3010 (Australia)

    2017-03-15

    This paper introduces a virtual boundary method for compressible viscous fluid flow that is capable of accurately representing moving bodies in flow and aeroacoustic simulations. The method is the compressible extension of the boundary data immersion method (BDIM; Maertens & Weymouth, 2015). The BDIM equations for the compressible Navier–Stokes equations are derived, and the accuracy of the method for the hydrodynamic representation of solid bodies is demonstrated with challenging test cases, including a fully turbulent boundary layer flow and a supersonic instability wave. In addition, we show that the compressible BDIM is able to accurately represent noise radiation from moving bodies and flow-induced noise generation without any penalty in allowable time step.

  6. Seamless integration of dose-response screening and flow chemistry: efficient generation of structure-activity relationship data of β-secretase (BACE1) inhibitors.

    Science.gov (United States)

    Werner, Michael; Kuratli, Christoph; Martin, Rainer E; Hochstrasser, Remo; Wechsler, David; Enderle, Thilo; Alanine, Alexander I; Vogel, Horst

    2014-02-03

    Drug discovery is a multifaceted endeavor encompassing, as its core element, the generation of structure-activity relationship (SAR) data by repeated chemical synthesis and biological testing of tailored molecules. Herein, we report on the development of a flow-based biochemical assay and its seamless integration into a fully automated system comprising flow chemical synthesis, purification and in-line quantification of compound concentration. This novel synthesis-screening platform enables SAR data on β-secretase (BACE1) inhibitors to be obtained at an unprecedented cycle time of only 1 h instead of several days. Full integration and automation of industrial processes have always led to productivity gains and cost reductions, and this work demonstrates how applying these concepts to SAR generation may lead to a more efficient drug discovery process. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Extending the LWS Data Environment: Distributed Data Processing and Analysis

    Science.gov (United States)

    Narock, Thomas

    2005-01-01

    The final stages of this work saw changes to the original framework, as well as the completion and integration of several data processing services. Initially, it was thought that a peer-to-peer architecture was necessary to make this work possible. The peer-to-peer architecture provided many benefits, including the dynamic discovery of new services as they were continually added. A prototype example was built, and while it showed promise, a major disadvantage was that it was not easily integrated into the existing data environment. While the peer-to-peer system worked well for finding and accessing distributed data processing services, its use was limited by the difficulty of calling it from existing tools and services. After collaborations with members of the data community, it was determined that our data processing system was of high value and that a new interface should be pursued so that the community could take full advantage of it. As such, the framework was modified from a peer-to-peer architecture to a more traditional web service approach. Following this change, multiple data processing services were added, including coordinate transformations and subsetting of data. A collaboration with the Virtual Heliospheric Observatory (VHO) assisted with integrating the new architecture into the VHO; this allows anyone using the VHO to search for data and then pass that data through our processing services prior to downloading it. As a second demonstration of the new system, a collaboration was established with the Collaborative Sun Earth Connector (CoSEC) group at Lockheed Martin. This group is working on a graphical user interface to the Virtual Observatories and data processing software. The intent is to provide a high-level, easy-to-use graphical interface that allows access to the existing Virtual Observatories and data processing services from one convenient application. Working with the CoSEC group we provided access to our data

  8. Data for Regional Heat flow Studies in and around Japan and its relationship to seismogenic layer

    Science.gov (United States)

    Tanaka, A.

    2017-12-01

    Heat flow is a fundamental parameter to constrain the thermal structure of the lithosphere. It also provides a constraint on lithospheric rheology, which is sensitive to temperature. General features of the heat flow distribution in and around Japan had been revealed by the early 1970s, and heat flow data have been continuously updated through further compilation, mainly of published data and investigations. These include additional data that were not published individually but were included in site-specific reports. Also, thermal conductivity measurements were conducted on cores from boreholes using a line-source device with a half-space-type box probe and an optical scanning device, and previously unpublished thermal conductivities were compiled. It has been more than 10 years since the last published compilation and analysis of heat flow data, that of Tanaka et al. (2004), which presented all of the heat flow data in the northwestern Pacific area (from 0 to 60°N and from 120 to 160°E) and geothermal gradient data in and around Japan. Because the added data and information are drawn from various sources, the updated database is compiled as separate datasets: heat flow, geothermal gradient, and thermal conductivity. The updated and improved database represents a considerable improvement over past updates and presents an opportunity to revisit the thermal state of the lithosphere along with other geophysical/geochemical constraints on heat flow extrapolation. The spatial distribution of the cut-off depth of shallow seismicity in Japan, derived from relocated hypocentres of the last decade (Omuralieva et al., 2012), and this updated database are used to quantify the concept of temperature as a fundamental parameter for determining the seismogenic thickness.

  9. Harnessing Data Flow and Modelling Potentials for Sustainable Development

    Directory of Open Access Journals (Sweden)

    Kassim S Mwitondi

    2012-12-01

    Full Text Available Tackling the global challenges relating to health, poverty, business and the environment is heavily dependent on the flow and utilisation of data. However, while enhancements in data generation, storage, modelling, dissemination and the related integration of global economies and societies are fast transforming the way we live and interact, the resulting dynamic, globalised information society remains digitally divided. On the African continent in particular, this division has resulted in a gap between knowledge generation and its transformation into tangible products and services. This paper proposes some fundamental approaches for a sustainable transformation of data into knowledge for the purpose of improving people's quality of life. Its main strategy is based on a generic data-sharing model providing access to data-utilising and data-generating entities in a multi-disciplinary environment. It highlights the great potential of using unsupervised and supervised modelling in tackling the typically predictive-in-nature challenges we face. Using both simulated and real data, the paper demonstrates how some of the key parameters may be generated and embedded in models to enhance their predictive power and reliability. The paper's conclusions include a proposed implementation framework setting the scene for the creation of decision support systems capable of addressing the key issues in society. It is expected that a sustainable data flow will forge synergies among the private sector, academic and research institutions within and among countries. It is also expected that the paper's findings will help in the design and development of knowledge extraction from data in the wake of cloud computing and, hence, contribute towards improving people's overall quality of life. To avoid running up high implementation costs, selected open-source tools are recommended for developing and sustaining the system.

  10. Programmable level-1 trigger with 3D-Flow processor array

    International Nuclear Information System (INIS)

    Crosetto, D.

    1994-01-01

    The 3D-Flow parallel processing system is a new concept in processor architecture, system architecture, and assembly architecture. Compared to the electronics used in present systems, this approach reduces the cost and complexity of the hardware and allows easy assembly, disassembly, incremental upgrading, and maintenance of different interconnection topologies. The 3D-Flow parallel-processing system benefits high energy physics (HEP) by allowing: (1) common, less costly hardware to be used in different experiments; (2) new uses of existing installations; (3) tuning of the trigger based on the first analyzed data; and (4) selection of desired events directly from raw data. The goal of this parallel-processing architecture is to acquire multiple data in parallel (up to 100 million frames per second) and to process them at high speed, accomplishing digital filtering on the input data, pattern recognition (particle identification), data moving, and data formatting. The main features of the system are its programmability, scalability, high-speed communication, and low cost. The compactness of the 3D-Flow parallel-processing system, in concert with the processor architecture, allows processor interconnections to be mapped onto the geometry of sensors (detectors in HEP) without large interconnection signal delay, enabling real-time pattern recognition. The overall 3D-Flow project has passed a major design review at Fermilab (reviewers included experts in computers, triggering, system assembly, and electronics)

  11. Management of complex data flows in the ASDEX Upgrade plasma control system

    International Nuclear Information System (INIS)

    Treutterer, Wolfgang; Neu, Gregor; Raupp, Gerhard; Zasche, Dieter; Zehetbauer, Thomas; Cole, Richard; Lüddecke, Klaus

    2012-01-01

    Highlights: ► Control system architectures with data-driven workflows are efficient, flexible and maintainable. ► Signal groups provide coherence of interrelated signals and increase the efficiency of process synchronisation. ► Sample tags indicating sample quality form the fundament of a local event handling strategy. ► A self-organising workflow benefits from sample tags consisting of time stamp and stream activity. - Abstract: Establishing adequate technical and physical boundary conditions for a sustained nuclear fusion reaction is a challenging task. Phased feedback control and monitoring for heating, fuelling and magnetic shaping is mandatory, especially for fusion devices aiming at high performance plasmas. Technical and physical interrelations require close collaboration of many components in sequential as well as in parallel processing flows. Moreover, handling of asynchronous, off-normal events has become a key element of modern plasma performance optimisation and machine protection recipes. The manifoldness of plasma states and events, the variety of plant system operation states and the diversity in diagnostic data sampling rates can hardly be mastered with a rigid control scheme. Rather, an adaptive system topology in combination with sophisticated synchronisation and process scheduling mechanisms is suited for such an environment. Moreover, the system is subject to real-time control constraints: response times must be deterministic and adequately short. Therefore, the experimental tokamak device ASDEX Upgrade employs a discharge control system DCS, whose core has been designed to meet these requirements. In the paper we will compare the scheduling schemes for the parallelised realisation of a control workflow and show the advantage of a data-driven workflow over a managed workflow. The data-driven workflow as used in DCS is based on signals connecting process outputs and inputs. These are implemented as real-time streams of data samples
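
    The sample-tag idea described above can be illustrated with a small sketch: each sample carries a time stamp and an activity flag, and a consumer process decides locally whether to compute. This is an illustrative analogy, not the actual DCS code.

```python
# Illustrative data-driven workflow: samples tagged with (time stamp,
# stream activity) flow from producer to consumer; the consumer skips
# work on inactive samples and simply propagates the tag. Not DCS code.
from dataclasses import dataclass

@dataclass
class Sample:
    timestamp: float   # s
    active: bool       # stream activity flag (sample quality)
    value: float

def controller_step(sample: Sample) -> Sample:
    if not sample.active:                      # off-normal: pass tag through
        return Sample(sample.timestamp, False, float("nan"))
    return Sample(sample.timestamp, True, 0.5 * sample.value)

stream = [Sample(0.001 * k, k % 4 != 0, float(k)) for k in range(8)]
for s in stream:
    out = controller_step(s)
    print(f"t={out.timestamp:.3f}s active={out.active} value={out.value}")
```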

  12. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    Science.gov (United States)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU
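
    For readers unfamiliar with flow-accumulation calculations, the sketch below is a sequential single-flow-direction (D8) reference on a tiny depression-free DEM. It is only a baseline for intuition; the paper's contribution, a CUDA parallelization of the DEM preprocessing and of a multiple-flow-direction algorithm, is not reproduced here.

```python
# Sequential D8 flow accumulation on a tiny, depression-free DEM.
# Cells are processed from highest to lowest so every donor is handled
# before its receiver; each cell passes its accumulated area downslope.
import numpy as np

dem = np.array([[9.0, 8.0, 7.0],
                [8.0, 6.0, 5.0],
                [7.0, 5.0, 3.0]])
rows, cols = dem.shape
acc = np.ones_like(dem)                 # each cell contributes itself

for idx in np.argsort(dem, axis=None)[::-1]:
    r, c = divmod(int(idx), cols)
    best_drop, target = 0.0, None
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) != (0, 0) and 0 <= rr < rows and 0 <= cc < cols:
                drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                if drop > best_drop:
                    best_drop, target = drop, (rr, cc)
    if target is not None:              # lowest cell has no receiver
        acc[target] += acc[r, c]

print(acc)                              # outlet (2, 2) accumulates all 9 cells
```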

  13. Process air quality data

    Science.gov (United States)

    Butler, C. M.; Hogge, J. E.

    1978-01-01

    Air quality sampling was conducted. Data for air quality parameters, recorded on written forms, punched cards or magnetic tape, are available for 1972 through 1975. Computer software was developed to (1) calculate several daily statistical measures of location, (2) plot time histories of data or the calculated daily statistics, (3) calculate simple correlation coefficients, and (4) plot scatter diagrams. Computer software was developed for processing air quality data to include time series analysis and goodness of fit tests. Computer software was developed to (1) calculate a larger number of daily statistical measures of location, and a number of daily monthly and yearly measures of location, dispersion, skewness and kurtosis, (2) decompose the extended time series model and (3) perform some goodness of fit tests. The computer program is described, documented and illustrated by examples. Recommendations are made for continuation of the development of research on processing air quality data.
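
    A modern equivalent of the daily-statistics processing described here is a few lines of pandas; the sketch below computes daily measures of location and a simple correlation on synthetic hourly data, purely as an illustration of the kind of computation the original software performed.

```python
# Daily statistics and a simple correlation for an hourly air-quality
# series. Synthetic data; only illustrates the kind of processing described.
import numpy as np
import pandas as pd

idx = pd.date_range("1973-01-01", periods=24 * 30, freq="h")
rng = np.random.default_rng(1)
df = pd.DataFrame({"ozone": rng.gamma(2.0, 15.0, len(idx)),
                   "no2": rng.gamma(2.5, 10.0, len(idx))}, index=idx)

daily = df["ozone"].resample("D").agg(["mean", "median", "max"])
print(daily.head())
print("ozone/no2 correlation:", round(df["ozone"].corr(df["no2"]), 3))
```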

  14. Real-time data processing and inflow forecasting

    International Nuclear Information System (INIS)

    Olason, T.; Lafreniere, M.

    1998-01-01

    One of the key inputs into the short-term scheduling of hydroelectric generation is inflow forecasting, which is needed for natural or unregulated inflows into various lakes, reservoirs and river sections. The forecast time step and time horizon are determined by the time step and the scheduling horizon. Acres International Ltd. has developed the Vista Decision Support System (DSS), in which the time step is one hour and scheduling can be done up to two weeks into the future. This paper presents the basis of the operational flow-forecasting module of the Vista DSS software and its application to flow forecasting for 16 basins within Nova Scotia Power's hydroelectric system. Among the tasks performed by the software are the collection and treatment of data (in real time) regarding meteorological forecasts, review and monitoring of hydro-meteorological data, updating of the state variables in the module, and review and adjustment of sub-watershed forecasts

  15. Informed Decision Making Process for Managing Environmental Flows in Small River Basins

    Science.gov (United States)

    Padikkal, S.; Rema, K. P.

    2013-03-01

    Numerous examples exist worldwide of partial or complete alteration of the natural flow regime of river systems as a consequence of large-scale water abstraction from upstream reaches. The effects may not be conspicuous in the case of very large rivers, but the ecosystems of smaller rivers or streams may be completely destroyed over a period of time. While restoration of the natural flow regime may not be possible, there is at present increased effort to implement restoration by regulating environmental flow. This study investigates the development of an environmental flow management model at an icon site in the small river basin of Bharathapuzha, west India. To determine optimal environmental flow regimes, a historic flow model based on data assimilated since 1978 indicated that a satisfactory minimum flow depth for river ecosystem sustenance is 0.907 m (28.8 m3/s), a value also obtained from the hydraulic model; however, as three of the reservoirs were already operational at this time, a flow depth of 0.922 m is considered a more viable estimate. Analysis of daily stream flow for 1997-2006 indicated adequate flow regimes during the monsoons in June-November, but sections of the river dried out in December-May, with alarming water quality conditions near the river mouth. Furthermore, the preferred minimum 'dream' flow regime expressed by stakeholders of the region is a water depth of 1.548 m, which exceeds 50% of the flood discharge in July. Water could potentially be conserved for environmental flow purposes by (1) the de-siltation of existing reservoirs or (2) reducing water spillage in the transfer between river basins. Ultimately, environmental flow management of the region requires the establishment of a co-ordinated management body and the regular assimilation of water flow information from which science-based decisions are made, to ensure both economic and environmental concerns are adequately addressed.

  16. Taming hazardous chemistry in flow: The continuous processing of diazo and diazonium compounds

    OpenAIRE

    Deadman, Benjamin J.; Collins, Stuart G.; Maguire, Anita R.

    2014-01-01

    The synthetic utilities of the diazo and diazonium groups are matched only by their reputation for explosive decomposition. Continuous processing technology offers new opportunities to make and use these versatile intermediates at a range of scales with improved safety over traditional batch processes. In this minireview, the state of the art in the continuous flow processing of reactive diazo and diazonium species is discussed.

  17. Site-Scale Saturated Zone Flow Model

    International Nuclear Information System (INIS)

    G. Zyvoloski

    2003-01-01

    The purpose of this model report is to document the components of the site-scale saturated-zone flow model at Yucca Mountain, Nevada, in accordance with administrative procedure (AP)-SIII.10Q, ''Models''. This report provides validation and confidence in the flow model that was developed for site recommendation (SR) and will be used to provide flow fields in support of the Total Systems Performance Assessment (TSPA) for the License Application. The output from this report provides the flow model used in the ''Site-Scale Saturated Zone Transport'' model, MDL-NBS-HS-000010 Rev 01 (BSC 2003 [162419]). The Site-Scale Saturated Zone Transport model in turn provides output to the SZ Transport Abstraction Model (BSC 2003 [164870]). In particular, the output from the SZ site-scale flow model is used to simulate the groundwater flow pathways and radionuclide transport to the accessible environment for use in the TSPA calculations. Since the development and calibration of the saturated-zone flow model, more data have been gathered for use in model validation and confidence building, including new water-level data from Nye County wells, single- and multiple-well hydraulic testing data, and new hydrochemistry data. In addition, a new hydrogeologic framework model (HFM), which incorporates Nye County well lithology, also provides geologic data for corroboration and confidence in the flow model. The intended use of this work is to provide a flow model that generates flow fields to simulate radionuclide transport in saturated porous rock and alluvium under natural or forced-gradient flow conditions. The flow model simulations are completed using the three-dimensional (3-D), finite-element, flow, heat, and transport computer code, FEHM Version (V) 2.20 (software tracking number (STN): 10086-2.20-00; LANL 2003 [161725]). Concurrently, a process-level transport model and methodology for calculating radionuclide transport in the saturated zone at Yucca Mountain using FEHM V 2.20 are being

  18. Hybrid Pluggable Processing Pipeline (HyP3): Programmatic Access to Cloud-Based Processing of SAR Data

    Science.gov (United States)

    Weeden, R.; Horn, W. B.; Dimarchi, H.; Arko, S. A.; Hogenson, K.

    2017-12-01

    A problem often faced by Earth science researchers is the question of how to scale algorithms that were developed against few datasets and take them to regional or global scales. This problem only gets worse as we look to a future with larger and larger datasets becoming available. One significant hurdle can be having the processing and storage resources available for such a task, not to mention the administration of those resources. As a processing environment, the cloud offers nearly unlimited potential for compute and storage, with limited administration required. The goal of the Hybrid Pluggable Processing Pipeline (HyP3) project was to demonstrate the utility of the Amazon cloud for processing large amounts of data quickly and cost-effectively. Principally built by three undergraduate students at the ASF DAAC, the HyP3 system relies on core Amazon cloud services such as Lambda, Relational Database Service (RDS), Elastic Compute Cloud (EC2), Simple Storage Service (S3), and Elastic Beanstalk. HyP3 provides an Application Programming Interface (API) through which users can programmatically interact with the HyP3 system, allowing them to monitor and control processing jobs running in HyP3 and to retrieve the generated HyP3 products when completed. This presentation will focus on the development techniques and enabling technologies that were used in developing the HyP3 system. Data and process flow, from new subscription through to order completion, will be shown, highlighting the benefits of the cloud for each step. Because the HyP3 system can be accessed directly from a user's Python scripts, powerful applications leveraging SAR products can be put together fairly easily. This is the true power of HyP3: allowing people to programmatically leverage the power of the cloud.
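
    The submit-then-poll pattern that such an API enables looks roughly like the sketch below. The base URL, routes and JSON fields are invented for illustration; they are not the real HyP3 endpoints, for which the project's own documentation should be consulted.

```python
# Hypothetical submit-then-poll client for a HyP3-like processing API.
# The URL, routes and JSON fields are invented; not the real HyP3 API.
import time
import requests

API = "https://example.invalid/hyp3-like"      # hypothetical base URL

def submit_job(granule: str) -> str:
    r = requests.post(f"{API}/jobs", json={"granule": granule})
    r.raise_for_status()
    return r.json()["job_id"]

def wait_for_product(job_id: str, poll_s: float = 30.0) -> str:
    while True:
        r = requests.get(f"{API}/jobs/{job_id}")
        r.raise_for_status()
        status = r.json()["status"]
        if status == "SUCCEEDED":
            return r.json()["product_url"]
        if status == "FAILED":
            raise RuntimeError(f"job {job_id} failed")
        time.sleep(poll_s)

# product_url = wait_for_product(submit_job("S1_EXAMPLE_GRANULE"))
```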

  19. Direct test of a nonlinear constitutive equation for simple turbulent shear flows using DNS data

    Science.gov (United States)

    Schmitt, François G.

    2007-10-01

    Several nonlinear constitutive equations have been proposed to overcome the limitations of linear eddy-viscosity models in describing complex turbulent flows. These nonlinear equations have often been compared to experimental data through the outputs of numerical models. Here we perform a priori analysis of nonlinear eddy-viscosity models using direct numerical simulation (DNS) of simple shear flows. In this paper, the constitutive equation is checked directly using a tensor projection that involves several invariants of the flow. This provides a three-term development which is exact for 2D flows and a best approximation for 3D flows. We provide the quadratic nonlinear constitutive equation for the near-wall region of simple shear flows using DNS data and estimate its coefficients. We show that these coefficients have several common properties across the different simple-shear-flow databases considered. We also show that in the central region of pipe flows, where the shear rate is very small, the coefficients of the constitutive equation diverge, indicating the failure of this representation for vanishing shears.
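
    For context, a widely used quadratic (three-term) representation of the Reynolds-stress anisotropy, exact for two-dimensional mean flows, has the form below. The notation is generic and assumed; this is the standard tensor-basis form, not necessarily the exact expression tested in the paper.

```latex
% Quadratic three-term tensor-basis representation of the anisotropy
% b_ij = <u_i u_j>/(2k) - delta_ij/3, with S and Omega the normalized
% mean strain- and rotation-rate tensors (generic notation, assumed).
\[
  b_{ij} = \beta_1 S_{ij}
         + \beta_2 \left( S_{ik}\Omega_{kj} - \Omega_{ik}S_{kj} \right)
         + \beta_3 \left( S_{ik}S_{kj} - \tfrac{1}{3} S_{kl}S_{kl}\,\delta_{ij} \right),
\]
% where the coefficients beta_n may depend on the invariants of S and Omega.
```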

  20. Soft computing in big data processing

    CERN Document Server

    Park, Seung-Jong; Lee, Jee-Hyong

    2014-01-01

    Big data is an essential key to building a smart world, in the sense of the streaming, continuous integration of large-volume, high-velocity data from all sources to final destinations. Big data ranges over data mining, data analysis and decision making, drawing statistical rules and mathematical patterns through systematic or automatic reasoning. Big data helps serve our life better, clarify our future and deliver greater value. We can discover how to capture and analyze data. Readers will be guided through processing system integrity and implementing intelligent systems. With intelligent systems, we deal with the fundamental data management and visualization challenges in effective management of dynamic and large-scale data, and efficient processing of real-time and spatio-temporal data. Advanced intelligent systems have led to managing data monitoring, data processing and decision-making in a realistic and effective way. Considering the big size of data, the variety of data and frequent chan...

  1. Groundwater flow processes and mixing in active volcanic systems: the case of Guadalajara (Mexico)

    Science.gov (United States)

    Hernández-Antonio, A.; Mahlknecht, J.; Tamez-Meléndez, C.; Ramos-Leal, J.; Ramírez-Orozco, A.; Parra, R.; Ornelas-Soto, N.; Eastoe, C. J.

    2015-09-01

    Groundwater chemistry and isotopic data from 40 production wells in the Atemajac and Toluquilla valleys, located in and around the Guadalajara metropolitan area, were determined to develop a conceptual model of groundwater flow processes and mixing. Stable water isotopes (δ2H, δ18O) were used to trace hydrological processes and tritium (3H) to evaluate the relative contribution of modern water in samples. Multivariate analysis including cluster analysis and principal component analysis were used to elucidate distribution patterns of constituents and factors controlling groundwater chemistry. Based on this analysis, groundwater was classified into four groups: cold groundwater, hydrothermal groundwater, polluted groundwater and mixed groundwater. Cold groundwater is characterized by low temperature, salinity, and Cl and Na concentrations and is predominantly of Na-HCO3-type. It originates as recharge at "La Primavera" caldera and is found predominantly in wells in the upper Atemajac Valley. Hydrothermal groundwater is characterized by high salinity, temperature, Cl, Na and HCO3, and the presence of minor elements such as Li, Mn and F. It is a mixed-HCO3 type found in wells from Toluquilla Valley and represents regional flow circulation through basaltic and andesitic rocks. Polluted groundwater is characterized by elevated nitrate and sulfate concentrations and is usually derived from urban water cycling and subordinately from agricultural return flow. Mixed groundwaters between cold and hydrothermal components are predominantly found in the lower Atemajac Valley. Twenty-seven groundwater samples contain at least a small fraction of modern water. The application of a multivariate mixing model allowed the mixing proportions of hydrothermal fluids, polluted waters and cold groundwater in sampled water to be evaluated. This study will help local water authorities to identify and dimension groundwater contamination, and act accordingly. It may be broadly applicable to
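
    The multivariate mixing calculation mentioned above can be illustrated by a small end-member mixing sketch: given tracer concentrations for three end members and a sample, the mixing fractions follow from a least-squares solve with a mass-balance row. All concentrations below are invented; this is not the authors' model.

```python
# Three-end-member mixing sketch: solve E @ f ~ sample with sum(f) = 1.
# Tracer rows (e.g. Cl, Na, NO3) and all numbers are invented.
import numpy as np

# Columns: cold, hydrothermal, polluted end members.
E = np.array([[ 10.0, 250.0,  60.0],    # Cl  [mg/L]
              [ 15.0, 300.0,  40.0],    # Na  [mg/L]
              [  2.0,   1.0,  45.0]])   # NO3 [mg/L]
sample = np.array([80.0, 100.0, 15.0])

A = np.vstack([E, np.ones(3)])          # append mass-balance constraint
b = np.append(sample, 1.0)
f, *_ = np.linalg.lstsq(A, b, rcond=None)
print("fractions (cold, hydrothermal, polluted):", np.round(f, 3))
# Strongly negative fractions would flag a mis-specified end member.
```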

  2. [CFD numerical simulation onto the gas-liquid two-phase flow behavior during vehicle refueling process].

    Science.gov (United States)

    Chen, Jia-Qing; Zhang, Nan; Wang, Jin-Hui; Zhu, Ling; Shang, Chao

    2011-12-01

    With the gradual tightening of environmental regulations, more and more attention is being paid to vapor emissions during vehicle refueling. Numerical simulation of the vehicle refueling process has been carried out abroad since the 1990s, whereas it has not been addressed domestically. Through reasonable simplification of the physical system "nozzle + filler pipe + gasoline storage tank + vent pipe" for vehicle refueling, and by means of the volume-of-fluid (VOF) model for gas-liquid two-phase flow and the renormalization-group (RNG) k-epsilon turbulence model provided in the commercial computational fluid dynamics (CFD) software Fluent, this paper determined a proper mesh discretization scheme, applied proper boundary conditions based on the Gambit software, and thereby established a reasonable numerical simulation model for the gas-liquid two-phase flow during the refueling process. By discussing the influence of refueling velocity on the static pressure of the vent space in the gasoline tank, the back-flow phenomenon is revealed. It is demonstrated that the greater the flow rate and refueling velocity at the nozzle, the higher the gross static pressure in the vent space of the gasoline tank. Meanwhile, the variation of static pressure in the vent space of the gasoline tank can be categorized into three distinct stages. When the refueling flow rate is higher, back-flow of liquid gasoline can sometimes be induced in the head section of the filler pipe, causing the nozzle to shut off prematurely. Overall, the theoretical work accomplished in this paper lays a solid foundation for domestic research and development of the technology and apparatus for vehicle refueling and refueling emissions control.

  3. Religiosity and Academic Flow among Students

    Directory of Open Access Journals (Sweden)

    Putri Saraswati

    2018-02-01

    Full Text Available A problem in education is that students experience boredom in the learning process, whereas learning requires concentration, interest and motivation, which in turn require students to experience flow. Flow itself is total absorption in an activity, which relates to the concept of devoutness in religiosity. The purpose of this research is to examine the relationship between religiosity and academic flow. The design of this study is non-experimental, of the correlational type. Data were collected using a cluster sampling technique from 222 students in the city of Malang. The instrument used was a religiosity scale constructed by the researcher, together with the LIS (The Flow Inventory for Students) scale for academic flow, to which the researchers added several items. Data were analysed using the product-moment correlation. The analysis yielded r = 0.508, p = 0.000 (sig < 0.01), indicating a significant positive relationship between religiosity and academic flow. The effective contribution of religiosity to academic flow is 25.8%; the remaining 74.2% is influenced by other factors.

  4. Detection and quantification of flow consistency in business process models

    DEFF Research Database (Denmark)

    Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel

    2017-01-01

    Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and, second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics...

  5. AUTOMATED CONTROL AND REAL-TIME DATA PROCESSING OF WIRE SCANNER/HALO SCRAPER MEASUREMENTS

    International Nuclear Information System (INIS)

    Day, L.A.; Gilpatrick, J.D.

    2001-01-01

    The Low-Energy Demonstration Accelerator (LEDA), assembled and operating at Los Alamos National Laboratory, provides the platform for obtaining measurements of high-power proton beam-halo formation. Control system software and hardware have been integrated and customized to enable the production of real-time beam-halo profiles. The Experimental Physics and Industrial Control System (EPICS) hosted on a VXI platform, Interactive Data Language (IDL) programs hosted on UNIX platforms, and LabVIEW (LV) virtual instruments hosted on a PC platform have been integrated and customized to provide real-time, synchronous motor control, data acquisition, and analysis of data acquired through specialized DSP instrumentation. These modules communicate through EPICS Channel Access (CA) communication protocol extensions to control and manage execution flow, ensuring synchronous data acquisition and real-time processing of measurement data. This paper describes the software integration and management scheme implemented to produce these real-time beam profiles
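
    In the same spirit, process-variable access through EPICS Channel Access is commonly scripted with the pyepics package; the sketch below uses invented PV names and is not the LEDA control code.

```python
# Channel Access sketch using pyepics (pip install pyepics).
# PV names are invented; this is not the LEDA wire-scanner software.
from epics import caget, caput, camonitor

MOTOR_PV = "WS01:MOTOR:POS"     # hypothetical wire position PV
DATA_PV = "WS01:DAQ:PROFILE"    # hypothetical profile waveform PV

def on_profile(pvname=None, value=None, **kwargs):
    # Invoked on every monitor update; real-time processing goes here.
    print(f"{pvname}: received {len(value)} samples")

camonitor(DATA_PV, callback=on_profile)  # subscribe to new profiles
caput(MOTOR_PV, 12.5, wait=True)         # move the wire, block until done
print("motor position:", caget(MOTOR_PV))
```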

  6. Three-dimensional investigation of the two-phase flow structure in a bubbly pipe flow

    International Nuclear Information System (INIS)

    Hassan, Y.A.; Schmidl, W.D.; Ortiz-Villafuerte, J.

    1997-01-01

    Particle Image Velocimetry (PIV) is a non-intrusive measurement technique which can be used to study the structure of various fluid flows. PIV is used to measure the time-varying, full-field velocity data of a particle-seeded flow field within either a two-dimensional plane or a three-dimensional volume. PIV is a very efficient measurement technique since it can obtain both qualitative and quantitative spatial information about the flow field being studied. This information can be further processed into quantities such as vorticity and pathlines. Other flow measurement techniques (Laser Doppler Velocimetry, Hot Wire Anemometry, etc.) only provide quantitative information at a single point. PIV can be used to study turbulence structures if a sufficient amount of data can be acquired and analyzed, and it can also be extended to study two-phase flows if both phases can be distinguished. In this study, the flow structure around a bubble rising in a pipe filled with water was studied in three dimensions. The velocity of the rising bubble and the velocity field of the surrounding water were measured. The turbulence intensities and Reynolds stresses were then calculated from the experimental data. (author)

  7. 7Q10 flows for SRS streams

    International Nuclear Information System (INIS)

    Chen, K.F.

    1996-01-01

    The Environmental Transport Group of the Environmental Technology Section was requested to predict the seven-day, ten-year low flow (7Q10 flow) for the SRS streams based on historical stream flow records. Most of the historical flow records for the SRS streams include reactor coolant water discharged from the reactors and process water released from the process facilities. The most straightforward way to estimate a stream's daily natural flow is to subtract the measured upstream reactor and/or facility daily effluents from the measured downstream daily flow. Unfortunately, this method does not always work, as indicated by the fact that the measured downstream volumetric flow rates are sometimes lower than the reactor effluent volumetric flow rates. For those cases that cannot be analyzed with the simple subtraction method, an alternative method was used to estimate the stream natural flows by statistically separating reactor coolant and process water flow data. The correlation between the calculated 7Q10 flows and the watershed areas for Four Mile Branch and Pen Branch agrees with that calculated by the USGS for Upper Three Runs and Lower Three Runs Creeks. The agreement between these two independent calculations lends confidence to the 7Q10 flow calculations presented in this report
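
    One common way to estimate a 7Q10 statistic from a daily natural-flow record is sketched below: annual minima of the 7-day moving average, then a 10-year (10th-percentile) low-flow quantile. Practice often fits a distribution such as log-Pearson III instead of the raw empirical quantile used here; the data are synthetic.

```python
# 7Q10 sketch: annual minima of the 7-day moving-average flow, then the
# 10th-percentile annual minimum as the 10-year low flow. Synthetic data;
# agencies typically fit log-Pearson III rather than a raw quantile.
import numpy as np
import pandas as pd

idx = pd.date_range("1960-01-01", "1989-12-31", freq="D")
rng = np.random.default_rng(2)
flow = pd.Series(rng.lognormal(2.0, 0.6, len(idx)), index=idx)  # m^3/s

seven_day = flow.rolling(7).mean()
annual_min = seven_day.groupby(seven_day.index.year).min().dropna()
print(f"7Q10 estimate: {annual_min.quantile(0.10):.2f} m^3/s")
```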

  8. On image pre-processing for PIV of single- and two-phase flows over reflecting objects

    NARCIS (Netherlands)

    Deen, N.G.; Willems, P.; van Sint Annaland, M.; Kuipers, J.A.M.; Lammertink, Rob G.H.; Kemperman, Antonius J.B.; Wessling, Matthias; van der Meer, Walterus Gijsbertus Joseph

    2010-01-01

    A novel image pre-processing scheme for PIV of single- and two-phase flows over reflecting objects which does not require the use of additional hardware is discussed. The approach for single-phase flow consists of image normalization and intensity stretching followed by background subtraction. For

  9. Theoretical modelling of nuclear waste flows - 16377

    International Nuclear Information System (INIS)

    Adams, J.F.; Biggs, S.R.; Fairweather, M.; Njobuenwu, D.; Yao, J.

    2009-01-01

    A large amount of nuclear waste is stored in tailings ponds as a solid-liquid slurry, and liquid flows containing suspensions of solid particles are encountered in the treatment and disposal of this waste. In processing this waste, it is important to understand the behaviour of particles within the flow in terms of their settling characteristics, their propensity to form solid beds, and the re-suspension characteristics of particles from a bed. A clearer understanding of such behaviour would allow the refinement of current approaches to waste management, potentially leading to reduced uncertainties in radiological impact assessments, smaller waste volumes and lower costs, accelerated clean-up, reduced worker doses, enhanced public confidence and diminished grounds for objection to waste disposal. Mathematical models are of significant value in nuclear waste processing since the extent of characterisation of wastes is in general low. Additionally, waste processing involves a diverse range of flows, within vessels, ponds and pipes. To investigate experimentally all waste form characteristics and potential flows of interest would be prohibitively expensive, whereas the use of mathematical models can help to focus experimental studies through the more efficient use of existing data, the identification of data requirements, and a reduction in the need for process optimisation in full-scale experimental trials. Validated models can also be used to predict waste transport behaviour to enable cost effective process design and continued operation, to provide input to process selection, and to allow the prediction of operational boundaries that account for the different types and compositions of particulate wastes. In this paper two mathematical modelling techniques, namely Reynolds-averaged Navier-Stokes (RANS) and large eddy simulation (LES), have been used to investigate particle-laden flows in a straight square duct and a duct with a bend. The flow solutions provided by

  10. Development of DUMAS data processing system

    International Nuclear Information System (INIS)

    Sakamoto, Hiroshi

    1982-01-01

    In the field of nuclear experiments, speed-up of data processing has recently been required along with the increase in the amount of data per event and in the rate of event occurrence per unit time. In the DUMAS project of RCNP, the development of a data processing system capable of high-speed transfer and processing has been required. The system should transfer the data of 5 multiwire proportional counters and other counters from the laboratory to the counting room at a rate of 1000 events per second, and should also perform considerably complex processes such as histogramming, particle identification and calculation of various polarizations, as well as dumping to the secondary memory in the counting room. Furthermore, easy start-up, adjustment, inspection and maintenance, and the use of non-special hardware and software, should be considered. A system presently being investigated to satisfy the above requirements is described. The main points are as follows: to employ the CAMAC system for the interface with the readout circuit; to transfer data between the laboratory and the counting room by converting byte-serial transfer to bit-serial optical fiber communication; and to unify the data processing computers to the PDP-11 family by connecting two minicomputers. Development of such a data processing system seems useful as preparatory research for the development of NUMATRON measuring instruments. (Wakatsuki, Y.)

  11. Big Data Analysis of Manufacturing Processes

    Science.gov (United States)

    Windmann, Stefan; Maier, Alexander; Niggemann, Oliver; Frey, Christian; Bernardi, Ansgar; Gu, Ying; Pfrommer, Holger; Steckel, Thilo; Krüger, Michael; Kraus, Robert

    2015-11-01

    The high complexity of manufacturing processes and the continuously growing amount of data lead to excessive demands on the users with respect to process monitoring, data analysis and fault detection. For these reasons, problems and faults are often detected too late, maintenance intervals are chosen too short and optimization potential for higher output and increased energy efficiency is not sufficiently used. A possibility to cope with these challenges is the development of self-learning assistance systems, which identify relevant relationships by observation of complex manufacturing processes so that failures, anomalies and need for optimization are automatically detected. The assistance system developed in the present work accomplishes data acquisition, process monitoring and anomaly detection in industrial and agricultural processes. The assistance system is evaluated in three application cases: Large distillation columns, agricultural harvesting processes and large-scale sorting plants. In this paper, the developed infrastructures for data acquisition in these application cases are described as well as the developed algorithms and initial evaluation results.

  12. Big Data Analysis of Manufacturing Processes

    International Nuclear Information System (INIS)

    Windmann, Stefan; Maier, Alexander; Niggemann, Oliver; Frey, Christian; Bernardi, Ansgar; Gu, Ying; Pfrommer, Holger; Steckel, Thilo; Krüger, Michael; Kraus, Robert

    2015-01-01

    The high complexity of manufacturing processes and the continuously growing amount of data lead to excessive demands on the users with respect to process monitoring, data analysis and fault detection. For these reasons, problems and faults are often detected too late, maintenance intervals are chosen too short and optimization potential for higher output and increased energy efficiency is not sufficiently used. A possibility to cope with these challenges is the development of self-learning assistance systems, which identify relevant relationships by observation of complex manufacturing processes so that failures, anomalies and need for optimization are automatically detected. The assistance system developed in the present work accomplishes data acquisition, process monitoring and anomaly detection in industrial and agricultural processes. The assistance system is evaluated in three application cases: Large distillation columns, agricultural harvesting processes and large-scale sorting plants. In this paper, the developed infrastructures for data acquisition in these application cases are described as well as the developed algorithms and initial evaluation results. (paper)
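
    A minimal caricature of the anomaly-detection step in such an assistance system is a rolling z-score over a process signal, as sketched below; the real system described above learns relationships across many signals, which this toy does not attempt.

```python
# Toy anomaly detection on a process signal with a rolling z-score.
# Synthetic data with one injected fault; far simpler than the
# self-learning assistance system described in the record.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
signal = pd.Series(rng.normal(100.0, 2.0, 500))
signal.iloc[350] += 15.0                      # injected fault

z = (signal - signal.rolling(50).mean()) / signal.rolling(50).std()
print("anomalous sample indices:", list(signal[z.abs() > 4.0].index))
```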

  13. Experimental and numerical study of granular flow characteristics of absorber sphere pneumatic conveying process

    International Nuclear Information System (INIS)

    Zhang He; Li Tianjin; Qi Weiwei; Huang Zhiyong; Bo Hanliang

    2014-01-01

    The absorber sphere pneumatic conveying system is the main part of the absorber sphere shutdown system and is closely related to granular flow. Granular flow characteristics, such as mass flow rate, angle of repose and contact forces, are crucially important for the optimization of the absorber sphere pneumatic conveying process. The mass flow rates of granular flow through the sphere discharge valve and through the bend tube are significant for the ball-dropping time and the conveying-back time, respectively. Experiments and DEM simulations were conducted to investigate the granular flow characteristics. The experimental results showed that the relation between the average mass flow rate through the sphere discharge valve and the valve stroke is composed of three zones, i.e. the idle-stroke zone, the linear zone and the orifice-restriction zone. Beverloo's law was suitable for the granular flow through the multi-orifice in the orifice-restriction zone. The variation of average mass flow rate with valve stroke could be described by a modified Beverloo law based on the valve stroke. DEM simulation results showed that the drained angle of repose remained 23° at different valve strokes. The mass flow rate during steady granular flow through the sphere discharge valve remained stable at different valve strokes. The variation of mass flow rate through a bend tube differed from that through a circular orifice. (author)
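
    For reference, the standard form of Beverloo's law for discharge of granular material through a circular orifice is reproduced below; the paper's modified, valve-stroke-dependent variant is not given in the abstract and is not reconstructed here.

```latex
% Standard Beverloo correlation for granular discharge through a
% circular orifice (typical values C ~ 0.58, k ~ 1.5):
\[
  W = C \,\rho_b \,\sqrt{g}\,\left( D_0 - k d \right)^{5/2},
\]
% W: mass flow rate; rho_b: bulk density; g: gravity;
% D_0: orifice diameter; d: particle diameter; C, k: empirical constants.
```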

  14. Analysis of material flow in metal forming processes by using computer simulation and experiment with model material

    International Nuclear Information System (INIS)

    Kim, Heon Young; Kim, Dong Won

    1993-01-01

    The objective of the present study is to analyze material flow in metal forming processes by using computer simulation and experiments with a model material, plasticine. A UBET program is developed to analyze the bulk flow behaviour of various metal forming problems. The elemental strain-hardening effect is considered in an incremental manner, and the element system is automatically regenerated at every deforming step in the program. The material flow behaviour in a closed-die forging process with a rib-web type cavity is analyzed by UBET and the elastic-plastic finite element method, and verified by experiments with plasticine. There was good agreement between simulation and experiment. The effect of corner rounding on material flow behaviour is investigated in the analysis of backward extrusion with a square die. A flat-punch indentation process is simulated by UBET, and the results are compared with those of the elastic-plastic finite element method. (Author)

  15. Evaluation and processing of covariance data

    International Nuclear Information System (INIS)

    Wagner, M.

    1993-01-01

    These proceedings of a specialists' meeting on evaluation and processing of covariance data are divided into four parts: Part 1, needs for evaluated covariance data (2 papers); Part 2, generation of covariance data (15 papers); Part 3, processing of covariance files (2 papers); and Part 4, experience in the use of evaluated covariance data (2 papers).

  16. Interface flow process audit: using the patient's career as a tracer of quality of care and of system organisation

    Directory of Open Access Journals (Sweden)

    Jean-Pierre Unger

    2004-05-01

    Full Text Available Objectives: This case study aims to demonstrate the method's feasibility and its capacity to improve quality of care. Several drawbacks attached to tracer-condition and selected-procedure audits oblige clinicians to rely on external evaluators. Interface flow process audit is an alternative method, which also favours the integration of health care across the divide between institutions. Methods: An action research study was carried out to test the feasibility of interface flow process audit and its impact on quality improvement. An anonymous questionnaire was administered to assess the participants' perception of the process. Results: In this study, interface flow process audit brought together general practitioners and hospital doctors to analyse the co-ordination of their activities across the primary-secondary interface. Human factors and organisational characteristics had a clear influence on implementation of the solutions. In general, the participants confirmed that the interface flow process audit helped them analyse the quality of case management at both the primary and the secondary care level. Conclusions: The interface flow process audit appears to be a useful method for regular in-service self-evaluation. Its practice made it possible to address a wide scope of clinical, managerial and economic problems. Bridging the primary-secondary care gap, the interface flow process audit's focus on the patient's career, combined with the broad scope of problems that can be analysed, is a particularly powerful feature. The methodology would benefit from an evaluation of its practice on a larger scale.

  17. Role of submerged vegetation in the retention processes of three plant protection products in flow-through stream mesocosms.

    Science.gov (United States)

    Stang, Christoph; Wieczorek, Matthias Valentin; Noss, Christian; Lorke, Andreas; Scherr, Frank; Goerlitz, Gerhard; Schulz, Ralf

    2014-07-01

    Quantitative information on the processes leading to the retention of plant protection products (PPPs) in surface waters is not available, particularly for flow-through systems. The influence of aquatic vegetation on the hydraulic- and sorption-mediated mitigation processes of three PPPs (triflumuron, pencycuron, and penflufen; logKOW 3.3-4.9) in 45-m slow-flowing stream mesocosms was investigated. Peak reductions were 35-38% in an unvegetated stream mesocosm, 60-62% in a sparsely vegetated stream mesocosm (13% coverage with Elodea nuttallii), and in a similar range of 57-69% in a densely vegetated stream mesocosm (100% coverage). Between 89% and 93% of the measured total peak reductions in the sparsely vegetated stream can be explained by an increase of vegetation-induced dispersion (estimated with the one-dimensional solute transport model OTIS), while 7-11% of the peak reduction can be attributed to sorption processes. However, dispersion contributed only 59-71% of the peak reductions in the densely vegetated stream mesocosm, where 29% to 41% of the total peak reductions can be attributed to sorption processes. In the densely vegetated stream, 8-27% of the applied PPPs, depending on the logKOW values of the compounds, were temporarily retained by macrophytes. Increasing PPP recoveries in the aqueous phase were accompanied by a decrease of PPP concentrations in macrophytes indicating kinetic desorption over time. This is the first study to provide quantitative data on how the interaction of dispersion and sorption, driven by aquatic macrophytes, influences the mitigation of PPP concentrations in flowing vegetated stream systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
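
    The role of dispersion in peak reduction can be illustrated with the one-dimensional instantaneous-release solution, in which the peak concentration decays as 1/sqrt(Dt); the numbers below are invented and the sketch is not the OTIS model used in the study.

```python
# Peak attenuation by longitudinal dispersion alone, from the 1-D
# instantaneous-release solution C_max = (M/A) / sqrt(4*pi*D*t).
# Invented numbers; illustrative only (the study used the OTIS model).
import numpy as np

M_over_A = 1.0            # released mass per cross-section [g/m^2]
t = 600.0                 # time since release [s]

for D in (0.05, 0.20):    # dispersion coefficients [m^2/s]
    c_max = M_over_A / np.sqrt(4.0 * np.pi * D * t)
    print(f"D = {D:4.2f} m^2/s -> peak = {c_max:.4f} g/m^3")
# Quadrupling D halves the peak, since C_max ~ D**-0.5.
```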

  18. Reactor mass flow data base prepared for the nonproliferation alternative systems assessment program

    International Nuclear Information System (INIS)

    Primm III, R.T.C.

    1981-02-01

    This report presents charge and discharge mass flow data for reactors judged to have received sufficient technical development to enable them to be demonstrated or commercially available by the year 2000. Brief descriptions of the reactors and fuel cycles evaluated are presented. A discussion of the neutronics methods used to produce the mass flow data is provided. Detailed charge and discharge fuel isotopics are presented. U3O8, separative work, and fissile material requirements are computed and provided for each fuel cycle.

  19. Field Ground Truthing Data Collector - a Mobile Toolkit for Image Analysis and Processing

    Science.gov (United States)

    Meng, X.

    2012-07-01

    Field Ground Truthing Data Collector is one of the four key components of the NASA-funded ICCaRS project, being developed in Southeast Michigan. The ICCaRS ground truthing toolkit provides comprehensive functions: 1) Field functions, including determining locations through GPS, gathering and geo-referencing visual data, laying out ground control points for AEROKAT flights, measuring the flight distance and height, and entering observations of land cover (and use) and health conditions of ecosystems and environments in the vicinity of the flight field; 2) Server synchronization functions, such as downloading study-area maps, aerial photos and satellite images, uploading and synchronizing field-collected data with the distributed databases, calling the geospatial web services on the server side to conduct spatial querying, image analysis and processing, and receiving the processed results in the field for near-real-time validation; and 3) Social network communication functions for direct technical assistance and pedagogical support, e.g., having video-conference calls in the field with the supporting educators, scientists, and technologists, participating in Webinars, or engaging in discussions with other learning portals. This customized software package is being built on Apple iPhone/iPad and Google Maps/Earth. The technical infrastructures, data models, coupling methods between distributed geospatial data processing and field data collector tools, remote communication interfaces, coding schema, and functional flow charts will be illustrated and explained in the presentation. A pilot case study will also be demonstrated.

  20. Data acquisition and processing system of the electron cyclotron emission imaging system of the KSTAR tokamak

    International Nuclear Information System (INIS)

    Kim, J. B.; Lee, W.; Yun, G. S.; Park, H. K.; Domier, C. W.; Luhmann, N. C. Jr.

    2010-01-01

    A new innovative electron cyclotron emission imaging (ECEI) diagnostic system for the Korean Superconducting Tokamak Advanced Research (KSTAR) device produces a large amount of data. The design of the data acquisition and processing system for the ECEI diagnostic must therefore accommodate the large data production and flow. The system design is based on a layered structure scalable for future extension to accommodate increasing data demands. A software architecture that allows web-based monitoring of the operation status, remote experiments, and data analysis is discussed. The operating software will help machine operators and users validate the acquired data promptly, prepare the next discharge, and enhance experiment performance and data analysis in a distributed environment.

  1. Bridging the Divide between Manual Gating and Bioinformatics with the Bioconductor Package flowFlowJo

    Directory of Open Access Journals (Sweden)

    John J. Gosink

    2009-01-01

    Full Text Available In flow cytometry, different cell types are usually selected or “gated” by a series of 1- or 2-dimensional geometric subsets of the measurements made on each cell. This is easily accomplished in commercial flow cytometry packages but it is difficult to work computationally with the results of this process. The ability to retrieve the results and work with both them and the raw data is critical; our experience points to the importance of bioinformatics tools that will allow us to examine gating robustness, combine manual and automated gating, and perform exploratory data analysis. To provide this capability, we have developed a Bioconductor package called flowFlowJo that can import gates defined by the commercial package FlowJo and work with them in a manner consistent with the other flow packages in Bioconductor. We present this package and illustrate some of the ways in which it can be used.
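
    flowFlowJo itself is an R/Bioconductor package, but the gating concept described above is language-independent. The following Python snippet is an illustrative sketch only (synthetic data, hypothetical function names; not the package's API) of applying a 2-D geometric gate and deriving a statistic from it:

```python
import numpy as np

def rect_gate(events, ch_x, ch_y, x_range, y_range):
    """Boolean mask of events inside a 2-D rectangular gate on two channels."""
    x, y = events[:, ch_x], events[:, ch_y]
    return (x >= x_range[0]) & (x <= x_range[1]) & \
           (y >= y_range[0]) & (y <= y_range[1])

# Synthetic two-channel events standing in for, e.g., FSC/SSC measurements.
rng = np.random.default_rng(0)
events = rng.normal(loc=[500, 300], scale=[80, 60], size=(10_000, 2))

# Apply the gate and compute the gated fraction -- the kind of derived
# statistic one would compare between manual and automated gating.
mask = rect_gate(events, ch_x=0, ch_y=1, x_range=(400, 600), y_range=(200, 400))
print(f"{mask.mean():.1%} of events fall inside the gate")
```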

  2. On the impact of the elastic-plastic flow upon the process of destruction of the solenoid in a super strong pulsed magnetic field

    Science.gov (United States)

    Krivosheev, S. I.; Magazinov, S. G.; Alekseev, D. I.

    2018-01-01

    When super strong magnetic fields interact with the solenoid material, a specific mode of material flow forms. Magnetohydrodynamic approximation is traditionally used to describe this process. The formation of plastic shock waves in the material under rapidly increasing pressure (of the order of 100 GPa/μs) can significantly alter the distribution of physical parameters in the medium and affect the flow modes. In this paper, an analysis of numerical simulation results in comparison with available experimental data is presented.

  3. A Power Load Distribution Algorithm to Optimize Data Center Electrical Flow

    Directory of Open Access Journals (Sweden)

    Paulo Maciel

    2013-07-01

    Full Text Available Energy consumption is a matter of common concern in the world today. Research demonstrates that, as a consequence of the constantly evolving and expanding field of information technology, data centers are now major consumers of electrical energy. Such high electrical energy consumption emphasizes the issues of sustainability and cost. Against this background, the present paper proposes a power load distribution algorithm (PLDA) to optimize the energy distribution of data center power infrastructures. The PLDA, which is based on the Ford-Fulkerson algorithm, is supported by an environment called ASTRO, capable of performing the integrated evaluation of dependability, cost and sustainability. More specifically, the PLDA optimizes the flow distribution of the energy flow model (EFM). EFMs are responsible for estimating sustainability and cost issues of data center infrastructures without exceeding the power capacity that each device can provide (power system) or extract (cooling system). Additionally, a case study is presented that analyzed seven data center power architectures. Significant results were observed, achieving a reduction in power consumption of up to 15.5%.
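
    The abstract names Ford-Fulkerson as the basis of the PLDA. As a hedged illustration of that building block only (the toy topology and capacities below are hypothetical, not the paper's EFM), here is a minimal Edmonds-Karp variant (Ford-Fulkerson with BFS augmenting paths) in Python:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a residual-capacity dict.

    cap: nested dict, cap[u][v] = remaining capacity on edge u->v (e.g. kW).
    Mutated in place to hold residual capacities.
    """
    total = 0
    while True:
        # Breadth-first search for the shortest augmenting path s -> t.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total  # no augmenting path left: flow is maximal
        # Walk back from t to find the bottleneck capacity, then push flow.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= push                      # forward residual
            cap.setdefault(v, {})
            cap[v][u] = cap[v].get(u, 0) + push    # reverse residual
        total += push

# Hypothetical power topology: utility feed -> two UPS units -> PDU -> IT load.
capacity = {
    "feed": {"ups1": 60, "ups2": 60},
    "ups1": {"pdu": 50},
    "ups2": {"pdu": 50},
    "pdu":  {"load": 90},
}
print(max_flow(capacity, "feed", "load"), "kW deliverable")  # prints 90
```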

  4. Overview of the tool-flow for the Montium Processing Tile

    NARCIS (Netherlands)

    Smit, Gerardus Johannes Maria; Rosien, M.A.J.; Guo, Y.; Heysters, P.M.

    This paper presents an overview of a tool chain to support a transformational design methodology. The tool can be used to compile code written in a high level source language, like C, to a coarse grain reconfigurable architecture. The source code is first translated into a Control Data Flow Graph

  5. Process efficiency. Redesigning social networks to improve surgery patient flow.

    Science.gov (United States)

    Samarth, Chandrika N; Gloor, Peter A

    2009-01-01

    We propose a novel approach to improving throughput of the surgery patient flow process at a Boston-area teaching hospital. A social network analysis was conducted in an effort to demonstrate that process efficiency gains could be achieved through redesign of social network patterns at the workplace, in conjunction with redesign of the organization structure and the implementation of workflow over an integrated information technology system. Key knowledge experts and coordinators in times of crisis were identified, and a new communication structure more conducive to trust and knowledge sharing was suggested. The new communication structure is scalable without compromising the coordination required among key roles in the network for achieving efficiency gains.

  6. Online data processing system

    International Nuclear Information System (INIS)

    Nakahara, Yoshinori; Yagi, Hideyuki; Yamada, Takayuki

    1979-02-01

    A pulse height analyzer terminal system, PHATS, has been developed for online data processing via the JAERI-TOKAI computer network. The system is controlled by a micro-computer, MICRO-8, which was developed for the JAERI-TOKAI network. The system program consists of two subprograms: the online control system ONLCS and the pulse height analyzer control system PHACS. ONLCS links the terminal with the conversational programming system of the FACOM 230/75 through the JAERI-TOKAI network and controls data processing in TSS and remote batch modes. PHACS is used to control input/output of data between the pulse height analyzer and cassette-MT or typewriter. This report describes the hardware configuration and the system program in detail. The appendix explains the real-time monitor, message types, and the PEX-to-PEX and Host-to-Host protocols required for the system programming. (author)

  7. Data processing device

    International Nuclear Information System (INIS)

    Kita, Yoshio.

    1994-01-01

    A data processing device for use in a thermonuclear test device comprises a frequency component judging section for analog signals, a sampling time selection section based on the result of the judgement, and a storing memory section for selecting digital data memorized at the sampling time. Namely, the frequency components of the analog signals are detected by the frequency component judging section, and one of a plurality of previously set sampling times is selected by the sampling time selection section based on the result of the judgement of the frequency component judging section. Then, digital data obtained by A/D conversion are read and preliminarily memorized in the storing memory section. Subsequently, the digital data memorized at the sampling time selected by the sampling time selection section are selected and transmitted to a superior computer. The amount of data to be memorized can be greatly reduced, reducing the cost. (N.H.)
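
    A minimal sketch of the idea described above: pick the coarsest of several preset sampling intervals that still resolves the dominant frequency of the analog signal, then keep only the corresponding samples. The ten-samples-per-period margin and candidate intervals are assumptions for illustration, not values from the patent:

```python
import numpy as np

def select_sampling_interval(signal, dt, candidate_dts=(1e-3, 1e-4, 1e-5)):
    """Pick the coarsest preset sampling interval that still keeps
    at least ~10 samples per period of the dominant frequency."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), dt)
    f_dom = freqs[np.argmax(spectrum[1:]) + 1]        # skip the DC bin
    for cand in sorted(candidate_dts, reverse=True):  # coarsest first
        if 1.0 / cand >= 10 * f_dom:
            return cand
    return min(candidate_dts)

# A 50 Hz test signal sampled finely; the device would store only every n-th sample.
dt = 1e-5
t = np.arange(0, 0.2, dt)
x = np.sin(2 * np.pi * 50 * t)
chosen = select_sampling_interval(x, dt)
decimated = x[:: int(round(chosen / dt))]             # data actually stored
print(f"dominant ~50 Hz -> store every {chosen:.0e} s, "
      f"{decimated.size} of {x.size} samples kept")
```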

  8. Toward a Unified Modeling of Learner's Growth Process and Flow Theory

    Science.gov (United States)

    Challco, Geiser C.; Andrade, Fernando R. H.; Borges, Simone S.; Bittencourt, Ig I.; Isotani, Seiji

    2016-01-01

    Flow is the affective state in which a learner is so engaged and involved in an activity that nothing else seems to matter. In this sense, to help students in skill development and knowledge acquisition (referred to as the learners' growth process) under optimal conditions, instructional designers should create learning scenarios that favor…

  9. Optimizing access to conditions data in ATLAS event data processing

    CERN Document Server

    Rinaldi, Lorenzo; The ATLAS collaboration

    2018-01-01

    The processing of ATLAS event data requires access to conditions data which is stored in database systems. This data includes, for example, alignment, calibration, and configuration information that may be characterized by large volumes, diverse content, and/or information which evolves over time as refinements are made in those conditions. Additional layers of complexity are added by the need to provide this information across the world-wide ATLAS computing grid and the sheer number of simultaneously executing processes on the grid, each demanding a unique set of conditions to proceed. Distributing this data to all the processes that require it in an efficient manner has proven to be an increasing challenge with the growing needs and number of event-wise tasks. In this presentation, we briefly describe the systems in which we have collected information about the use of conditions in event data processing. We then proceed to explain how this information has been used to refine not only reconstruction software ...

  10. A review of concentrated flow erosion processes on rangelands: fundamental understanding and knowledge gaps

    Science.gov (United States)

    Concentrated flow erosion processes are distinguished from splash and sheetflow processes in their enhanced ability to mobilize and transport large amounts of soil, water and dissolved elements. On rangelands, soil, nutrients and water are scarce and only narrow margins of resource losses are tolera...

  11. CFD and experimental data of closed-loop wind tunnel flow

    Directory of Open Access Journals (Sweden)

    John Kaiser Calautit

    2016-06-01

    Full Text Available The data presented in this article were the basis for the study reported in the research article entitled 'A validated design methodology for a closed loop subsonic wind tunnel' (Calautit et al., 2014 [1]), which presented a systematic investigation into the design, simulation and analysis of flow parameters in a wind tunnel using Computational Fluid Dynamics (CFD). The authors evaluated the accuracy of replicating, with numerical simulation, the flow characteristics for which the wind tunnel was designed. Here, we detail the numerical and experimental set-up for the analysis of the closed-loop subsonic wind tunnel with an empty test section.

  12. Effect of internal flow and evaporation on hydrogel assembly process at droplet interface

    Science.gov (United States)

    Kang, Giho; Seong, Baekhoon; Gim, Yeonghyeon; Ko, Han Seo; Byun, Doyoung

    2017-11-01

    Recently, controlling the behavior of nanoparticles inside liquid droplets has been widely studied. There have been many reports on the mechanism of nanoparticle assembly and the fabrication of thin films on substrates. However, the mechanism by which polymer chains assemble into films at the liquid-air interface has not been clearly understood. Herein, we investigated the role of internal flow in the thin-film assembly process at the interface of a hydrogel droplet. The internal fluid flow during the formation of the hydrogel film was visualized systematically using the micro-PIV (particle image velocimetry) technique at various temperatures. We show that the buoyancy effect and heat-induced convection flow can affect the film morphology and its mechanical characteristics. Due to the accelerated fluid flow inside the droplet and the evaporation flux, a densely assembled hydrogel film was formed. Film strength increased by 24% as the temperature increased from 40 to 80 degrees Celsius. We expect our investigations could be applied to many applications such as self-assembly of planar structures at the interface in coating and printing processes. The support from the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2015R1A2A1A05001829) is acknowledged.

  13. An NRTA data processing system: PROMAC-J

    International Nuclear Information System (INIS)

    Ikawa, Koji; Ihara, Hitoshi; Nishimura, Hideo

    1993-09-01

    A study of the application of near-real-time materials accountancy (NRTA) has been carried out as an advanced safeguards measure for a spent nuclear fuel reprocessing plant. From the viewpoint of practical application of the NRTA concept to a real plant, a data processing system for NRTA has been developed with consideration of the effectiveness and promptness of processing NRTA data obtained in the field, so that a user can easily handle the analysis of time-sequential MUF data based on decision analyses in the field. The NRTA data processing system was used to process and analyse the NRTA data obtained from September to December 1985, during a full-scale field test of the proposed NRTA model for the PNC Tokai reprocessing plant. The results of this field test showed that the NRTA data processing system would be useful for providing sufficient information under real plant circumstances. The data processing system was improved reflecting the experience obtained in the field test. This report describes the hardware and software of the JAERI NRTA data processing system that was developed as an improvement of the previous system, which had been developed and transferred to the PNC Tokai reprocessing plant. Improvements were made to both hardware components and software. (author)

  14. An NRTA data processing system: PROMAC-J

    International Nuclear Information System (INIS)

    Ihara, Hitoshi; Nishimura, Hideo; Ikawa, Koji

    1991-03-01

    A study of the application of near-real-time materials accountancy (NRTA) has been carried out as an advanced safeguards measure for a spent nuclear fuel reprocessing plant. From the viewpoint of practical application of the NRTA concept to a real plant, a data processing system for NRTA has been developed with consideration of the effectiveness and promptness of processing NRTA data obtained in the field, so that a user can easily handle the analysis of time-sequential MUF data based on decision analyses in the field. The NRTA data processing system was used to process and analyse the NRTA data obtained from September to December 1985, during a full-scale field test of the proposed NRTA model for the PNC Tokai reprocessing plant. The results of this field test showed that the NRTA data processing system would be useful for providing sufficient information under real plant circumstances. The data processing system was improved reflecting the experience obtained in the field test. This report describes the hardware and software of the JAERI NRTA data processing system that was developed as an improvement of the previous system, which had been developed and transferred to the PNC Tokai reprocessing plant. Improvements were made to both hardware components and software. (author)
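
    These reports do not spell out the decision analyses applied to the time-sequential MUF data; one common choice in the NRTA literature is Page's CUSUM test, sketched here on synthetic MUF values (all parameters hypothetical):

```python
import numpy as np

def cusum_alarm(muf, sigma, k=0.5, h=5.0):
    """One-sided Page CUSUM on standardized MUF values.

    k: allowance (in sigmas), h: decision threshold (in sigmas).
    Returns the balance-period index of the first alarm, or None."""
    s = 0.0
    for i, m in enumerate(muf):
        s = max(0.0, s + m / sigma - k)
        if s > h:
            return i
    return None

rng = np.random.default_rng(1)
sigma = 0.2                                   # measurement uncertainty per period (kg)
clean = rng.normal(0.0, sigma, 30)            # MUF fluctuating around zero
# Hypothetical protracted loss of 0.3 kg per period starting at period 15.
diverted = clean + np.r_[np.zeros(15), np.full(15, 0.3)]

print("no-loss data -> alarm at:", cusum_alarm(clean, sigma))
print("with loss    -> alarm at:", cusum_alarm(diverted, sigma))
```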

  15. The GC computer code for flow sheet simulation of pyrochemical processing of spent nuclear fuels

    International Nuclear Information System (INIS)

    Ahluwalia, R.K.; Geyer, H.K.

    1996-01-01

    The GC computer code has been developed for flow sheet simulation of pyrochemical processing of spent nuclear fuel. It utilizes a robust algorithm, SLG, for analyzing simultaneous chemical reactions among species distributed across many phases. Models have been developed for analysis of the oxide fuel reduction process, salt recovery by electrochemical decomposition of lithium oxide, uranium separation from the reduced fuel by electrorefining, and extraction of fission products into liquid cadmium. The versatility of GC is demonstrated by applying the code to a flow sheet of current interest.

  16. Taming hazardous chemistry in flow: the continuous processing of diazo and diazonium compounds.

    Science.gov (United States)

    Deadman, Benjamin J; Collins, Stuart G; Maguire, Anita R

    2015-02-02

    The synthetic utilities of the diazo and diazonium groups are matched only by their reputation for explosive decomposition. Continuous processing technology offers new opportunities to make and use these versatile intermediates at a range of scales with improved safety over traditional batch processes. In this minireview, the state of the art in the continuous flow processing of reactive diazo and diazonium species is discussed. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Processing data base information having nonwhite noise

    Science.gov (United States)

    Gross, Kenneth C.; Morreale, Patricia

    1995-01-01

    A method and system for processing a set of data from an industrial process and/or a sensor. The method and system can include processing data from either real or calculated data related to an industrial process variable. One of the data sets can be an artificial signal data set generated by an autoregressive moving average technique. After obtaining two data sets associated with one physical variable, a difference function data set is obtained by determining the arithmetic difference between the two data sets over time. A frequency domain transformation is made of the difference function data set to obtain Fourier modes describing a composite function data set. A residual function data set is obtained by subtracting the composite function data set from the difference function data set, and the residual function data set (free of nonwhite noise) is analyzed by a statistical probability ratio test to provide a validated data base.
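
    A hedged sketch of the pipeline this abstract describes: difference two signals for one physical variable, strip the dominant Fourier modes (the composite function) to leave an approximately white residual, then run a sequential probability ratio test on it. Signal parameters and thresholds below are illustrative, not the patent's values:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096

# Difference of two redundant sensor signals: the shared process behaviour
# cancels, but nonwhite structure remains (here: periodic pickup in one channel).
diff = rng.normal(0, 0.1, n) + 0.2 * np.sin(2 * np.pi * np.arange(n) / 64)

# Keep the dominant Fourier modes as the "composite function" and subtract
# it, leaving an approximately white residual.
spec = np.fft.rfft(diff)
keep = np.argsort(np.abs(spec))[-4:]
composite_spec = np.zeros_like(spec)
composite_spec[keep] = spec[keep]
residual = diff - np.fft.irfft(composite_spec, n=n)
print(f"std before/after whitening: {diff.std():.3f} / {residual.std():.3f}")

def sprt(x, sigma, m1, alpha=1e-6, beta=1e-6):
    """Wald SPRT, H0: mean 0 vs H1: mean m1; restarts after each H0 acceptance."""
    lo, hi = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    llr = 0.0
    for i, xi in enumerate(x):
        llr += (m1 * xi - m1**2 / 2) / sigma**2   # Gaussian log-likelihood ratio
        if llr >= hi:
            return i          # H1 accepted: flag the data
        if llr <= lo:
            llr = 0.0         # H0 accepted: restart the test
    return None

# A 0.15 drift starting halfway through should trigger H1 shortly after onset.
shifted = residual + np.r_[np.zeros(n // 2), np.full(n // 2, 0.15)]
print("fault-free -> alarm at:", sprt(residual, 0.1, 0.15))
print("with drift -> alarm at:", sprt(shifted, 0.1, 0.15))
```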

  18. Synchronized renal blood flow dynamics mapped with wavelet analysis of laser speckle flowmetry data

    DEFF Research Database (Denmark)

    Brazhe, Alexey R; Marsh, Donald J; von Holstein-Rathlou, Niels-Henrik

    2014-01-01

    Full-field laser speckle microscopy provides real-time imaging of superficial blood flow rate. Here we apply continuous wavelet transform to time series of speckle-estimated blood flow from each pixel of the images to map synchronous patterns in instantaneous frequency and phase on the surface of rat kidneys. The regulatory mechanism in the renal microcirculation generates oscillations in arterial blood flow at several characteristic frequencies. Our approach to laser speckle image processing allows detection of frequency and phase entrainments, visualization of their patterns, and estimation of the extent of synchronization in renal cortex dynamics.
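
    A minimal sketch of the core signal-processing step: apply a continuous wavelet transform (here PyWavelets' Morlet) to one pixel's flow time series and extract the ridge of instantaneous dominant frequency. The frame rate, oscillation band, and synthetic signal are assumptions, not the paper's data:

```python
import numpy as np
import pywt

fs = 25.0                              # assumed frame rate of the speckle imager
t = np.arange(0, 40, 1 / fs)
rng = np.random.default_rng(3)

# Synthetic single-pixel flow signal: an oscillation near 0.2 Hz whose
# instantaneous frequency drifts slowly, plus measurement noise.
inst_freq = 0.20 + 0.02 * np.sin(2 * np.pi * t / 40)
signal = np.sin(2 * np.pi * np.cumsum(inst_freq) / fs) + 0.3 * rng.normal(size=t.size)

# Continuous wavelet transform over scales covering the 0.05-1 Hz band.
freqs_target = np.linspace(0.05, 1.0, 120)
scales = pywt.central_frequency("morl") * fs / freqs_target
coefs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

# Ridge extraction: the dominant instantaneous frequency at each time point.
ridge = freqs[np.argmax(np.abs(coefs), axis=0)]
lo, hi = np.percentile(ridge, [10, 90])
print(f"dominant frequency band ~ {lo:.2f}-{hi:.2f} Hz")
```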

  19. Nonlinear transport processes and fluid dynamics: Cylindrical Couette flow of Lennard-Jones fluids

    International Nuclear Information System (INIS)

    Khayat, R.E.; Eu, B.C.

    1988-01-01

    In this paper we report on calculations of flow profiles for cylindrical Couette flow of a Lennard-Jones fluid. The flow is subjected to a temperature gradient and thermoviscous effects are taken into consideration. We apply the generalized fluid dynamic equations which are provided by the modified moment method for the Boltzmann equation reported previously. The results of calculations are in good agreement with the Monte Carlo direct simulation method by K. Nanbu [Phys. Fluids 27, 2632 (1984)] for most of Knudsen numbers for which the simulation data are available

  20. Batch Scheduling for Hybrid Assembly Differentiation Flow Shop to Minimize Total Actual Flow Time

    Science.gov (United States)

    Maulidya, R.; Suprayogi; Wangsaputra, R.; Halim, A. H.

    2018-03-01

    A hybrid assembly differentiation flow shop is a three-stage flow shop consisting of machining, assembly and differentiation stages and producing different types of products. In the machining stage, parts are processed in batches on different (unrelated) machines. In the assembly stage, the different parts are assembled into assembled products. Finally, the assembled products are further processed into different types of final products in the differentiation stage. In this paper, we develop a batch scheduling model for a hybrid assembly differentiation flow shop to minimize the total actual flow time, defined as the total time parts spend on the shop floor from their arrival times until their due dates. We also propose a heuristic algorithm for solving the problem. The proposed algorithm is tested using a set of hypothetical data. The solution shows that the algorithm can solve the problem effectively.
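
    A tiny worked example of the objective function only (the paper's heuristic is not reproduced here): under the definition quoted above, a batch of n parts arriving at time a and due at d contributes n·(d − a) to the total actual flow time. The common due date and schedule values are hypothetical:

```python
# Batches: (number_of_parts, arrival_time); a common due date is assumed.
due_date = 100.0
batches = [(20, 35.0), (15, 48.0), (25, 60.0)]   # hypothetical schedule

# Each batch of n parts arriving at time a contributes n * (due_date - a),
# so later arrivals (shorter shop-floor residence) reduce the objective.
total_aft = sum(n * (due_date - a) for n, a in batches)
print(f"total actual flow time = {total_aft:.0f} time units")
```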

  1. Fluid mechanics experiments in oscillatory flow. Volume 2: Tabulated data

    Science.gov (United States)

    Seume, J.; Friedman, G.; Simon, T. W.

    1992-01-01

    Results of a fluid mechanics measurement program in oscillating flow within a circular duct are presented. The program began with a survey of transition behavior over a range of oscillation frequency and magnitude and continued with a detailed study at a single operating point. Such measurements were made in support of Stirling engine development. Values of three dimensionless parameters, Re_max, Re_w, and A_R, embody the velocity amplitude, frequency of oscillation, and mean fluid displacement of the cycle, respectively. Measurements were first made over a range of these parameters that are representative of the heat exchanger tubes in the heater section of NASA's Stirling cycle Space Power Research Engine (SPRE). Measurements were taken of the axial and radial components of ensemble-averaged velocity and rms velocity fluctuation and the dominant Reynolds shear stress, at various radial positions for each of four axial stations. In each run, transition from laminar to turbulent flow, and its reverse, were identified and sufficient data was gathered to propose the transition mechanism. Volume 2 contains data reduction program listings and tabulated data (including its graphics).

  2. The process flow and structure of an integrated stroke strategy

    Directory of Open Access Journals (Sweden)

    Emma F. van Bussel

    2013-06-01

    Full Text Available Introduction: In the Canadian province of Alberta, access to and quality of stroke care were suboptimal, especially in remote areas. The government introduced the Alberta Provincial Stroke Strategy (APSS) in 2005, an integrated strategy to improve access to stroke care, quality and efficiency which utilizes telehealth. Research question: What are the process flow and the structure of the care pathways of the APSS? Methodology: Information for this article was obtained using documentation, archival APSS records, interviews with experts, direct observation and participant observation. Results: The process flow is described. The APSS integrated evidence-based practice, multidisciplinary communication, and telestroke services. It includes regular quality evaluation and improvement. Conclusion: Access, efficiency and quality of care have improved since the start of the APSS across many domains, through improvement of expertise and equipment in small hospitals, accessible consultation of stroke specialists using telestroke, enhanced preventive care, enhanced multidisciplinary collaboration, and the introduction of uniform best-practice protocols and bypass protocols for the emergency medical services. Discussion: The APSS overcame substantial obstacles to decrease discrepancies and to deliver integrated higher-quality care. Telestroke has proven itself to be safe and feasible. The APSS works efficiently, which is in line with other projects worldwide, and is, based on limited results, cost-effective. Further research on cost-effectiveness is necessary.

  3. Analysis of nuclear material flow for experimental DUPIC fuel fabrication process at DFDF

    International Nuclear Information System (INIS)

    Lee, H. H.; Park, J. J.; Shin, J. M.; Lee, J. W.; Yang, M. S.; Baik, S. Y.; Lee, E. P.

    1999-08-01

    This report describes the facilities, manufacturing processes and equipment necessary for the DUPIC fuel fabrication experiment. Nuclear material flows among facilities, in PIEF and IMEF, for irradiation testing, post-irradiation examination of DUPIC fuel, quality control, chemical analysis and treatment of radioactive waste have been analyzed in detail. This may help DUPIC project participants and facility engineers working in related facilities understand the overall flow of nuclear material and radioactive waste. (Author). 14 refs., 15 tabs., 41 figs

  4. Analysis of nuclear material flow for experimental DUPIC fuel fabrication process at DFDF

    Energy Technology Data Exchange (ETDEWEB)

    Lee, H. H.; Park, J. J.; Shin, J. M.; Lee, J. W.; Yang, M. S.; Baik, S. Y.; Lee, E. P

    1999-08-01

    This report describes the facilities, manufacturing processes and equipment necessary for the DUPIC fuel fabrication experiment. Nuclear material flows among facilities, in PIEF and IMEF, for irradiation testing, post-irradiation examination of DUPIC fuel, quality control, chemical analysis and treatment of radioactive waste have been analyzed in detail. This may help DUPIC project participants and facility engineers working in related facilities understand the overall flow of nuclear material and radioactive waste. (Author). 14 refs., 15 tabs., 41 figs.

  5. Meridional flow in the solar convection zone. I. Measurements from gong data

    Energy Technology Data Exchange (ETDEWEB)

    Kholikov, S. [National Solar Observatories, Tucson, AZ 85719 (United States); Serebryanskiy, A. [Ulugh Beg Astronomical Institute, Uzbek Academy of Science, Tashkent 100052 (Uzbekistan); Jackiewicz, J., E-mail: kholikov@noao.edu [Department of Astronomy, New Mexico State University, Las Cruces, NM 88003 (United States)

    2014-04-01

    Large-scale plasma flows in the Sun's convection zone likely play a major role in solar dynamics on decadal timescales. In particular, quantifying meridional motions is a critical ingredient for understanding the solar cycle and the transport of magnetic flux. Because the signal of such features can be quite small in deep solar layers and be buried in systematics or noise, the true meridional velocity profile has remained elusive. We perform time-distance helioseismology measurements on several years' worth of Global Oscillation Network Group Doppler data. A spherical harmonic decomposition technique is applied to a subset of acoustic modes to measure travel-time differences to try to obtain signatures of meridional flows throughout the solar convection zone. Center-to-limb systematics are taken into account in an intuitive yet ad hoc manner. Travel-time differences near the surface that are consistent with a poleward flow in each hemisphere and are similar to previous work are measured. Additionally, measurements in deep layers near the base of the convection zone suggest a possible equatorward flow, as well as partial evidence of a sign change in the travel-time differences at mid-convection zone depths. This analysis on an independent data set using different measurement techniques strengthens recent conclusions that the convection zone may have multiple 'cells' of meridional flow. The results may challenge the common understanding of one large conveyor belt operating in the solar convection zone. Further work with helioseismic inversions and a careful study of systematic effects are needed before firm conclusions of these large-scale flow structures can be made.

  6. Experimental quantification of the fluid dynamics in blood-processing devices through 4D-flow imaging: A pilot study on a real oxygenator/heat-exchanger module.

    Science.gov (United States)

    Piatti, Filippo; Palumbo, Maria Chiara; Consolo, Filippo; Pluchinotta, Francesca; Greiser, Andreas; Sturla, Francesco; Votta, Emiliano; Siryk, Sergii V; Vismara, Riccardo; Fiore, Gianfranco Beniamino; Lombardi, Massimo; Redaelli, Alberto

    2018-02-08

    The performance of blood-processing devices largely depends on the associated fluid dynamics, which hence represents a key aspect in their design and optimization. To this aim, two approaches are currently adopted: computational fluid-dynamics, which yields highly resolved three-dimensional data but relies on simplifying assumptions, and in vitro experiments, which typically involve the direct video-acquisition of the flow field and provide 2D data only. We propose a novel method that exploits space- and time-resolved magnetic resonance imaging (4D-flow) to quantify the complex 3D flow field in blood-processing devices and to overcome these limitations. We tested our method on a real device that integrates an oxygenator and a heat exchanger. A dedicated mock loop was implemented, and novel 4D-flow sequences with sub-millimetric spatial resolution and region-dependent velocity encodings were defined. Automated in house software was developed to quantify the complex 3D flow field within the different regions of the device: region-dependent flow rates, pressure drops, paths of the working fluid and wall shear stresses were computed. Our analysis highlighted the effects of fine geometrical features of the device on the local fluid-dynamics, which would be unlikely observed by current in vitro approaches. Also, the effects of non-idealities on the flow field distribution were captured, thanks to the absence of the simplifying assumptions that typically characterize numerical models. To the best of our knowledge, our approach is the first of its kind and could be extended to the analysis of a broad range of clinically relevant devices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Man-made flows from a fish's perspective: autonomous classification of turbulent fishway flows with field data collected using an artificial lateral line.

    Science.gov (United States)

    Tuhtan, Jeffrey A; Fuentes-Perez, Juan Francisco; Toming, Gert; Schneider, Matthias; Schwarzenberger, Richard; Schletterer, Martin; Kruusmaa, Maarja

    2018-05-25

    The lateral line system provides fish with advanced mechanoreception over a wide range of flow conditions. Inspired by the abilities of their biological counterparts, artificial lateral lines have been developed and tested exclusively under laboratory settings. Motivated by the lack of flow measurements taken in the field which consider fluid-body interactions, we built a fish-shaped lateral line probe. The device is outfitted with 11 high-speed (2.5 kHz) time-synchronized pressure transducers, and designed to capture and classify flows in fish passage structures. A total of 252 field measurements, each with a sample size of 132 000 discrete sensor readings, were recorded in the slots and across the pools of vertical slot fishways. These data were used to estimate the time-averaged flow velocity (R² = 0.952), which represents the most common metric to assess fishway flows. The significant contribution of this work is the creation and application of hydrodynamic signatures generated by the spatial distribution of pressure fluctuations on the fish-shaped body. The signatures are based on the collection of the pressure fluctuations' probability distributions, and it is shown that they can be used to automatically classify distinct flow regions within the pools of three different vertical slot fishways. For the first time, field data from operational fishway measurements are sampled and classified using an artificial lateral line, providing a completely new source of bioinspired flow information.
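
    A hedged sketch of the signature-and-classify idea: build a signature vector from each sensor's pressure-fluctuation spread and cluster the records into flow regions. The data are synthetic and the clustering choice (k-means) is an assumption for illustration, not the authors' classifier:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n_sensors, n_samples = 11, 2000

def measurement(fluct_profile):
    """Synthetic pressure record: one row per sensor; the fluctuation level
    varies along the body as the (assumed) hydrodynamic signature."""
    return rng.normal(0, fluct_profile[:, None], (n_sensors, n_samples))

# Two hypothetical flow regions: strong head-biased fluctuations (slot/jet
# region) vs. nearly uniform low fluctuations (recirculation region).
slot_profile = np.linspace(3.0, 0.5, n_sensors)
pool_profile = np.full(n_sensors, 0.8)

records = [measurement(slot_profile) for _ in range(20)] + \
          [measurement(pool_profile) for _ in range(20)]

# Signature: distribution width (std) of each sensor's fluctuations.
signatures = np.array([rec.std(axis=1) for rec in records])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(signatures)
print("cluster labels:", labels)   # the two regions separate cleanly
```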

  8. Influence of core sand properties on flow dynamics of core shooting process based on experiment and multiphase simulation

    Directory of Open Access Journals (Sweden)

    Chang-jiang Ni

    2017-03-01

    Full Text Available The influence of core sand properties on flow dynamics was investigated synchronously with various core sands, a transparent core box and a high-speed camera. To confirm whether the core shooting process exhibits significant turbulence, the flow pattern of sand particles in the shooting head and core box was reproduced with colored core sands. By incorporating the kinetic theory of granular flow (KTGF), a kinetic-frictional constitutive correlation and a turbulence model, a two-fluid model (TFM) was established to study the flow dynamics of the core shooting process. Two-fluid model (TFM) simulations were then performed, and a reasonable agreement was achieved between the simulation and experimental results. Based on the experimental and simulation results, the effects of turbulence, sand density, sand diameter and binder ratio were analyzed in terms of the filling process, sand volume fraction (αs) and sand velocity (Vs).

  9. Automation in high-content flow cytometry screening.

    Science.gov (United States)

    Naumann, U; Wand, M P

    2009-09-01

    High-content flow cytometric screening (FC-HCS) is a 21st Century technology that combines robotic fluid handling, flow cytometric instrumentation, and bioinformatics software, so that relatively large numbers of flow cytometric samples can be processed and analysed in a short period of time. We revisit a recent application of FC-HCS to the problem of cellular signature definition for acute graft-versus-host-disease. Our focus is on automation of the data processing steps using recent advances in statistical methodology. We demonstrate that effective results, on par with those obtained via manual processing, can be achieved using our automatic techniques. Such automation of FC-HCS has the potential to drastically improve diagnosis and biomarker identification.

  10. Reverse engineering of heavy-ion collisions: Unraveling initial conditions from anisotropic flow data

    International Nuclear Information System (INIS)

    Retinskaya, Ekaterina

    2014-01-01

    Ultra-relativistic heavy-ion physics is a promising field of high energy physics connecting two fields: nuclear physics and elementary particle physics. Experimental achievements of recent years have provided an opportunity to study the properties of a new state of matter created in heavy-ion collisions called the quark-gluon plasma. The initial state of two colliding nuclei is affected by fluctuations coming from the wavefunctions of nucleons. These fluctuations lead to the momentum anisotropy of the hadronic matter which is observed by the detectors. The system created in the collision behaves like a fluid, so the initial state is connected to the final state via hydrodynamic evolution. In this thesis we model the evolution with relativistic viscous hydrodynamics. Our results, combined with experimental data, give non-trivial constraints on the initial state, thus achieving 'reverse engineering' of heavy-ion collisions. The observable which characterizes the momentum anisotropy is the anisotropic flow v_n. We present the first measurements of the first harmonic of the anisotropic flow, called directed flow v_1, in Pb-Pb collisions at the LHC. We then perform the first viscous hydrodynamic modeling of directed flow and show that it is less sensitive to viscosity than higher harmonics. Comparison of these experimental data with the modeling allows extraction of the values of the dipole asymmetry of the initial state, which provides constraints on the models of initial states. A prediction for directed flow v_1 in Au-Au collisions is also made for RHIC. We then perform a similar modeling of the second and third harmonics of the anisotropic flow, called elliptic v_2 and triangular v_3 flow, respectively. A combined analysis of the elliptic and triangular flow data compared with viscous hydrodynamic calculations allows us to put constraints on the initial ellipticity and triangularity of the system. These constraints are then used as a filter for different models of

  11. Features, Events, and Processes in SZ Flow and Transport

    International Nuclear Information System (INIS)

    Economy, K.

    2004-01-01

    This analysis report evaluates and documents the inclusion or exclusion of the saturated zone (SZ) features, events, and processes (FEPs) with respect to modeling used to support the total system performance assessment (TSPA) for license application (LA) of a nuclear waste repository at Yucca Mountain, Nevada. A screening decision, either "Included" or "Excluded", is given for each FEP along with the technical basis for the decision. This information is required by the U.S. Nuclear Regulatory Commission (NRC) at 10 CFR 63.114 (d), (e), (f) (DIRS 156605). This scientific report focuses on FEP analysis of flow and transport issues relevant to the SZ (e.g., fracture flow in volcanic units, anisotropy, radionuclide transport on colloids, etc.) to be considered in the TSPA model for the LA. For included FEPs, this analysis summarizes the implementation of the FEP in TSPA-LA (i.e., how the FEP is included). For excluded FEPs, this analysis provides the technical basis for exclusion from TSPA-LA (i.e., why the FEP is excluded).

  12. Features, Events, and Processes in SZ Flow and Transport

    International Nuclear Information System (INIS)

    S. Kuzio

    2005-01-01

    This analysis report evaluates and documents the inclusion or exclusion of the saturated zone (SZ) features, events, and processes (FEPs) with respect to modeling used to support the total system performance assessment (TSPA) for license application (LA) of a nuclear waste repository at Yucca Mountain, Nevada. A screening decision, either Included or Excluded, is given for each FEP along with the technical basis for the decision. This information is required by the U.S. Nuclear Regulatory Commission (NRC) at 10 CFR 63.11(d), (e), (f) [DIRS 173273]. This scientific report focuses on FEP analysis of flow and transport issues relevant to the SZ (e.g., fracture flow in volcanic units, anisotropy, radionuclide transport on colloids, etc.) to be considered in the TSPA model for the LA. For included FEPs, this analysis summarizes the implementation of the FEP in TSPA-LA (i.e., how the FEP is included). For excluded FEPs, this analysis provides the technical basis for exclusion from TSPA-LA (i.e., why the FEP is excluded)

  13. Features, Events, and Processes in SZ Flow and Transport

    Energy Technology Data Exchange (ETDEWEB)

    K. Economy

    2004-11-16

    This analysis report evaluates and documents the inclusion or exclusion of the saturated zone (SZ) features, events, and processes (FEPs) with respect to modeling used to support the total system performance assessment (TSPA) for license application (LA) of a nuclear waste repository at Yucca Mountain, Nevada. A screening decision, either "Included" or "Excluded", is given for each FEP along with the technical basis for the decision. This information is required by the U.S. Nuclear Regulatory Commission (NRC) at 10 CFR 63.114 (d), (e), (f) (DIRS 156605). This scientific report focuses on FEP analysis of flow and transport issues relevant to the SZ (e.g., fracture flow in volcanic units, anisotropy, radionuclide transport on colloids, etc.) to be considered in the TSPA model for the LA. For included FEPs, this analysis summarizes the implementation of the FEP in TSPA-LA (i.e., how the FEP is included). For excluded FEPs, this analysis provides the technical basis for exclusion from TSPA-LA (i.e., why the FEP is excluded).

  14. Features, Events, and Processes in SZ Flow and Transport

    Energy Technology Data Exchange (ETDEWEB)

    S. Kuzio

    2005-08-20

    This analysis report evaluates and documents the inclusion or exclusion of the saturated zone (SZ) features, events, and processes (FEPs) with respect to modeling used to support the total system performance assessment (TSPA) for license application (LA) of a nuclear waste repository at Yucca Mountain, Nevada. A screening decision, either Included or Excluded, is given for each FEP along with the technical basis for the decision. This information is required by the U.S. Nuclear Regulatory Commission (NRC) at 10 CFR 63.11(d), (e), (f) [DIRS 173273]. This scientific report focuses on FEP analysis of flow and transport issues relevant to the SZ (e.g., fracture flow in volcanic units, anisotropy, radionuclide transport on colloids, etc.) to be considered in the TSPA model for the LA. For included FEPs, this analysis summarizes the implementation of the FEP in TSPA-LA (i.e., how the FEP is included). For excluded FEPs, this analysis provides the technical basis for exclusion from TSPA-LA (i.e., why the FEP is excluded).

  15. Designing a Data Flow Diagram Model to Measure Website Quality Using WebQual 4.0

    Directory of Open Access Journals (Sweden)

    Karina Hapsari

    2017-05-01

    Full Text Available With increasing competition among e-commerce companies and the growth of technology companies in Indonesia, the traffic rank of the Zalora Indonesia website has been declining. Measuring website quality using WebQual 4.0 helps web managers align the quality of the website with user perceptions. This research aims to design a Data Flow Diagram model for measuring website quality using WebQual 4.0 based on user satisfaction variables. A case study was conducted on the Zalora Indonesia website. The Data Flow model is used to design a recommended system model, while the WebQual 4.0 method is used to measure website quality against user satisfaction. The research uses primary data in the form of questionnaires involving 384 respondents in the city of Bandung who had transacted on the Zalora Indonesia website. The data analysis technique applies descriptive analysis. The results show that the quality of the Zalora Indonesia website simultaneously has a positive and significant impact on user satisfaction. The t-test results show that three variables partially have a positive impact on user satisfaction with the Zalora Indonesia website, namely usability quality, information quality and service interaction quality, with information quality having the largest impact. Therefore, the system modeling using the Context Diagram-Data Flow Diagram focuses on the information quality variable.

  16. Flow chemistry vs. flow analysis.

    Science.gov (United States)

    Trojanowicz, Marek

    2016-01-01

    The flow mode of conducting chemical syntheses facilitates chemical processes through the use of on-line analytical monitoring of occurring reactions, the application of solid-supported reagents to minimize downstream processing, and computerized control systems to perform multi-step sequences. These are exactly the same attributes as those of flow analysis, which has held a solid place in modern analytical chemistry for the last several decades. The following review paper, based on 131 references to original papers as well as pre-selected reviews, presents basic aspects, selected instrumental achievements and developmental directions of the rapidly growing field of continuous flow chemical synthesis. Interestingly, many of them might potentially be employed in the development of new methods in flow analysis too. In this paper, examples of the application of flow analytical measurements for on-line monitoring of flow syntheses have been indicated, and perspectives for a wider application of real-time analytical measurements have been discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Flow Rate Measurement in Multiphase Flow Rig: Radiotracer and Conventional

    International Nuclear Information System (INIS)

    Nazrul Hizam Yusoff; Noraishah Othman; Nurliyana Abdullah; Amirul Syafiq Mohd Yunos; Rasif Mohd Zain; Roslan Yahya

    2015-01-01

    Applications of radiotracer technology are prevalent throughout oil refineries worldwide, and this industry is one of the main users and beneficiaries of the technology. Radioactive tracers have been used to a great extent in many applications, i.e. flow rate measurement, RTD, plant integrity evaluation and enhancing oil production in oil fields. Chemical and petrochemical plants generally operate continuously and are technically complex, and there radiotracer techniques are very competitive and largely applied for troubleshooting inspection and process analysis. Flow rate measurement is a typical application of radiotracers. For flow measurements, tracer data are important, rather than the RTD models. Research is going on in refining the existing methods for single-phase flow measurement, and in developing new methods for multiphase flow without sampling. The tracer techniques for single-phase flow measurements are recognized as ISO standards. This paper presents the technical aspects of laboratory experiments which have been carried out using Molybdenum-99 (Mo-99) as a radiotracer to study and determine the flow rate of liquid in a multiphase flow rig. The multiphase flow rig consists of a 58.7 m long, 20 cm diameter pipeline that can accommodate about 0.296 m³ of liquid. Tap water was used as the liquid flowing in the pipeline, and conventional flow meters were also installed on the flow rig. The flow rate results from the radiotracer and the conventional flow meters were compared. The total count method was applied for the radiotracer technique and showed results comparable with the conventional flow meters. (author)
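
    In the total count method, the flow rate follows from the injected activity A, a detector calibration factor F, and the net time-integrated count S as Q = F·A/S. A sketch with illustrative numbers (all values below are assumptions, not the experiment's data):

```python
import numpy as np

# Total count method: Q = F * A / S, where
#   A: injected activity [Bq],
#   F: detector calibration factor [(counts/s) per (Bq/m^3)],
#   S: net total counts = time integral of the background-corrected count rate.
A = 3.7e8            # injected Mo-99 activity, Bq (illustrative)
F = 2.0e-6           # calibration factor (illustrative)

t = np.linspace(0, 120, 1200)                         # s
background = 15.0                                     # counts/s
peak = 4000.0 * np.exp(-0.5 * ((t - 40) / 6) ** 2)    # tracer passage
rate = background + peak

S = np.trapz(rate - background, t)                    # net total counts
Q = F * A / S                                         # m^3/s
print(f"S = {S:.3e} counts, Q = {Q * 1000:.2f} L/s")
```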

  18. Impact of polymer film thickness and cavity size on polymer flow during embossing : towards process design rules for nanoimprint lithography.

    Energy Technology Data Exchange (ETDEWEB)

    Schunk, Peter Randall; King, William P. (Georgia Institute of Technology, Atlanta, GA); Sun, Amy Cha-Tien; Rowland, Harry D. (Georgia Institute of Technology, Atlanta, GA)

    2006-08-01

    This paper presents continuum simulations of polymer flow during nanoimprint lithography (NIL). The simulations capture the underlying physics of polymer flow from the nanometer to millimeter length scale and examine geometry and thermophysical process quantities affecting cavity filling. Variations in embossing tool geometry and polymer film thickness during viscous flow distinguish different flow driving mechanisms. Three parameters can predict polymer deformation mode: cavity width to polymer thickness ratio, polymer supply ratio, and Capillary number. The ratio of cavity width to initial polymer film thickness determines vertically or laterally dominant deformation. The ratio of indenter width to residual film thickness measures polymer supply beneath the indenter which determines Stokes or squeeze flow. The local geometry ratios can predict a fill time based on laminar flow between plates, Stokes flow, or squeeze flow. Characteristic NIL capillary number based on geometry-dependent fill time distinguishes between capillary or viscous driven flows. The three parameters predict filling modes observed in published studies of NIL deformation over nanometer to millimeter length scales. The work seeks to establish process design rules for NIL and to provide tools for the rational design of NIL master templates, resist polymers, and process parameters.
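
    A sketch computing the three predictors named above and classifying the filling mode; the unit thresholds and material values are illustrative assumptions only, not the paper's design rules:

```python
def nil_flow_regime(cavity_width, indenter_width, film_thickness,
                    residual_thickness, viscosity, velocity, surface_tension):
    """Compute the three predictors from the abstract and apply
    illustrative (hypothetical) thresholds to classify the filling mode."""
    deformation_ratio = cavity_width / film_thickness   # lateral vs vertical
    supply_ratio = indenter_width / residual_thickness  # Stokes vs squeeze
    capillary = viscosity * velocity / surface_tension  # viscous vs capillary

    mode = [
        "laterally dominant" if deformation_ratio > 1 else "vertically dominant",
        "squeeze flow" if supply_ratio > 1 else "Stokes flow",
        "viscous-driven" if capillary > 1 else "capillary-driven",
    ]
    return deformation_ratio, supply_ratio, capillary, mode

# A 200 nm cavity embossed into a 100 nm resist film (all values hypothetical).
print(nil_flow_regime(cavity_width=200e-9, indenter_width=400e-9,
                      film_thickness=100e-9, residual_thickness=50e-9,
                      viscosity=5e3, velocity=1e-8, surface_tension=0.03))
```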

  19. Management of complex data flows in the ASDEX Upgrade plasma control system

    Energy Technology Data Exchange (ETDEWEB)

    Treutterer, Wolfgang, E-mail: Wolfgang.Treutterer@ipp.mpg.de [Max-Planck Institut fuer Plasmaphysik, EURATOM Association, Garching (Germany); Neu, Gregor; Raupp, Gerhard; Zasche, Dieter; Zehetbauer, Thomas [Max-Planck Institut fuer Plasmaphysik, EURATOM Association, Garching (Germany); Cole, Richard; Lueddecke, Klaus [Unlimited Computer Systems, Iffeldorf (Germany)

    2012-12-15

    Highlights: ► Control system architectures with data-driven workflows are efficient, flexible and maintainable. ► Signal groups provide coherence of interrelated signals and increase the efficiency of process synchronisation. ► Sample tags indicating sample quality form the fundament of a local event handling strategy. ► A self-organising workflow benefits from sample tags consisting of time stamp and stream activity. - Abstract: Establishing adequate technical and physical boundary conditions for a sustained nuclear fusion reaction is a challenging task. Phased feedback control and monitoring for heating, fuelling and magnetic shaping is mandatory, especially for fusion devices aiming at high performance plasmas. Technical and physical interrelations require close collaboration of many components in sequential as well as in parallel processing flows. Moreover, handling of asynchronous, off-normal events has become a key element of modern plasma performance optimisation and machine protection recipes. The manifoldness of plasma states and events, the variety of plant system operation states and the diversity in diagnostic data sampling rates can hardly be mastered with a rigid control scheme. Rather, an adaptive system topology in combination with sophisticated synchronisation and process scheduling mechanisms is suited for such an environment. Moreover, the system is subject to real-time control constraints: response times must be deterministic and adequately short. Therefore, the experimental tokamak device ASDEX Upgrade employs a discharge control system DCS, whose core has been designed to meet these requirements. In the paper we will compare the scheduling schemes for the parallelised realisation of a control workflow and show the advantage of a data-driven workflow over a managed workflow. The data-driven workflow as used in DCS is based on signals
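
    A toy sketch of the data-driven idea in the highlights: a processing node fires only when every input stream carries a fresh, active sample for the current cycle, so inactive streams propagate idleness without a central manager. All names are hypothetical, not the DCS API:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    value: float
    timestamp: float      # sample tag, part 1: time stamp
    active: bool          # sample tag, part 2: stream activity flag

class Node:
    """A workflow node that computes only when every input stream has
    delivered a fresh, active sample for the current cycle time."""
    def __init__(self, name, func, inputs):
        self.name, self.func, self.inputs = name, func, inputs

    def try_fire(self, streams, now):
        samples = [streams.get(i) for i in self.inputs]
        if all(s and s.active and s.timestamp == now for s in samples):
            return Sample(self.func(*(s.value for s in samples)), now, True)
        return None   # stay idle; no manager needed to skip this node

# Two diagnostics feed a derived quantity; one stream is inactive this cycle,
# so the downstream node self-organizes into idleness instead of blocking.
streams = {"density": Sample(2.1e19, 10.0, True),
           "temperature": Sample(3.2, 10.0, False)}   # inactive stream
node = Node("pressure_est", lambda n, t: n * t, ["density", "temperature"])
print(node.try_fire(streams, now=10.0))   # -> None: waits without deadlock
```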

  20. Data Preparation Process for the Buildings Performance Database

    Energy Technology Data Exchange (ETDEWEB)

    Walter, Travis; Dunn, Laurel; Mercado, Andrea; Brown, Richard E.; Mathew, Paul

    2014-06-30

    The Buildings Performance Database (BPD) includes empirically measured data from a variety of data sources with varying degrees of data quality and data availability. The purpose of the data preparation process is to maintain data quality within the database and to ensure that all database entries have sufficient data for meaningful analysis and for the database API. Data preparation is a systematic process of mapping data into the Building Energy Data Exchange Specification (BEDES), cleansing data using a set of criteria and rules of thumb, and deriving values such as energy totals and dominant asset types. Because the data preparation process takes the greatest amount of effort and time, most of the cleansing process has been automated. The process also needs to adapt as more data is contributed to the BPD and as building technologies evolve over time. The data preparation process is an essential step between data contributed by providers and data published to the public in the BPD.
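
    A hedged pandas sketch of the three steps described (schema mapping, rule-of-thumb cleansing, derived totals); the column names, thresholds, and rules below are illustrative assumptions, not the BPD's actual rules or BEDES terms:

```python
import pandas as pd

# Raw provider export with provider-specific column names (hypothetical).
raw = pd.DataFrame({
    "bldg_sqft": [12000, 450, 80000],
    "elec_kwh":  [310000, 9000, None],
    "gas_therm": [5000, 120, 40000],
})

# 1) Map provider fields onto a common schema (in the spirit of BEDES).
mapped = raw.rename(columns={"bldg_sqft": "gross_floor_area_ft2",
                             "elec_kwh": "electricity_kwh",
                             "gas_therm": "natural_gas_therms"})

# 2) Cleansing rules of thumb: drop implausibly small buildings and
#    records lacking the data needed for meaningful analysis.
clean = mapped[mapped["gross_floor_area_ft2"] >= 500].dropna(
    subset=["electricity_kwh", "natural_gas_therms"])

# 3) Derive values such as total site energy (kBtu) and energy intensity.
clean = clean.assign(
    site_energy_kbtu=clean["electricity_kwh"] * 3.412
                     + clean["natural_gas_therms"] * 100.0)
clean["eui_kbtu_per_ft2"] = clean["site_energy_kbtu"] / clean["gross_floor_area_ft2"]
print(clean)
```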