WorldWideScience

Sample records for preprocessing subsystem cps

  1. Scientific data products and the data pre-processing subsystem of the Chang'e-3 mission

    International Nuclear Information System (INIS)

    Tan Xu; Liu Jian-Jun; Li Chun-Lai; Feng Jian-Qing; Ren Xin; Wang Fen-Fei; Yan Wei; Zuo Wei; Wang Xiao-Qian; Zhang Zhou-Bin

    2014-01-01

    The Chang'e-3 (CE-3) mission is China's first exploration mission on the surface of the Moon that uses a lander and a rover. Eight instruments that form the scientific payloads have the following objectives: (1) investigate the morphological features and geological structures at the landing site; (2) perform integrated in-situ analysis of minerals and chemical compositions; (3) carry out integrated exploration of the structure of the lunar interior; (4) explore the lunar-terrestrial space environment and the lunar surface environment, and acquire Moon-based ultraviolet astronomical observations. The Ground Research and Application System (GRAS) is in charge of data acquisition and pre-processing, management of the payload in orbit, and managing the data products and their applications. The Data Pre-processing Subsystem (DPS) is a part of GRAS. The task of DPS is the pre-processing of raw data from the eight instruments on board CE-3, including channel processing, unpacking, package sorting, calibration and correction, identification of geographical location, calculation of the probe azimuth angle, probe zenith angle, solar azimuth angle and solar zenith angle, and conducting quality checks. These processes produce Level 0, Level 1 and Level 2 data. The computing platform of this subsystem is comprised of a high-performance computing cluster, including a real-time subsystem used for processing Level 0 data and a post-time subsystem for generating Level 1 and Level 2 data. This paper describes the CE-3 data pre-processing method, the data pre-processing subsystem, data classification, data validity and the data products that are used for scientific studies.
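
    Among the quantities the abstract lists, the solar zenith and azimuth angles have a standard closed form; a minimal Python sketch of the generic spherical-astronomy formula follows (an illustration only, not the DPS implementation, and the function name and sample inputs are hypothetical):

        import math

        def solar_zenith_azimuth(lat_deg, decl_deg, hour_angle_deg):
            """Generic formula: cos(z) = sin(phi)sin(delta) + cos(phi)cos(delta)cos(h).
            Angles in degrees; returns (zenith, azimuth from north, eastward)."""
            phi, delta, h = (math.radians(v) for v in (lat_deg, decl_deg, hour_angle_deg))
            cos_z = math.sin(phi) * math.sin(delta) + math.cos(phi) * math.cos(delta) * math.cos(h)
            z = math.acos(max(-1.0, min(1.0, cos_z)))  # clamp against rounding
            # atan2 form gives azimuth from south; add 180 deg for the from-north convention.
            az = math.atan2(math.sin(h),
                            math.cos(h) * math.sin(phi) - math.tan(delta) * math.cos(phi))
            return math.degrees(z), (math.degrees(az) + 180.0) % 360.0

        print(solar_zenith_azimuth(44.12, 10.0, 15.0))  # sample, lander-like latitude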

  2. A Selective CPS Transformation

    DEFF Research Database (Denmark)

    Nielsen, Lasse Reichstein

    2001-01-01

    We characterize this involvement as a control effect and we present a selective CPS transformation that makes functions and expressions continuation-passing if they have a control effect, and that leaves the rest of the program in direct style. We formalize this selective CPS transformation with an operational…
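
    To make the idea concrete, here is a tiny illustrative sketch in Python (my own example, not the paper's formalization): a function with no control effect stays in direct style, while a function that discards its continuation for an early exit is exactly the kind the selective transformation would put into CPS.

        # Direct style: no control effect, so a selective transformation leaves it alone.
        def add(a, b):
            return a + b

        # A product with an early exit on 0 has a control effect; in CPS the
        # continuation k can simply be dropped to escape the whole traversal.
        def product_cps(xs, k):
            if not xs:
                return k(1)
            if xs[0] == 0:
                return 0                     # discard k: jump out without multiplying
            return product_cps(xs[1:], lambda r: k(xs[0] * r))

        print(product_cps([2, 3, 0, 4], lambda r: r))   # -> 0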

  3. CZ/CPS: A Communications ZEBRA implementation using CPS

    International Nuclear Information System (INIS)

    Roberts, L.A.

    1991-05-01

    CZ/CPS is an implementation of the Communications ZEBRA distributed computing environment utilizing the CPS communications protocol. CZ/CPS is intended for parallelization of high energy physics application programs using the CERN Program Library memory and data structure management features. CZ/CPS provides transparent communication of ZEBRA data structures among cooperative processes using standard interfaces for ZEBRA I/O. Examples of usage in a CPS HBOOK4 and GEANT3 application are provided.

  4. CPS and the Fermilab farms

    International Nuclear Information System (INIS)

    Fausey, M.R.

    1992-06-01

    Cooperative Processes Software (CPS) is a parallel programming toolkit developed at the Fermi National Accelerator Laboratory. It is the most recent product in an evolution of systems aimed at finding a cost-effective solution to the enormous computing requirements in experimental high energy physics. Parallel programs written with CPS are large-grained, which means that the parallelism occurs at the subroutine level, rather than at the traditional single line of code level. This fits the requirements of high energy physics applications, such as event reconstruction or detector simulations, quite well. It also satisfies the requirements of applications in many other fields. One example is in the pharmaceutical industry: in the field of computational chemistry, the process of drug design may be accelerated with this approach. CPS programs run as a collection of processes distributed over many computers. CPS currently supports a mixture of heterogeneous UNIX-based workstations which communicate over networks with TCP/IP. CPS is most suited for jobs with relatively low I/O requirements compared to CPU. The CPS toolkit supports message passing, remote subroutine calls, process synchronization, bulk data transfers, and a mechanism called process queues, by which one process can find another which has reached a particular state. The CPS software supports both batch processing and computer center operations. The system is currently running in production mode on two farms of processors at Fermilab. One farm consists of approximately 90 IBM RS/6000 model 320 workstations, and the other has 85 Silicon Graphics 4D/35 workstations. This paper first briefly describes the history of parallel processing at Fermilab which led to the development of CPS. Then the CPS software and the CPS Batch queueing system are described. Finally, the experiences of using CPS in production on the Fermilab processor farms are described.
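
    The large-grained, event-parallel style described here maps naturally onto modern tooling. The following Python sketch is an analogy using the standard library, not the CPS toolkit's actual API: subroutine-sized units of work are distributed over a pool of worker processes, much as events were farmed out to workstations.

        from multiprocessing import Pool

        def reconstruct(event):
            # Stand-in for a subroutine-sized unit of work, e.g. reconstructing one event.
            return sum(event) % 997

        if __name__ == "__main__":
            events = [list(range(i, i + 100)) for i in range(1000)]
            with Pool(processes=8) as pool:          # one worker per "farm" node
                results = pool.map(reconstruct, events)
            print(len(results), results[:3])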

  5. CPS and the Fermilab farms

    Energy Technology Data Exchange (ETDEWEB)

    Fausey, M.R.

    1992-06-01

    Cooperative Processes Software (CPS) is a parallel programming toolkit developed at the Fermi National Accelerator Laboratory. It is the most recent product in an evolution of systems aimed at finding a cost-effective solution to the enormous computing requirements in experimental high energy physics. Parallel programs written with CPS are large-grained, which means that the parallelism occurs at the subroutine level, rather than at the traditional single line of code level. This fits the requirements of high energy physics applications, such as event reconstruction or detector simulations, quite well. It also satisfies the requirements of applications in many other fields. One example is in the pharmaceutical industry: in the field of computational chemistry, the process of drug design may be accelerated with this approach. CPS programs run as a collection of processes distributed over many computers. CPS currently supports a mixture of heterogeneous UNIX-based workstations which communicate over networks with TCP/IP. CPS is most suited for jobs with relatively low I/O requirements compared to CPU. The CPS toolkit supports message passing, remote subroutine calls, process synchronization, bulk data transfers, and a mechanism called process queues, by which one process can find another which has reached a particular state. The CPS software supports both batch processing and computer center operations. The system is currently running in production mode on two farms of processors at Fermilab. One farm consists of approximately 90 IBM RS/6000 model 320 workstations, and the other has 85 Silicon Graphics 4D/35 workstations. This paper first briefly describes the history of parallel processing at Fermilab which led to the development of CPS. Then the CPS software and the CPS Batch queueing system are described. Finally, the experiences of using CPS in production on the Fermilab processor farms are described.

  6. BCD/CPS: An event-level GEANT3 parallelization via CPS

    International Nuclear Information System (INIS)

    Roberts, L.A.

    1991-04-01

    BCD/CPS is an implementation of the Bottom Collider Detector GEANT3 simulation for CPS processor ranches. BCD/CPS demonstrates some of the capabilities of event-parallel applications applicable to current SSC detector simulations using the CPS and CZ/CPS communications protocols. Design, implementation and usage of the BCD/CPS simulation are presented along with extensive source listings for novice GEANT3/CPS programmers. 11 refs

  7. CPS Transformation of Beta-Redexes

    DEFF Research Database (Denmark)

    Danvy, Olivier; Nielsen, Lasse

    2005-01-01

    The extra compaction of the most compacting CPS transformation in existence, which is due to Sabry and Felleisen, is generally attributed to (1) making continuations occur first in CPS terms and (2) classifying more redexes as administrative. We show that this extra compaction is actually independent of the relative positions of values and continuations and furthermore that it is solely due to a context-sensitive transformation of beta-redexes. We stage the more compact CPS transformation into a first-order uncurrying phase and a context-insensitive CPS transformation. We also define a context-insensitive CPS transformation that provides the extra compaction. This CPS transformation operates in one pass and is dependently typed.
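
    For orientation, the baseline against which this compaction is measured is Plotkin's call-by-value CPS transformation, which can be written as follows (standard textbook notation, reconstructed here rather than quoted from the paper). Applying it to a beta-redex (lambda x.e) y produces the administrative redexes that a context-sensitive treatment contracts at transformation time:

        \begin{align*}
        \overline{x}           &= \lambda k.\, k\,x \\
        \overline{\lambda x.e} &= \lambda k.\, k\,(\lambda x.\,\overline{e}) \\
        \overline{e_0\,e_1}    &= \lambda k.\, \overline{e_0}\,(\lambda v_0.\, \overline{e_1}\,(\lambda v_1.\, v_0\,v_1\,k))
        \end{align*}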

  8. CPS Transformation of Beta-Redexes

    DEFF Research Database (Denmark)

    Danvy, Olivier; Nielsen, Lasse R.

    2000-01-01

    The extra compaction of the most compacting CPS transformation in existence, which is due to Sabry and Felleisen, is generally attributed to (1) making continuations occur first in CPS terms and (2) classifying more redexes as administrative. We show that this extra compaction is actually independent of the relative positions of values and continuations and furthermore that it is solely due to a context-sensitive transformation of beta-redexes. We stage the more compact CPS transformation into a first-order uncurrying phase and a context-insensitive CPS transformation. We also define a context-insensitive CPS transformation that provides the extra compaction. This CPS transformation operates in one pass and is dependently typed.

  9. An Operational Investigation of the CPS Hierarchy

    DEFF Research Database (Denmark)

    Danvy, Olivier; Yang, Zhe

    1999-01-01

    We explore the hierarchy of control induced by successive transformations into continuation-passing style (CPS) in the presence of “control delimiters” and “composable continuations”. Specifically, we investigate the structural operational semantics associated with the CPS hierarchy. To this end, we characterize an operational notion of continuation semantics. We relate it to the traditional CPS transformation and we use it to account for the control operator shift and the control delimiter reset operationally. We then transcribe the resulting continuation semantics in ML, thus obtaining…
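
    A small Python sketch can make the first level of the hierarchy concrete: computations are represented as functions over continuations, and shift/reset are then definable directly in that encoding. This is an illustrative encoding under my own naming, not the paper's ML transcription.

        # A computation is a function from a continuation k to a final answer.
        def ret(v):     return lambda k: k(v)
        def bind(m, f): return lambda k: m(lambda v: f(v)(k))

        def reset(m):   # delimit the context: run m with the identity continuation
            return lambda k: k(m(lambda x: x))

        def shift(f):   # capture the current delimited continuation as a function
            return lambda k: f(lambda v: lambda k2: k2(k(v)))(lambda x: x)

        # reset(1 + shift(lambda kk: kk(kk(10)))): the captured continuation
        # "add 1" is applied twice, so the program evaluates to 12.
        prog = reset(bind(shift(lambda kk: bind(kk(10), kk)),
                          lambda v: ret(1 + v)))
        print(prog(lambda x: x))   # -> 12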

  10. An Operational Investigation of the CPS Hierarchy

    DEFF Research Database (Denmark)

    Danvy, Olivier; Yang, Zhe

    1998-01-01

    We explore the hierarchy of control induced by successive transformations into continuation-passing style (CPS) in the presence of “control delimiters” and “composable continuations”. Specifically, we investigate the structural operational semantics associated with the CPS hierarchy. To this end, we characterize an operational notion of continuation semantics. We relate it to the traditional CPS transformation and we use it to account for the control operator shift and the control delimiter reset operationally. We then transcribe the resulting continuation semantics in ML, thus obtaining…

  11. CPS Transformation of Flow Information, Part II

    DEFF Research Database (Denmark)

    Damian, D.; Danvy, Olivier

    2003-01-01

    We consider the administrative reductions of a Plotkin-style transformation into Continuation-Passing Style (CPS), and how they affect the result of a constraint-based control-flow analysis and, in particular, the least element in the space of solutions. We show that administrative reductions preserve the least solution. Preservation of least solutions solves a problem that was left open in Palsberg and Wand's article ‘CPS Transformation of Flow Information.’ Together, Palsberg and Wand's article and the present article show how to map in linear time the least solution of the flow constraints of a program into the least solution of the flow constraints of the CPS counterpart of this program, after administrative reductions. Furthermore, we show how to CPS transform control-flow information in one pass.

  12. On proving syntactic properties of CPS programs

    DEFF Research Database (Denmark)

    Danvy, Olivier; Dzafic, Belmina; Pfenning, Frank

    1999-01-01

    Higher-order program transformations raise new challenges for proving properties of their output, since they resist traditional, first-order proof techniques. In this work, we consider (1) the “one-pass” continuation-passing style (CPS) transformation, which is second-order, and (2) the occurrences of parameters of continuations in its output. To this end, we specify the one-pass CPS transformation relationally and we use the proof technique of logical relations.

  13. Normalization: A Preprocessing Stage

    OpenAIRE

    Patro, S. Gopal Krishna; Sahu, Kishore Kumar

    2015-01-01

    Normalization is a pre-processing stage for any type of problem statement. In particular, normalization plays an important role in fields such as soft computing and cloud computing for manipulating data, i.e., scaling the range of data down or up before it is used in a further stage. There are many normalization techniques, namely Min-Max normalization, Z-score normalization and Decimal scaling normalization. So by referring to these normalization techniques we are ...
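
    The three techniques named here have simple closed forms; a minimal numpy illustration on my own toy data:

        import numpy as np

        x = np.array([12.0, 55.0, 74.0, 101.0, 230.0])

        min_max = (x - x.min()) / (x.max() - x.min())   # rescales into [0, 1]
        z_score = (x - x.mean()) / x.std()              # zero mean, unit variance
        j = int(np.ceil(np.log10(np.abs(x).max())))     # smallest j with max|x| / 10^j < 1
        decimal = x / 10 ** j                           # decimal scaling

        print(min_max, z_score, decimal, sep="\n")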

  14. An Extensional CPS Transform (Preliminary Report)

    DEFF Research Database (Denmark)

    Filinski, Andrzej

    2001-01-01

    We show that, in a language with general continuation effects, the syntactic, or intensional, CPS transform is mirrored by a semantic, or extensional, functional term. In other words, from only the observable behavior of any direct-style term (possibly containing the usual first-class continuation primitives), we can uniformly extract the observable behavior of its CPS counterpart. As a consequence of this result, we show that the computational lambda-calculus is complete for observational equivalence of pure, simply typed lambda-terms in Scheme-like contexts.

  15. Classical realizability in the CPS target language

    DEFF Research Database (Denmark)

    Frey, Jonas

    2016-01-01

    Motivated by considerations about Krivine's classical realizability, we introduce a term calculus for an intuitionistic logic with record types, which we call the CPS target language. We give a reformulation of the constructions of classical realizability in this language, using the categorical techniques of realizability triposes and toposes. We argue that the presentation of classical realizability in the CPS target language simplifies calculations in realizability toposes; in particular it admits a nice presentation of conjunction as intersection type which is inspired by Girard's ludics.

  16. On proving syntactic properties of CPS programs

    DEFF Research Database (Denmark)

    Danvy, Olivier; Dzafic, Belmina; Pfenning, Frank

    1999-01-01

    Higher-order program transformations raise new challenges for proving properties of their output, since they resist traditional, first-order proof techniques. In this work, we consider (1) the “one-pass” continuation-passing style (CPS) transformation, which is second-order, and (2) the occurrences...

  17. A Simple CPS Transformation of Control-Flow Information

    DEFF Research Database (Denmark)

    Damian, Daniel; Danvy, Olivier

    2002-01-01

    We build on Danvy and Nielsen's first-order program transformation into continuation-passing style (CPS) to design a new CPS transformation of flow information that is simpler and more efficient than what has been presented in previous work. The key to simplicity and efficiency is that our CPS transformation…

  18. Effect of pre-processing on the physico-chemical properties of ...

    African Journals Online (AJOL)

    The findings indicated that the pre-processing treatments produced significant differences (p < 0.05) in protein (1.50 ± 0.18 g/100g) and carbohydrate (1.09 ± 0.94 g/100g) composition of the baking soda blanched milk sample. The viscosity of the baking soda blanched milk (18.91 ± 3.38 cps) was significantly higher than that ...

  19. Data preprocessing in data mining

    CERN Document Server

    García, Salvador; Herrera, Francisco

    2015-01-01

    Data Preprocessing for Data Mining addresses one of the most important issues within the well-known Knowledge Discovery from Data process. Data taken directly from the source will likely have inconsistencies and errors and, most importantly, will not be ready to be considered for a data mining process. Furthermore, the increasing amount of data in recent science, industry and business applications calls for more complex tools to analyze it. Thanks to data preprocessing, it is possible to convert the impossible into possible, adapting the data to fulfill the input demands of each data mining algorithm. Data preprocessing includes data reduction techniques, which aim at reducing the complexity of the data, detecting or removing irrelevant and noisy elements from the data. This book is intended to review the tasks that fill the gap between the data acquisition from the source and the data mining process. A comprehensive look from a practical point of view, including basic concepts and surveying t...
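
    As a concrete taste of the tasks such a book surveys, a small scikit-learn pipeline (an illustrative sketch of mine, not an example from the book) chains imputation of missing values, scaling, and a simple data-reduction step:

        import numpy as np
        from sklearn.pipeline import Pipeline
        from sklearn.impute import SimpleImputer
        from sklearn.preprocessing import StandardScaler
        from sklearn.feature_selection import VarianceThreshold

        X = np.array([[1.0, 200.0, 3.0],
                      [2.0, np.nan, 3.0],
                      [3.0, 180.0, 3.0]])        # toy data: one gap, one constant column

        prep = Pipeline([
            ("impute", SimpleImputer(strategy="mean")),   # fill missing values
            ("scale",  StandardScaler()),                 # zero mean, unit variance
            ("reduce", VarianceThreshold()),              # drop zero-variance features
        ])
        print(prep.fit_transform(X))                      # constant third column is removed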

  20. A First-Order One-Pass CPS Transformation

    DEFF Research Database (Denmark)

    Danvy, Olivier; Nielsen, Lasse Reichstein

    2001-01-01

    We present a new transformation of λ-terms into continuation-passing style (CPS). This transformation operates in one pass and is both compositional and first-order. Previous CPS transformations only enjoyed two out of the three properties of being first-order, one-pass, and compositional, but the new transformation enjoys all three properties. It is proved correct directly by structural induction over source terms instead of indirectly with a colon translation, as in Plotkin's original proof. Similarly, it makes it possible to reason about CPS-transformed terms by structural induction over source terms, directly. The new CPS transformation connects separately published approaches to the CPS transformation. It has already been used to state a new and simpler correctness proof of a direct-style transformation, and to develop a new and simpler CPS transformation of control-flow information.
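
    To illustrate what "one-pass" means, here is a compact higher-order one-pass CPS transformer over a tiny AST in Python. This sketches the style of transformation the paper improves on (its contribution is a first-order version, which this sketch is not); the AST encoding and names are my own.

        import itertools

        fresh = (f"v{i}" for i in itertools.count())   # fresh-name supply

        def trivial(t):
            # Value translation of variables and lambdas.
            if t[0] == "var":
                return t
            _, x, body = t
            k = next(fresh)
            return ("lam", x, ("lam", k, cps_tail(body, ("var", k))))

        def cps(t, k):
            # k is a *static* continuation (a Python function), so administrative
            # redexes are contracted during the single traversal.
            if t[0] in ("var", "lam"):
                return k(trivial(t))
            _, f, a = t
            v = next(fresh)
            return cps(f, lambda fv:
                   cps(a, lambda av:
                       ("app", ("app", fv, av), ("lam", v, k(("var", v))))))

        def cps_tail(t, kv):
            # Tail position: the continuation kv is already a piece of syntax.
            if t[0] in ("var", "lam"):
                return ("app", kv, trivial(t))
            _, f, a = t
            return cps(f, lambda fv:
                   cps(a, lambda av: ("app", ("app", fv, av), kv)))

        def cps_program(t):
            k0 = next(fresh)
            return ("lam", k0, cps_tail(t, ("var", k0)))

        print(cps_program(("app", ("var", "f"), ("var", "x"))))
        # -> ('lam', 'v0', ('app', ('app', ('var', 'f'), ('var', 'x')), ('var', 'v0')))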

  1. A First-Order One-Pass CPS Transformation

    DEFF Research Database (Denmark)

    Danvy, Olivier; Nielsen, Lasse Reichstein

    2003-01-01

    We present a new transformation of λ-terms into continuation-passing style (CPS). This transformation operates in one pass and is both compositional and first-order. Previous CPS transformations only enjoyed two out of the three properties of being first-order, one-pass, and compositional, but the new transformation enjoys all three properties. It is proved correct directly by structural induction over source terms instead of indirectly with a colon translation, as in Plotkin's original proof. Similarly, it makes it possible to reason about CPS-transformed terms by structural induction over source terms, directly. The new CPS transformation connects separately published approaches to the CPS transformation. It has already been used to state a new and simpler correctness proof of a direct-style transformation, and to develop a new and simpler CPS transformation of control-flow information.

  2. International Conference ML4CPS 2016

    CERN Document Server

    Niggemann, Oliver; Kühnert, Christian

    2017-01-01

    The work presents new approaches to Machine Learning for Cyber Physical Systems, along with experiences and visions. It contains selected papers from the International Conference ML4CPS – Machine Learning for Cyber Physical Systems, which was held in Karlsruhe on September 29th, 2016. Cyber Physical Systems are characterized by their ability to adapt and to learn: they analyze their environment and, based on observations, they learn patterns, correlations and predictive models. Typical applications are condition monitoring, predictive maintenance, image processing and diagnosis. Machine Learning is the key technology for these developments. The Editors: Prof. Dr.-Ing. Jürgen Beyerer is Professor at the Department for Interactive Real-Time Systems at the Karlsruhe Institute of Technology; in addition he manages the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB. Prof. Dr. Oliver Niggemann is Professor for Embedded Software Engineering. His research interests are in the field of Di...

  3. Fluid Ability (Gf) and Complex Problem Solving (CPS)

    OpenAIRE

    Patrick Kyllonen; Cristina Anguiano Carrasco; Harrison J. Kell

    2017-01-01

    Complex problem solving (CPS) has emerged over the past several decades as an important construct in education and in the workforce. We examine the relationship between CPS and general fluid ability (Gf) both conceptually and empirically. A review of definitions of the two factors, prototypical tasks, and the information processing analyses of performance on those tasks suggest considerable conceptual overlap. We review three definitions of CPS: a general definition emerging from the human pr...

  4. Compact Circuit Preprocesses Accelerometer Output

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1993-01-01

    Compact electronic circuit transfers dc power to, and preprocesses ac output of, accelerometer and associated preamplifier. Incorporated into accelerometer case during initial fabrication or retrofit onto commercial accelerometer. Made of commercial integrated circuits and other conventional components; made smaller by use of micrologic and surface-mount technology.

  5. Fluid Ability (Gf) and Complex Problem Solving (CPS)

    Directory of Open Access Journals (Sweden)

    Patrick Kyllonen

    2017-07-01

    Complex problem solving (CPS) has emerged over the past several decades as an important construct in education and in the workforce. We examine the relationship between CPS and general fluid ability (Gf) both conceptually and empirically. A review of definitions of the two factors, prototypical tasks, and the information processing analyses of performance on those tasks suggest considerable conceptual overlap. We review three definitions of CPS: a general definition emerging from the human problem solving literature; a more specialized definition from the “German School” emphasizing performance in many-variable microworlds, with high domain-knowledge requirements; and a third definition based on performance in Minimal Complex Systems (MCS), with fewer variables and reduced knowledge requirements. We find a correlation of 0.86 between expert ratings of the importance of CPS and Gf across 691 occupations in the O*NET database. We find evidence that employers value both Gf and CPS skills, but CPS skills more highly, even after controlling for the importance of domain knowledge. We suggest that this may be due to CPS requiring not just cognitive ability but additionally skill in applying that ability in domains. We suggest that a fruitful future direction is to explore the importance of domain knowledge in CPS.

  6. Effective Feature Preprocessing for Time Series Forecasting

    DEFF Research Database (Denmark)

    Zhao, Junhua; Dong, Zhaoyang; Xu, Zhao

    2006-01-01

    Time series forecasting is an important area in data mining research. Feature preprocessing techniques have significant influence on forecasting accuracy, therefore are essential in a forecasting model. Although several feature preprocessing techniques have been applied in time series forecasting … performance in time series forecasting. It is demonstrated in our experiment that effective feature preprocessing can significantly enhance forecasting accuracy. This research can be a useful guidance for researchers on effectively selecting feature preprocessing techniques and integrating them with time series forecasting models.
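
    As a minimal illustration of the kind of feature preprocessing meant here (my own toy example, not the paper's experimental setup): build lag features, then fit the scaling on the training portion only.

        import numpy as np

        def make_lag_features(series, n_lags):
            """Rows are sliding windows; the target is the value right after each window."""
            X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
            y = series[n_lags:]
            return X, y

        series = np.sin(np.arange(300) / 10.0)
        X, y = make_lag_features(series, n_lags=5)
        X_train, X_test = X[:200], X[200:]

        mu, sd = X_train.mean(axis=0), X_train.std(axis=0)   # fit on training data only,
        X_train = (X_train - mu) / sd                        # to avoid look-ahead leakage
        X_test = (X_test - mu) / sd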

  7. An Operational Foundation for Delimited Continuations in the CPS Hierarchy

    DEFF Research Database (Denmark)

    Biernacka, Malgorzata; Biernacki, Dariusz; Danvy, Olivier

    2004-01-01

    We present an abstract machine and a reduction semantics for the lambda-calculus extended with control operators that give access to delimited continuations in the CPS hierarchy. The abstract machine is derived from an evaluator in continuation-passing style (CPS); the reduction semantics (i.e., a small-step operational semantics with an explicit representation of evaluation contexts) is constructed from the abstract machine; and the control operators are the shift and reset family. We also present new applications of delimited continuations in the CPS hierarchy: finding list prefixes…

  8. An Operational Foundation for Delimited Continuations in the CPS Hierarchy

    DEFF Research Database (Denmark)

    Biernacka, Malgorzata; Biernacki, Dariusz; Danvy, Olivier

    2005-01-01

    We present an abstract machine and a reduction semantics for the lambda-calculus extended with control operators that give access to delimited continuations in the CPS hierarchy. The abstract machine is derived from an evaluator in continuation-passing style (CPS); the reduction semantics (i.e., a small-step operational semantics with an explicit representation of evaluation contexts) is constructed from the abstract machine; and the control operators are the shift and reset family. We also present new applications of delimited continuations in the CPS hierarchy: finding list prefixes…

  9. Targeting CPS1 in the treatment of Carbamoyl phosphate synthetase 1 (CPS1) deficiency, a urea cycle disorder.

    Science.gov (United States)

    Diez-Fernandez, Carmen; Häberle, Johannes

    2017-04-01

    Carbamoyl phosphate synthetase 1 (CPS1) deficiency (CPS1D) is a rare autosomal recessive urea cycle disorder (UCD), which can lead to life-threatening hyperammonemia. Unless promptly treated, it can result in encephalopathy, coma and death, or intellectual disability in surviving patients. Over recent decades, therapies for CPS1D have barely improved, leaving the management of these patients largely unchanged. Additionally, in many cases, current management (protein restriction and supplementation with citrulline and/or arginine and ammonia scavengers) is insufficient for achieving metabolic stability, highlighting the importance of developing alternative therapeutic approaches. Areas covered: After describing UCDs and CPS1D, we give an overview of the structure and function of CPS1. We then describe current management and potential novel treatments including N-carbamoyl-L-glutamate (NCG), pharmacological chaperones, and gene therapy to treat hyperammonemia. Expert opinion: Probably the first novel CPS1D therapy to reach the clinics will be the already commercial substance NCG, which is the standard treatment for N-acetylglutamate synthase deficiency and has been proven to rescue specific CPS1D mutations. Pharmacological chaperones and gene therapy are under development too, but these two technologies still have key challenges to be overcome. In addition, current experimental therapies will hopefully add further treatment options.

  10. A First-Order One-Pass CPS Transformation

    DEFF Research Database (Denmark)

    Danvy, Olivier; Nielsen, Lasse Reichstein

    2002-01-01

    We present a new transformation of call-by-value lambda terms into continuation-passing style (CPS). This transformation operates in one pass and is both compositional and first-order. Because it operates in one pass, it directly yields compact CPS programs that are comparable to what one would write by hand. Because it is compositional, it allows proofs by structural induction. Because it is first-order, reasoning about it does not require the use of a logical relation. This new CPS transformation connects two separate lines of research. It has already been used to state a new and simpler correctness proof of a direct-style transformation, and to develop a new and simpler CPS transformation of control-flow information.

  11. Resource-aware control and dynamic scheduling in CPS

    NARCIS (Netherlands)

    Heemels, W.P.M.H.

    2015-01-01

    Recent developments in computer and communication technologies are leading to an increasingly networked and wireless world. This raises new challenging questions in the context of control for cyberphysical systems (CPS), especially when the computation, communication, energy and actuation resources

  12. A hybrid intermediate language between SSA and CPS

    DEFF Research Database (Denmark)

    Torrens, Paulo; Vasconcellos, Cristiano; Gonçalves, Ju

    2017-01-01

    While static single assignment (SSA) has been used as an intermediate language for imperative compilers, and the continuation-passing style (CPS) lambda calculus has been used as an intermediate language for functional language compilers, they are (almost) equivalent and it is possible to draw syntactic translations between them. This short paper aims to present an untyped intermediate language which may be interpreted as both SSA and CPS, in order to provide a common language for both imperative and functional compilers, as well as to take advantage of optimizations designed for either one of the approaches. Finally, potential variants and research opportunities are discussed.

  13. Space power subsystem sizing

    International Nuclear Information System (INIS)

    Geis, J.W.

    1992-01-01

    This paper discusses a Space Power Subsystem Sizing program which has been developed by the Aerospace Power Division of Wright Laboratory, Wright-Patterson Air Force Base, Ohio. The Space Power Subsystem program (SPSS) contains the necessary equations and algorithms to calculate photovoltaic array power performance, including end-of-life (EOL) and beginning-of-life (BOL) specific power (W/kg) and areal power density (W/m²). Additional equations and algorithms are included in the spreadsheet for determining maximum eclipse time as a function of orbital altitude and inclination. The Space Power Subsystem Sizing program (SPSS) has been used to determine the performance of several candidate power subsystems for both Air Force and SDIO potential applications. Trade-offs have been made between subsystem weight and areal power density (W/m²) as influenced by orbital high energy particle flux and time in orbit.
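
    The eclipse calculation mentioned can be illustrated with the common first-order model (a cylindrical Earth shadow and a circular orbit in the worst-case geometry). This is a generic textbook approximation, not the SPSS spreadsheet itself:

        import math

        MU_EARTH = 398600.4418   # km^3/s^2
        R_EARTH = 6378.137       # km

        def max_eclipse_minutes(alt_km):
            r = R_EARTH + alt_km
            period_s = 2 * math.pi * math.sqrt(r ** 3 / MU_EARTH)
            half_angle = math.asin(R_EARTH / r)      # shadow half-angle seen from orbit
            return period_s * (half_angle / math.pi) / 60.0

        print(round(max_eclipse_minutes(500.0), 1))  # ~35.8 min per orbit in LEO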

  14. Virtual Quantum Subsystems

    International Nuclear Information System (INIS)

    Zanardi, Paolo

    2001-01-01

    The physical resources available to access and manipulate the degrees of freedom of a quantum system define the set A of operationally relevant observables. The algebraic structure of A selects a preferred tensor product structure, i.e., a partition into subsystems. The notion of compoundness for quantum systems is accordingly relativized. Universal control over virtual subsystems can be achieved by using quantum noncommutative holonomies

  15. CPW to CPS transition for feeding UWB antennas

    DEFF Research Database (Denmark)

    Butrym, Alexander; Pivnenko, Sergey

    2004-01-01

    The paper considers a transition (balun) from Coplanar Waveguide (CPW) to Coplanar Stripline (CPS) which is non-resonant and suitable for feeding UWB antennas such as Tapered Slot Antennas (Vivaldi antennas in particular), bow-tie antennas, and others. Some numerical and experimental results…

  16. CPW to CPS transition for feeding UWB antennas

    DEFF Research Database (Denmark)

    Butrym, Alexander; Pivnenko, Sergey

    2006-01-01

    The paper considers a transition (balun) from Coplanar Waveguide (CPW) to Coplanar Stripline (CPS) which is non-resonant and suitable for feeding UWB antennas such as Tapered Slot Antennas (Vivaldi antennas, in particular), bow-tie antennas, and others. Some numerical and experimental results…

  17. SecureCPS: Defending a nanosatellite cyber-physical system

    Science.gov (United States)

    Forbes, Lance; Vu, Huy; Udrea, Bogdan; Hagar, Hamilton; Koutsoukos, Xenofon D.; Yampolskiy, Mark

    2014-06-01

    Recent inexpensive nanosatellite designs employ maneuvering thrusters, much as large satellites have done for decades. However, because a maneuvering nanosatellite can threaten HVAs on-orbit, it must provide a level of security typically reserved for HVAs. Securing nanosatellites with maneuvering capability is challenging due to extreme cost, size, and power constraints. While still in the design process, our low-cost SecureCPS architecture promises to dramatically improve security, to include preempting unknown binaries and detecting abnormal behavior. SecureCPS also applies to a broad class of cyber-physical systems (CPS), such as aircraft, cars, and trains. This paper focuses on Embry-Riddle's ARAPAIMA nanosatellite architecture, where we assume any off-the-shelf component could be compromised by a supply chain attack. Based on these assumptions, we have used Vanderbilt's Cyber-Physical Attack Description Language (CP-ADL) to represent realistic attacks, analyze how these attacks propagate in the ARAPAIMA architecture, and how to defeat them using the combination of a low-cost Root of Trust (RoT) Module, Global InfoTek's Advanced Malware Analysis System (GAMAS), and Anomaly Detection by Machine Learning (ADML). Our most recent efforts focus on refining and validating the design of SecureCPS.

  18. CPS Transformation of Flow Information, Part II: Administrative Reductions

    DEFF Research Database (Denmark)

    Damian, Daniel; Danvy, Olivier

    2001-01-01

    We show that administrative reductions preserve the least solution. Preservation of least solutions solves a problem that was left open in Palsberg and Wand's article ‘CPS Transformation of Flow Information.’ Together, Palsberg and Wand's article and the present article show how to map in linear time the least solution of the flow constraints…

  19. Retinal Image Preprocessing: Background and Noise Segmentation

    Directory of Open Access Journals (Sweden)

    Usman Akram

    2012-09-01

    Retinal images are used for the automated screening and diagnosis of diabetic retinopathy. The retinal image quality must be improved for the detection of features and abnormalities and for this purpose preprocessing of retinal images is vital. In this paper, we present a novel automated approach for preprocessing of colored retinal images. The proposed technique improves the quality of input retinal image by separating the background and noisy area from the overall image. It contains coarse segmentation and fine segmentation. Standard retinal images databases Diaretdb0, Diaretdb1, DRIVE and STARE are used to test the validation of our preprocessing technique. The experimental results show the validity of proposed preprocessing technique.

  20. Facilitating Watermark Insertion by Preprocessing Media

    Directory of Open Access Journals (Sweden)

    Matt L. Miller

    2004-10-01

    There are several watermarking applications that require the deployment of a very large number of watermark embedders. These applications often have severe budgetary constraints that limit the computation resources that are available. Under these circumstances, only simple embedding algorithms can be deployed, which have limited performance. In order to improve performance, we propose preprocessing the original media. It is envisaged that this preprocessing occurs during content creation and has no budgetary or computational constraints. Preprocessing combined with simple embedding creates a watermarked Work, the performance of which exceeds that of simple embedding alone. However, this performance improvement is obtained without any increase in the computational complexity of the embedder. Rather, the additional computational burden is shifted to the preprocessing stage. A simple example of this procedure is described and experimental results confirm our assertions.

  1. Role of Ontologies for CPS Implementation in Manufacturing

    Directory of Open Access Journals (Sweden)

    Garetti Marco

    2015-12-01

    Cyber Physical Systems are an evolution of embedded systems featuring a tight combination of collaborating computational elements that control physical entities. CPSs promise a great potential for innovation in many areas, including manufacturing and production. This is because we obtain a very powerful, flexible, modular infrastructure allowing easy (re)configurability and fast ramp-up of manufacturing applications by building a manufacturing system with modular mechatronic components (for machining, transportation and storage) and embedded intelligence, and by integrating them into a system through a network connection. However, when building such architectures, the way to supply the needed domain knowledge to real manufacturing applications arises as a problem to solve. In fact, a CPS-based architecture for manufacturing is made of smart but independent manufacturing components without any knowledge of the role they have to play together in the real world of manufacturing applications. Ontologies can supply such knowledge, playing a very important role in CPS for manufacturing. The paper deals with this intriguing theme, also presenting an implementation of this approach in a research project for the open automation of manufacturing systems, in which the power of CPS is complemented by the support of an ontology of the manufacturing domain.

  2. Spacecraft Design Thermal Control Subsystem

    Science.gov (United States)

    Miyake, Robert N.

    2008-01-01

    The Thermal Control Subsystem engineers task is to maintain the temperature of all spacecraft components, subsystems, and the total flight system within specified limits for all flight modes from launch to end-of-mission. In some cases, specific stability and gradient temperature limits will be imposed on flight system elements. The Thermal Control Subsystem of "normal" flight systems, the mass, power, control, and sensing systems mass and power requirements are below 10% of the total flight system resources. In general the thermal control subsystem engineer is involved in all other flight subsystem designs.

  3. Practical Secure Computation with Pre-Processing

    DEFF Research Database (Denmark)

    Zakarias, Rasmus Winther

    Secure Multiparty Computation has been divided between protocols best suited for binary circuits and protocols best suited for arithmetic circuits. With their MiniMac protocol in [DZ13], Damgård and Zakarias take an important step towards bridging these worlds with an arithmetic protocol tuned … space for pre-processing material than computing the non-linear parts online (depending on the quality of the circuit, of course). Surprisingly, even for our optimized AES-circuit this is not the case. We further improve the design of the pre-processing material and end up with only 10 megabytes of pre… a protocol for small field arithmetic to do fast large integer multiplications. This is achieved by devising pre-processing material that allows the Toom-Cook multiplication algorithm to run between the parties with linear communication complexity. With this result, computation on the CPU by the parties…
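
    The Toom-Cook family referred to splits each operand and trades one full-size multiplication for several smaller ones; its simplest instance (Toom-2, i.e. Karatsuba) is easy to sketch in Python. This is shown only to illustrate the multiplication algorithm itself, not the MPC protocol built on top of it:

        def karatsuba(x, y):
            """Toom-2 multiplication: 3 half-size products instead of 4."""
            if x < 2**32 or y < 2**32:           # small operands: multiply directly
                return x * y
            m = min(x.bit_length(), y.bit_length()) // 2
            xh, xl = x >> m, x & ((1 << m) - 1)  # split x = xh*2^m + xl
            yh, yl = y >> m, y & ((1 << m) - 1)
            a = karatsuba(xh, yh)
            b = karatsuba(xl, yl)
            c = karatsuba(xh + xl, yh + yl) - a - b
            return (a << (2 * m)) + (c << m) + b

        assert karatsuba(123456789123456789, 987654321987654321) == \
               123456789123456789 * 987654321987654321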

  4. Environmental Control Subsystem Development

    Science.gov (United States)

    Laidlaw, Jacob; Zelik, Jonathan

    2017-01-01

    Kennedy Space Center's Launch Pad 39B, part of Launch Complex 39, is currently undergoing construction to prepare it for NASA's Space Launch System missions. The Environmental Control Subsystem, which provides the vehicle with an air or nitrogen gas environment, required development of its local and remote display screens. The remote displays, developed by NASA contractors and previous interns, were developed without complete functionality; the remote displays were revised, adding functionality to over 90 displays. For the local displays, multiple test procedures were developed to assess the functionality of the screens, as well as verify requirements. One local display screen was also developed.

  5. The 1989 ENDF pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.; McLaughlin, P.K.

    1989-12-01

    This document summarizes the 1989 version of the ENDF pre-processing codes which are required for processing evaluated nuclear data coded in the format ENDF-4, ENDF-5, or ENDF-6. The codes are available from the IAEA Nuclear Data Section, free of charge upon request. (author)

  6. Preprocessing Moist Lignocellulosic Biomass for Biorefinery Feedstocks

    Energy Technology Data Exchange (ETDEWEB)

    Neal Yancey; Christopher T. Wright; Craig Conner; J. Richard Hess

    2009-06-01

    Biomass preprocessing is one of the primary operations in the feedstock assembly system of a lignocellulosic biorefinery. Preprocessing is generally accomplished using industrial grinders to format biomass materials into a suitable biorefinery feedstock for conversion to ethanol and other bioproducts. Many factors affect machine efficiency and the physical characteristics of preprocessed biomass. For example, moisture content of the biomass as received from the point of production has a significant impact on overall system efficiency and can significantly affect the characteristics (particle size distribution, flowability, storability, etc.) of the size-reduced biomass. Many different grinder configurations are available on the market, each with advantages under specific conditions. Ultimately, the capacity and/or efficiency of the grinding process can be enhanced by selecting the grinder configuration that optimizes grinder performance based on moisture content and screen size. This paper discusses the relationships of biomass moisture with respect to preprocessing system performance and product physical characteristics and compares data obtained on corn stover, switchgrass, and wheat straw as model feedstocks during Vermeer HG 200 grinder testing. During the tests, grinder screen configuration and biomass moisture content were varied and tested to provide a better understanding of their relative impact on machine performance and the resulting feedstock physical characteristics and uniformity relative to each crop tested.

  7. Evolution of magnetic disk subsystems

    Science.gov (United States)

    Kaneko, Satoru

    1994-06-01

    The higher recording density of magnetic disks realized today has brought larger storage capacity per unit and smaller form factors. If the required access performance per MB is constant, the performance of large subsystems has to be several times better. This article mainly describes the technology for improving the performance of magnetic disk subsystems and the prospects for their future evolution. Also considered are 'crosscall pathing', which makes the data transfer channel more effective; 'disk cache', which improves performance by coupling with solid state memory technology; and 'RAID', which improves the availability and integrity of disk subsystems by organizing multiple disk drives in a subsystem. As a result, it is concluded that since the performance of the subsystem is dominated by that of the disk cache, maximizing the performance of the disk cache subsystems is very important.

  8. Regional transmission subsystem planning

    Energy Technology Data Exchange (ETDEWEB)

    Costa Bortoni, Edson da [Quadrante Softwares Especializados Ltda., Itajuba, MG (Brazil); Bajay, Sergio Valdir; Barros Correia, Paulo de [Universidade Estadual de Campinas, SP (Brazil). Faculdade de Engenharia Mecanica; Santos, Afonso Henriques Moreira; Haddad, Jamil [Escola Federal de Engenharia de Itajuba, MG (Brazil)

    1994-12-31

    This work presents an approach for the planning of transmission systems by employing mixed-integer linear programming to obtain a system optimized for cost and operating characteristics. The voltage loop equations are written in a modified form so that, at the end of the analysis, the model behaves as a DC power flow, with the help of the two Kirchhoff's laws, eliminating the need for interaction with an external power flow program for analysis of line loading. The model considers the occurrence of contingencies, so that the final result is a network robust to the most severe contingencies. This whole technique is adapted to regional electric power transmission subsystems. (author) 9 refs., 4 figs.
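
    The "behaves as a DC power flow" remark can be made concrete with a three-bus toy example (hypothetical reactances and injections, not the paper's data): fixing the slack angle to zero, the remaining bus angles solve a linear system and line flows follow from angle differences.

        import numpy as np

        # Line susceptances b = 1/x for lines 0-1, 0-2, 1-2 (per-unit, toy data)
        b01, b02, b12 = 1 / 0.1, 1 / 0.2, 1 / 0.25

        # Reduced susceptance matrix over buses 1 and 2 (bus 0 is the slack, theta0 = 0)
        B = np.array([[b01 + b12, -b12],
                      [-b12, b02 + b12]])
        P = np.array([-1.0, 0.4])                 # net injections at buses 1 and 2 (p.u.)

        th1, th2 = np.linalg.solve(B, P)          # DC power flow: B * theta = P
        print("flow 0->1:", (0.0 - th1) * b01)    # line flows from angle differences
        print("flow 1->2:", (th1 - th2) * b12)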

  9. Isolation and identification of citrus psorosis virus Egyptian isolate (CPsV-EG).

    Science.gov (United States)

    Ghazal, S A; El-Dougdoug, Kh A; Mousa, A A; Fahmy, H; Sofy, A R

    2008-01-01

    Citrus psorosis ophiovirus (CPsV) is considered one of the most serious and detrimental virus pathogens of citrus species trees in Egypt. CPsV-EG was isolated from infected grapefruit (C. paradisi Macf.) at the Agricultural Research Centre (ARC). The grapefruit used for the CPsV-EG isolate was found to be free from CTV, CEVd and Spiroplasma citri, as it gave negative results with DTBIA, tissue print hybridization and Diene's stain, respectively. CPsV-EG was detected on the basis of biological indexing by graft inoculation, which gave an oak leaf pattern (OLP) on Dweet tangor, and by serological assay with DAS-ELISA using a Mab specific to CPsV. CPsV-EG reacted with variable responses on 16 host plants belonging to 6 families. Only 8 host plants were susceptible and showed visible external symptoms, which appeared as local, systemic, and local followed by systemic infections. The CPsV-EG isolate was transmitted from infected citrus to citrus by syringe and grafting, and to herbaceous plants by forefinger inoculation and syringe. The woody indicators and rootstocks differed in their response to the CPsV-EG isolate, showing no response, response, sensitivity or hypersensitivity. The serological characters, represented as the antigenic determinants of the CPsV-EG isolate, were related to monoclonal antibodies specific to the CPsV strain, giving precipitation reactions in DAS-ELISA and DTBIA. The partial fragment of RNA3 (coat protein gene) of CPsV-EG (~1140 bp and ~571 bp) was amplified by reverse transcription-polymerase chain reaction (RT-PCR) from grapefruit tissues using two sets of primers specific to CPsV (CPV3 and CPV4, and PS66 and PS65, respectively). The virus under study was identified as the CPsV-EG isolate according to biological, serological and molecular characters.

  10. Reliable RANSAC Using a Novel Preprocessing Model

    Directory of Open Access Journals (Sweden)

    Xiaoyan Wang

    2013-01-01

    Geometric assumption and verification with RANSAC has become a crucial step for matching local features due to its wide applications in biomedical feature analysis and vision computing. However, conventional RANSAC is very time-consuming due to redundant sampling times, especially when dealing with the case of numerous matching pairs. This paper presents a novel preprocessing model to explore a reduced set with reliable correspondences from the initial matching dataset. Both geometric model generation and verification are carried out on this reduced set, which leads to considerable speedups. Afterwards, this paper proposes a reliable RANSAC framework using the preprocessing model, which was implemented and verified using Harris and SIFT features, respectively. Compared with traditional RANSAC, experimental results show that our method is more efficient.
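
    For orientation, here is a minimal RANSAC for 2-D line fitting (a generic sketch of mine, not the paper's preprocessing model). The paper's idea is to shrink the initial matching set before this sampling loop runs, so far fewer iterations are wasted on outliers.

        import numpy as np

        def ransac_line(pts, n_iter=200, tol=0.05, seed=0):
            """Fit y = a*x + b by repeated 2-point sampling; keep the model
            with the most inliers."""
            rng = np.random.default_rng(seed)
            best, best_count = None, 0
            for _ in range(n_iter):
                i, j = rng.choice(len(pts), size=2, replace=False)
                (x1, y1), (x2, y2) = pts[i], pts[j]
                if x1 == x2:
                    continue                      # degenerate sample
                a = (y2 - y1) / (x2 - x1)
                b = y1 - a * x1
                count = int(np.sum(np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < tol))
                if count > best_count:
                    best, best_count = (a, b), count
            return best, best_count

        xs = np.linspace(0.0, 1.0, 100)
        noise = 0.01 * np.random.default_rng(1).standard_normal(100)
        pts = np.column_stack([xs, 2 * xs + 1 + noise])
        pts[::10, 1] += 3.0                       # inject gross outliers
        print(ransac_line(pts))                   # roughly ((2.0, 1.0), ~90 inliers)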

  11. The 1996 ENDF pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1996-01-01

    The codes are named 'the Pre-processing' codes, because they are designed to pre-process ENDF/B data, for later, further processing for use in applications. This is a modular set of computer codes, each of which reads and writes evaluated nuclear data in the ENDF/B format. Each code performs one or more independent operations on the data, as described below. These codes are designed to be computer independent, and are presently operational on every type of computer from large mainframe computer to small personal computers, such as IBM-PC and Power MAC. The codes are available from the IAEA Nuclear Data Section, free of charge upon request. (author)

  12. Syntactic accidents in program analysis: on the impact of the CPS transformation

    DEFF Research Database (Denmark)

    Damian, Daniel; Danvy, Olivier

    2003-01-01

    We show that a non-duplicating transformation into Continuation-Passing Style (CPS) has no effect on control-flow analysis, a positive effect on binding-time analysis for traditional partial evaluation, and no effect on binding-time analysis for continuation-based partial evaluation: a monovariant control-flow analysis yields equivalent results on a direct-style program and on its CPS counterpart, a monovariant binding-time analysis yields less precise results on a direct-style program than on its CPS counterpart, and an enhanced monovariant binding-time analysis yields equivalent results on a direct-style program and on its CPS counterpart. Our proof technique amounts to constructing the CPS counterpart of flow information and of binding times. Our results formalize and confirm a folklore theorem about traditional binding-time analysis, namely that CPS has a positive effect on binding times…

  13. Syntactic Accidents in Program Analysis: On the Impact of the CPS Transformation

    DEFF Research Database (Denmark)

    Damian, Daniel; Danvy, Olivier

    2000-01-01

    We show that a non-duplicating transformation into Continuation-Passing Style (CPS) has no effect on control-flow analysis, a positive effect on binding-time analysis for traditional partial evaluation, and no effect on binding-time analysis for continuation-based partial evaluation: a monovariant control-flow analysis yields equivalent results on a direct-style program and on its CPS counterpart, a monovariant binding-time analysis yields less precise results on a direct-style program than on its CPS counterpart, and an enhanced monovariant binding-time analysis yields equivalent results on a direct-style program and on its CPS counterpart. Our proof technique amounts to constructing the CPS counterpart of flow information and of binding times. Our results formalize and confirm a folklore theorem about traditional binding-time analysis, namely that CPS has a positive effect on binding times…

  14. Boosting reversible pushdown machines by preprocessing

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock; Kutrib, Martin; Malcher, Andreas

    2016-01-01

    … languages, whereas for reversible pushdown automata the accepted family of languages lies strictly in between the reversible deterministic context-free languages and the real-time deterministic context-free languages. Moreover, it is shown that the computational power of both types of machines is not changed by allowing the preprocessing sequential transducer to work irreversibly. Finally, we examine the closure properties of the family of languages accepted by such machines.

  15. The Classroom Process Scale (CPS): An Approach to the Measurement of Teaching Effectiveness.

    Science.gov (United States)

    Anderson, Lorin W.; Scott, Corinne C.

    The purpose of this presentation is to describe the Classroom Process Scale (CPS) and its usefulness for the assessment of teaching effectiveness. The CPS attempts to ameliorate weaknesses in existing classroom process measures by including a coding of student involvement in learning, objectives being pursued, and methods used to pursue attainment…

  16. MMI design of K-CPS for preventing human errors and enhancing convenient operation

    International Nuclear Information System (INIS)

    Sung, Chan Ho; Jung, Yeon Sub; Oh, Eoung Se; Shin, Young Chul; Lee, Yong Kwan

    2001-01-01

    In order to compensate for the shortcomings of paper procedures, reduce human errors and enable convenient operation, computer-based procedure systems are being developed. A CPS (Computerized Procedure System) incorporating human-factors engineering design concepts for the KNGR (Korean Next Generation Reactor) has also been developed with the same objective. K-CPS (KNGR CPS) has a higher level of automation than paper procedures. It is fully integrated with control and monitoring systems. Combining statements with relevant components that change dynamically according to plant status enhances the readability of the procedure. This paper presents general design criteria for computer-based procedure systems, the MMI design characteristics of K-CPS, and the results of a suitability evaluation of K-CPS by operators.

  17. Phylogenetic distribution and membrane topology of the LytR-CpsA-Psr protein family

    Directory of Open Access Journals (Sweden)

    Berger-Bächi Brigitte

    2008-12-01

    Abstract. Background: The bacterial cell wall is the target of many antibiotics and cell envelope constituents are critical to host-pathogen interactions. To combat resistance development and virulence, a detailed knowledge of the individual factors involved is essential. Members of the LytR-CpsA-Psr family of cell envelope-associated attenuators are relevant for β-lactam resistance, biofilm formation, and stress tolerance, and they are suggested to play a role in cell wall maintenance. However, their precise function is still unknown. This study addresses the occurrence as well as sequence-based characteristics of the LytR-CpsA-Psr proteins. Results: A comprehensive list of LytR-CpsA-Psr proteins was established, and their phylogenetic distribution and clustering into subgroups was determined. LytR-CpsA-Psr proteins were present in all Gram-positive organisms, except for the cell wall-deficient Mollicutes and one strain of the Clostridiales. In contrast, the majority of Gram-negatives did not contain LytR-CpsA-Psr family members. Despite high sequence divergence, the LytR-CpsA-Psr domains of different subclusters shared a highly similar, predicted mixed α/β-structure, and conserved charged residues. PhoA fusion experiments, using MsrR of Staphylococcus aureus, confirmed membrane topology predictions and the extracellular location of its LytR-CpsA-Psr domain. Conclusion: The LytR-CpsA-Psr domain is unique to bacteria. The presence of diverse subgroups within the LytR-CpsA-Psr family might indicate functional differences, and could explain variations in phenotypes of respective mutants reported. The identified conserved structural elements and amino acids are likely to be important for the function of the domain and will help to guide future studies of the LytR-CpsA-Psr proteins.

  18. Spherical subsystem of galactic radiosources

    Energy Technology Data Exchange (ETDEWEB)

    Gorshkov, A G; Popov, M V [Moskovskij Gosudarstvennyj Univ. (USSR). Gosudarstvennyj Astronomicheskij Inst. ' ' GAISh' '

    1975-05-01

    The concentration towards the Galactic centre of a statistically complete sample of radio sources with flat spectra from the Ohio survey has been discovered. Quantitative calculations have shown that the sources form a spherical subsystem, close in its parameters to such old formations in the Galaxy as globular clusters and RR Lyrae type stars. The luminosity of an object of the galactic spherical subsystem equals 10^33 erg/s, the total number of objects being 7000. The existence of such a subsystem explains the anomalously low slope of the lg N - lg S statistics in the high-frequency PKS survey (2700 MHz) and the Michigan University survey (8000 MHz), because the sources of the galactic spherical subsystem make up a considerable share of the total number of sources, especially at high frequencies (50% of sources with a flux greater than one flux unit at 8000 MHz). It is very probable that the given subsystem consists of representatives of one of the following classes of objects: a) thermal sources, i.e. HII regions with T = 10^4 K, N_e = 10^3, l = 1 pc; b) supermassive black holes with mass M/Mo of approximately 10^5.

  19. Space power subsystem automation technology

    Science.gov (United States)

    Graves, J. R. (Compiler)

    1982-01-01

    The technology issues involved in power subsystem automation and the reasonable objectives to be sought in such a program were discussed. The complexities, uncertainties, and alternatives of power subsystem automation, along with the advantages from both an economic and a technological perspective were considered. Whereas most spacecraft power subsystems now use certain automated functions, the idea of complete autonomy for long periods of time is almost inconceivable. Thus, it seems prudent that the technology program for power subsystem automation be based upon a growth scenario which should provide a structured framework of deliberate steps to enable the evolution of space power subsystems from the current practice of limited autonomy to a greater use of automation with each step being justified on a cost/benefit basis. Each accomplishment should move toward the objectives of decreased requirement for ground control, increased system reliability through onboard management, and ultimately lower energy cost through longer life systems that require fewer resources to operate and maintain. This approach seems well-suited to the evolution of more sophisticated algorithms and eventually perhaps even the use of some sort of artificial intelligence. Multi-hundred kilowatt systems of the future will probably require an advanced level of autonomy if they are to be affordable and manageable.

  20. The 1992 ENDF Pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1992-01-01

    This document summarizes the 1992 version of the ENDF pre-processing codes which are required for processing evaluated nuclear data coded in the format ENDF-4, ENDF-5, or ENDF-6. Included are the codes CONVERT, MERGER, LINEAR, RECENT, SIGMA1, LEGEND, FIXUP, GROUPIE, DICTION, MIXER, VIRGIN, COMPLOT, EVALPLOT and RELABEL. Some of the functions of these codes are: to calculate cross-sections from resonance parameters; to calculate angular distributions, group averages, mixtures of cross-sections, etc.; and to produce graphical plots and data comparisons. The codes are designed to operate on virtually any type of computer, including PCs. They are available from the IAEA Nuclear Data Section, free of charge upon request, on magnetic tape or a set of HD diskettes. (author)

  1. Block storage subsystem performance analysis

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    You feel that your service is slow because of the storage subsystem? But there are too many abstraction layers between your software and the raw block device for you to debug all this pile... Let's dive down to the platters and check out how the block storage sees your I/Os! We can even figure out what those patterns mean.
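
    The cheapest way to see what the block layer sees is to sample the kernel's own I/O counters. The sketch below is an illustrative addition in the spirit of the talk, not material from it; it assumes a Linux host and a device named "sda" (both are assumptions) and reports the I/O rates recorded in /proc/diskstats over a one-second window.

        # Sketch: sample /proc/diskstats twice and report the I/O rates the
        # block layer observed in between. Linux-only; "sda" is an assumption.
        import time

        def read_diskstats(device):
            """Return (reads, sectors_read, writes, sectors_written) for a device."""
            with open("/proc/diskstats") as f:
                for line in f:
                    fields = line.split()
                    if fields[2] == device:
                        # Field layout per the kernel's iostats documentation.
                        return (int(fields[3]), int(fields[5]),
                                int(fields[7]), int(fields[9]))
            raise ValueError(f"device {device!r} not found")

        def sample_io(device="sda", interval=1.0):
            r0, sr0, w0, sw0 = read_diskstats(device)
            time.sleep(interval)
            r1, sr1, w1, sw1 = read_diskstats(device)
            print(f"{device}: {(r1 - r0) / interval:.0f} reads/s, "
                  f"{(w1 - w0) / interval:.0f} writes/s, "
                  f"{(sr1 - sr0) * 512 / interval / 1e6:.1f} MB/s read, "
                  f"{(sw1 - sw0) * 512 / interval / 1e6:.1f} MB/s written")

        if __name__ == "__main__":
            sample_io()

    Sampled at a regular interval, these counters already reveal the read/write mix and burstiness of a workload before reaching for heavier tracing tools.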

  2. The Effect of Preprocessing on Arabic Document Categorization

    Directory of Open Access Journals (Sweden)

    Abdullah Ayedh

    2016-04-01

    Full Text Available Preprocessing is one of the main components in a conventional document categorization (DC) framework. This paper aims to highlight the effect of preprocessing tasks on the efficiency of an Arabic DC system. In this study, three classification techniques are used, namely, naive Bayes (NB), k-nearest neighbor (KNN), and support vector machine (SVM). Experimental analysis on Arabic datasets reveals that preprocessing techniques have a significant impact on classification accuracy, especially given the complicated morphological structure of the Arabic language. Choosing appropriate combinations of preprocessing tasks provides significant improvement in the accuracy of document categorization depending on the feature size and classification technique. The findings of this study show that the SVM technique outperformed the KNN and NB techniques. The SVM technique achieved a micro-F1 value of 96.74% using the combination of normalization and stemming as preprocessing tasks.
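
    To make the reported pipeline concrete, the following sketch (an illustration, not the authors' code) applies a light Arabic normalization step and trains a linear SVM on TF-IDF features with scikit-learn. The two-document corpus is hypothetical, and the stemming step used in the paper is omitted for brevity.

        # Sketch of a normalization + SVM pipeline for Arabic DC.
        import re
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        def normalize_arabic(text):
            text = re.sub(r"[\u064B-\u0652]", "", text)              # strip diacritics
            text = re.sub(r"[\u0622\u0623\u0625]", "\u0627", text)   # unify alef forms
            text = text.replace("\u0629", "\u0647")                  # taa marbuta -> haa
            return text.replace("\u0649", "\u064A")                  # alef maqsura -> yaa

        docs = ["فاز الفريق في المباراة أمس", "ارتفعت أسعار النفط اليوم"]  # toy corpus
        labels = ["sports", "economy"]

        clf = make_pipeline(TfidfVectorizer(preprocessor=normalize_arabic), LinearSVC())
        clf.fit(docs, labels)
        print(clf.predict(["هبطت أسعار الذهب"]))   # -> ['economy'] (toy example)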

  3. SVM-Based Dynamic Reconfiguration CPS for Manufacturing System in Industry 4.0

    Directory of Open Access Journals (Sweden)

    Hyun-Jun Shin

    2018-01-01

    Full Text Available CPS has potential applications in various fields, such as medicine, healthcare, energy, transportation, and defense, as well as Industry 4.0 in Germany. Although studies on equipment aging and problem prediction have been carried out by combining CPS with Industry 4.0, such studies are small in number and the majority of papers have focused primarily on CPS methodology. Therefore, it is necessary to study active self-protection to enable self-management functions, such as self-healing, by applying CPS on the shop floor. In this paper, we propose a model of the shop floor and a dynamically reconfigurable CPS scheme that can predict the occurrence of anomalies and provide self-protection within the model. For this purpose, SVM is used as the machine learning technique, which makes it possible to restrain overloading in the manufacturing process. In addition, we design a machine-learning-based CPS framework for Industry 4.0, simulate it, and evaluate its performance. Simulation results show that the model autonomously detects abnormal situations and is dynamically reconfigured through self-healing.
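
    A minimal sketch of the detection idea follows, under the assumption that overload shows up as outliers in sensor readings; the feature names and values are invented, and the paper's actual SVM formulation may differ. A one-class SVM is trained on normal shop-floor data and flags deviating readings so that a self-protection action could be triggered.

        # Sketch: one-class SVM flagging overload-like anomalies.
        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(0)
        # Normal operation: [spindle_load_%, temperature_C] near nominal values.
        normal = rng.normal(loc=[55.0, 40.0], scale=[5.0, 2.0], size=(500, 2))

        scaler = StandardScaler().fit(normal)
        model = OneClassSVM(kernel="rbf", nu=0.01).fit(scaler.transform(normal))

        # New readings: the second simulates an overload condition.
        readings = np.array([[57.0, 41.0], [95.0, 62.0]])
        for x, flag in zip(readings, model.predict(scaler.transform(readings))):
            print(x, "ANOMALY -> trigger self-protection" if flag == -1 else "ok")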

  4. CSS Preprocessing: Tools and Automation Techniques

    Directory of Open Access Journals (Sweden)

    Ricardo Queirós

    2018-01-01

    Full Text Available Cascading Style Sheets (CSS) is a W3C specification for a style sheet language used for describing the presentation of a document written in a markup language, more precisely, for styling Web documents. However, in the last few years, the landscape for CSS development has changed dramatically with the appearance of several languages and tools aiming to help developers build clean, modular and performance-aware CSS. These new approaches give developers mechanisms to preprocess CSS rules through the use of programming constructs, defined as CSS preprocessors, with the ultimate goal of bringing those missing constructs to the CSS realm and fostering structured programming of stylesheets. At the same time, a new set of tools appeared, defined as postprocessors, for extension and automation purposes, covering a broad set of features ranging from identifying unused and duplicate code to applying vendor prefixes. With all these tools and techniques in hand, developers need a consistent workflow to foster CSS modular coding. This paper aims to present an introductory survey on CSS processors. The survey gathers information on a specific set of processors, categorizes them and compares their features regarding a set of predefined criteria such as maturity, coverage and performance. Finally, we propose a basic set of best practices in order to set up a simple and pragmatic styling code workflow.
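
    As a toy illustration of what a preprocessor does (this is neither Sass nor Less, just a sketch of the idea), the snippet below resolves variables, one of the programming constructs mentioned above, into plain CSS.

        # Toy "CSS preprocessor": resolve $variables into plain CSS.
        import re

        SOURCE = """
        $brand: #3bb3c3;
        $pad: 12px;
        .button { color: $brand; padding: $pad; }
        """

        def compile_vars(src):
            variables = dict(re.findall(r"\$([\w-]+):\s*([^;]+);", src))
            body = re.sub(r"\$[\w-]+:\s*[^;]+;\s*", "", src)   # drop declarations
            return re.sub(r"\$([\w-]+)", lambda m: variables[m.group(1)], body)

        print(compile_vars(SOURCE))
        # .button { color: #3bb3c3; padding: 12px; }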

  5. MITS Data Acquisition Subsystem Acceptance Test procedure

    International Nuclear Information System (INIS)

    Allison, R.

    1980-01-01

    This is an acceptance procedure for the Data Acquisition Subsystem of the Machine Interface Test System (MITS). Prerequisites, requirements, and detailed step-by-step instructions are presented for inspecting and performance testing the subsystem.

  6. Gravity gradient preprocessing at the GOCE HPF

    Science.gov (United States)

    Bouman, J.; Rispens, S.; Gruber, T.; Schrama, E.; Visser, P.; Tscherning, C. C.; Veicherts, M.

    2009-04-01

    One of the products derived from the GOCE observations are the gravity gradients. These gravity gradients are provided in the Gradiometer Reference Frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. In order to use these gravity gradients for application in Earth sciences and gravity field analysis, additional pre-processing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information and error assessment. The temporal gravity gradient corrections consist of tidal and non-tidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low degree gravity field model as well as gravity gradient scale factors. Both methods allow estimation of the gravity gradient scale factors down to the 10^-3 level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10^-2 level with this method.
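
    The outlier screening step described above translates almost directly into code. The sketch below is an illustrative reconstruction, assuming a one-dimensional series of gradient samples: a local median is subtracted to compensate for the 1/f error, and points are flagged against a threshold of several (robustly scaled) median absolute deviations.

        # Sketch: local-median high-pass + MAD-based outlier screening.
        import numpy as np

        def screen_outliers(values, window=51, k=5.0):
            values = np.asarray(values, dtype=float)
            half = window // 2
            padded = np.pad(values, half, mode="edge")
            local_median = np.array([np.median(padded[i:i + window])
                                     for i in range(values.size)])
            residual = values - local_median        # local median acts as high-pass
            mad = np.median(np.abs(residual - np.median(residual)))
            sigma = 1.4826 * mad                    # MAD -> robust sigma estimate
            return np.abs(residual) > k * sigma     # boolean outlier mask

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 20.0, 2000)
        signal = np.sin(t) + rng.normal(scale=0.05, size=t.size)  # drift + noise
        signal[777] += 1.0                                        # injected spike
        print(np.flatnonzero(screen_outliers(signal)))            # -> [777]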

  7. SLAE–CPS: Smart Lean Automation Engine Enabled by Cyber-Physical Systems Technologies

    Science.gov (United States)

    Ma, Jing; Wang, Qiang; Zhao, Zhibiao

    2017-01-01

    In the context of Industry 4.0, the demand for the mass production of highly customized products will lead to complex products and an increasing demand for production system flexibility. Simply implementing lean production-based human-centered production or high automation to improve system flexibility is insufficient. Currently, lean automation (Jidoka) that utilizes cyber-physical systems (CPS) is considered a cost-efficient and effective approach for improving system flexibility under shrinking global economic conditions. Therefore, a smart lean automation engine enabled by CPS technologies (SLAE–CPS), which is based on an analysis of Jidoka functions and the smart capacity of CPS technologies, is proposed in this study to provide an integrated and standardized approach to design and implement a CPS-based smart Jidoka system. A set of comprehensive architecture and standardized key technologies should be presented to achieve the above-mentioned goal. Therefore, a distributed architecture that joins service-oriented architecture, agent, function block (FB), cloud, and Internet of things is proposed to support the flexible configuration, deployment, and performance of SLAE–CPS. Then, several standardized key techniques are proposed under this architecture. The first one is for converting heterogeneous physical data into uniform services for subsequent abnormality analysis and detection. The second one is a set of Jidoka scene rules, which is abstracted based on the analysis of the operator, machine, material, quality, and other factors in different time dimensions. These Jidoka rules can support executive FBs in performing different Jidoka functions. Finally, supported by the integrated and standardized approach of our proposed engine, a case study is conducted to verify the current research results. The proposed SLAE–CPS can serve as an important reference value for combining the benefits of innovative technology and proper methodology. PMID:28657577

  8. SLAE-CPS: Smart Lean Automation Engine Enabled by Cyber-Physical Systems Technologies.

    Science.gov (United States)

    Ma, Jing; Wang, Qiang; Zhao, Zhibiao

    2017-06-28

    In the context of Industry 4.0, the demand for the mass production of highly customized products will lead to complex products and an increasing demand for production system flexibility. Simply implementing lean production-based human-centered production or high automation to improve system flexibility is insufficient. Currently, lean automation (Jidoka) that utilizes cyber-physical systems (CPS) is considered a cost-efficient and effective approach for improving system flexibility under shrinking global economic conditions. Therefore, a smart lean automation engine enabled by CPS technologies (SLAE-CPS), which is based on an analysis of Jidoka functions and the smart capacity of CPS technologies, is proposed in this study to provide an integrated and standardized approach to design and implement a CPS-based smart Jidoka system. A set of comprehensive architecture and standardized key technologies should be presented to achieve the above-mentioned goal. Therefore, a distributed architecture that joins service-oriented architecture, agent, function block (FB), cloud, and Internet of things is proposed to support the flexible configuration, deployment, and performance of SLAE-CPS. Then, several standardized key techniques are proposed under this architecture. The first one is for converting heterogeneous physical data into uniform services for subsequent abnormality analysis and detection. The second one is a set of Jidoka scene rules, which is abstracted based on the analysis of the operator, machine, material, quality, and other factors in different time dimensions. These Jidoka rules can support executive FBs in performing different Jidoka functions. Finally, supported by the integrated and standardized approach of our proposed engine, a case study is conducted to verify the current research results. The proposed SLAE-CPS can serve as an important reference value for combining the benefits of innovative technology and proper methodology.

  9. The Phenix Detector magnet subsystem

    International Nuclear Information System (INIS)

    Yamamoto, R.M.; Bowers, J.M.; Harvey, A.R.

    1995-01-01

    The PHENIX [Photon Electron New Heavy Ion Experiment] Detector is one of two large detectors presently under construction for RHIC (Relativistic Heavy Ion Collider) located at Brookhaven National Laboratory. Its primary goal is to detect a new phase of matter; the quark-gluon plasma. In order to achieve this objective, the PHENIX Detector utilizes a complex magnet subsystem which is comprised of two large magnets identified as the Central Magnet (CM) and the Muon Magnet (MM). Muon Identifier steel is also included as part of this package. The entire magnet subsystem stands over 10 meters tall and weighs in excess of 1900 tons (see Fig. 1). Magnet size alone provided many technical challenges throughout the design and fabrication of the project. In addition, interaction with foreign collaborators provided the authors with new areas to address and problems to solve. Russian collaborators would fabricate a large fraction of the steel required and Japanese collaborators would supply the first coil. This paper will describe the overall design of the PHENIX magnet subsystem and discuss its present fabrication status

  10. The Phenix Detector magnet subsystem

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, R.M.; Bowers, J.M.; Harvey, A.R. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1995-05-19

    The PHENIX [Photon Electron New Heavy Ion Experiment] Detector is one of two large detectors presently under construction for RHIC (Relativistic Heavy Ion Collider) located at Brookhaven National Laboratory. Its primary goal is to detect a new phase of matter; the quark-gluon plasma. In order to achieve this objective, the PHENIX Detector utilizes a complex magnet subsystem which is comprised of two large magnets identified as the Central Magnet (CM) and the Muon Magnet (MM). Muon Identifier steel is also included as part of this package. The entire magnet subsystem stands over 10 meters tall and weighs in excess of 1900 tons (see Fig. 1). Magnet size alone provided many technical challenges throughout the design and fabrication of the project. In addition, interaction with foreign collaborators provided the authors with new areas to address and problems to solve. Russian collaborators would fabricate a large fraction of the steel required and Japanese collaborators would supply the first coil. This paper will describe the overall design of the PHENIX magnet subsystem and discuss its present fabrication status.

  11. A survey of visual preprocessing and shape representation techniques

    Science.gov (United States)

    Olshausen, Bruno A.

    1988-01-01

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

  12. Development of technology for fabrication of lithium CPS on basis of CNT-reinforced carboxylic fabric

    International Nuclear Information System (INIS)

    Tazhibayeva, Irina; Baklanov, Viktor; Ponkratov, Yuriy; Abdullin, Khabibulla; Kulsartov, Timur; Gordienko, Yuriy; Zaurbekova, Zhanna; Lyublinski, Igor; Vertkov, Alexey; Skakov, Mazhyn

    2017-01-01

    Highlights: • A preliminary study of carboxylic fabric wettability with liquid lithium is presented. • The preliminary wettability studies consisted of long-duration experiments at temperatures of 673, 773 and 873 K in vacuum. • A scheme of the experimental device for manufacturing lithium CPS and the procedure for filling the matrix with liquid lithium are presented. • The concept of a lithium limiter with CPS on the basis of CNT-reinforced carboxylic fabric is proposed. - Abstract: The paper describes the analysis of liquid lithium interaction with carbon-based materials and the manufacturing technology of a capillary-porous system (CPS) matrix on the basis of CNT-reinforced carboxylic fabric. A preliminary study of carboxylic fabric wettability with liquid lithium is presented. The development of the technology includes: microstructural studies of the carboxylic fabric before its CNT reinforcement; validation of the CNT-reinforcement technology; mode validation of the CVD method for CNT synthesis; and study of the synthesized carbon structures. The preliminary wettability studies consisted of long-duration experiments at temperatures of 673, 773 and 873 K in vacuum. The scheme of the experimental device for manufacturing lithium CPS and the procedure for filling the matrix with liquid lithium are presented. The concept of a lithium limiter with CPS on the basis of CNT-reinforced carboxylic fabric is proposed.

  13. Development of technology for fabrication of lithium CPS on basis of CNT-reinforced carboxylic fabric

    Energy Technology Data Exchange (ETDEWEB)

    Tazhibayeva, Irina, E-mail: tazhibayeva@ntsc.kz [Institute of Atomic Energy, National Nuclear Center of RK, Kurchatov (Kazakhstan); Baklanov, Viktor; Ponkratov, Yuriy [Institute of Atomic Energy, National Nuclear Center of RK, Kurchatov (Kazakhstan); Abdullin, Khabibulla [Institute of Experimental and Theoretical Physics of Kazakh National University, Almaty (Kazakhstan); Kulsartov, Timur; Gordienko, Yuriy; Zaurbekova, Zhanna [Institute of Atomic Energy, National Nuclear Center of RK, Kurchatov (Kazakhstan); Lyublinski, Igor [JSC «Red Star», Moscow (Russian Federation); NRNU «MEPhI», Moscow (Russian Federation); Vertkov, Alexey [JSC «Red Star», Moscow (Russian Federation); Skakov, Mazhyn [Institute of Atomic Energy, National Nuclear Center of RK, Kurchatov (Kazakhstan)

    2017-04-15

    Highlights: • A preliminary study of carboxylic fabric wettability with liquid lithium is presented. • The preliminary wettability studies consisted of long-duration experiments at temperatures of 673, 773 and 873 K in vacuum. • A scheme of the experimental device for manufacturing lithium CPS and the procedure for filling the matrix with liquid lithium are presented. • The concept of a lithium limiter with CPS on the basis of CNT-reinforced carboxylic fabric is proposed. - Abstract: The paper describes the analysis of liquid lithium interaction with carbon-based materials and the manufacturing technology of a capillary-porous system (CPS) matrix on the basis of CNT-reinforced carboxylic fabric. A preliminary study of carboxylic fabric wettability with liquid lithium is presented. The development of the technology includes: microstructural studies of the carboxylic fabric before its CNT reinforcement; validation of the CNT-reinforcement technology; mode validation of the CVD method for CNT synthesis; and study of the synthesized carbon structures. The preliminary wettability studies consisted of long-duration experiments at temperatures of 673, 773 and 873 K in vacuum. The scheme of the experimental device for manufacturing lithium CPS and the procedure for filling the matrix with liquid lithium are presented. The concept of a lithium limiter with CPS on the basis of CNT-reinforced carboxylic fabric is proposed.

  14. Impact of data transformation and preprocessing in supervised ...

    African Journals Online (AJOL)

    Impact of data transformation and preprocessing in supervised learning ... Nowadays, the ideas of integrating machine learning techniques in power system has ... The proposed algorithm used Python-based split train and k-fold model ...

  15. Preprocessing Algorithm for Deciphering Historical Inscriptions Using String Metric

    Directory of Open Access Journals (Sweden)

    Lorand Lehel Toth

    2016-07-01

    Full Text Available The article presents improvements in the preprocessing part of the deciphering method (in short, the preprocessing algorithm) for historical inscriptions of unknown origin. Glyphs used in historical inscriptions changed through time; therefore, various versions of the same script may contain different glyphs for each grapheme. The purpose of the preprocessing algorithm is to reduce the running time of the deciphering process by filtering out the less probable interpretations of the examined inscription. However, the first version of the preprocessing algorithm produced incorrect outcomes or no result at all in certain cases. Therefore, an improved version was developed to find the most similar words in the dictionary by defining the search conditions more accurately, while remaining computationally efficient. Moreover, a sophisticated similarity metric used to determine the possible meaning of the unknown inscription is introduced. The results of the evaluations are also detailed.
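
    The filtering idea can be sketched as follows (an illustration only; the words are placeholders, and the article's similarity metric is more sophisticated than the plain edit distance used here): dictionary words are scored against a transliterated reading of the inscription, and improbable interpretations are dropped before the expensive deciphering stage.

        # Sketch: prefilter dictionary candidates by edit distance.
        def edit_distance(a, b):
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                 # deletion
                                   cur[j - 1] + 1,              # insertion
                                   prev[j - 1] + (ca != cb)))   # substitution
                prev = cur
            return prev[-1]

        def prefilter(reading, dictionary, max_dist=2):
            """Keep only words close enough to be plausible interpretations."""
            close = [w for w in dictionary if edit_distance(reading, w) <= max_dist]
            return sorted(close, key=lambda w: edit_distance(reading, w))

        print(prefilter("tengri", ["tengri", "tenri", "temir", "bilge", "kagan"]))
        # -> ['tengri', 'tenri']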

  16. Preprocessing of emotional visual information in the human piriform cortex.

    Science.gov (United States)

    Schulze, Patrick; Bestgen, Anne-Kathrin; Lech, Robert K; Kuchinke, Lars; Suchan, Boris

    2017-08-23

    This study examines the processing of visual information by the olfactory system in humans. Recent data point to the processing of visual stimuli by the piriform cortex, a region mainly known as part of the primary olfactory cortex. Moreover, the piriform cortex generates predictive templates of olfactory stimuli to facilitate olfactory processing. This study fills the gap relating to the question whether this region is also capable of preprocessing emotional visual information. To gain insight into the preprocessing and transfer of emotional visual information into olfactory processing, we recorded hemodynamic responses during affective priming using functional magnetic resonance imaging (fMRI). Odors of different valence (pleasant, neutral and unpleasant) were primed by images of emotional facial expressions (happy, neutral and disgust). Our findings are the first to demonstrate that the piriform cortex preprocesses emotional visual information prior to any olfactory stimulation and that the emotional connotation of this preprocessing is subsequently transferred and integrated into an extended olfactory network for olfactory processing.

  17. Investigation of thermohydraulic parameter variations in operating conditions of Bilibino NPP CPS cooling circuit

    International Nuclear Information System (INIS)

    Baranaev, Yu.D.; Koz'menkova, V.V.; Parafilo, L.M.

    2015-01-01

    As a result of activities to uncover the reasons for the formation of through-wall faults in the cooling tubes of the reactor control and protection system (CPS) channels of the Bilibino-2 reactor, it is concluded that corrosion failure develops against a backdrop of periodic increases of total moisture in the reactor space during transient and standby modes at the top of the CPS channel cooling tubes. The formation of corrosion defects in the cooling tubes of four CPS channels of unit 2 in 2011-2012 is specific to this plant unit and does not affect the operation of the other plant units. It is pointed out that the ingress of moisture into the gas system of the reactor is the critical factor for the integrity of the structural elements of the EhPG-6 reactor core cooling system. This fact agrees well with the results obtained during operation of the AM reactor of the First NPP

  18. An Anonymous Access Authentication Scheme Based on Proxy Ring Signature for CPS-WMNs

    Directory of Open Access Journals (Sweden)

    Tianhan Gao

    2017-01-01

    Full Text Available Access security and privacy have become a bottleneck for the popularization of future Cyber-Physical System (CPS) networks. Furthermore, users' need for privacy-preserving access during movement is ever more urgent. To address the anonymous access authentication issue for CPS Wireless Mesh Networks (CPS-WMN), a novel anonymous access authentication scheme based on proxy ring signature is proposed. A hierarchical authentication architecture is presented first. The scheme is then achieved from the aspect of intergroup and intragroup anonymous mutual authentication through a proxy ring signature mechanism and a certificateless signature mechanism, respectively. We present a formal security proof of the proposed protocol with SVO logic. The simulation and performance analysis demonstrate that the proposed scheme offers higher efficiency and adaptability than the typical one.

  19. Streptococcus iniae cpsG alters capsular carbohydrate composition and is a cause of serotype switching in vaccinated fish.

    Science.gov (United States)

    Heath, Candice; Gillen, Christine M; Chrysanthopoulos, Panagiotis; Walker, Mark J; Barnes, Andrew C

    2016-09-25

    Streptococcus iniae causes septicaemia and meningitis in marine and freshwater fish wherever they are farmed in warm-temperate and tropical regions. Although serotype specific, vaccination with bacterins (killed bacterial cultures) is largely successful and vaccine failure occurs only occasionally through emergence of new capsular serotypes. Previously we showed that mutations in vaccine escapes are restricted to a limited repertoire of genes within the 20-gene capsular polysaccharide (cps) operon. cpsG, a putative UDP-galactose 4-epimerase, has three sequence types based on the insertion or deletion of the three amino acids leucine, serine and lysine in the substrate binding site of the protein. To elucidate the role of cpsG in capsular polysaccharide (CPS) biosynthesis and capsular composition, we first prepared isogenic knockout and complemented mutants of cpsG by allelic exchange mutagenesis. Deletion of cpsG resulted in changes to colony morphology and cell buoyant density, and also significantly decreased galactose content relative to glucose in the capsular polysaccharide as determined by GC-MS, consistent with epimerase activity of CpsG. There was also a metabolic penalty of cpsG knockout revealed by slower growth in complex media, and reduced proliferation in whole fish blood. Moreover, whilst antibodies raised in fish against the wild type cross-reacted in whole cell and cps ELISA, they did not cross-opsonise the mutant in a peripheral blood neutrophil opsonisation assay, consistent with reported vaccine escape. We have shown here that mutation in cpsG results in altered CPS composition and this in turn results in poor cross-opsonisation that explains some of the historic vaccination failure on fish farms in Australia.

  20. Reliable CPS design for mitigating semiconductor and battery aging in electric vehicles

    NARCIS (Netherlands)

    Chang, W.; Proebstl, A.; Goswami, D.; Zamani, M.; Chakraborty, S.

    2015-01-01

    Reliability and performance of cyber-physical systems (CPS) in electric vehicles (EVs) are influenced by three design aspects: (i) controller design, (ii) battery usage, i.e., battery rate capacity and aging effects, and (iii) processor aging of the in-vehicle embedded platform. In this paper, we

  1. Diffusive Promotion by Velocity Gradient of Cytoplasmic Streaming (CPS) in Nitella Internodal Cells.

    Directory of Open Access Journals (Sweden)

    Kenji Kikuchi

    Full Text Available Cytoplasmic streaming (CPS) is well known to assist the movement of nutrients, organelles and genetic material by transporting all of the cytoplasmic contents of a cell. CPS is generated by motility organelles that are driven by motor proteins near the membrane surface, where the CPS has been found to have a flat velocity profile in the flow field according to the sliding theory. If the velocity gradient profile is flattened, the mixing of contents inside the cell by CPS is not assisted by advection diffusion but is supported only by Brownian diffusion. Although the precise flow structure of the cytoplasm has an important role in cellular metabolism, the hydrodynamic mechanism of its convection has not been clarified. We conducted an experiment to visualise the flow of cytoplasm in Nitella cells by injecting fluorescent tracer nanoparticles and using a flow visualisation system in order to understand how the flow profile affects the cells' metabolic system. We determined that the velocity field in the cytosol has an obvious velocity gradient, not a flattened one, which suggests that the gradient assists cytosolic mixing by Taylor-Aris dispersion more than by Brownian diffusion.

  2. 77 FR 58510 - Proposed Information Collection; Comment Request; Current Population Survey (CPS), Annual Social...

    Science.gov (United States)

    2012-09-21

    ... various population groups. A prime statistic of interest is the classification of people in poverty and... Information Collection; Comment Request; Current Population Survey (CPS), Annual Social and Economic... conducted this supplement annually for over 50 years. The Census Bureau and the Bureau of Labor Statistics...

  3. Development of the Childbirth Perception Scale (CPS) : Perception of delivery and the first postpartum week

    NARCIS (Netherlands)

    Truijens, Sophie E. M.; Wijnen, Hennie A.; Pommer, Antoinette M.; Oei, S. Guid; Pop, Victor J. M.

    2014-01-01

    Some caregivers suggest a more positive experience of childbirth when giving birth at home. Since properly developed instruments that assess women’s perception of delivery and the early postpartum are missing, the aim of the current study is to develop a Childbirth Perception Scale (CPS). Three

  4. Medium-Chain Chlorinated Paraffins (CPs) Dominate in Australian Sewage Sludge

    NARCIS (Netherlands)

    Brandsma, Sicco H; van Mourik, Louise; O'Brien, Jake W; Eaglesham, Geoff; Leonards, Pim E G; de Boer, Jacob; Gallen, Christie; Mueller, Jochen; Gaus, Caroline; Bogdal, Christian

    2017-01-01

    To simultaneously quantify and profile the complex mixture of short-, medium-, and long-chain CPs (SCCPs, MCCPs, and LCCPs) in Australian sewage sludge, we applied and further validated a recently developed novel instrumental technique, using quadrupole time-of-flight high resolution mass

  5. Determination of tritium generation and release parameters at lithium CPS under neutron irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Ponkratov, Yuriy, E-mail: ponkratov@nnc.kz [Institute of Atomic Energy, National Nuclear Center of RK, Kurchatov (Kazakhstan); Baklanov, Viktor; Skakov, Mazhyn; Kulsartov, Timur; Tazhibayeva, Irina; Gordienko, Yuriy; Zaurbekova, Zhanna; Tulubayev, Yevgeniy [Institute of Atomic Energy, National Nuclear Center of RK, Kurchatov (Kazakhstan); Chikhray, Yevgeniy [Institute of Experimental and Theoretical Physics of Kazakh National University, Almaty (Kazakhstan); Lyublinski, Igor [JSC “Star”, Moscow (Russian Federation); NRNU “MEPhI”, Moscow (Russian Federation); Vertkov, Alexey [JSC “Star”, Moscow (Russian Federation)

    2016-11-01

    Highlights: • The main parameters of tritium generation and release from a lithium capillary-porous system (CPS) under neutron irradiation at the IVG.1M research reactor are described in this paper. • In the experiments a very small tritium release was detected, likely due to its high solubility in liquid lithium. • If the lithium CPS is used as a plasma facing material in the temperature range up to 773 K under neutron irradiation, only helium will be released from the lithium CPS into the vacuum chamber. - Abstract: This paper describes the main parameters of tritium generation and release from a lithium capillary-porous system (CPS) under neutron irradiation at the IVG.1M research reactor. The experiments were carried out using the method of mass-spectrometric registration of released gases and a specially constructed ampoule device. Irradiation was carried out at different reactor thermal powers (1, 2 and 6 MW) and sample temperatures from 473 to 773 K. In the experiments a very small tritium release was detected, likely due to its high solubility in liquid lithium. This can be caused by the formation of lithium tritide during tritium diffusion to the lithium surface.

  6. 76 FR 75869 - Proposed Information Collection; Comment Request; Current Population Survey (CPS) Fertility...

    Science.gov (United States)

    2011-12-05

    ... of the first birth. Potential needs for government assistance, such as aid to families with dependent children, child care, and maternal health care for single parent households, can be estimated using CPS... child spacing, and to aid policymakers in their decisions affected by changes in family size and...

  7. MPM4CPS: multi-paradigm modelling for cyber-physical systems

    NARCIS (Netherlands)

    Vangeheluwe, Hans; Ameral, Vasco; Giese, Holger; Broenink, Johannes F.; Schätz, Bernhard; Norta, Alexander; Carreira, Paulo; Lukovic, Ivan; Mayerhofer, Tanja; Wimmer, Manuel; Vellecillo, Antonio

    2016-01-01

    The last decades have seen the emergence of truly complex, designed systems, known as Cyber-Physical Systems (CPS). Engineering such systems requires integrating physical, software, and network aspects. To date, neither a unifying theory nor systematic design methods, techniques and tools exist to

  8. 78 FR 45910 - Proposed Information Collection; Comment Request; Current Population Survey (CPS) Email Address...

    Science.gov (United States)

    2013-07-30

    ... Population Survey (CPS) Email Address Collection Test Supplement AGENCY: U.S. Census Bureau, Commerce. ACTION... request clearance for the collection of data concerning the November 2013 Email Address Collection Test... tool to help increase response rates. We foresee that in the future, we could collect email addresses...

  9. Automated glycan assembly of a S. pneumoniae serotype 3 CPS antigen

    Directory of Open Access Journals (Sweden)

    Markus W. Weishaupt

    2016-07-01

    Full Text Available Vaccines against S. pneumoniae, one of the most prevalent bacterial infections causing severe disease, rely on isolated capsular polysaccharides (CPS) that are conjugated to proteins. Such isolates contain a heterogeneous oligosaccharide mixture of different chain lengths and frame shifts. Access to defined synthetic S. pneumoniae CPS structures is desirable. Known syntheses of S. pneumoniae serotype 3 CPS rely on a time-consuming and low-yielding late-stage oxidation step, or use disaccharide building blocks, which limits variability. Herein, we report the first iterative automated glycan assembly (AGA) of a conjugation-ready S. pneumoniae serotype 3 CPS trisaccharide. This oligosaccharide was assembled using a novel glucuronic acid building block to circumvent the need for a late-stage oxidation. The introduction of a washing step with the activator prior to each glycosylation cycle greatly increased the yields by neutralizing any residual base from deprotection steps in the synthetic cycle. This process improvement is applicable to AGA of many other oligosaccharides.

  10. Implementation of the On-the-fly Encryption for the Linux OS Based on Certified CPS

    Directory of Open Access Journals (Sweden)

    Alexander Mikhailovich Korotin

    2013-02-01

    Full Text Available The article is devoted to tools for on-the-fly encryption and a method to implement such a tool for the Linux OS based on a certified CPS. The idea is to modify the existing tool named eCryptfs. Russian cryptographic algorithms will be used in the user and kernel modes.

  11. Blockchain-Oriented Coalition Formation by CPS Resources: Ontological Approach and Case Study

    Directory of Open Access Journals (Sweden)

    Alexey Kashevnik

    2018-05-01

    Full Text Available Cyber-physical systems (CPS), robotics, the Internet of Things, and information and communication technologies have become more and more popular over the last several years. These topics open new perspectives and scenarios that can automate processes in human life. CPS aim to support, in real time and in information space, the interaction of physical entities communicating in physical space. At the same time, the blockchain technology that has become popular in recent years makes it possible to organize an immutable distributed database that stores all significant information and provides access for CPS participants. The paper proposes an approach based on ontology-based context management, publish/subscribe semantic interoperability support, and blockchain techniques. Utilization of these techniques makes it possible to develop CPS that support dynamic, distributed, and stable coalition formation of resources. The case study presented has been implemented for a scenario of heterogeneous mobile robots collaborating to overcome obstacles. Two types of robots and an information service participate in the scenario. Evaluation shows that the proposed approach is applicable to the presented class of scenarios.

  12. Evaluating the impact of image preprocessing on iris segmentation

    Directory of Open Access Journals (Sweden)

    José F. Valencia-Murillo

    2014-08-01

    Full Text Available Segmentation is one of the most important stages in iris recognition systems. In this paper, image preprocessing algorithms are applied in order to evaluate their impact on successful iris segmentation. The preprocessing algorithms are based on histogram adjustment, Gaussian filters and suppression of specular reflections in human eye images. The segmentation method introduced by Masek is applied to 199 images acquired under unconstrained conditions, belonging to the CASIA-irisV3 database, before and after applying the preprocessing algorithms. Then, the impact of the image preprocessing algorithms on the percentage of successful iris segmentation is evaluated by means of a visual inspection of the images in order to determine whether the circumferences of the iris and pupil were detected correctly. An increase from 59% to 73% in the percentage of successful iris segmentation is obtained with an algorithm that combines elimination of specular reflections followed by a Gaussian filter with a 5x5 kernel. The results highlight the importance of a preprocessing stage as a prior step to improve the performance of the edge detection and iris segmentation processes.
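
    The best-performing combination reported above can be sketched with OpenCV as follows (an illustration, not the authors' code; the file name and the brightness threshold of 240 are assumptions): specular reflections are masked and inpainted, then a Gaussian filter with a 5x5 kernel is applied before segmentation.

        # Sketch: remove specular highlights, then apply a 5x5 Gaussian filter.
        import cv2

        eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)

        # Specular highlights saturate near white: mask and inpaint them.
        _, mask = cv2.threshold(eye, 240, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=2)      # cover highlight fringes
        no_glare = cv2.inpaint(eye, mask, 5, cv2.INPAINT_TELEA)

        # 5x5 Gaussian kernel, as in the best-performing configuration.
        preprocessed = cv2.GaussianBlur(no_glare, (5, 5), 0)
        cv2.imwrite("eye_preprocessed.png", preprocessed)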

  13. Effects of preprocessing method on TVOC emission of car mat

    Science.gov (United States)

    Wang, Min; Jia, Li

    2013-02-01

    The effects of the mat preprocessing method on total volatile organic compound (TVOC) emissions of car mats are studied in this paper. An appropriate TVOC emission period for car mats is suggested. The emission factors for total volatile organic compounds from three kinds of new car mats are discussed. The car mats are preprocessed by washing, baking and ventilation. When car mats are preprocessed by washing, the TVOC emission for all samples tested is lower than that obtained with the other preprocessing methods. The TVOC emission is stable for a minimum of 4 days. The TVOC emitted from some samples may exceed 2500 μg/kg, but the TVOC emitted from washed polyamide (PA) and wool mats is less than 2500 μg/kg. The emission factors of total volatile organic compounds (TVOC) are experimentally investigated for the different preprocessing methods. The air temperature in the environment chamber and the water temperature used for washing are important factors influencing the emission of car mats.

  14. Cassini Mission Sequence Subsystem (MSS)

    Science.gov (United States)

    Alland, Robert

    2011-01-01

    This paper describes my work with the Cassini Mission Sequence Subsystem (MSS) team during the summer of 2011. It gives some background on the motivation for this project and describes the expected benefit to the Cassini program. It then introduces the two tasks that I worked on - an automatic system auditing tool and a series of corrections to the Cassini Sequence Generator (SEQ_GEN) - and the specific objectives these tasks were to accomplish. Next, it details the approach I took to meet these objectives and the results of this approach, followed by a discussion of how the outcome of the project compares with my initial expectations. The paper concludes with a summary of my experience working on this project, lists what the next steps are, and acknowledges the help of my Cassini colleagues.

  15. Operationally Responsive Spacecraft Subsystem, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Saber Astronautics proposes spacecraft subsystem control software which can autonomously reconfigure avionics for best performance during various mission conditions....

  16. Effect of microaerobic fermentation in preprocessing fibrous lignocellulosic materials.

    Science.gov (United States)

    Alattar, Manar Arica; Green, Terrence R; Henry, Jordan; Gulca, Vitalie; Tizazu, Mikias; Bergstrom, Robby; Popa, Radu

    2012-06-01

    Amending soil with organic matter is common in agricultural and logging practices. Such amendments have benefits to soil fertility and crop yields. These benefits may be increased if material is preprocessed before introduction into soil. We analyzed the efficiency of microaerobic fermentation (MF), also referred to as Bokashi, in preprocessing fibrous lignocellulosic (FLC) organic materials using varying produce amendments and leachate treatments. Adding produce amendments increased leachate production and fermentation rates and decreased the biological oxygen demand of the leachate. Continuously draining leachate without returning it to the fermentors led to acidification and decreased concentrations of polysaccharides (PS) in leachates. PS fragmentation and the production of soluble metabolites and gases stabilized in fermentors in about 2-4 weeks. About 2% of the carbon content was lost as CO2. PS degradation rates, upon introduction of processed materials into soil, were similar to unfermented FLC. Our results indicate that MF is insufficient for adequate preprocessing of FLC material.

  17. Real-time topic-aware influence maximization using preprocessing.

    Science.gov (United States)

    Chen, Wei; Lin, Tian; Yang, Cheng

    2016-01-01

    Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes based on a certain influence diffusion model is maximized. Topic-aware influence diffusion models have been recently proposed to address the issue that influence between a pair of users is often topic-dependent and that information, ideas, innovations, etc. being propagated in networks are typically mixtures of topics. In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods to avoid redoing influence maximization for each mixture from scratch. We explore two preprocessing algorithms with theoretical justifications. Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate providing microsecond online response time and competitive influence spread, with reasonable preprocessing effort.
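
    A deliberately simplified sketch of the preprocessing idea follows (it is not one of the paper's algorithms, which treat the combinatorics of seed selection far more carefully): per-topic influence estimates for each node are cached offline, and a query for a topic mixture is answered by a weighted combination of the cached scores instead of a fresh simulation.

        # Sketch: answer topic-mixture queries from precomputed per-topic scores.
        spread_by_topic = {                # node -> offline influence estimates
            "alice": {"tech": 120.0, "sports": 10.0},
            "bob":   {"tech":  40.0, "sports": 95.0},
            "carol": {"tech":  80.0, "sports": 60.0},
        }

        def topk_seeds(mixture, k=2):
            """mixture maps topic -> weight, e.g. a query's topic split."""
            def score(node):
                return sum(w * spread_by_topic[node].get(t, 0.0)
                           for t, w in mixture.items())
            return sorted(spread_by_topic, key=score, reverse=True)[:k]

        print(topk_seeds({"tech": 0.7, "sports": 0.3}))   # -> ['alice', 'carol']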

  18. National Ignition Facility subsystem design requirements target positioning subsystem SSDR 1.8.2

    International Nuclear Information System (INIS)

    Pittenger, L.

    1996-01-01

    This Subsystem Design Requirement document is a development specification that establishes the performance, design, development and test requirements for the target positioner subsystem (WBS 1.8.2) of the NIF Target Experimental System (WBS 1.8)

  19. Information Subsystem of Shadow Economy Deactivation

    OpenAIRE

    Filippova, Tatyana V.

    2015-01-01

    The article presents information subsystem of shadow economy deactivation aimed at minimizing negative effects caused by its reproduction. In Russia, as well as in other countries, efficient implementation of the suggested system of shadow economy deactivation can be ensured by the developed information subsystem.

  20. Installation package for the Solaron solar subsystem

    Science.gov (United States)

    1979-01-01

    Information that is intended to be a guide for installation, operation, and maintenance of the various solar subsystems is presented. The subsystems consist of the following: collectors, storage, transport (air handler) and controller for heat pump and peak storage. Two prototype residential systems were installed at Akron, Ohio, and Duffield, Virginia.

  1. Private quantum subsystems and quasiorthogonal operator algebras

    International Nuclear Information System (INIS)

    Levick, Jeremy; Kribs, David W; Pereira, Rajesh; Jochym-O’Connor, Tomas; Laflamme, Raymond

    2016-01-01

    We generalize a recently discovered example of a private quantum subsystem to find private subsystems for Abelian subgroups of the n-qubit Pauli group, which exist in the absence of private subspaces. In doing so, we also connect these quantum privacy investigations with the theory of quasiorthogonal operator algebras through the use of tools from group theory and operator theory. (paper)

  2. Micro-Analyzer: automatic preprocessing of Affymetrix microarray data.

    Science.gov (United States)

    Guzzi, Pietro Hiram; Cannataro, Mario

    2013-08-01

    A current trend in genomics is the investigation of the cell mechanism using different technologies, in order to explain the relationship among genes, molecular processes and diseases. For instance, the combined use of gene-expression arrays and genomic arrays has been demonstrated as an effective instrument in clinical practice. Consequently, in a single experiment different kinds of microarrays may be used, resulting in the production of different types of binary data (images and textual raw data). The analysis of microarray data requires an initial preprocessing phase that makes raw data suitable for use on existing analysis platforms, such as the TIGR M4 (TM4) Suite. An additional challenge to be faced by emerging data analysis platforms is the ability to treat these different microarray formats, coupled with clinical data, in a combined way. In fact, the resulting integrated data may include both numerical and symbolic data (e.g. gene expression and SNPs regarding molecular data), as well as temporal data (e.g. the response to a drug, time to progression and survival rate) regarding clinical data. Raw data preprocessing is a crucial step in the analysis but is often performed in a manual and error-prone way using different software tools. Thus novel, platform-independent, and possibly open-source tools enabling the semi-automatic preprocessing and annotation of different microarray data are needed. The paper presents Micro-Analyzer (Microarray Analyzer), a cross-platform tool for the automatic normalization, summarization and annotation of Affymetrix gene expression and SNP binary data. It represents the evolution of the μ-CS tool, extending the preprocessing to SNP arrays, which were not supported in μ-CS. Micro-Analyzer is provided as a standalone Java tool and enables users to read, preprocess and analyse binary microarray data (gene expression and SNPs) by invoking the TM4 platform. It avoids the manual invocation of external tools (e.g., the Affymetrix Power Tools).

  3. EXPERIMENT-BASED CREATIVE PROBLEM SOLVING (CPS) LEARNING STRATEGY TO IMPROVE COGNITIVE ABILITY AND CREATIVE THINKING SKILLS

    Directory of Open Access Journals (Sweden)

    Ahmad Busyairi

    2015-09-01

    Full Text Available This study aimed to obtain a picture of the development of students' cognitive abilities and creative thinking skills in problem solving after treatment with experiment-based CPS learning versus conventional learning. The method used in this research is a quasi-experiment with a randomized pretest-posttest control group design. The research sample consisted of 58 high school students divided into two classes (29 in the experimental class and 29 in the control class). The collected data were then analyzed using N-gain calculation, t-tests, and effect size calculation. The results showed that the students' cognitive abilities for both classes increased equally, in the moderate category. For students' creative thinking skills in problem solving, the experimental class increased in the moderate category while the control class increased in the low category. The hypothesis test results show that the application of experiment-based CPS learning can significantly improve students' cognitive abilities and creative thinking skills in problem solving compared with conventional learning. In addition, the effect size calculation indicates that experiment-based CPS learning is effective in improving students' cognitive abilities and creative thinking skills in problem solving, with a moderate category.
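
    For reference, the N-gain statistic used in the analysis is Hake's normalized gain, g = (post - pre) / (max - pre); the small sketch below computes it assuming test scores on a 0-100 scale.

        # Sketch: Hake's normalized gain for pre/post test scores.
        def normalized_gain(pre, post, max_score=100.0):
            return (post - pre) / (max_score - pre)

        g = normalized_gain(pre=42.0, post=71.0)
        print(f"g = {g:.2f}")   # 0.50 -> moderate band (0.3 <= g < 0.7)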

  4. Response of subsystems on inelastic structures

    International Nuclear Information System (INIS)

    Lin, J.; Mahin, S.A.

    1984-01-01

    Preliminary analyses are performed to obtain insight into the seismic response of subsystems supported on simple structures that yield during severe earthquake ground motions. Current design recommendations for subsystems accounting for yielding of the supporting structures are assessed and found to be unconservative. An amplification factor is defined to quantify the effects of inelastic deformations of the supporting structure on subsystem response. Design guidelines are formulated for predicting the amplification factor based on statistical evaluation of the results generated for ten earthquake ground motions. Using these values, design floor response spectra can be obtained from conventional linear elastic floor response spectra, accounting for yielding of the supporting structure without having to perform inelastic analysis. The effects of non-zero subsystem mass are examined. The recommended amplification factors are found to be applicable even when the mass of the subsystem approaches that of the supporting structure.

  5. Efficient chaotic based satellite power supply subsystem

    International Nuclear Information System (INIS)

    Ramos Turci, Luiz Felipe; Macau, Elbert E.N.; Yoneyama, Takashi

    2009-01-01

    In this work, we investigate the use of the Dynamical System Theory to increase the efficiency of the satellite power supply subsystems. The core of a satellite power subsystem relies on its DC/DC converter. This is a very nonlinear system that presents a multitude of phenomena ranging from bifurcations, quasi-periodicity, chaos, coexistence of attractors, among others. The traditional power subsystem design techniques try to avoid these nonlinear phenomena so that it is possible to use linear system theory in small regions about the equilibrium points. Here, we show that more efficiency can be drawn from a power supply subsystem if the DC/DC converter operates in regions of high nonlinearity. In special, if it operates in a chaotic regime, it has an intrinsic sensitivity that can be exploited to efficiently drive the power subsystem over high ranges of power requests by using control of chaos techniques.

  6. Efficient chaotic based satellite power supply subsystem

    Energy Technology Data Exchange (ETDEWEB)

    Ramos Turci, Luiz Felipe [Technological Institute of Aeronautics (ITA), Sao Jose dos Campos, SP (Brazil)], E-mail: felipeturci@yahoo.com.br; Macau, Elbert E.N. [National Institute of Space Research (Inpe), Sao Jose dos Campos, SP (Brazil)], E-mail: elbert@lac.inpe.br; Yoneyama, Takashi [Technological Institute of Aeronautics (ITA), Sao Jose dos Campos, SP (Brazil)], E-mail: takashi@ita.br

    2009-10-15

    In this work, we investigate the use of the Dynamical System Theory to increase the efficiency of the satellite power supply subsystems. The core of a satellite power subsystem relies on its DC/DC converter. This is a very nonlinear system that presents a multitude of phenomena ranging from bifurcations, quasi-periodicity, chaos, coexistence of attractors, among others. The traditional power subsystem design techniques try to avoid these nonlinear phenomena so that it is possible to use linear system theory in small regions about the equilibrium points. Here, we show that more efficiency can be drawn from a power supply subsystem if the DC/DC converter operates in regions of high nonlinearity. In special, if it operates in a chaotic regime, it has an intrinsic sensitivity that can be exploited to efficiently drive the power subsystem over high ranges of power requests by using control of chaos techniques.

  7. ECCS Operability With One or More Subsystem(s) Inoperable

    International Nuclear Information System (INIS)

    Swantner, Stephen R.; Andrachek, James D.

    2002-01-01

    100% of the ECCS flow equivalent to a single Operable ECCS train exists with those components out of service. This evaluation ensures that the safety analysis assumption associated with one train of the emergency core cooling system (ECCS) is still preserved by various combinations of components in opposite trains. An ECCS train is inoperable if it is not capable of delivering design flow to the reactor coolant system (RCS). Individual components are inoperable if they are not capable of performing their design function, or if support systems are not available. Due to the redundancy of trains and the diversity of subsystems, the inoperability of one component in a train does not render the ECCS incapable of performing its function. Neither does the inoperability of two different components, each in a different train, necessarily result in a loss of function for the ECCS. The intent of Condition A is to maintain a combination of components such that 100% of the ECCS flow equivalent to a single Operable ECCS train remains available. This allows increased flexibility in plant operations under circumstances when components in the required subsystem may be inoperable, but the ECCS remains capable of delivering 100% of the required flow equivalent. This paper presents a methodology for identifying the minimum set of components necessary for 100% of the ECCS flow equivalent to a single Operable ECCS train. An example of the implementation of this methodology is provided for a typical Westinghouse 3-loop ECCS design. (authors)

  8. ACCESS Sub-system Performance

    Science.gov (United States)

    Kaiser, Mary Elizabeth; Morris, Matthew J.; Aldoroty, Lauren Nicole; Godon, David; Pelton, Russell; McCandliss, Stephan R.; Kurucz, Robert L.; Kruk, Jeffrey W.; Rauscher, Bernard J.; Kimble, Randy A.; Wright, Edward L.; Benford, Dominic J.; Gardner, Jonathan P.; Feldman, Paul D.; Moos, H. Warren; Riess, Adam G.; Bohlin, Ralph; Deustua, Susana E.; Dixon, William Van Dyke; Sahnow, David J.; Lampton, Michael; Perlmutter, Saul

    2016-01-01

    ACCESS: Absolute Color Calibration Experiment for Standard Stars is a series of rocket-borne sub-orbital missions and ground-based experiments. It is designed to leverage significant technological advances in detectors, instruments, and the precision of the fundamental laboratory standards used to calibrate these instruments, enabling improvements in the precision of the astrophysical flux scale through the transfer of laboratory absolute detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards, with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35 to 1.7 micron bandpass. A cross-wavelength calibration of the astrophysical flux scale to this level of precision over this broad a bandpass is relevant for the data used to probe fundamental astrophysical problems, such as the SNe Ia photometry-based measurements used to constrain dark energy theories. We will describe the strategy for achieving this level of precision, the payload and calibration configuration, present sub-system test data, and report the status and preliminary performance of the integration and test of the spectrograph and telescope. NASA APRA sounding rocket grant NNX14AH48G supports this work.

  9. The Calipso Thermal Control Subsystem

    Science.gov (United States)

    Gasbarre, Joseph F.; Ousley, Wes; Valentini, Marc; Thomas, Jason; Dejoie, Joel

    2007-01-01

    The Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) is a joint NASA-CNES mission to study the Earth's cloud and aerosol layers. The satellite is composed of a primary payload (built by Ball Aerospace) and a spacecraft platform bus (PROTEUS, built by Alcatel Alenia Space). The thermal control subsystem (TCS) for the CALIPSO satellite is a passive design utilizing radiators, multi-layer insulation (MLI) blankets, and both operational and survival surface heaters. The most temperature-sensitive component within the satellite is the laser system. During thermal vacuum testing of the integrated satellite, the laser system's operational heaters were found to be inadequate for maintaining the laser's required set point. In response, a solution utilizing the laser system's survival heaters to augment the operational heaters was developed in collaboration between NASA, CNES, Ball Aerospace, and Alcatel-Alenia. The CALIPSO satellite launched from Vandenberg Air Force Base in California on April 26th, 2006. Evaluation of both the platform and payload thermal control systems shows they are performing as expected and maintaining the critical elements of the satellite within acceptable limits.

  10. Pre-processing for Triangulation of Probabilistic Networks

    NARCIS (Netherlands)

    Bodlaender, H.L.; Koster, A.M.C.A.; Eijkhof, F. van den; Gaag, L.C. van der

    2001-01-01

    The currently most efficient algorithm for inference with a probabilistic network builds upon a triangulation of the network's graph. In this paper, we show that pre-processing can help in finding good triangulations for probabilistic networks, that is, triangulations with a minimal maximum clique size.

  11. Orthogonal feature selection method. [For preprocessing of mass spectral data]

    Energy Technology Data Exchange (ETDEWEB)

    Kowalski, B R [Univ. of Washington, Seattle; Bender, C F

    1976-01-01

    A new method of preprocessing spectral data for extraction of molecular structural information is described. This SELECT method generates orthogonal features that are important for classification purposes and that also retain their identity with respect to the original measurements. A brief introduction to chemical pattern recognition is presented. A brief description of the method and an application to mass spectral data analysis follow. (BLM)

  12. Image preprocessing study on KPCA-based face recognition

    Science.gov (United States)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has received more and more attention. This paper studies a face recognition system comprising face detection, feature extraction, and recognition, concentrating on the theory and key technology of the various preprocessing methods in the face detection process and, using the KPCA method, on how different preprocessing choices affect the recognition results. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess face images with erosion and dilation (the opening and closing operations) and an illumination compensation method, and then apply face recognition based on kernel principal component analysis; the experiments were carried out on a typical face database, with all algorithms implemented on the MATLAB platform. Experimental results show that the kernel-based extension of the PCA algorithm, as a nonlinear feature extraction method, makes the extracted features represent the original image information better under certain conditions and can achieve a higher recognition rate. In the preprocessing stage, we found that different operations on the images can produce different results, and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the value of the power of the polynomial kernel function can affect the recognition result.
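
    As a rough illustration of the pipeline described above (not the authors' code), the following sketch extracts polynomial-kernel KPCA features and classifies them with a nearest-neighbour rule on a standard face database; the dataset, component count, and classifier are our own assumptions.

```python
# A rough sketch, not the authors' code: polynomial-kernel KPCA features
# plus a 1-NN classifier on a standard face database. Dataset, component
# count, and classifier are our own assumptions.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

faces = fetch_olivetti_faces()                      # 400 images, 40 subjects
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25,
    stratify=faces.target, random_state=0)

for degree in (2, 3, 4):            # the power of the polynomial kernel
    kpca = KernelPCA(n_components=60, kernel="poly", degree=degree)
    clf = KNeighborsClassifier(n_neighbors=1)
    clf.fit(kpca.fit_transform(X_tr), y_tr)
    print(f"degree={degree}: rate {clf.score(kpca.transform(X_te), y_te):.3f}")
```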

  13. An Effective Measured Data Preprocessing Method in Electrical Impedance Tomography

    Directory of Open Access Journals (Sweden)

    Chenglong Yu

    2014-01-01

    Full Text Available As an advanced process detection technology, electrical impedance tomography (EIT) has been widely studied in industrial fields. However, EIT techniques are greatly limited by low spatial resolution. This problem may result from incorrect preprocessing of the measured data and from the lack of a general criterion to evaluate different preprocessing processes. In this paper, an EIT data preprocessing method based on rooting all measured data is proposed and evaluated with two indexes constructed from the rooted measured data. By finding the optima of the two indexes, the proposed method can be applied to improve the spatial resolution of EIT imaging. For a theoretical model, the optimal rooting times of the two indexes range within [0.23, 0.33] and [0.22, 0.35], respectively. Moreover, the factors that affect the correctness of the proposed method are analyzed in general terms. Preprocessing of the measured data is necessary and helpful for any imaging process; thus, the proposed method can be used generally and widely. Experimental results validate the two proposed indexes.

  14. Total synthesis of a Streptococcus pneumoniae serotype 12F CPS repeating unit hexasaccharide

    Directory of Open Access Journals (Sweden)

    Peter H. Seeberger

    2017-01-01

    Full Text Available The Gram-positive bacterium Streptococcus pneumoniae causes severe disease globally. Vaccines that prevent S. pneumoniae infections induce antibodies against epitopes within the bacterial capsular polysaccharide (CPS). A better immunological understanding of the epitopes that protect from bacterial infection requires defined oligosaccharides obtained by total synthesis. The key to the synthesis of the S. pneumoniae serotype 12F CPS hexasaccharide repeating unit that is not contained in currently used glycoconjugate vaccines is the assembly of the trisaccharide β-D-GalpNAc-(1→4)-[α-D-Glcp-(1→3)]-β-D-ManpNAcA, in which the branching points are equipped with orthogonal protecting groups. A linear approach relying on the sequential assembly of monosaccharide building blocks proved superior to a convergent [3 + 3] strategy that was not successful due to steric constraints. The synthetic hexasaccharide is the starting point for further immunological investigations.

  15. A game-theoretic method for cross-layer stochastic resilient control design in CPS

    Science.gov (United States)

    Shen, Jiajun; Feng, Dongqin

    2018-03-01

    In this paper, the cross-layer security problem of cyber-physical systems (CPS) is investigated from the game-theoretic perspective. The physical dynamics of the plant are captured by a stochastic differential game, with cyber-physical influence taken into account. The sufficient and necessary condition for the existence of state-feedback equilibrium strategies is given. The attack-defence cyber interactions are formulated by a Stackelberg game intertwined with the stochastic differential game in the physical layer. The condition under which the Stackelberg equilibrium is unique, together with the corresponding analytical solutions, is provided. An algorithm is proposed for obtaining a hierarchical security strategy by solving coupled games, which ensures the operational normalcy and cyber security of the CPS subject to uncertain disturbances and unexpected cyberattacks. Simulation results are given to show the effectiveness and performance of the proposed algorithm.
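
    The cyber-layer leader-follower logic can be pictured in a drastically simplified finite form: a defender (leader) commits to a strategy, and an attacker (follower) best-responds. The payoff matrices below are invented for illustration; the paper's actual model couples such a Stackelberg game with a stochastic differential game in the physical layer.

```python
# Toy sketch of a pure-strategy Stackelberg equilibrium by enumeration.
# Payoff matrices are made up; rows = defender moves, cols = attacker moves.
import numpy as np

U_def = np.array([[ 4.0, -1.0],
                  [ 2.0,  1.0]])
U_att = np.array([[-3.0,  2.0],
                  [-1.0,  0.5]])

best_val, best_pair = -np.inf, None
for d in range(U_def.shape[0]):
    a = int(np.argmax(U_att[d]))        # follower's best response to d
    if U_def[d, a] > best_val:          # leader anticipates that response
        best_val, best_pair = U_def[d, a], (d, a)

print("Stackelberg (pure) equilibrium:", best_pair, "defender payoff:", best_val)
```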

  16. Using creative problem solving (CPS) to improve leadership in a non-profit organization

    OpenAIRE

    Sousa, Fernando; Castelão, Paula; Monteiro, Ileana Pardal; Pellissier, René

    2013-01-01

    The purpose of this study was to evaluate the effectiveness of the Creative Problem Solving (CPS) method in improving the leadership process in a non-profit organization. The research was designed around an intervention and structured in three stages (pre-consult, intervention, and follow-up), with a team designated by management, in order to bring leadership cohesion both across the departments of the organization and between the board and executive management. The results, expressed in the task...

  17. Analysis mathematical literacy skills in terms of the students’ metacognition on PISA-CPS model

    Science.gov (United States)

    Ovan; Waluya, S. B.; Nugroho, S. E.

    2018-03-01

    This research aimed to determine the effectiveness of the PISA-CPS model and to describe students' mathematical literacy skills (KLM) in terms of their metacognition. The study used a mixed-methods approach with a concurrent embedded design. The quantitative analysis covered the lesson plan, a prerequisite test, and the hypothesis tests, while the qualitative analysis consisted of data reduction, data presentation, and conclusion drawing with data verification. The subjects were Grade Eight (VIII) students of SMP Islam Sultan Agung 4 Semarang, Central Java. The data were analyzed with quantitative and qualitative approaches according to whether the students' metacognition was low, medium, or high, and the mathematical literacy skills (KLM) of each metacognition group were then characterized. The results showed that learning with the PISA-CPS model reached completeness and that the mathematical literacy skills of students taught with the PISA-CPS model were higher than those of students taught with expository learning. Students with low metacognition had rather poor KLM, students with medium metacognition had fairly good KLM, and students with high metacognition had very good KLM. Based on the analysis, it is concluded that the PISA-CPS model is effective for students' mathematical literacy skills (KLM). To increase students' KLM further, teachers need to provide reinforcement in the form of exercises so that mathematical literacy is achieved at levels 5 and 6.

  18. Effect of eccentric location of the RBMK CPS displacer graphite block in the shielding sheath

    International Nuclear Information System (INIS)

    Dostov, A.I.

    2001-01-01

    Temperature conditions and the accumulation of Wigner energy in the graphite block of the RBMK reactor CPS (control and protection system) displacer are examined. It is shown that, with an eccentric location of the block in the shielding sheath, the average temperature of the block drops sharply. Due to this design shortcoming, the quantity of stored energy in the block may be so great that its release would result in melting of the displacer tube. (author)

  19. Medium-Chain Chlorinated Paraffins (CPs) Dominate in Australian Sewage Sludge.

    Science.gov (United States)

    Brandsma, Sicco H; van Mourik, Louise; O'Brien, Jake W; Eaglesham, Geoff; Leonards, Pim E G; de Boer, Jacob; Gallen, Christie; Mueller, Jochen; Gaus, Caroline; Bogdal, Christian

    2017-03-21

    To simultaneously quantify and profile the complex mixture of short-, medium-, and long-chain CPs (SCCPs, MCCPs, and LCCPs) in Australian sewage sludge, we applied and further validated a recently developed novel instrumental technique, using quadrupole time-of-flight high resolution mass spectrometry running in the negative atmospheric pressure chemical ionization mode (APCI-qTOF-HRMS). Without using an analytical column, the cleaned extracts were directly injected into the qTOF-HRMS, followed by quantification of the CPs by a mathematical algorithm. The recoveries of the four SCCP-, MCCP- and LCCP-spiked sewage sludge samples ranged from 86 to 123%. This APCI-qTOF-HRMS method is a fast and promising technique for routinely measuring SCCPs, MCCPs, and LCCPs in sewage sludge. Australian sewage sludge was dominated by MCCPs, with concentrations ranging from 542 to 3645 ng/g dry weight (dw). Lower SCCP concentrations (<57-1421 ng/g dw) were detected in the Australian sewage sludge, comparable with the LCCP concentrations (116-960 ng/g dw). This is the first time that CPs have been reported in Australian sewage sludge. The results of this study give a first impression of the distribution of SCCPs, MCCPs, and LCCPs in Australian wastewater treatment plants (WWTPs).

  20. On the spatial behavior of background plasma in different background pressure in CPS device

    International Nuclear Information System (INIS)

    Samantaray, Subrata; Paikaray, Rita; Sahoo, Gourishankar; Das, Parthasarathi; Ghosh, Joydeep; Sanyasi, Amulya Kumar

    2015-01-01

    Blob formation and transport is a major concern for investigators, as it greatly reduces the efficiency of plasma devices. Initial results from the CPS device confirm the role of fast neutrals inside the bulk plasma in the process of blob formation and transport. 2-D simulations of curvature and velocity-shear instabilities in plasma structures suggest that, in the presence of background plasma, secondary instabilities do not grow nonlinearly to a high level and the flow is stabilized. The adiabaticity effect also creates a radial barrier for interchange modes. In the absence of background plasma, the blob fragments even at a modest level of viscosity. Fast neutrals outside the bulk plasma are supposed to stabilize the system; the background plasma setup in the CPS device is therefore aimed at creating fast neutrals outside the main plasma column. The spatial behavior of the plasma column between the electrodes differs for different base pressures in the CPS device. The spatial variation of the electron temperature of the plasma column between the electrodes is presented in this communication. Electron temperature is measured from emission spectroscopy data. The maximum (line-averaged) electron temperature is ∼ 1.5 eV. (author)

  1. Data Transport Subsystem - The SFOC glue

    Science.gov (United States)

    Parr, Stephen J.

    1988-01-01

    The design and operation of the Data Transport Subsystem (DTS) for the JPL Space Flight Operation Center (SFOC) are described. The SFOC is the ground data system under development to serve interplanetary space probes; in addition to the DTS, it comprises a ground interface facility, a telemetry-input subsystem, data monitor and display facilities, and a digital TV system. DTS links the other subsystems via an ISO OSI presentation layer and a LAN. Here, particular attention is given to the DTS services and service modes (virtual circuit, datagram, and broadcast), the DTS software architecture, the logical-name server, the role of the integrated AI library, and SFOC as a distributed system.

  2. Study on perception and control layer of mine CPS with mixed logic dynamic approach

    Science.gov (United States)

    Li, Jingzhao; Ren, Ping; Yang, Dayu

    2017-01-01

    The mine inclined-roadway transportation system of a mine cyber-physical system is a hybrid system consisting of a continuous-time system and a discrete-time system, which can be divided into an inclined-roadway signal subsystem, error-proofing channel subsystems, anti-car subsystems, and frequency control subsystems. First, to ensure stable operation and improve efficiency and production safety, a model of this hybrid system with n inputs and m outputs is constructed and analyzed in detail, and its steady scheduling state is solved. Second, on the basis of formal modeling for real-time systems, a hybrid toolbox is used for system security verification. Third, the practical application to the mine cyber-physical system shows that the method for real-time simulation of the mine cyber-physical system is effective.

  3. Verification of compliance with the NERC CPS criteria in the National Interconnected System (SNI) of Ecuador

    Directory of Open Access Journals (Sweden)

    Marcelo Arias Castañeda

    2011-02-01

    Full Text Available This paper describes the procedure used for the evaluation of automatic generation control in the National Interconnected System (SNI) of Ecuador, taking as frame of reference the CPS-1 and CPS-2 criteria (Control Performance Standards) of NERC. The article is divided into sections that explain in detail what the NERC CPS criteria consist of, how the limits of these criteria are calculated taking into consideration the specific conditions of the Ecuadorian electric system (section 2), and how the criteria are applied in the evaluation of automatic generation control in the SNI of Ecuador, taking into account the interconnection with Colombia (section 3). The minimum value of BIAS to be set in the Ecuadorian system for compliance with the CPS-2 criterion is also suggested.

  4. Research on pre-processing of QR Code

    Science.gov (United States)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR codes encode many kinds of information thanks to their advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size, and highly efficient representation of Chinese characters. In order to obtain a clear binarized image from a complex background and improve the recognition rate of QR codes, this paper investigates pre-processing methods for QR codes (Quick Response Codes) and presents algorithms and results of image pre-processing for QR code recognition. The conventional approach is improved by modifying Sauvola's adaptive binarization method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
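
    For the binarization step, a minimal sketch using scikit-image's implementation of Sauvola's adaptive threshold might look as follows; the file name, window size, and k value are placeholder assumptions.

```python
# Hedged sketch of the binarization step with scikit-image's Sauvola
# threshold; file name and parameter values are placeholders.
from skimage import io
from skimage.filters import threshold_sauvola

img = io.imread("qr_photo.png", as_gray=True)
# window_size should roughly match the QR module size; k tunes sensitivity
thresh = threshold_sauvola(img, window_size=25, k=0.2)
binary = img > thresh                # True = background, False = dark module
io.imsave("qr_binary.png", (binary * 255).astype("uint8"))
```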

  5. Periodic subsystem density-functional theory

    International Nuclear Information System (INIS)

    Genova, Alessandro; Pavanello, Michele; Ceresoli, Davide

    2014-01-01

    By partitioning the electron density into subsystem contributions, the Frozen Density Embedding (FDE) formulation of subsystem Density Functional Theory (DFT) has recently emerged as a powerful tool for reducing the computational scaling of Kohn–Sham DFT. To date, however, FDE has been employed to molecular systems only. Periodic systems, such as metals, semiconductors, and other crystalline solids have been outside the applicability of FDE, mostly because of the lack of a periodic FDE implementation. To fill this gap, in this work we aim at extending FDE to treat subsystems of molecular and periodic character. This goal is achieved by a dual approach. On one side, the development of a theoretical framework for periodic subsystem DFT. On the other, the realization of the method into a parallel computer code. We find that periodic FDE is capable of reproducing total electron densities and (to a lesser extent) also interaction energies of molecular systems weakly interacting with metallic surfaces. In the pilot calculations considered, we find that FDE fails in those cases where there is appreciable density overlap between the subsystems. Conversely, we find FDE to be in semiquantitative agreement with Kohn–Sham DFT when the inter-subsystem density overlap is low. We also conclude that to make FDE a suitable method for describing molecular adsorption at surfaces, kinetic energy density functionals that go beyond the GGA level must be employed

  6. Periodic subsystem density-functional theory

    Science.gov (United States)

    Genova, Alessandro; Ceresoli, Davide; Pavanello, Michele

    2014-11-01

    By partitioning the electron density into subsystem contributions, the Frozen Density Embedding (FDE) formulation of subsystem Density Functional Theory (DFT) has recently emerged as a powerful tool for reducing the computational scaling of Kohn-Sham DFT. To date, however, FDE has been employed to molecular systems only. Periodic systems, such as metals, semiconductors, and other crystalline solids have been outside the applicability of FDE, mostly because of the lack of a periodic FDE implementation. To fill this gap, in this work we aim at extending FDE to treat subsystems of molecular and periodic character. This goal is achieved by a dual approach. On one side, the development of a theoretical framework for periodic subsystem DFT. On the other, the realization of the method into a parallel computer code. We find that periodic FDE is capable of reproducing total electron densities and (to a lesser extent) also interaction energies of molecular systems weakly interacting with metallic surfaces. In the pilot calculations considered, we find that FDE fails in those cases where there is appreciable density overlap between the subsystems. Conversely, we find FDE to be in semiquantitative agreement with Kohn-Sham DFT when the inter-subsystem density overlap is low. We also conclude that to make FDE a suitable method for describing molecular adsorption at surfaces, kinetic energy density functionals that go beyond the GGA level must be employed.

  7. Panel summary of cyber-physical systems (CPS) and Internet of Things (IoT) opportunities with information fusion

    Science.gov (United States)

    Blasch, Erik; Kadar, Ivan; Grewe, Lynne L.; Brooks, Richard; Yu, Wei; Kwasinski, Andres; Thomopoulos, Stelios; Salerno, John; Qi, Hairong

    2017-05-01

    During the 2016 SPIE DSS conference, nine panelists were invited to highlight the trends and opportunities in cyber-physical systems (CPS) and the Internet of Things (IoT) with information fusion. The world will be ubiquitously outfitted with many sensors to support our daily living through the Internet of Things (IoT), manage infrastructure developments with cyber-physical systems (CPS), and provide communication through networked information fusion technology over the internet (NIFTI). This paper summarizes the panel discussions on the opportunities information fusion brings to the growing trends in CPS and IoT. The summary includes the concepts and areas where information fusion supports CPS/IoT, including situation awareness, transportation, and smart grids.

  8. Linguistic Preprocessing and Tagging for Problem Report Trend Analysis

    Science.gov (United States)

    Beil, Robert J.; Malin, Jane T.

    2012-01-01

    Mr. Robert Beil, Systems Engineer at Kennedy Space Center (KSC), requested that the NASA Engineering and Safety Center (NESC) develop a prototype tool suite that combines complementary software technology used at Johnson Space Center (JSC) and KSC for problem report preprocessing and semantic tag extraction, to improve input to data mining and trend analysis. This document contains the outcome of the assessment and the Findings, Observations and NESC Recommendations.

  9. Learning and Generalisation in Neural Networks with Local Preprocessing

    OpenAIRE

    Kutsia, Merab

    2007-01-01

    We study the learning and generalisation ability of a specific two-layer feed-forward neural network and compare its properties to those of a simple perceptron. The input patterns are mapped nonlinearly onto a hidden layer, much larger than the input layer, and this mapping is either fixed or may result from an unsupervised learning process. Such preprocessing of initially uncorrelated random patterns results in correlated patterns in the hidden layer. The hidden-to-output mapping of the net...

  10. Summary of ENDF/B pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1981-12-01

    This document contains the summary documentation for the ENDF/B pre-processing codes: LINEAR, RECENT, SIGMA1, GROUPIE, EVALPLOT, MERGER, DICTION, CONVERT. This summary documentation is merely a copy of the comment cards that appear at the beginning of each programme; these comment cards always reflect the latest status of input options, etc. For the latest published documentation on the methods used in these codes see UCRL-50400, Vol.17 parts A-E, Lawrence Livermore Laboratory (1979)

  11. Pre-processing by data augmentation for improved ellipse fitting.

    Science.gov (United States)

    Kumar, Pankaj; Belchamber, Erika R; Miklavcic, Stanley J

    2018-01-01

    Ellipse fitting is a highly researched and mature topic. Surprisingly, however, no existing method has thus far considered the data point eccentricity in its ellipse fitting procedure. Here, we introduce the concept of the eccentricity of a data point, in analogy with the idea of ellipse eccentricity. We then show empirically that, irrespective of the ellipse fitting method used, the root mean square error (RMSE) of a fit increases with the eccentricity of the data point set. The main contribution of the paper is based on the hypothesis that if the data point set were pre-processed to strategically add additional data points in regions of high eccentricity, then the quality of a fit could be improved. Conditional validity of this hypothesis is demonstrated mathematically using a model scenario. Based on this confirmation, we propose an algorithm that pre-processes the data so that data points with high eccentricity are replicated. The improvement in ellipse fitting is then demonstrated empirically in a real-world application: the 3D reconstruction of a plant root system for phenotypic analysis. The degree of improvement for different underlying ellipse fitting methods as a function of data noise level is also analysed. We show that almost every method tested, irrespective of whether it minimizes algebraic or geometric error, shows improvement in the fit following data augmentation using the proposed pre-processing algorithm.
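
    A heavily hedged sketch of the augmentation idea follows. The paper's precise definition of data-point eccentricity is not reproduced here; as a hypothetical proxy, points in sparsely covered angular sectors are replicated before refitting with OpenCV's least-squares ellipse fit.

```python
# Illustrative only: replicate points in under-sampled angular sectors
# (a hypothetical proxy for "high eccentricity") before refitting.
import numpy as np
import cv2

def fit(points):
    return cv2.fitEllipse(points.astype(np.float32))  # ((cx,cy),(MA,ma),angle)

def augment(points, n_sectors=12):
    cx, cy = points.mean(axis=0)
    ang = np.arctan2(points[:, 1] - cy, points[:, 0] - cx)
    sector = ((ang + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    counts = np.bincount(sector, minlength=n_sectors)
    target, extra = counts.max(), []
    for s in range(n_sectors):
        if 0 < counts[s] < target:                    # replicate sparse sectors
            extra.extend([points[sector == s]] * (target // counts[s] - 1))
    return np.vstack([points] + extra) if extra else points

# noisy points covering only one third of an ellipse
t = np.random.uniform(0.0, 2 * np.pi / 3, 200)
pts = np.column_stack([80 + 50 * np.cos(t), 60 + 30 * np.sin(t)])
pts += np.random.normal(0.0, 1.0, pts.shape)

print("plain fit:    ", fit(pts))
print("augmented fit:", fit(augment(pts)))
```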

  12. A Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    Science.gov (United States)

    Buyens, Wim; van Dijk, Bas; Wouters, Jan; Moonen, Marc

    2015-10-01

    Listening to music is still one of the more challenging aspects of using a cochlear implant (CI) for most users. Simple musical structures, a clear rhythm/beat, and lyrics that are easy to follow are among the top factors contributing to music appreciation for CI users. Modifying the audio mix of complex music potentially improves music enjoyment in CI users. A stereo music preprocessing scheme is described in which vocals, drums, and bass are emphasized based on the representation of the harmonic and the percussive components in the input spectrogram, combined with the spatial allocation of instruments in typical stereo recordings. The scheme is assessed with postlingually deafened CI subjects (N = 7) using pop/rock music excerpts with different complexity levels. The scheme is capable of modifying relative instrument level settings, with the aim of improving music appreciation in CI users, and allows individual preference adjustments. The assessment with CI subjects confirms the preference for more emphasis on vocals, drums, and bass as offered by the preprocessing scheme, especially for songs with higher complexity. The stereo music preprocessing scheme has the potential to improve music enjoyment in CI users by modifying the audio mix in widespread (stereo) music recordings. Since music enjoyment in CI users is generally poor, this scheme can assist the music listening experience of CI users as a training or rehabilitation tool.
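
    A minimal sketch, assuming librosa and a mid/side view of the stereo mix, of the two ingredients the scheme combines (harmonic/percussive separation and the stereo position of instruments); the gain values and file names are illustrative, not the published algorithm's.

```python
# Minimal sketch assuming librosa/soundfile; gains and file names are
# illustrative, not the published scheme's values.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("song.wav", sr=None, mono=False)  # assume stereo (2, n)
mid = 0.5 * (y[0] + y[1])    # center-panned content: vocals, bass, drums
side = 0.5 * (y[0] - y[1])   # wide content: accompaniment

harmonic, percussive = librosa.effects.hpss(mid)
mid_out = 1.6 * harmonic + 1.3 * percussive  # emphasize vocals/bass and drums
out = np.stack([mid_out + 0.5 * side, mid_out - 0.5 * side])  # soften sides

sf.write("song_ci.wav", out.T, sr)  # gains would be user-adjustable in a tool
```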

  13. Optimization of miRNA-seq data preprocessing.

    Science.gov (United States)

    Tam, Shirley; Tsao, Ming-Sound; McPherson, John D

    2015-11-01

    The past two decades of microRNA (miRNA) research have solidified the role of these small non-coding RNAs as key regulators of many biological processes and promising biomarkers for disease. The concurrent development in high-throughput profiling technology has further advanced our understanding of the impact of their dysregulation on a global scale. Currently, next-generation sequencing is the platform of choice for the discovery and quantification of miRNAs. Despite this, there is no clear consensus on how the data should be preprocessed before conducting downstream analyses. Often overlooked, data preprocessing is an essential step in data analysis: the presence of unreliable features and noise can affect the conclusions drawn from downstream analyses. Using a spike-in dilution study, we evaluated the effects of several general-purpose aligners (BWA, Bowtie, Bowtie 2 and Novoalign) and normalization methods (counts-per-million, total count scaling, upper quartile scaling, Trimmed Mean of M-values, DESeq, linear regression, cyclic loess and quantile) with respect to the final miRNA count data distribution, variance, bias and accuracy of differential expression analysis. We make practical recommendations on the optimal preprocessing methods for the extraction and interpretation of miRNA count data from small RNA-sequencing experiments. © The Author 2015. Published by Oxford University Press.
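
    To make two of the compared normalizations concrete, here is a hedged NumPy sketch of counts-per-million and upper-quartile scaling on a miRNA count matrix; the data is random and serves only to show the shapes involved.

```python
# Hedged sketch of two compared normalizations on a miRNA count matrix
# (rows = miRNAs, columns = samples); random data, for shape only.
import numpy as np

counts = np.random.negative_binomial(5, 0.3, size=(300, 6)).astype(float)

# counts-per-million: scale each sample (column) by its library size
cpm = counts / counts.sum(axis=0, keepdims=True) * 1e6

# upper-quartile scaling: scale by the 75th percentile of nonzero counts
uq = np.array([np.percentile(col[col > 0], 75) for col in counts.T])
uq_norm = counts / uq * uq.mean()

print(cpm.sum(axis=0))       # every column now sums to exactly 1e6
```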

  14. The evaluation subsystem of RODOS

    International Nuclear Information System (INIS)

    Niculae, C.; Treitz, M.; Geldermann, J.

    2003-01-01

    Full text: The evaluation subsystem (ESY) of RODOS aims to rank countermeasure strategies according to their potential benefit and preference weights provided by the decision makers (DMs). In the previous version of the ESY, the structure of the decision problem (attributes, strategies, etc.) had to be largely defined by the earlier modules in the RODOS chain (ASY-CSY-ESY). For this reason, ESY runs would be initiated with a list of strategies, a comprehensive attribute tree and a consequence table giving the impacts for each attribute under each strategy. The first sub-module of the ESY allows the user to select the attributes to be analyzed and then filters out the remaining attributes. For instance, the CSY module LCMT passes over 100 attributes to the ESY, from which one would expect the analyst/DMs to select maybe 10 to 15 for the evaluation. This sub-module also adds a sub-tree of subjective attributes (qualitative information) to the attribute tree provided by the CSY and allows the user to select which of these should be passed forward for further analysis. In addition, data from the economic and health modules (e.g. costs, health effects, etc.) can be grafted on as a sub-tree. The second sub-module performs the ranking of the alternative strategies and outputs a short list of the best strategies. The last component of the ESY contains an explanation facility that uses a fine set of rules to reason about the ranking of the strategies. Due to the complexity of nuclear emergency management and the wide range of DMs and stakeholders involved in the decision process, it is difficult to predetermine the range of strategies they will consider. The current strategies or groups of strategies included in the system are driven only by radiological factors. Research in the field of multicriteria decision aid has shown that value-focused approaches could result in new sets of alternatives, new criteria to be considered, or different decision tree structures
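
    The ranking step can be pictured with a simple additive (weighted-sum) value model, sketched below; the strategies, attributes, weights, and consequence table are invented for illustration, and the actual ESY uses richer multi-attribute models.

```python
# Minimal sketch of a weighted-sum ranking over a consequence table.
# Strategies, attributes, weights, and values are hypothetical.
import numpy as np

strategies = ["no action", "sheltering", "evacuation"]
attributes = ["collective dose", "cost", "public acceptance"]
# consequence table: rows = strategies, columns = attributes
C = np.array([[120.0, 0.0, 0.2],
              [ 60.0, 1.5, 0.6],
              [ 15.0, 9.0, 0.5]])
benefit = np.array([False, False, True])  # dose and cost are to be minimized
weights = np.array([0.5, 0.2, 0.3])       # elicited from the decision makers

# normalize each attribute to [0, 1], flipping cost-type attributes
lo, hi = C.min(axis=0), C.max(axis=0)
V = (C - lo) / (hi - lo)
V[:, ~benefit] = 1.0 - V[:, ~benefit]

scores = V @ weights
for s, v in sorted(zip(strategies, scores), key=lambda t: -t[1]):
    print(f"{s:12s} {v:.3f}")
```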

  15. Comparison of multivariate preprocessing techniques as applied to electronic tongue based pattern classification for black tea

    International Nuclear Information System (INIS)

    Palit, Mousumi; Tudu, Bipan; Bhattacharyya, Nabarun; Dutta, Ankur; Dutta, Pallab Kumar; Jana, Arun; Bandyopadhyay, Rajib; Chatterjee, Anutosh

    2010-01-01

    In an electronic tongue, preprocessing of raw data precedes pattern analysis, and the choice of the appropriate preprocessing technique is crucial for the performance of the pattern classifier. While attempting to classify different grades of black tea using a voltammetric electronic tongue, different preprocessing techniques have been explored, and a comparison of their performances is presented in this paper. The preprocessing techniques are compared first by a quantitative measurement of separability followed by principal component analysis; then two different supervised pattern recognition models based on neural networks are used to evaluate the performance of the preprocessing techniques.

  16. Embedded Thermal Control for Spacecraft Subsystems Miniaturization

    Science.gov (United States)

    Didion, Jeffrey R.

    2014-01-01

    Optimization of spacecraft size, weight and power (SWaP) resources is an explicit technical priority at Goddard Space Flight Center. Embedded Thermal Control Subsystems are a promising technology with many cross-cutting NASA, DoD and commercial applications: 1.) CubeSat/SmallSat spacecraft architecture, 2.) high performance computing, 3.) on-board spacecraft electronics, 4.) power electronics and RF arrays. The Embedded Thermal Control Subsystem technology development efforts focus on component, board and enclosure level devices that will ultimately include intelligent capabilities. The presentation will discuss electric, capillary and hybrid based hardware research and development efforts at Goddard Space Flight Center. The Embedded Thermal Control Subsystem development program consists of interrelated sub-initiatives, e.g., chip component level thermal control devices, self-sensing thermal management, advanced manufactured structures. This presentation includes technical status and progress on each of these investigations. Future sub-initiatives, technical milestones and program goals will be presented.

  17. Parallel pipeline algorithm of real time star map preprocessing

    Science.gov (United States)

    Wang, Hai-yong; Qin, Tian-mu; Liu, Jia-qi; Li, Zhi-feng; Li, Jian-hua

    2016-03-01

    To improve the preprocessing speed of star maps and reduce the resource consumption of the embedded system of a star tracker, a parallel pipeline real-time preprocessing algorithm is presented. Two characteristics, the mean and the noise standard deviation of the background gray level of a star map, are first obtained dynamically, with the interference of the star images themselves on the background removed in advance. A criterion for whether subsequent noise filtering is needed is established, and the extraction threshold value is then assigned according to the level of background noise, so that the centroiding accuracy is guaranteed. In the processing algorithm, as few as two lines of pixel data are buffered, and only 100 shift registers are used to record the connected domain labels, which solves the problems of wasted resources and connected domain overflow. The simulation results show that the necessary data of the selected bright stars can be accessed within a delay as short as 10 µs after the pipeline processing of a 496×496 star map at 50 Mb/s is finished, and the needed memory and register resources total less than 80 kb. To verify the accuracy of the proposed algorithm, different levels of background noise were added to the processed ideal star map; the statistical centroiding error is smaller than 1/23 pixel when the signal-to-noise ratio is greater than 1. The parallel pipeline algorithm for real-time star map preprocessing helps to increase the data output speed and the anti-dynamic performance of a star tracker.
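
    An offline NumPy/SciPy sketch of the same processing chain (background statistics with star pixels excluded, a noise-dependent threshold, and connected-domain centroiding) is given below; it mimics the algorithm's logic, not its pipelined hardware implementation, and all values are illustrative.

```python
# Offline NumPy/SciPy sketch mirroring the algorithm's logic (not its
# pipelined hardware): background stats, thresholding, centroiding.
import numpy as np
from scipy import ndimage

img = np.random.normal(100.0, 5.0, (496, 496))    # synthetic background
img[240:244, 300:304] += 400.0                    # one synthetic star

# iteratively clip bright pixels so stars do not bias the background stats
bg = img.ravel()
for _ in range(3):
    m, s = bg.mean(), bg.std()
    bg = bg[bg < m + 3.0 * s]
mean, sigma = bg.mean(), bg.std()

threshold = mean + 5.0 * sigma                    # noise-dependent threshold
labels, n = ndimage.label(img > threshold)        # connected domains
stars = ndimage.center_of_mass(img - mean, labels, range(1, n + 1))
print(f"background {mean:.1f} +/- {sigma:.1f}, {n} star(s): {stars}")
```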

  18. Timing subsystem development: Network synchronization experiments

    Science.gov (United States)

    Backe, K. R.

    1983-01-01

    This paper describes a program in which several experimental timing subsystem prototypes were designed, fabricated, and field tested using a small network of troposcatter and microwave digital communication links. This equipment was responsible for modem/radio interfacing, time interval measurement, clock adjustment and distribution, synchronization technique, and node to node information exchange. Presented are discussions of the design approach, measurement plan, and performance assessment methods. Recommendations are made based on the findings of the test program and an evaluation of the design of both the hardware and software elements of the timing subsystem prototypes.

  19. Primary electric propulsion thrust subsystem definition

    Science.gov (United States)

    Masek, T. D.; Ward, J. W.; Kami, S.

    1975-01-01

    A review is presented of the current status of primary propulsion thrust subsystem (TSS) performance, packaging considerations, and certain operational characteristics. Thrust subsystem related work from recent studies by the Jet Propulsion Laboratory (JPL), Rockwell and Boeing is discussed. Existing performance for 30-cm thrusters, power processors and TSS is presented along with projections for future improvements. Results of analyses to determine (1) magnetic field distributions resulting from an array of thrusters, (2) thruster-emitted particle flux distributions from an array of thrusters, and (3) TSS element failure rates are described to indicate the availability of analytical tools for the evaluation of TSS designs.

  20. Lasing without inversion due to cooling subsystem

    International Nuclear Information System (INIS)

    Shakhmuratov, R.N.

    1997-01-01

    A new possibility of inversionless lasing is discussed. We have considered the resonant interaction of a two-level system (TLS) with photons and the adiabatic interaction with an ensemble of Bose particles. It is found that a TLS with equally populated energy levels amplifies coherent light at a Stokes-shifted frequency. This becomes possible because photon emission is accompanied by excitation of the Bose particles. The energy flow from the TLS to the photon subsystem is realized because the Bose subsystem is at finite temperature and plays the role of a cooler. The advantage of this new lasing principle is discussed. It is shown that the lasing conditions differ strongly from conventional ones

  1. Contour extraction of echocardiographic images based on pre-processing

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana [Department of Multimedia, Faculty of Computer Science and Information Technology, Department of Computer and Communication Systems Engineering, Faculty of Engineering University Putra Malaysia 43400 Serdang, Selangor (Malaysia); Zamrin, D M [Department of Surgery, Faculty of Medicine, National University of Malaysia, 56000 Cheras, Kuala Lumpur (Malaysia); Saripan, M Iqbal

    2011-02-15

    In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection, to reduce heavy noise and obtain better image quality. To this end, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and enhance the low contrast of echocardiograph images; after applying these techniques, we obtain legible detection of heart boundaries and valve movement with traditional edge detection methods.

  2. Contour extraction of echocardiographic images based on pre-processing

    International Nuclear Information System (INIS)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana; Zamrin, D M; Saripan, M Iqbal

    2011-01-01

    In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection, to reduce heavy noise and obtain better image quality. To this end, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and enhance the low contrast of echocardiograph images; after applying these techniques, we obtain legible detection of heart boundaries and valve movement with traditional edge detection methods.
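
    A plausible OpenCV rendering of the chain described above (filtering, morphological opening/closing, contrast adjustment, then traditional edge detection) might look as follows; all parameter values and file names are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch of the pre-processing chain before edge detection.
# Parameters and file names are illustrative.
import cv2

img = cv2.imread("echo_frame.png", cv2.IMREAD_GRAYSCALE)

den = cv2.medianBlur(img, 5)                         # suppress speckle noise
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
opened = cv2.morphologyEx(den, cv2.MORPH_OPEN, kernel)
closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(closed)                       # lift the low contrast

edges = cv2.Canny(enhanced, 30, 90)                  # traditional edge detector
cv2.imwrite("echo_edges.png", edges)
```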

  3. Parallel preprocessing in a nuclear data acquisition system

    International Nuclear Information System (INIS)

    Pichot, G.; Auriol, E.; Lemarchand, G.; Millaud, J.

    1977-01-01

    The appearance of microprocessors and large memory chips has somewhat modified the spectrum of tools usable by the data acquisition system designer. This is particularly true in the nuclear research field, where the data flow has been continuously growing as a consequence of the increasing capabilities of new detectors. This paper deals with the insertion, between a data acquisition system and a computer, of a preprocessing structure based on microprocessors and large-capacity high-speed memories. The results show a significant improvement in several aspects of the operation of the system, with returns paying back the investment in 18 months

  4. Modeling of continuous withdrawal and falling out of CPS control rods accident, using QUABOX/CUBBOX-HYCA code

    International Nuclear Information System (INIS)

    Bubelis, E.; Pabarcius, R.; Tonkunas, A.

    2003-01-01

    At present, the Ignalina NPP is moving toward wider use of new uranium-erbium fuel of higher enrichment and manual control rods of a new design. These actions are directed at reducing the reactor control and protection system (CPS) cooling circuit voiding effect and at improving the technical and economic parameters of reactor operation. Continuous withdrawal and falling out of CPS control rods lead to reactivity and power changes in the reactor core. Therefore, the evaluation of the ability of the CPS to compensate for the resulting excess reactivity in the reactor core, under the changed core loading conditions during such accidents, is important for safety. This article presents the calculation results for continuous withdrawal and falling out of CPS control rods for specific reactor core conditions of Ignalina NPP Unit 2, i.e. during operation at the maximum allowed power level of 4200 MW. The German code QUABOX/CUBBOX-HYCA with the improved CPS logic was used for the simulation of the above-mentioned transients. (author)

  5. Interaction of valleys and circulation patterns (CPs) on spatial precipitation patterns in southern Germany

    Directory of Open Access Journals (Sweden)

    M. Liu

    2013-11-01

    Full Text Available Topography exerts influence on the spatial precipitation distribution over different scales, known typically at the large scale as the orographic effect and at the small scale as the wind-drift rainfall (WDR) effect. At the intermediate scale (1~10 km), which is characterized by secondary mountain valleys, topography also demonstrates some effect on the precipitation pattern. This paper investigates such intermediate-scale topographic effects on precipitation patterns, focusing on narrow, steep valleys in the complex terrain of southern Germany, based on daily observations over a 48 yr period (1960~2007) from a high-density rain-gauge network covering two sub-areas, Baden-Wuerttemberg (BW) and Bavaria (BY). Precipitation data at the valley and non-valley stations are compared under consideration of the daily general circulation patterns (CPs), classified by a fuzzy rule-based algorithm. Scatter plots of precipitation against elevation demonstrate a different behavior of valley stations compared to non-valley stations. A detailed study of the precipitation time series for selected station triplets, each consisting of a valley station, a mountain station and an open station, has been carried out by statistical analysis with the Kolmogorov–Smirnov (KS) test, supplemented by one-way analysis of variance (one-way ANOVA) and a graphical comparison of the mean precipitation amounts. The results show an interaction of valley orientation and the direction of the CPs at the intermediate scale, i.e. when the valley is shielded from the CP which carries the precipitation, the precipitation amount within the valley is comparable to that on the mountain crest, and both are larger than the precipitation at the open station. When the valley is open to the CP, the precipitation within the valley is similar to that at the open station but much less than that on the mountain. Such a phenomenon, where the precipitation is "blind" to the valleys at the intermediate scale

  6. Experimental evaluation of the IP multimedia subsystem

    NARCIS (Netherlands)

    Oredope, A.; Liotta, A.; Yang, K.; Tyrode-Goilo, D.H.; Magedanz, T.; Mauro Madeira, E.R.M.; Dini, P.

    2005-01-01

    The IP Multimedia Subsystem (IMS) is the latest framework for a seamless convergence of the ordinary Internet and mobile cellular systems. As such, it has the backing of all major companies, since it aims to offer a unified solution to integrated mobile services, including mechanisms for security,

  7. MITS Feed and Withdrawal Subsystem: operating procedures

    International Nuclear Information System (INIS)

    Brown, W.S.

    1980-01-01

    This procedure details the steps involved in establishing closed loop flows, providing UF6 vapor to the FEED header of the Sampling Subsystem and returning it through the PRODUCT and TAILS headers via the F and W recycle valves. It is essentially a Startup Procedure

  8. Union Listing via OCLC's Serials Control Subsystem.

    Science.gov (United States)

    O'Malley, Terrence J.

    1984-01-01

    Describes library use of Conversion of Serials Project's (CONSER) online national machine-readable database for serials to create online union lists of serials via OCLC's Serial Control Subsystem. Problems in selection of appropriate, accurate, and authenticated records and prospects for the future are discussed. Twenty sources and sample records…

  9. Accelerated life testing of spacecraft subsystems

    Science.gov (United States)

    Wiksten, D.; Swanson, J.

    1972-01-01

    The rationale and requirements for conducting accelerated life tests on electronic subsystems of spacecraft are presented. A method for applying data on the reliability and temperature sensitivity of the parts contained in a subsystem to the selection of accelerated life test parameters is described. Additional considerations affecting the formulation of test requirements are identified, and practical limitations of accelerated aging are described.

  10. Integrating the autonomous subsystems management process

    Science.gov (United States)

    Ashworth, Barry R.

    1992-01-01

    Ways in which the ranking of the Space Station Module Power Management and Distribution testbed may be achieved and an individual subsystem's internal priorities may be managed within the complete system are examined. The application of these results in the integration and performance leveling of the autonomously managed system is discussed.

  11. MITS Feed and Withdrawal Subsystem: operating procedures

    International Nuclear Information System (INIS)

    Brown, W.S.

    1980-01-01

    This document details procedures for the operation of the MITS (Machine Interface Test System) Feed and Withdrawal Subsystem (F and W). Included are fill with UF6, establishment of recycle and thruput flows, shutdown, UF6 makeup, dump to supply container, and Cascade dump to F and W, all normal procedures, plus an alternate procedure for trapping light gases via a cold trap dump

  12. Analog subsystem for the plutonium protection system

    International Nuclear Information System (INIS)

    Arlowe, H.D.

    1978-12-01

    An analog subsystem is described which monitors certain functions in the Plutonium Protection System. Rotary and linear potentiometer output signals are digitized, as are the outputs from thermistors and container "bulge" sensors. This work was sponsored by the Department of Energy/Office of Safeguards and Security (DOE/OSS) as part of the overall Sandia Fixed Facility Physical Protection Program

  13. MITS Feed and Withdrawal Subsystem: operating procedures

    International Nuclear Information System (INIS)

    Brown, W.S.

    1980-01-01

    This procedure details the steps involved in filling two of the four MITS (Machine Interface Test System) Feed and Withdrawal subsystem main traps and the Sample/Inventory Make-up Pipette with uranium hexafluoride from the "AS RECEIVED" UF6 supply

  14. Presence in the IP multimedia subsystem

    NARCIS (Netherlands)

    Lin, L.; Liotta, A.

    2007-01-01

    With an ever increasing penetration of Internet Protocol (IP) technologies, the wireless industry is evolving the mobile core network towards an all-IP network. The IP Multimedia Subsystem (IMS) is a standardised Next Generation Network (NGN) architectural framework defined by the 3rd Generation Partnership Project (3GPP)

  15. Electronic Subsystems For Laser Communication System

    Science.gov (United States)

    Long, Catherine; Maruschak, John; Patschke, Robert; Powers, Michael

    1992-01-01

    Electronic subsystems of free-space laser communication system carry digital signals at 650 Mb/s over long distances. Applicable to general optical communications involving transfer of great quantities of data, and transmission and reception of video images of high definition.

  16. Interlibrary Loan Communications Subsystem: Users Manual.

    Science.gov (United States)

    OCLC Online Computer Library Center, Inc., Dublin, OH.

    The OCLC Interlibrary Loan (ILL) Communications Subsystem provides participating libraries with on-line control of ILL transactions. This user manual includes a glossary of terms related to the procedures in using the system. Sections describe computer entry, searching, loan request form, loan response form, ILL procedures, the special message…

  17. National Ignition Facility subsystem design requirements optics subsystems SSDR 1.6

    International Nuclear Information System (INIS)

    English, R.E.

    1996-01-01

    This Subsystems Design Requirement (SSDR) document specifies the functions to be performed and the subsystem design requirements for the major optical components. These optical components comprise those custom designed and fabricated for amplification and transport of the full-aperture NIF beam, and do not include the off-the-shelf components that may be part of other optical subsystems (i.e. alignment or diagnostic systems). This document also describes the optical component processing requirements and the QA/damage testing necessary to ensure that the optical components meet or exceed the requirements

  18. Pre-Processing and Modeling Tools for Bigdata

    Directory of Open Access Journals (Sweden)

    Hashem Hadi

    2016-09-01

    Full Text Available Modeling tools and operators help the user/developer to identify the processing field at the top of the sequence and to send to the computing module only the data related to the requested result; the remaining data are not relevant and would slow down the processing. The biggest challenge nowadays is to obtain high-quality processing results with reduced computing time and cost. To do so, the processing sequence must be reviewed and several modeling tools added. The existing processing models do not take this aspect into consideration and focus on achieving high calculation performance, which increases the computing time and costs. In this paper we provide a study of the main modeling tools for BigData and a new model based on pre-processing.

  19. Preprocessing in a Tiered Sensor Network for Habitat Monitoring

    Directory of Open Access Journals (Sweden)

    Hanbiao Wang

    2003-03-01

    Full Text Available We investigate task decomposition and collaboration in a two-tiered sensor network for habitat monitoring. The system recognizes and localizes a specified type of birdcall. The system has a few powerful macronodes in the first tier and many less powerful micronodes in the second tier. Each macronode combines data collected by multiple micronodes for target classification and localization. We describe two types of lightweight preprocessing which significantly reduce data transmission from micronodes to macronodes. Micronodes classify events according to their zero-crossing rates and discard irrelevant events. Data about events of interest are reduced and compressed before being transmitted to macronodes for target localization. Preliminary experiments illustrate the effectiveness of event filtering and data reduction at the micronodes.
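
    The micronode-side filter can be sketched in a few lines: compute the zero-crossing rate of an event frame and forward it only if the rate falls within a band typical of the target call. The band limits and the test signals below are invented for illustration.

```python
# Hedged sketch of the micronode event filter; band limits and test
# signals are invented for illustration.
import numpy as np

def zero_crossing_rate(frame):
    signs = np.sign(frame)
    return float(np.mean(signs[1:] != signs[:-1]))

def is_candidate(frame, lo=0.1, hi=0.3):
    return lo <= zero_crossing_rate(frame) <= hi   # drop out-of-band events

fs = 16000
tone = np.sin(2 * np.pi * 1800 * np.arange(1024) / fs)   # call-like tone
noise = np.random.default_rng(0).normal(size=1024)       # broadband noise
print(is_candidate(tone), is_candidate(noise))           # True False
```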

  20. Textural Analysis of Fatigue Crack Surfaces: Image Pre-processing

    Directory of Open Access Journals (Sweden)

    H. Lauschmann

    2000-01-01

    Full Text Available For fatigue crack history reconstitution, new methods of quantitative microfractography are being developed based on image processing and textural analysis. SEM magnifications between micro- and macrofractography are used. Two image pre-processing operations were suggested and proven suitable for preparing the crack surface images for analytical treatment: 1. Normalization is used to transform the image to a stationary form. Compared to the generally used equalization, it conserves the shape of the brightness distribution and preserves the character of the texture. 2. Binarization is used to transform the grayscale image to a system of thick fibres. An objective criterion for the threshold brightness value was found: the value resulting in the maximum number of objects. Both methods were successfully applied together with the subsequent textural analysis.
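
    A hedged sketch of both operations follows: normalization by removing a smoothed brightness trend, and binarization with the threshold chosen to maximize the object count. The window sizes and the synthetic stand-in image are illustrative assumptions.

```python
# Hedged sketch: trend-removal normalization, then a threshold chosen to
# maximize the number of objects; sizes and image are illustrative.
import numpy as np
from scipy import ndimage
from skimage import measure

img = np.random.rand(256, 256)                # stand-in for an SEM image

# 1. normalization: subtract a smoothed brightness trend, keep global mean
trend = ndimage.uniform_filter(img, size=64)
normalized = img - trend + img.mean()

# 2. binarization: scan thresholds, keep the one giving the most objects
best_t, best_n = None, -1
for t in np.linspace(normalized.min(), normalized.max(), 64):
    n = int(measure.label(normalized > t).max())   # number of objects
    if n > best_n:
        best_t, best_n = t, n

print(f"threshold {best_t:.3f} yields {best_n} objects")
```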

  1. Piecewise Polynomial Aggregation as Preprocessing for Data Numerical Modeling

    Science.gov (United States)

    Dobronets, B. S.; Popova, O. A.

    2018-05-01

    Data aggregation issues for numerical modeling are reviewed in the present study. The authors discuss data aggregation procedures as preprocessing for subsequent numerical modeling. To calculate the data aggregation, the authors propose using numerical probabilistic analysis (NPA). An important feature of this study is how the authors represent the aggregated data. The study shows that the offered approach to data aggregation can be interpreted as the frequency distribution of a variable. To study its properties, the density function is used. For this purpose, the authors propose using the piecewise polynomial models. A suitable example of such approach is the spline. The authors show that their approach to data aggregation allows reducing the level of data uncertainty and significantly increasing the efficiency of numerical calculations. To demonstrate the degree of the correspondence of the proposed methods to reality, the authors developed a theoretical framework and considered numerical examples devoted to time series aggregation.
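
    A minimal sketch of the aggregation idea: a long data column is replaced by a frequency distribution whose density is then modeled by a piecewise polynomial (here a cubic spline over histogram bin centers); the distribution and bin count are arbitrary choices for illustration.

```python
# Minimal sketch: aggregate raw data into a piecewise polynomial density.
# The gamma distribution and bin count are arbitrary illustration choices.
import numpy as np
from scipy.interpolate import CubicSpline

data = np.random.gamma(shape=2.0, scale=1.5, size=100_000)   # raw series
hist, edges = np.histogram(data, bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

density = CubicSpline(centers, hist)     # piecewise polynomial model
# downstream numerical modeling now works with the compact spline instead
# of the 100k raw points, e.g. evaluating the density on any grid:
x = np.linspace(centers[0], centers[-1], 7)
print(np.round(density(x), 4))
```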

  2. Pre-processing of input files for the AZTRAN code

    International Nuclear Information System (INIS)

    Vargas E, S.; Ibarra, G.

    2017-09-01

    The AZTRAN code began to be developed in the Nuclear Engineering Department of the Escuela Superior de Fisica y Matematicas (ESFM) of the Instituto Politecnico Nacional (IPN) with the purpose of numerically solving various models arising from the physics and engineering of nuclear reactors. The code is still under development and is part of the AZTLAN platform: Development of a Mexican platform for the analysis and design of nuclear reactors. Because generating an input file for the code is complex, a script based on the D language was developed to make its preparation easier. It is based on a new input file format with specific cards divided into two blocks, mandatory cards and optional cards, and it includes pre-processing of the input file to identify possible errors within it, as well as an image generator for the specific problem based on the Python interpreter. (Author)

  3. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, focus is put on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels as well as methods for data preprocessing are covered.

  4. Digital soil mapping: strategy for data pre-processing

    Directory of Open Access Journals (Sweden)

    Alexandre ten Caten

    2012-08-01

    Full Text Available The region of greatest variability on soil maps is along the edge of their polygons, causing disagreement among pedologists about the appropriate description of soil classes at these locations. The objective of this work was to propose a strategy for data pre-processing applied to digital soil mapping (DSM). Soil polygons on a training map were shrunk by 100 and 160 m. This strategy prevented the use of covariates located near the edge of the soil classes for the Decision Tree (DT) models. Three DT models derived from eight predictive covariates related to relief and organism factors, sampled on the original polygons of a soil map and on polygons shrunk by 100 and 160 m, were used to predict soil classes. The DT model derived from observations 160 m away from the edge of the polygons on the original map is less complex and has a better predictive performance.
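The shrinking step corresponds to a negative polygon buffer. A minimal sketch with shapely, assuming the map is in a projected coordinate system with units of metres:

```python
from shapely.geometry import Polygon

def shrink_polygon(poly: Polygon, distance_m: float) -> Polygon:
    # A negative buffer moves the boundary inward, so training covariates
    # are no longer sampled near the uncertain edges of the soil polygons.
    return poly.buffer(-distance_m)

soil_unit = Polygon([(0, 0), (1000, 0), (1000, 1000), (0, 1000)])
training_area = shrink_polygon(soil_unit, 160)  # the 160 m variant
print(training_area.area / soil_unit.area)      # fraction of the unit kept
```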

  5. Maintenance and operations cost model for DSN subsystems

    Science.gov (United States)

    Burt, R. W.; Lesh, J. R.

    1977-01-01

    A procedure is described which partitions the recurring costs of the Deep Space Network (DSN) over the individual DSN subsystems. The procedure results in a table showing the maintenance, operations, sustaining engineering and supportive costs for each subsystem.

  6. Partitioning a macroscopic system into independent subsystems

    Science.gov (United States)

    Delle Site, Luigi; Ciccotti, Giovanni; Hartmann, Carsten

    2017-08-01

    We discuss the problem of partitioning a macroscopic system into a collection of independent subsystems. The partitioning of a system into replica-like subsystems is nowadays a subject of major interest in several fields of theoretical and applied physics. The thermodynamic approach currently favoured by practitioners is based on a phenomenological definition of an interface energy associated with the partition, due to a lack of easily computable expressions for a microscopic (i.e. particle-based) interface energy. In this article, we outline a general approach to derive sharp and computable bounds for the interface free energy in terms of microscopic statistical quantities. We discuss potential applications in nanothermodynamics and outline possible future directions.

  7. RF subsystem design for microwave communication receivers

    Science.gov (United States)

    Bickford, W. J.; Brodsky, W. G.

    A system review of the RF subsystems of identification-friend-or-foe (IFF) transponders, troposcatter receivers and SATCOM receivers is presented. The quantity potential for S-band and X-band IFF transponders establishes a baseline requirement. From this, the feasibility of a common design for these and other receivers is evaluated. Goals are established for a GaAs MMIC (monolithic microwave integrated circuit) device and related local oscillator, preselector and self-test components.

  8. Optical Subsystems for Next Generation Access Networks

    DEFF Research Database (Denmark)

    Lazaro, J.A; Polo, V.; Schrenk, B.

    2011-01-01

    Recent optical technologies are providing higher flexibility to next generation access networks: on the one hand, enabling progressive FTTx and specifically FTTH deployment, progressively shortening the copper access network; on the other hand, also opening fixed-mobile convergence solutions ... in next generation PON architectures. An overview is provided of the optical subsystems developed for the implementation of the proposed NG-Access Networks....

  9. T Plant removal of PWR Chiller Subsystem

    International Nuclear Information System (INIS)

    Dana, C.M.

    1994-01-01

    The PWR Pool Chiller System is no longer required for support of the Shippingport Blanket Fuel Assemblies Storage. The Engineering Work Plan will provide the overall coordination of the documentation and physical changes to deactivate the unneeded subsystem. The physical removal of all energy sources for the Chiller equipment will be covered under a one-time work plan. The documentation changes will be covered using approved Engineering Change Notices and Procedure Change Authorizations as needed

  10. The charged particle accelerators subsystems modeling

    International Nuclear Information System (INIS)

    Averyanov, G P; Kobylyatskiy, A V

    2017-01-01

    A web-based resource for information support of engineering, science and education in electrophysics is presented, containing web-based tools for simulating the subsystems of charged particle accelerators. The motivation for developing a web environment for virtual electrophysical laboratories is formulated. Trends in the design of dynamic web environments for supporting scientific research and e-learning are analyzed, within the framework of the Open Education concept. (paper)

  11. Stepping-Motion Motor-Control Subsystem For Testing Bearings

    Science.gov (United States)

    Powers, Charles E.

    1992-01-01

    Control subsystem is a closed-loop angular-position-control system causing the motor and bearing under test to undergo any of a variety of continuous or stepping motions. Also used to test bearing-and-motor assemblies, motors, angular-position sensors including rotating shafts, and the like. Monitoring subsystem gathers data used to evaluate performance of bearing or other article under test. Monitoring subsystem described in article, "Monitoring Subsystem For Testing Bearings" (GSC-13432).

  12. Automated searching for quantum subsystem codes

    International Nuclear Information System (INIS)

    Crosswhite, Gregory M.; Bacon, Dave

    2011-01-01

    Quantum error correction allows for faulty quantum systems to behave in an effectively error-free manner. One important class of techniques for quantum error correction is the class of quantum subsystem codes, which are relevant both to active quantum error-correcting schemes as well as to the design of self-correcting quantum memories. Previous approaches for investigating these codes have focused on applying theoretical analysis to look for interesting codes and to investigate their properties. In this paper we present an alternative approach that uses computational analysis to accomplish the same goals. Specifically, we present an algorithm that computes the optimal quantum subsystem code that can be implemented given an arbitrary set of measurement operators that are tensor products of Pauli operators. We then demonstrate the utility of this algorithm by performing a systematic investigation of the quantum subsystem codes that exist in the setting where the interactions are limited to two-body interactions between neighbors on lattices derived from the convex uniform tilings of the plane.

  13. Nontypeable pneumococci can be divided into multiple cps types, including one type expressing the novel gene pspK.

    Science.gov (United States)

    Park, In Ho; Kim, Kyung-Hyo; Andrade, Ana Lucia; Briles, David E; McDaniel, Larry S; Nahm, Moon H

    2012-01-01

    Although virulence of Streptococcus pneumoniae is associated with its capsule, some pathogenic S. pneumoniae isolates lack capsules and are serologically nontypeable (NT). We obtained 64 isolates that were identified as NT "pneumococci" (i.e., bacteria satisfying the conventional definition but without the multilocus sequence typing [MLST]-based definition of S. pneumoniae) by the traditional criteria. All 64 were optochin sensitive and had lytA, and 63 had ply. Twelve isolates had cpsA, suggesting the presence of a conventional but defective capsular polysaccharide synthesis (cps) locus. The 52 cpsA-negative isolates could be divided into three null capsule clades (NCC) based on aliC (aliB-like ORF1), aliD (aliB-like ORF2), and our newly discovered gene, pspK, in their cps loci. pspK encodes a protein with a long alpha-helical region containing an LPxTG motif and a YPT motif known to bind human pIgR. There were nine isolates in NCC1 (pspK(+) but negative for aliC and aliD), 32 isolates in NCC2 (aliC(+) aliD(+) but negative for pspK), and 11 in NCC3 (aliD(+) but negative for aliC and pspK). Among 52 cpsA-negative isolates, 41 were identified as S. pneumoniae by MLST analysis. All NCC1 and most NCC2 isolates were S. pneumoniae, whereas all nine NCC3 and two NCC2 isolates were not S. pneumoniae. Several NCC1 and NCC2 isolates from multiple individuals had identical MLST and cps regions, showing that unencapsulated S. pneumoniae can be infectious among humans. Furthermore, NCC1 and NCC2 S. pneumoniae isolates could colonize mice as well as encapsulated S. pneumoniae, although S. pneumoniae with an artificially disrupted cps locus did not. Moreover, an NCC1 isolate with pspK deletion did not colonize mice, suggesting that pspK is critical for colonization. Thus, PspK may provide pneumococci a means of surviving in the nasopharynx without capsule. IMPORTANCE The presence of a capsule is critical for many pathogenic bacteria, including pneumococci. Reflecting the

  14. Space reactor system and subsystem investigations: assessment of technology issues for the reactor and shield subsystem. SP-100 Program

    International Nuclear Information System (INIS)

    Atkins, D.F.; Lillie, A.F.

    1983-01-01

    As part of Rockwell's effort on the SP-100 Program, a preliminary assessment has been completed of current nuclear technology as it relates to candidate reactor/shield subsystems for the SP-100 Program. The scope of the assessment was confined to the nuclear package (the reactor and shield subsystems). The nine generic reactor subsystems presented in Rockwell's Subsystem Technology Assessment Report, ESG-DOE-13398, were addressed in the assessment

  15. Space-reactor electric systems: subsystem technology assessment

    International Nuclear Information System (INIS)

    Anderson, R.V.; Bost, D.; Determan, W.R.

    1983-01-01

    This report documents the subsystem technology assessment. For the purpose of this report, five subsystems were defined for a space reactor electric system, and the report is organized around these subsystems: reactor; shielding; primary heat transport; power conversion and processing; and heat rejection. The purpose of the assessment was to determine the current technology status and the technology potentials for different types of the five subsystems. The cost and schedule needed to develop these potentials were estimated, and sets of development-compatible subsystems were identified

  16. A New Indicator for Optimal Preprocessing and Wavelengths Selection of Near-Infrared Spectra

    NARCIS (Netherlands)

    Skibsted, E.; Boelens, H.F.M.; Westerhuis, J.A.; Witte, D.T.; Smilde, A.K.

    2004-01-01

    Preprocessing of near-infrared spectra to remove unwanted, i.e., non-related spectral variation and selection of informative wavelengths is considered to be a crucial step prior to the construction of a quantitative calibration model. The standard methodology when comparing various preprocessing

  17. New indicator for optimal preprocessing and wavelength selection of near-infrared spectra

    NARCIS (Netherlands)

    Skibsted, E. T. S.; Boelens, H. F. M.; Westerhuis, J. A.; Witte, D. T.; Smilde, A. K.

    2004-01-01

    Preprocessing of near-infrared spectra to remove unwanted, i.e., non-related spectral variation and selection of informative wavelengths is considered to be a crucial step prior to the construction of a quantitative calibration model. The standard methodology when comparing various preprocessing

  18. Ensemble preprocessing of near-infrared (NIR) spectra for multivariate calibration

    International Nuclear Information System (INIS)

    Xu Lu; Zhou Yanping; Tang Lijuan; Wu Hailong; Jiang Jianhui; Shen Guoli; Yu Ruqin

    2008-01-01

    Preprocessing of raw near-infrared (NIR) spectral data is indispensable in multivariate calibration when the measured spectra are subject to significant noises, baselines and other undesirable factors. However, due to the lack of sufficient prior information and an incomplete knowledge of the raw data, NIR spectra preprocessing in multivariate calibration is still trial and error. How to select a proper method depends largely on both the nature of the data and the expertise and experience of the practitioners. This might limit the applications of multivariate calibration in many fields, where researchers are not very familiar with the characteristics of many preprocessing methods unique to chemometrics and have difficulty selecting the most suitable methods. Another problem is that many preprocessing methods, when used alone, might degrade the data in certain aspects or lose some useful information while improving certain qualities of the data. In order to tackle these problems, this paper proposes a new concept of data preprocessing, the ensemble preprocessing method, where partial least squares (PLS) models built on differently preprocessed data are combined by Monte Carlo cross validation (MCCV) stacked regression. Little or no prior information about the data and expertise are required. Moreover, fusion of complementary information obtained by different preprocessing methods often leads to a more stable and accurate calibration model. The investigation of two real data sets has demonstrated the advantages of the proposed method
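A minimal sketch of the idea: fit PLS models on differently preprocessed copies of the same spectra and weight them by Monte Carlo cross-validation error. Inverse-error weighting stands in for the paper's stacked-regression step, and the component count and split settings are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import ShuffleSplit

def mccv_ensemble_weights(variants, y, n_components=5, n_splits=50):
    """variants: list of numpy arrays, each a differently preprocessed
    version of the same spectra (rows = samples). Returns one weight per
    preprocessing variant, derived from MCCV prediction error."""
    errors = np.zeros(len(variants))
    splitter = ShuffleSplit(n_splits=n_splits, test_size=0.3, random_state=0)
    for k, X in enumerate(variants):
        for train, test in splitter.split(X):
            pls = PLSRegression(n_components=n_components).fit(X[train], y[train])
            errors[k] += np.mean((pls.predict(X[test]).ravel() - y[test]) ** 2)
    inv = 1.0 / errors
    return inv / inv.sum()  # variants that validate better weigh more
```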

  19. National Ignition Facility subsystem design requirements target area auxiliary subsystem SSDR 1.8.6

    International Nuclear Information System (INIS)

    Reitz, T.

    1996-01-01

    This Subsystem Design Requirement (SSDR) establishes the performance, design, development, and test requirements for the Target Area Auxiliary Subsystems (WBS 1.8.6), which are part of the NIF Target Experimental System (WBS 1.8). This document responds directly to the requirements detailed in the NIF Target Experimental System SDR 003 document. Key elements of the Target Area Auxiliary Subsystems include: WBS 1.8.6.1 Local Utility Services; WBS 1.8.6.2 Cable Trays; WBS 1.8.6.3 Personnel, Safety, and Occupational Access; WBS 1.8.6.4 Assembly, Installation, and Maintenance Equipment; WBS 1.8.6.4.1 Target Chamber Service System; WBS 1.8.6.4.2 Target Bay Service Systems

  20. The JPL telerobotic Manipulator Control and Mechanization (MCM) subsystem

    Science.gov (United States)

    Hayati, Samad; Lee, Thomas S.; Tso, Kam; Backes, Paul; Kan, Edwin; Lloyd, J.

    1989-01-01

    The Manipulator Control and Mechanization (MCM) subsystem of the telerobot system provides the real-time control of the robot manipulators in autonomous and teleoperated modes and real-time input/output for a variety of sensors and actuators. Substantial hardware and software are included in this subsystem, which interfaces with the other subsystems in the hierarchy of the telerobot system. The other subsystems are: run-time control, task planning and reasoning, sensing and perception, and the operator control subsystem. The architecture of the MCM subsystem, its capabilities, and details of various hardware and software elements are described. Important improvements in the MCM subsystem over the first version are: dual-arm coordinated trajectory generation and control, addition of integrated teleoperation, shared control capability, replacement of the ultimate controllers with motor controllers, and a substantial increase in real-time processing capability.

  1. Software for Preprocessing Data from Rocket-Engine Tests

    Science.gov (United States)

    Cheng, Chiu-Fu

    2004-01-01

    Three computer programs have been written to preprocess digitized outputs of sensors during rocket-engine tests at Stennis Space Center (SSC). The programs apply exclusively to the SSC E test-stand complex and utilize the SSC file format. The programs are the following: Engineering Units Generator (EUGEN) converts sensor-output-measurement data to engineering units. The inputs to EUGEN are raw binary test-data files, which include the voltage data, a list identifying the data channels, and time codes. EUGEN effects conversion by use of a file that contains calibration coefficients for each channel. QUICKLOOK enables immediate viewing of a few selected channels of data, in contradistinction to viewing only after post-test processing (which can take 30 minutes to several hours depending on the number of channels and other test parameters) of data from all channels. QUICKLOOK converts the selected data into a form in which they can be plotted in engineering units by use of Winplot (a free graphing program written by Rick Paris). EUPLOT provides a quick means for looking at data files generated by EUGEN without the necessity of relying on the PV-WAVE based plotting software.

  2. Zseq: An Approach for Preprocessing Next-Generation Sequencing Data.

    Science.gov (United States)

    Alkhateeb, Abedalrhman; Rueda, Luis

    2017-08-01

    Next-generation sequencing technology generates a huge number of reads (short sequences), which contain a vast amount of genomic data. The sequencing process, however, comes with artifacts. Preprocessing of sequences is mandatory for further downstream analysis. We present Zseq, a linear method that identifies the most informative genomic sequences and reduces the number of biased sequences, sequence duplications, and ambiguous nucleotides. Zseq finds the complexity of the sequences by counting the number of unique k-mers in each sequence as its corresponding score and also takes into account other factors, such as ambiguous nucleotides or high GC-content percentage in k-mers. Based on a z-score threshold, Zseq sweeps through the sequences again and filters those with a z-score less than the user-defined threshold. The Zseq algorithm is able to provide a better mapping rate; it reduces the number of ambiguous bases significantly in comparison with other methods. Evaluation of the filtered reads has been conducted by aligning the reads and assembling the transcripts using the reference genome as well as de novo assembly. The assembled transcripts show a better discriminative ability to separate cancer and normal samples in comparison with another state-of-the-art method. Moreover, de novo assembled transcripts from the reads filtered by Zseq have longer genomic sequences than other tested methods. A method for estimating the cutoff threshold, based on labeling rules, is also introduced, with optimistic results.
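The complexity score is straightforward to sketch. The filter below counts unique k-mers per read and drops reads whose z-score falls under a threshold; the GC-content and ambiguous-base penalties described in the abstract are omitted, and k and the threshold are illustrative choices.

```python
import numpy as np

def kmer_complexity(read: str, k: int = 8) -> int:
    # Number of distinct k-mers: low counts flag repetitive, biased reads.
    return len({read[i:i + k] for i in range(len(read) - k + 1)})

def zseq_filter(reads, k=8, z_threshold=-1.0):
    # Score every read, standardize the scores, and keep reads whose
    # z-score meets the user-defined cutoff.
    scores = np.array([kmer_complexity(r, k) for r in reads], dtype=float)
    z = (scores - scores.mean()) / scores.std()
    return [r for r, zi in zip(reads, z) if zi >= z_threshold]
```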

  3. Road Sign Recognition with Fuzzy Adaptive Pre-Processing Models

    Science.gov (United States)

    Lin, Chien-Chuan; Wang, Ming-Shi

    2012-01-01

    A road sign recognition system based on adaptive image pre-processing models using two fuzzy inference schemes has been proposed. The first fuzzy inference scheme checks the changes of the light illumination and rich red color of a frame image by the checking areas. The other checks the variance of the vehicle's speed and the angle of the steering wheel to select an adaptive size and position of the detection area. The Adaboost classifier was employed to detect the road sign candidates from an image and the support vector machine technique was employed to recognize the content of the road sign candidates. The prohibitory and warning road traffic signs are the processing targets in this research. The detection rate in the detection phase is 97.42%. In the recognition phase, the recognition rate is 93.04%. The total accuracy rate of the system is 92.47%. For video sequences, the best accuracy rate is 90.54%, and the average accuracy rate is 80.17%. The average computing time is 51.86 milliseconds per frame. The proposed system can not only overcome the problems of low illumination and rich red color around road signs but also offer high detection rates and high computing performance. PMID:22778650

  4. Neural Online Filtering Based on Preprocessed Calorimeter Data

    CERN Document Server

    Torres, R C; The ATLAS collaboration; Simas Filho, E F; De Seixas, J M

    2009-01-01

    Among LHC detectors, ATLAS aims at coping with such a high event rate by designing a three-level online triggering system. The first-level trigger output rate will be ~75 kHz. This level will mark the regions where relevant events were found. The second level will validate the LVL1 decision by looking only at the approved data using full granularity. At the level-two output, the event rate will be reduced to ~2 kHz. Finally, the third level will look at full event information, and events are expected to be approved at a rate of ~200 Hz and stored in persistent media for further offline analysis. Many interesting events decay into electrons, which have to be identified from the huge background noise (jets). This work proposes a high-efficiency LVL2 electron/jet discrimination system based on neural networks fed from preprocessed calorimeter information. The feature extraction part of the proposed system performs a ring structure of data description. A set of concentric rings centered at the highest-energy cell is generated ...

  5. Data preprocessing methods for robust Fourier ptychographic microscopy

    Science.gov (United States)

    Zhang, Yan; Pan, An; Lei, Ming; Yao, Baoli

    2017-12-01

    Fourier ptychographic microscopy (FPM) is a recently developed computational imaging technique that achieves gigapixel images with both high resolution and large field-of-view. In the current FPM experimental setup, the dark-field images with high-angle illuminations are easily overwhelmed by stray lights and background noises due to the low signal-to-noise ratio, thus significantly degrading the achievable resolution of the FPM approach. We provide an overall and systematic data preprocessing scheme to enhance the FPM's performance, which involves sampling analysis, underexposed/overexposed treatments, background noises suppression, and stray lights elimination. It is demonstrated experimentally with both US Air Force (USAF) 1951 resolution target and biological samples that the benefit of the noise removal by these methods far outweighs the defect of the accompanying signal loss, as part of the lost signals can be compensated by the improved consistencies among the captured raw images. In addition, the reported nonparametric scheme could be further cooperated with the existing state-of-the-art algorithms with a great flexibility, facilitating a stronger noise-robust capability of the FPM approach in various applications.
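The abstract does not spell out its background-suppression step; one simple stand-in, sketched under the assumption that the image border is signal-free, is to estimate the stray-light level there and subtract it from the dark-field frame.

```python
import numpy as np

def suppress_background(img: np.ndarray, border: int = 20) -> np.ndarray:
    # Estimate the background/stray-light level from the image border
    # (assumed signal-free) and subtract it, clipping negative residuals.
    # 'border' must be smaller than half the smallest image dimension.
    mask = np.ones(img.shape, dtype=bool)
    mask[border:-border, border:-border] = False
    return np.clip(img.astype(float) - np.median(img[mask]), 0, None)
```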

  6. Arabic text preprocessing for the natural language processing applications

    International Nuclear Information System (INIS)

    Awajan, A.

    2007-01-01

    A new approach for processing vowelized and unvowelized Arabic texts in order to prepare them for Natural Language Processing (NLP) purposes is described. The developed approach is rule-based and made up of four phases: text tokenization, word light stemming, word morphological analysis and text annotation. The first phase preprocesses the input text in order to isolate the words and represent them in a formal way. The second phase applies a light stemmer in order to extract the stem of each word by eliminating the prefixes and suffixes. The third phase is a rule-based morphological analyzer that determines the root and the morphological pattern for each extracted stem. The last phase produces an annotated text where each word is tagged with its morphological attributes. The preprocessor presented in this paper is capable of dealing with vowelized and unvowelized words, and provides the input words along with relevant linguistic information needed by different applications. It is designed to be used with different NLP applications such as machine translation, text summarization, text correction, information retrieval and automatic vowelization of Arabic text. (author)
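The first two phases can be sketched compactly. The prefix and suffix lists below are a small illustrative subset, not the paper's full rule set, and the three-letter minimum stem length is an assumption.

```python
PREFIXES = ("ال", "و", "ف", "ب", "ك", "ل")   # illustrative subset only
SUFFIXES = ("ها", "ات", "ون", "ين", "ان", "ة", "ه")

def light_stem(word: str) -> str:
    # Phase 2: strip at most one prefix and one suffix, keeping a stem
    # of at least three letters (assumed guard, not from the paper).
    for p in PREFIXES:
        if word.startswith(p) and len(word) - len(p) >= 3:
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= 3:
            word = word[:-len(s)]
            break
    return word

def preprocess(text: str):
    # Phase 1: tokenization; phases 3 and 4 (morphological analysis and
    # annotation) are omitted from this sketch.
    return [light_stem(token) for token in text.split()]
```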

  7. ASAP: an environment for automated preprocessing of sequencing data

    Directory of Open Access Journals (Sweden)

    Torstenson Eric S

    2013-01-01

    Full Text Available Abstract Background Next-generation sequencing (NGS) has yielded an unprecedented amount of data for genetics research. It is a daunting task to process the data from raw sequence reads to variant calls and manually processing this data can significantly delay downstream analysis and increase the possibility for human error. The research community has produced tools to properly prepare sequence data for analysis and established guidelines on how to apply those tools to achieve the best results, however, existing pipeline programs to automate the process through its entirety are either inaccessible to investigators, or web-based and require a certain amount of administrative expertise to set up. Findings Advanced Sequence Automated Pipeline (ASAP) was developed to provide a framework for automating the translation of sequencing data into annotated variant calls with the goal of minimizing user involvement without the need for dedicated hardware or administrative rights. ASAP works both on computer clusters and on standalone machines with minimal human involvement and maintains high data integrity, while allowing complete control over the configuration of its component programs. It offers an easy-to-use interface for submitting and tracking jobs as well as resuming failed jobs. It also provides tools for quality checking and for dividing jobs into pieces for maximum throughput. Conclusions ASAP provides an environment for building an automated pipeline for NGS data preprocessing. This environment is flexible for use and future development. It is freely available at http://biostat.mc.vanderbilt.edu/ASAP.

  8. ASAP: an environment for automated preprocessing of sequencing data.

    Science.gov (United States)

    Torstenson, Eric S; Li, Bingshan; Li, Chun

    2013-01-04

    Next-generation sequencing (NGS) has yielded an unprecedented amount of data for genetics research. It is a daunting task to process the data from raw sequence reads to variant calls and manually processing this data can significantly delay downstream analysis and increase the possibility for human error. The research community has produced tools to properly prepare sequence data for analysis and established guidelines on how to apply those tools to achieve the best results, however, existing pipeline programs to automate the process through its entirety are either inaccessible to investigators, or web-based and require a certain amount of administrative expertise to set up. Advanced Sequence Automated Pipeline (ASAP) was developed to provide a framework for automating the translation of sequencing data into annotated variant calls with the goal of minimizing user involvement without the need for dedicated hardware or administrative rights. ASAP works both on computer clusters and on standalone machines with minimal human involvement and maintains high data integrity, while allowing complete control over the configuration of its component programs. It offers an easy-to-use interface for submitting and tracking jobs as well as resuming failed jobs. It also provides tools for quality checking and for dividing jobs into pieces for maximum throughput. ASAP provides an environment for building an automated pipeline for NGS data preprocessing. This environment is flexible for use and future development. It is freely available at http://biostat.mc.vanderbilt.edu/ASAP.

  9. ASAP: an environment for automated preprocessing of sequencing data

    Science.gov (United States)

    2013-01-01

    Background Next-generation sequencing (NGS) has yielded an unprecedented amount of data for genetics research. It is a daunting task to process the data from raw sequence reads to variant calls and manually processing this data can significantly delay downstream analysis and increase the possibility for human error. The research community has produced tools to properly prepare sequence data for analysis and established guidelines on how to apply those tools to achieve the best results, however, existing pipeline programs to automate the process through its entirety are either inaccessible to investigators, or web-based and require a certain amount of administrative expertise to set up. Findings Advanced Sequence Automated Pipeline (ASAP) was developed to provide a framework for automating the translation of sequencing data into annotated variant calls with the goal of minimizing user involvement without the need for dedicated hardware or administrative rights. ASAP works both on computer clusters and on standalone machines with minimal human involvement and maintains high data integrity, while allowing complete control over the configuration of its component programs. It offers an easy-to-use interface for submitting and tracking jobs as well as resuming failed jobs. It also provides tools for quality checking and for dividing jobs into pieces for maximum throughput. Conclusions ASAP provides an environment for building an automated pipeline for NGS data preprocessing. This environment is flexible for use and future development. It is freely available at http://biostat.mc.vanderbilt.edu/ASAP. PMID:23289815

  10. Power Subsystem Approach for the Europa Mission

    Directory of Open Access Journals (Sweden)

    Ulloa-Severino Antonio

    2017-01-01

    Full Text Available NASA is planning to launch a spacecraft on a mission to the Jovian moon Europa, in order to conduct a detailed reconnaissance and investigation of its habitability. The spacecraft would orbit Jupiter and perform a detailed science investigation of Europa, utilizing a number of science instruments including an ice-penetrating radar to determine the icy shell thickness and presence of subsurface oceans. The spacecraft would be exposed to harsh radiation and extreme temperature environments. To meet mission objectives, the spacecraft power subsystem is being architected and designed to operate efficiently, and with a high degree of reliability.

  11. Building the IOOS data management subsystem

    Science.gov (United States)

    de La Beaujardière, J.; Mendelssohn, R.; Ortiz, C.; Signell, R.

    2010-01-01

    We discuss progress to date and plans for the Integrated Ocean Observing System (IOOS®) Data Management and Communications (DMAC) subsystem. We begin by presenting a conceptual architecture of IOOS DMAC. We describe work done as part of a 3-year pilot project known as the Data Integration Framework and the subsequent assessment of lessons learned. We present work that has been accomplished as part of the initial version of the IOOS Data Catalog. Finally, we discuss near-term plans for augmenting IOOS DMAC capabilities.

  12. Structure of the Galaxy and its subsystems

    International Nuclear Information System (INIS)

    Ruprecht, J.

    1979-01-01

    Current knowledge is summed up of the structure of our galaxy, consisting of more than 100 thousand million stars with an overall mass of 10^44 g, and of interstellar dust and gas. The galaxy comprises several subsystems, the oldest of which are of a spherical shape while the younger ones are more-or-less oblate rotational ellipsoids. On the basis of visual and radio observations, the galaxy is considered to have a spiral structure with many arms, similar to other galaxies. The structure of the galaxy nucleus has not yet been fully explained. (Ha)

  13. Optical fiber telecommunications components and subsystems

    CERN Document Server

    Kaminow, Ivan; Willner, Alan E

    2013-01-01

    Optical Fiber Telecommunications VI (A&B) is the sixth in a series that has chronicled the progress in the R&D of lightwave communications since the early 1970s. Written by active authorities from academia and industry, this edition brings a fresh look to many essential topics, including devices, subsystems, systems and networks. A central theme is the enabling of high-bandwidth communications in a cost-effective manner for the development of customer applications. These volumes are an ideal reference for R&D engineers and managers, optical systems implementers, university researchers and students

  14. The Sentinel 4 focal plane subsystem

    Science.gov (United States)

    Hohn, Rüdiger; Skegg, Michael P.; Hermsen, Markus; Hinger, Jürgen; Williges, Christian; Reulke, Ralf

    2017-09-01

    The Sentinel 4 instrument is an imaging spectrometer, developed by Airbus under ESA contract in the frame of the joint European Union (EU)/ESA COPERNICUS program with the objective of monitoring trace gas concentrations. Sentinel 4 will provide accurate measurements of key atmospheric constituents such as ozone, nitrogen dioxide, sulfur dioxide, formaldehyde, as well as aerosol and cloud properties. Sentinel 4 is unique in being the first geostationary UVN mission. The SENTINEL 4 space segment will be integrated on EUMETSAT's Meteosat Third Generation Sounder satellite (MTG-S). Sentinel 4 will provide coverage of Europe and adjacent regions. The Sentinel 4 instrument comprises as a major element two Focal Plane Subsystems (FPS) covering the wavelength ranges 305 nm to 500 nm (UVVIS) and 750 nm to 775 nm (NIR) respectively. The paper describes the Focal Plane Subsystems, comprising the detectors, the optical bench and the control electronics. Further the design and development approach will be presented as well as first measurement results of FPS Qualification Model.

  15. UGV: security analysis of subsystem control network

    Science.gov (United States)

    Abbott-McCune, Sam; Kobezak, Philip; Tront, Joseph; Marchany, Randy; Wicks, Al

    2013-05-01

    Unmanned Ground Vehicles (UGVs) are becoming prolific in the heterogeneous superset of robotic platforms. The sensors which provide odometry, localization, perception, and vehicle diagnostics are fused to give the robotic platform a sense of the environment it is traversing. The automotive industry CAN bus has dominated the industry due to its fault tolerance and a message structure allowing high-priority messages to reach the desired node in a real-time environment. UGVs are being researched and produced at an accelerated rate to perform arduous, repetitive, and dangerous missions that are associated with a military action in a protracted conflict. The technology and applications of the research will inevitably be turned into dual-use platforms to aid civil agencies in the performance of their various operations. Our motivation is security of the holistic system; however, as subsystems are outsourced in the design, the overall security of the system may be diminished. We will focus on the CAN bus topology and the vulnerabilities introduced in UGVs and recognizable security vulnerabilities that are inherent in the communications architecture. We will show how data can be extracted from an add-on CAN bus that can be customized to monitor subsystems. The information can be altered or spoofed to force the vehicle to exhibit unwanted actions or render the UGV unusable for the designed mission. The military relies heavily on technology to maintain information dominance, and the security of the information introduced onto the network by UGVs must be safeguarded from vulnerabilities that can be exploited.

  16. Predicting Child Protective Services (CPS) Involvement among Low-Income U.S. Families with Young Children Receiving Nutritional Assistance.

    Science.gov (United States)

    Slack, Kristen S; Font, Sarah; Maguire-Jack, Kathryn; Berger, Lawrence M

    2017-10-11

    This exploratory study examines combinations of income-tested welfare benefits and earnings, as they relate to the likelihood of child maltreatment investigations among low-income families with young children participating in a nutritional assistance program in one U.S. state (Wisconsin). Using a sample of 1065 parents who received the Special Supplemental Nutrition Assistance Program for Women, Infants, and Children (WIC) benefits in late 2010 and early 2011, we find that relying on either work in the absence of other means-tested welfare benefits, or a combination of work and welfare benefits, reduces the likelihood of CPS involvement compared to parents who rely on welfare benefits in the absence of work. Additionally, we find that housing instability increases the risk of CPS involvement in this population. The findings from this investigation may be useful to programs serving low-income families with young children, as they attempt to identify safety net resources for their clientele.

  17. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery.

    Science.gov (United States)

    Qi, Baogui; Shi, Hao; Zhuang, Yin; Chen, He; Chen, Liang

    2018-04-25

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited.

  18. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery

    Science.gov (United States)

    Qi, Baogui; Zhuang, Yin; Chen, He; Chen, Liang

    2018-01-01

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited. PMID:29693585

  19. Switched Flip-Flop based Preprocessing Circuit for ISFETs

    Directory of Open Access Journals (Sweden)

    Martin Kollár

    2005-03-01

    Full Text Available In this paper, a preprocessing circuit for ISFETs (ion-sensitive field-effect transistors) to measure hydrogen-ion concentration in electrolyte is presented. A modified flip-flop is the main part of the circuit. The modification consists in replacing the standard transistors by ISFETs and periodically switching the supply voltage on and off. The concentration of hydrogen ions to be measured breaks the value symmetry of the flip-flop, which means that when the supply voltage is switched on, the flip-flop goes to one of two stable states, 'one' or 'zero'. The recovery of the value symmetry can be achieved by changing a balancing voltage, which is incorporated into the flip-flop, to bring the flip-flop to a 50% position (probability of 'one' equal to probability of 'zero'). Thus, the balancing voltage reflects the measured concentration of hydrogen ions. Its magnitude is set automatically by a feedback circuit whose input is connected to the flip-flop output. The preprocessing circuit as a whole is the well-known δ modulator, in which the switched flip-flop serves as a comparator and a sampling circuit. The advantages of this approach in comparison to those of standard approaches are discussed. Finally, theoretical results are verified by simulations with TSPICE and good agreement is reported.
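Behaviourally, the feedback loop is a delta modulator: each supply cycle yields one bit, and the balancing voltage is nudged toward the 50% point. A sketch of that loop follows, where read_flipflop is a hypothetical stand-in for one switch-on/readout cycle of the physical circuit and the step size is an assumption.

```python
def track_balanced_voltage(read_flipflop, steps=1000, step_mv=0.5):
    # read_flipflop(v_bal) -> 0 or 1: outcome of one supply on/off cycle
    # with balancing voltage v_bal applied (hypothetical interface).
    v_bal, history = 0.0, []
    for _ in range(steps):
        bit = read_flipflop(v_bal)
        v_bal += step_mv if bit else -step_mv  # delta-modulator update
        history.append(v_bal)
    # After settling, the average balancing voltage reflects the measured
    # hydrogen-ion concentration.
    return sum(history[steps // 2:]) / (steps - steps // 2)
```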

  20. Automated Pre-processing for NMR Assignments with Reduced Tedium

    Energy Technology Data Exchange (ETDEWEB)

    2004-05-11

    An important rate-limiting step in the resonance assignment process is the accurate identification of resonance peaks in NMR spectra. NMR spectra are noisy. Hence, automatic peak-picking programs must navigate between the Scylla of reliable but incomplete picking and the Charybdis of noisy but complete picking. Each of these extremes complicates the assignment process: incomplete peak-picking results in the loss of essential connectivities, while noisy picking conceals the true connectivities under a combinatorial explosion of false positives. Intermediate processing can simplify the assignment process by preferentially removing false peaks from noisy peak lists. This is accomplished by requiring consensus between multiple NMR experiments, exploiting a priori information about NMR spectra, and drawing on empirical statistical distributions of chemical shifts extracted from the BioMagResBank. Experienced NMR practitioners currently apply many of these techniques "by hand", which is tedious and may appear arbitrary to the novice. To increase efficiency, we have created a systematic and automated approach to this process, known as APART. Automated pre-processing has three main advantages: reduced tedium, standardization, and pedagogy. In the hands of experienced spectroscopists, the main advantage is reduced tedium (a rapid increase in the ratio of true peaks to false peaks with minimal effort). When a project is passed from hand to hand, the main advantage is standardization. APART automatically documents the peak filtering process by archiving its original recommendations, the accompanying justifications, and whether a user accepted or overrode a given filtering recommendation. In the hands of a novice, this tool can reduce the stumbling block of learning to differentiate between real peaks and noise by providing real-time examples of how such decisions are made.
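The consensus idea is easy to sketch for one-dimensional peak positions. The tolerance and minimum-vote values are simplified assumptions; APART additionally exploits a priori spectral knowledge and BioMagResBank shift statistics, which are omitted here.

```python
def consensus_peaks(peak_lists, tol=0.02, min_votes=2):
    """peak_lists: one list of peak positions (ppm) per NMR experiment.
    Keep a peak from the first list if it is matched, within tol, in at
    least min_votes lists (its own list always contributes one vote)."""
    kept = []
    for peak in peak_lists[0]:
        votes = sum(
            any(abs(peak - other) <= tol for other in peaks)
            for peaks in peak_lists
        )
        if votes >= min_votes:
            kept.append(peak)
    return kept
```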

  1. Exploring the integration of the human as a flexibility factor in CPS enabled manufacturing environments: Methodology and results

    OpenAIRE

    Fantini, P.; Tavola, G.; Taisch, M.; Barbosa, José; Leitão, Paulo; Liu, Y.; Sayed, M.S.; Lohse, N.

    2016-01-01

    Cyber Physical Systems (CPS) are expected to shape the evolution of production towards the fourth industrial revolution named Industry 4.0. The increasing integration of manufacturing processes and the strengthening of the autonomous capabilities of manufacturing systems make investigating the role of humans a primary research objective in view of emerging social and demographic megatrends. Understanding how the employees can be better integrated to enable increased flexibility in manufacturi...

  2. A Personalized QoS Prediction Approach for CPS Service Recommendation Based on Reputation and Location-Aware Collaborative Filtering.

    Science.gov (United States)

    Kuang, Li; Yu, Long; Huang, Lan; Wang, Yin; Ma, Pengju; Li, Chuanbin; Zhu, Yujia

    2018-05-14

    With the rapid development of cyber-physical systems (CPS), building cyber-physical systems with high quality of service (QoS) has become an urgent requirement in both academia and industry. During the procedure of building cyber-physical systems, it has been found that a large number of functionally equivalent services exist, so it becomes an urgent task to recommend suitable services from the large number of services available in CPS. However, since it is time-consuming, and even impractical, for a single user to invoke all of the services in CPS to experience their QoS, a robust QoS prediction method is needed to predict unknown QoS values. A commonly used method in QoS prediction is collaborative filtering; however, it is hard to deal with the data sparsity and cold start problems, and meanwhile most of the existing methods ignore the issue of data credibility. Hence, in order to solve both of these challenging problems, in this paper we design a framework of QoS prediction for CPS services, and propose a personalized QoS prediction approach based on reputation and location-aware collaborative filtering. Our approach first calculates the reputation of users by using the Dirichlet probability distribution, so as to identify untrusted users and process their unreliable data, and then it mines the geographic neighborhood at three levels to improve the similarity calculation of users and services. Finally, the data from geographical neighbors of users and services are fused to predict the unknown QoS values. The experiments using real datasets show that our proposed approach outperforms other existing methods in terms of accuracy, efficiency, and robustness.
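The reputation step can be sketched with a Dirichlet posterior over per-user feedback counts (reducing to a Beta posterior for two categories); the prior strength and the reliable/unreliable labelling are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def dirichlet_reputation(feedback_counts, prior=1.0):
    """feedback_counts: counts per category, e.g. [reliable, unreliable].
    Returns the posterior mean probability of the first category, which
    can serve as the user's reputation score."""
    counts = np.asarray(feedback_counts, dtype=float)
    posterior = counts + prior  # Dirichlet prior plus multinomial evidence
    return posterior[0] / posterior.sum()

# A user whose QoS reports agreed with consensus 8 times out of 10:
print(dirichlet_reputation([8, 2]))  # -> 0.75
```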

  3. Double Shell Tank (DST) Process Waste Sampling Subsystem Definition Report

    International Nuclear Information System (INIS)

    RASMUSSEN, J.H.

    2000-01-01

    This report defines the Double-Shell Tank (DST) Process Waste Sampling Subsystem (PWSS). This subsystem definition report fully describes and identifies the system boundaries of the PWSS. This definition provides a basis for developing functional, performance, and test requirements (i.e., subsystem specification), as necessary, for the PWSS. The resultant PWSS specification will include the sampling requirements to support the transfer of waste from the DSTs to the Privatization Contractor during Phase 1 of Waste Feed Delivery

  4. Conditional density matrix: systems and subsystems in quantum mechanics

    International Nuclear Information System (INIS)

    Belokurov, V.V.; Khrustalev, O.A.; Sadovnichij, V.A.; Timofeevskaya, O.D.

    2003-01-01

    A new quantum mechanical notion - Conditional Density Matrix - is discussed and is applied to describe some physical processes. This notion is a natural generalization of von Neumann density matrix for such processes as divisions of quantum systems into subsystems and reunifications of subsystems into new joint systems. Conditional Density Matrix assigns a quantum state to a subsystem of a composite system on condition that another part of the composite system is in some pure state

  5. IOOS modeling subsystem: vision and implementation strategy

    Science.gov (United States)

    Rosenfeld, Leslie; Chao, Yi; Signell, Richard P.

    2012-01-01

    Numerical modeling is vital to achieving the U.S. IOOS® goals of predicting, understanding and adapting to change in the ocean and Great Lakes. In the next decade IOOS should cultivate a holistic approach to coastal ocean prediction, and encourage more balanced investment among the observing, modeling and information management subsystems. We believe the vision of a prediction framework driven by observations, and leveraging advanced technology and understanding of the ocean and Great Lakes, would lead to a new era for IOOS that would not only produce more powerful information, but would also capture broad community support, particularly from the general public, thus allowing IOOS to develop into the comprehensive information system that was envisioned at the outset.

  6. Response spectrum analysis for multi-supported subsystems

    International Nuclear Information System (INIS)

    Reed, J.W.

    1983-01-01

    A methodology was developed to analyze multi-supported subsystems (e.g., piping systems) for seismic or other dynamic forces using response spectrum input. Currently, subsystems which are supported at more than one location in a nuclear power plant building are analyzed either by the time-history method or by response spectrum procedures, where spectra which envelop all support locations are used. The former procedure is exceedingly expensive, while the latter procedure is inexpensive but very conservative. Improved analysis procedures are currently being developed which are either coupled- or uncoupled-system approaches. For the coupled-system approach, response feedback between the subsystem and building system is included. For the uncoupled-system approach, feedback is neglected; however, either time history or response spectrum methods can be used. The methodology developed for analyzing multi-supported subsystems is based on the assumption that the building response and the subsystem response are uncoupled. This is the same assumption implicitly made by analysts who design singly-supported subsystems using floor response spectrum input. This approach implies that there is no response feedback between the primary building system and the subsystem, which is generally found to be conservative. The methodology developed for multi-supported subsystems makes this same assumption and thus should produce results with the same ease and degree of accuracy as results obtained for singly-supported subsystems. (orig./HP)

  7. Automatic control of a primary electric thrust subsystem

    Science.gov (United States)

    Macie, T. W.; Macmedan, M. L.

    1975-01-01

    A concept for automatic control of the thrust subsystem has been developed by JPL and participating NASA Centers. This paper reports on progress in implementing the concept at JPL. Control of the Thrust Subsystem (TSS) is performed by the spacecraft computer command subsystem, and telemetry data is extracted by the spacecraft flight data subsystem. The Data and Control Interface Unit, an element of the TSS, provides the interface with the individual elements of the TSS. The control philosophy and implementation guidelines are presented. Control requirements are listed, and the control mechanism, including the serial digital data intercommunication system, is outlined. The paper summarizes progress to Fall 1974.

  8. Plant development, auxin, and the subsystem incompleteness theorem.

    Science.gov (United States)

    Niklas, Karl J; Kutschera, Ulrich

    2012-01-01

    Plant morphogenesis (the process whereby form develops) requires signal cross-talking among all levels of organization to coordinate the operation of metabolic and genomic subsystems operating in a larger network of subsystems. Each subsystem can be rendered as a logic circuit supervising the operation of one or more signal-activated system. This approach simplifies complex morphogenetic phenomena and allows for their aggregation into diagrams of progressively larger networks. This technique is illustrated here by rendering two logic circuits and signal-activated subsystems, one for auxin (IAA) polar/lateral intercellular transport and another for IAA-mediated cell wall loosening. For each of these phenomena, a circuit/subsystem diagram highlights missing components (either in the logic circuit or in the subsystem it supervises) that must be identified experimentally if each of these basic plant phenomena is to be fully understood. We also illustrate the "subsystem incompleteness theorem," which states that no subsystem is operationally self-sufficient. Indeed, a whole-organism perspective is required to understand even the most simple morphogenetic process, because, when isolated, every biological signal-activated subsystem is morphogenetically ineffective.

  9. Thinning: A Preprocessing Technique for an OCR System for the Brahmi Script

    Directory of Open Access Journals (Sweden)

    H. K. Anasuya Devi

    2006-12-01

    Full Text Available In this paper we study the methodology employed for preprocessing archaeological images. We present the various algorithms used in the low-level processing stage of image analysis for an Optical Character Recognition System for the Brahmi script. The image preprocessing techniques covered in this paper include the thinning method. We also analyze the results obtained by the pixel-level processing algorithms.

  10. An Automated, Adaptive Framework for Optimizing Preprocessing Pipelines in Task-Based Functional MRI.

    Directory of Open Access Journals (Sweden)

    Nathan W Churchill

    Full Text Available BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the "pipeline") significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices by optimizing data-driven metrics of task prediction and spatial reproducibility. Compared to standard "fixed" preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest and between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers, and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization, it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets.
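One simple reading of the optimization loop: enumerate pipeline variants and keep the one closest to the ideal point of perfect prediction and reproducibility. The evaluate callback and the Euclidean criterion are stand-ins for the paper's resampled, data-driven metrics.

```python
import itertools
import numpy as np

def optimize_pipeline(step_options, evaluate):
    """step_options: dict mapping step name -> list of candidate settings.
    evaluate: callback returning (prediction, reproducibility) in [0, 1]."""
    best, best_dist = None, np.inf
    for combo in itertools.product(*step_options.values()):
        pipeline = dict(zip(step_options.keys(), combo))
        p, r = evaluate(pipeline)
        dist = np.hypot(1.0 - p, 1.0 - r)  # distance from the ideal (1, 1)
        if dist < best_dist:
            best, best_dist = pipeline, dist
    return best
```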

  11. Comparison of pre-processing methods for multiplex bead-based immunoassays.

    Science.gov (United States)

    Rausch, Tanja K; Schillert, Arne; Ziegler, Andreas; Lüking, Angelika; Zucht, Hans-Dieter; Schulz-Knappe, Peter

    2016-08-11

    High throughput protein expression studies can be performed using bead-based protein immunoassays, such as the Luminex® xMAP® technology. Technical variability is inherent to these experiments and may lead to systematic bias and reduced power. To reduce technical variability, data pre-processing is performed. However, no recommendations exist for the pre-processing of Luminex® xMAP® data. We compared 37 different data pre-processing combinations of transformation and normalization methods in 42 samples on 384 analytes obtained from a multiplex immunoassay based on the Luminex® xMAP® technology. We evaluated the performance of each pre-processing approach with 6 different performance criteria. Three performance criteria were plots. All plots were evaluated by 15 independent and blinded readers. Four different combinations of transformation and normalization methods performed well as pre-processing procedure for this bead-based protein immunoassay. The following combinations of transformation and normalization were suitable for pre-processing Luminex® xMAP® data in this study: weighted Box-Cox followed by quantile or robust spline normalization (rsn), asinh transformation followed by loess normalization and Box-Cox followed by rsn.
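One of the combinations the study found suitable (Box-Cox followed by quantile normalization) can be sketched as follows. The data orientation (analytes in rows, samples in columns) and the omission of the weighting step are simplifications on my part.

```python
import numpy as np
from scipy.stats import boxcox

def quantile_normalize(X):
    # Force every sample (column) to share one empirical distribution:
    # replace each value by the mean, across samples, at its rank.
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    rank_means = np.sort(X, axis=0).mean(axis=1)
    return rank_means[ranks]

def preprocess_mfi(X):
    # X: strictly positive fluorescence intensities, analytes x samples
    # (Box-Cox requires positive, non-constant input per analyte).
    Xt = np.vstack([boxcox(row)[0] for row in X])
    return quantile_normalize(Xt)
```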

  12. Biochemical Characterization of CPS-1, a Subclass B3 Metallo-β-Lactamase from a Chryseobacterium piscium Soil Isolate

    DEFF Research Database (Denmark)

    Gudeta, Dereje Dadi; Pollini, Simona; Docquier, Jean-Denis

    2016-01-01

    CPS-1 is a subclass B3 metallo-β-lactamase from a Chryseobacterium piscium isolate from soil, showing 68% amino acid identity to the GOB-1 enzyme. CPS-1 was overproduced in Escherichia coli Rosetta (DE3), purified by chromatography and biochemically characterized. This enzyme exhibits a broad-spectrum substrate profile including penicillins, cephalosporins and carbapenems, which overall resembles those of L1, GOB-1 and the acquired subclass B3 enzymes AIM-1 and SMB-1.

  13. Double Shell Tank (DST) Monitor and Control Subsystem Definition Report

    International Nuclear Information System (INIS)

    BAFUS, R.R.

    2000-01-01

    The system description of the Double-Shell Tank (DST) Monitor and Control Subsystem establishes the system boundaries and describes the interface of the DST Monitor and Control Subsystem with new and existing systems that are required to accomplish the Waste Feed Delivery (WFD) mission

  14. Subsystem cost data for the tritium systems test assembly

    International Nuclear Information System (INIS)

    Bartlit, J.R.; Anderson, J.L.; Rexroth, V.G.

    1983-01-01

    Details of subsystem costs are among the questions most frequently asked about the $14.4 million Tritium Systems Test Assembly (TSTA) at Los Alamos National Laboratory. This paper presents a breakdown of cost components for each of the 20 major subsystems of TSTA. Also included are details to aid in adjusting the costs to other years, contracting conditions, or system sizes

  15. Does Normal Processing Provide Evidence of Specialised Semantic Subsystems?

    Science.gov (United States)

    Shapiro, Laura R.; Olson, Andrew C.

    2005-01-01

    Category-specific disorders are frequently explained by suggesting that living and non-living things are processed in separate subsystems (e.g. Caramazza & Shelton, 1998). If subsystems exist, there should be benefits for normal processing, beyond the influence of structural similarity. However, no previous study has separated the relative…

  16. Simulating the Various Subsystems of a Coal Mine

    Directory of Open Access Journals (Sweden)

    V. Okolnishnikov

    2016-06-01

    Full Text Available A set of simulation models of various subsystems of a coal mine was developed with the help of a new visual interactive simulation system for technological processes. This paper contains a brief description of this simulation system and its capabilities. The main capabilities provided by the simulation system are: quick construction of models from library elements, 3D representation, and communication of models with actual control systems. Simulation models were developed for various subsystems of a coal mine: underground conveyor network subsystems, pumping subsystems and coal face subsystems. These models are intended to be used as a quality and reliability assurance tool for new process control systems in coal mining.

  17. Presence in the IP Multimedia Subsystem

    Directory of Open Access Journals (Sweden)

    Ling Lin

    2007-01-01

    Full Text Available With an ever-increasing penetration of Internet Protocol (IP) technologies, the wireless industry is evolving the mobile core network towards an all-IP network. The IP Multimedia Subsystem (IMS) is a standardised Next Generation Network (NGN) architectural framework defined by the 3rd Generation Partnership Project (3GPP) to bridge the gap between circuit-switched and packet-switched networks and consolidate both sides into one single all-IP network for all services. In this paper, we provide an insight into the limitations of the presence service, one of the fundamental building blocks of the IMS. Our prototype-based study is unique of its kind and helps identify the factors which limit the scalability of the current version of the presence service (3GPP TS 23.141 version 7.2.0 Release 7 [1]), which will in turn dramatically limit the performance of advanced IMS services. We argue that the client-server paradigm behind the current IMS architecture does not suit the requirements of the IMS system, which defies the very purpose of its introduction. We finally elaborate on possible avenues for addressing this problem.

  18. The CALIPSO Integrated Thermal Control Subsystem

    Science.gov (United States)

    Gasbarre, Joseph F.; Ousley, Wes; Valentini, Marc; Thomas, Jason; Dejoie, Joel

    2007-01-01

    The Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) is a joint NASA-CNES mission to study the Earth's cloud and aerosol layers. The satellite is composed of a primary payload (built by Ball Aerospace) and a spacecraft platform bus (PROTEUS, built by Alcatel Alenia Space). The thermal control subsystem (TCS) for the CALIPSO satellite is a passive design utilizing radiators, multi-layer insulation (MLI) blankets, and both operational and survival surface heaters. The most temperature-sensitive component within the satellite is the laser system. During thermal vacuum testing of the integrated satellite, the laser system's operational heaters were found to be inadequate in maintaining the laser's required set point. In response, a solution utilizing the laser system's survival heaters to augment the operational heaters was developed in collaboration between NASA, CNES, Ball Aerospace, and Alcatel-Alenia. The CALIPSO satellite launched from Vandenberg Air Force Base in California on April 26th, 2006. Evaluation of both the platform and payload thermal control systems shows they are performing as expected and maintaining the critical elements of the satellite within acceptable limits.

  19. Rosary as the ethnoreligious marker of the actional subsystem of significative field of catholicism

    Directory of Open Access Journals (Sweden)

    V. I. Kryachko

    2015-02-01

    The socioevaluative, autointentional, identificative attempts to explicate some ethnoreligious crosscorrelations between different significative structural fields in sociospace are based on the author's Model of the structure of significative field of Catholicism, which consists of the following 12 basic significative structural subsystems: (1) anthropomorphic significative subsystem, which includes human-similar (manlike and personificated) symbolic constructions, monuments and architectural ensembles, as well as symbols of human body parts, their combinations and signals; (2) zoomorphic significative subsystem, which includes animal-similar significative constructions and signs of their separate body parts, as well as symbols of their life products; (3) vegetomorphic significative subsystem, which includes plant-similar significative elements and food products; (4) geomorphic significative subsystem; (5) geometric significative subsystem; (6) astral-referent significative subsystem; (7) coloristic significative subsystem; (8) topos-instalative significative subsystem; (9) objective-instrumental significative subsystem; (10) architectural exterior-interior significative subsystem; (11) abstractive significative subsystem; (12) actional significative subsystem.

  20. Modeling and simulation of a 100 kWe HT-PEMFC subsystem integrated with an absorption chiller subsystem

    DEFF Research Database (Denmark)

    Arsalis, Alexandros

    2012-01-01

    A 100 kWe liquid-cooled HT-PEMFC subsystem is integrated with an absorption chiller subsystem to provide electricity and cooling. The system is designed, modeled and simulated to investigate the potential of this technology for future novel energy system applications. Liquid-cooling can provide...

  1. Quantitation of stress/rest 201Tl SPECT of the legs in the diagnosis of compartment syndromes (CPS)

    International Nuclear Information System (INIS)

    Hayes, A.A.; Bower, G.D.; Pitstock, K.L.; Maguire, K.F.

    1998-01-01

    Full text: Compartment syndrome (CPS) of the legs is considered to have an ischaemic basis related to muscle swelling and pressure increase in a muscle compartment (MC) during isotonic work. We decided to study selected patients with suspected CPS using exercise 201Tl SPECT of the legs to better define their diagnoses. Eighteen patients with probable CPS reproduced their leg pain(s) during isotonic work, and 100 MBq of 201Tl was given i.v. during continued work and pain. Anterior 300-second planar and 360-degree elliptical SPECT studies were acquired five minutes after stress and again four hours later. Quantitation of whole-calf and regional MC uptake was attempted after the first five patients were assessed qualitatively. Ten patients were men and eight were women. The mean age was 30.8 y. Four had localised posterior and three had anterior pain, with 11 having mixed and bilateral symptoms. Five patients had had a bone scan in the past and nine had MC pressure studies done within a month of the study. Six patients had had previous decompressive surgery and seven patients had surgery after the stress/rest studies. Four asymptomatic cardiac patients ("controls") were imaged after their cardiac 201Tl studies and their data used for comparison. The mean age of controls was 33 years. Generally even muscle uptake was seen on stress images, with a mean washout of 201Tl of 12% (7-23%) calculated on delayed images of controls. Painful MCs with a qualitative reduction in uptake after stress showed a mean increase in 201Tl of 25.7% (6-39%) on delayed imaging. Three patients with dramatic improvement in symptoms after surgery had shown a mean increase of 25.2% in delayed uptake in MCs on pre-operative studies. One patient showed washout of 11 and 15% from posterior MCs and had a poor response to subsequent surgery. Further clinical follow-up in a large group of patients will be required to fully identify the place of stress 201Tl imaging of the legs in this difficult group of patients.

  2. E-cigarette use and smoking reduction or cessation in the 2010/2011 TUS-CPS longitudinal cohort

    OpenAIRE

    Yuyan Shi; John P. Pierce; Martha White; Maya Vijayaraghavan; Wilson Compton; Kevin Conway; Anne M. Hartman; Karen Messer

    2016-01-01

    Abstract Background Electronic cigarettes (e-cigarettes) are heavily marketed and widely perceived as helpful for quitting or reducing smoking intensity. We test whether ever-use of e-cigarettes among early adopters was associated with: 1) increased cigarette smoking cessation; and 2) reduced cigarette consumption. Methods A representative cohort of U.S. smokers (N = 2454) from the 2010 Tobacco Use Supplement to the Current Population Survey (TUS-CPS) was re-interviewed 1 year later. Outcomes...

  3. Subsystem response analysis for the Seismic Safety Margins Research Program

    International Nuclear Information System (INIS)

    Chuang, T.Y.

    1981-01-01

    A review of the state of the art of seismic qualification methods for subsystems has been completed. This task assesses the accuracy of seismic analysis techniques in predicting dynamic response, and also identifies and quantifies sources of random and modeling uncertainty in subsystem response determination. Subsystems are classified into two categories according to the nature of their support: multiply supported subsystems (e.g., piping systems) and singly supported subsystems (e.g., pumps, turbines, electrical control panels, etc.). The multiply supported piping systems are analyzed by the multisupport input time history method. The input motions are the responses of major structures. The dynamic models of the subsystems identified by the event/fault tree are created. The responses calculated by the multisupport input time history method are consistent with the fragility parameters. These responses are also coordinated with the event/fault tree description. The subsystem responses are then evaluated against the fragility curves of components and systems and incorporated in the event/fault tree analysis. (orig./HP)

  4. Effect of packaging on physicochemical characteristics of irradiated pre-processed chicken

    International Nuclear Information System (INIS)

    Jiang Xiujie; Zhang Dongjie; Zhang Dequan; Li Shurong; Gao Meixu; Wang Zhidong

    2011-01-01

    To explore the effect of modified atmosphere packaging and antioxidants on the physicochemical characteristics of irradiated pre-processed chicken, antioxidants were first added to the pre-processed chicken, which was then packaged in air, vacuum or modified atmosphere, respectively, and finally irradiated at a 5 kGy dose. All samples were stored at 4 °C. The pH, TBA, TVB-N and color deviation were evaluated after 0, 3, 7, 10, 14, 18 and 21 d of storage. The results showed that the pH value of pre-processed chicken with antioxidants and vacuum packaging increased with storage time, but not significantly among different treatments. The TBA value also increased, but not significantly (P > 0.05), which indicated that vacuum packaging inhibited lipid oxidation. The TVB-N value increased with storage time; the TVB-N value of vacuum-packaged samples reached 14.29 mg/100 g at 21 d of storage, which did not exceed the reference index for fresh meat. The a* value of the vacuum-packaged and non-oxygen-packaged pre-processed chicken increased significantly during storage (P < 0.05), and the chicken color remained bright red after 21 d of storage with vacuum packaging. It is concluded that vacuum packaging of irradiated pre-processed chicken is effective in preserving its physical and chemical properties during storage. (authors)

  5. Examination of Speed Contribution of Parallelization for Several Fingerprint Pre-Processing Algorithms

    Directory of Open Access Journals (Sweden)

    GORGUNOGLU, S.

    2014-05-01

    Full Text Available In minutiae-based fingerprint systems, fingerprints need to be pre-processed. The pre-processing is carried out to enhance the quality of the fingerprint and to obtain more accurate minutiae points. Reducing the pre-processing time is important for identification and verification in real-time systems, and especially for databases holding information on large numbers of fingerprints. Parallel processing and parallel CPU computing can be considered as the distribution of processes over a multi-core processor. This is done by using parallel programming techniques. Reducing the execution time is the main objective in parallel processing. In this study, pre-processing of a minutiae-based fingerprint system is implemented by parallel processing on multi-core computers using OpenMP, and on a graphics processor using CUDA, to improve execution time. The execution times and speedup ratios are compared with those of a single-core processor. The results show that by using parallel processing, execution time is substantially improved. The improvement ratios obtained for different pre-processing algorithms allowed us to make suggestions on the more suitable approaches for parallelization.
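
    The study parallelizes with OpenMP on the CPU and CUDA on the GPU; as a language-neutral illustration of the same idea, distributing per-image pre-processing over cores, here is a Python multiprocessing sketch. The enhancement steps are placeholders, not the paper's actual algorithms:

      import numpy as np
      from multiprocessing import Pool

      def preprocess_fingerprint(img):
          img = (img - img.mean()) / (img.std() + 1e-9)  # contrast normalization
          return img > 0                                 # toy binarization stage

      if __name__ == "__main__":
          images = [np.random.rand(256, 256) for _ in range(64)]
          with Pool() as pool:                           # one worker per CPU core
              results = pool.map(preprocess_fingerprint, images)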

  6. High-energy coordination polymers (CPs) exhibiting good catalytic effect on the thermal decomposition of ammonium dinitramide

    Science.gov (United States)

    Li, Xin; Han, Jing; Zhang, Sheng; Zhai, Lianjie; Wang, Bozhou; Yang, Qi; Wei, Qing; Xie, Gang; Chen, Sanping; Gao, Shengli

    2017-09-01

    High-energy coordination polymers (CPs) not only exhibit good energetic performance but also have a good catalytic effect on the thermal decomposition of energetic materials. In this contribution, two high-energy CPs, Cu2(DNBT)2(CH3OH)(H2O)3·3H2O (1) and [Cu3(DDT)2(H2O)2]n (2) (H2DNBT = 3,3′-dinitro-5,5′-bis(1H-1,2,4-triazole) and H3DDT = 4,5-bis(1H-tetrazol-5-yl)-2H-1,2,3-triazole), were synthesized and structurally characterized. Furthermore, 1 was thermally dehydrated to produce Cu2(DNBT)2(CH3OH)(H2O)3 (1a). The thermal decomposition kinetics of 1, 1a and 2 were studied by Kissinger's method and Ozawa's method. Thermal analyses and sensitivity tests show that all compounds exhibit high thermal stability and low sensitivity to external stimuli. Meanwhile, all compounds have large positive enthalpies of formation, calculated as (1067.67 ± 2.62) kJ mol-1 (1), (1464.12 ± 3.12) kJ mol-1 (1a) and (3877.82 ± 2.75) kJ mol-1 (2), respectively. The catalytic effects of 1a and 2 on the thermal decomposition of ammonium dinitramide (ADN) were also investigated.
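
    Kissinger's method, used above for the decomposition kinetics, fits ln(beta/Tp^2) against 1/Tp over several heating rates beta and DSC peak temperatures Tp; the slope of the fit equals -Ea/R. A sketch with illustrative numbers (not data from the paper):

      import numpy as np

      R = 8.314                                    # gas constant, J mol^-1 K^-1
      beta = np.array([5.0, 10.0, 15.0, 20.0])     # heating rates, K/min
      Tp = np.array([493.0, 501.0, 506.0, 510.0])  # peak temperatures, K (made up)

      slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
      Ea = -slope * R                              # activation energy, J/mol
      A = Ea / R * np.exp(intercept)               # pre-exponential factor (units follow beta)
      print(f"Ea = {Ea / 1000:.1f} kJ/mol")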

  7. Cost-effectiveness evaluation of clobetasol propionate shampoo (CPS) maintenance in patients with moderate scalp psoriasis: a Pan-European analysis.

    Science.gov (United States)

    Papp, K; Poulin, Y; Barber, K; Lynde, C; Prinz, J C; Berg, M; Kerrouche, N; Rives, V P

    2012-11-01

    Scalp psoriasis is a difficult-to-treat and usually chronic manifestation of psoriasis. The CalePso study showed that CPS (Clobex® Shampoo) as twice-weekly maintenance therapy for scalp psoriasis significantly increases the probability of keeping patients in remission over 6 months, compared with vehicle (40.3% relapses vs. 11.6% relapses, ITT). The objective of the study was to assess the cost-effectiveness of maintenance therapy with CPS vs. its vehicle in nine European countries. A 24-week decision tree model was developed with 4-weekly time steps. The population considered has moderate scalp psoriasis successfully treated with a daily application of CPS for up to 4 weeks. Data were taken from the CalePso study and from national experts' recommendations for alternative treatment choices, with their probabilities of success taken from the literature, to develop country-specific models. Health benefits are measured in disease-free days (DFDs). The economic analysis includes drug and physician costs. A probabilistic sensitivity analysis (PrSA) assesses the uncertainty of the model. Depending on the country, the mean total number of DFDs per patient is 21-42% higher with CPS than with vehicle, and the mean total cost is 11-31% lower. The mean costs per DFD are 30-46% lower with CPS than with the vehicle. The PrSA showed in 1000 simulations that CPS is more effective than vehicle in 100% of cases and less expensive than its vehicle in 80-99% of cases. This model suggests that CPS is cost-effective in maintaining the treatment success achieved in patients with moderate scalp psoriasis. © 2011 The Authors. Journal of the European Academy of Dermatology and Venereology © 2011 European Academy of Dermatology and Venereology.

  8. Shuttle Orbiter Active Thermal Control Subsystem design and flight experience

    Science.gov (United States)

    Bond, Timothy A.; Metcalf, Jordan L.; Asuncion, Carmelo

    1991-01-01

    The paper examines the design of the Space Shuttle Orbiter Active Thermal Control Subsystem (ATCS), constructed to provide vehicle and payload cooling during all phases of a mission and during ground turnaround operations. The operation of the Shuttle ATCS and some of the problems encountered during the first 39 flights of the Shuttle program are described, with special attention given to the major problems encountered: the degradation of the Freon flow rate on the Orbiter Columbia, the Flash Evaporator Subsystem mission anomalies which occurred on STS-26 and STS-34, and problems encountered with the Ammonia Boiler Subsystem. The causes and resolutions of these problems are discussed.

  9. Subsystem response review. Seismic safety margins research program

    International Nuclear Information System (INIS)

    Kennedy, R.P.; Campbell, R.D.; Wesley, D.A.; Kamil, H.; Gantayat, A.; Vasudevan, R.

    1981-07-01

    A study was conducted to document the state of the art in seismic qualification of nuclear power plant components and subsystems by analysis and testing and to identify the sources and magnitude of the uncertainties associated with analysis and testing methods. The uncertainties are defined in probabilistic terms for use in probabilistic seismic risk studies. Recommendations are made for the most appropriate subsystem response analysis methods to minimize response uncertainties. Additional studies, to further quantify testing uncertainties, are identified. Although the general effect of non-linearities on subsystem response is discussed, recommendations and conclusions are based principally on linear elastic analysis and testing models. (author)

  10. Comparative performance evaluation of transform coding in image pre-processing

    Science.gov (United States)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

    We are in the midst of a communications transformation which drives the development as well as the dissemination of pioneering communication systems with ever-increasing fidelity and resolution. Much research in image processing has been driven by a growing demand for faster and easier encoding, storage and transmission of visual information. In this paper, the researchers intend to throw light on techniques which can be used at the transmitter end in order to ease the transmission and reconstruction of images. The researchers investigate the performance of different image transform coding schemes used in pre-processing, their comparison and effectiveness, the necessary and sufficient conditions, properties and complexity of implementation. Motivated by prior advancements in image processing techniques, the researchers compare various contemporary image pre-processing frameworks, Compressed Sensing, Singular Value Decomposition and the Integer Wavelet Transform, on performance. The paper exposes the potential of the Integer Wavelet Transform to be an efficient pre-processing scheme.
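
    Of the three compared schemes, truncated SVD is the simplest to sketch: keeping only the k largest singular values means far fewer coefficients need to be stored or transmitted. A minimal example (a random array stands in for a real image):

      import numpy as np

      def svd_compress(img, k):
          """Return a rank-k approximation of a 2-D grayscale image."""
          U, s, Vt = np.linalg.svd(img, full_matrices=False)
          return U[:, :k] * s[:k] @ Vt[:k, :]

      img = np.random.rand(128, 128)
      approx = svd_compress(img, k=20)
      rel_error = np.linalg.norm(img - approx) / np.linalg.norm(img)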

  11. Performance of Pre-processing Schemes with Imperfect Channel State Information

    DEFF Research Database (Denmark)

    Christensen, Søren Skovgaard; Kyritsi, Persa; De Carvalho, Elisabeth

    2006-01-01

    Pre-processing techniques have several benefits when the CSI is perfect. In this work we investigate three linear pre-processing filters, assuming imperfect CSI caused by noise degradation and channel temporal variation. Results indicate that the LMMSE filter achieves the lowest BER and the highest SINR when the CSI is perfect, whereas the simple matched filter may be a good choice when the CSI is imperfect. Additionally, the results give insight into the inherent trade-off between robustness against CSI imperfections and spatial focusing ability.
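
    A toy sketch contrasting two of the linear filters discussed above for a MIMO channel y = Hx + n: the matched filter (MF) and the LMMSE filter, both computed from an imperfect channel estimate H_hat. Dimensions, noise level and the error model are arbitrary illustrative choices:

      import numpy as np

      rng = np.random.default_rng(1)
      nt, nr, sigma2 = 4, 4, 0.1
      H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
      E = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
      H_hat = H + 0.1 * E                       # imperfect CSI

      W_mf = H_hat.conj().T                     # matched filter
      W_lmmse = np.linalg.solve(                # (H^H H + sigma^2 I)^-1 H^H
          H_hat.conj().T @ H_hat + sigma2 * np.eye(nt), H_hat.conj().T)

      x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=nt) / np.sqrt(2)
      n = np.sqrt(sigma2 / 2) * (rng.normal(size=nr) + 1j * rng.normal(size=nr))
      y = H @ x + n
      x_mf, x_lmmse = W_mf @ y, W_lmmse @ y     # compare detection quality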

  12. ngVLA Cryogenic Subsystem Concept

    Science.gov (United States)

    Wootten, Al; Urbain, Denis; Grammer, Wes; Durand, S.

    2018-01-01

    The VLA’s success over 35 years of operations stems in part from dramatically upgraded components over the years. The time has come to build a new array to lead the radio astronomical science into its next 40 years. To accomplish that, a next generation VLA (ngVLA) is envisioned to have 214 antennas with diameters of 18m. The core of the array will be centered at the current VLA location, but the arms will extend out to 1000km.The VLA cryogenic subsystem equipment and technology have remained virtually unchanged since the early 1980s. While adequate for a 27-antenna array, scaling the current system for an array of 214 antennas would be prohibitively expensive in terms of operating cost and maintenance. The overall goal is to limit operating cost to within three times the current level, despite having 8 times the number of antennas. To help realize this goal, broadband receivers and compact feeds will be utilized to reduce both the size and number of cryostats required. The current baseline front end concept calls for just two moderately-sized cryostats for the entire 1.2-116 GHz frequency range, as opposed to 8 in the VLA.For the ngVLA cryogenics, our objective is a well-optimized and efficient system that uses state-of-the-art technology to minimize per-antenna power consumption and maximize reliability. Application of modern technologies, such as variable-speed operation for the scroll compressors and cryocooler motor drives, allow the cooling capacity of the system to be dynamically matched to thermal loading in each cryostat. Significantly, power savings may be realized while the maintenance interval of the cryocoolers is also extended.Finally, a receiver designed to minimize thermal loading can produce savings directly translating to lower operating cost when variable-speed drives are used. Multi-layer insulation (MLI) on radiation shields and improved IR filters on feed windows can significantly reduce heat loading.Measurements done on existing cryogenic

  13. Development status of a preprototype water electrolysis subsystem

    Science.gov (United States)

    Martin, R. B.; Erickson, A. C.

    1981-01-01

    A preprototype water electrolysis subsystem was designed and fabricated for NASA's advanced regenerative life support program. A solid polymer is used for the cell electrolyte. The electrolysis module has 12 cells that can generate 5.5 kg/day of oxygen for the metabolic requirements of three crewmembers, for cabin leakage, and for the oxygen and hydrogen required for carbon dioxide collection and reduction processes. The subsystem can be operated at a pressure between 276 and 2760 kN/sq m and in a continuous constant-current, cyclic, or standby mode. A microprocessor is used to aid in operating the subsystem. Sensors and controls provide fault detection and automatic shutdown. The results of development, demonstration, and parametric testing are presented. Modifications to enhance operation in an integrated and manned test are described. Prospective improvements for the electrolysis subsystem are discussed.

  14. PREVAIL-EPL alpha tool electron optics subsystem

    Science.gov (United States)

    Pfeiffer, Hans C.; Dhaliwal, Rajinder S.; Golladay, Steven D.; Doran, Samuel K.; Gordon, Michael S.; Kendall, Rodney A.; Lieberman, Jon E.; Pinckney, David J.; Quickle, Robert J.; Robinson, Christopher F.; Rockrohr, James D.; Stickel, Werner; Tressler, Eileen V.

    2001-08-01

    The IBM/Nikon alliance is continuing pursuit of an EPL stepper alpha tool based on the PREVAIL technology. This paper provides a status report on the alliance activity, with particular focus on the Electron Optical Subsystem developed at IBM. We have previously reported on design features of the PREVAIL alpha system. The new state-of-the-art e-beam lithography concepts have since been reduced to practice and turned into functional building blocks of a production-level lithography tool. The electron optical alpha tool subsystem has been designed, built, assembled and tested at IBM's Semiconductor Research and Development Center (SRDC) in East Fishkill, New York. After demonstrating subsystem functionality, the electron optical column and all associated control electronics hardware and software were shipped in January 2001 to Nikon's facility in Kumagaya, Japan, for integration into the Nikon commercial e-beam stepper alpha tool. Early pre-shipment results obtained with this electron optical subsystem are presented.

  15. Automated Subsystem Control for Life Support System (ASCLSS)

    Science.gov (United States)

    Block, Roger F.

    1987-01-01

    The Automated Subsystem Control for Life Support Systems (ASCLSS) program has successfully developed and demonstrated a generic approach to the automation and control of space station subsystems. The automation system features a hierarchical and distributed real-time control architecture which places maximum control authority at the lowest, or process control, level, enhancing system autonomy. The ASCLSS demonstration system pioneered many automation and control concepts currently being considered in the space station data management system (DMS). Heavy emphasis is placed on controls hardware and software commonality implemented in accepted standards. The approach successfully demonstrates real-time process control and places accountability with the subsystem or process developer. The ASCLSS system completely automates a space station subsystem (the air revitalization group of the ASCLSS), which moves the crew/operator into a role of supervisory control authority. The ASCLSS program developed over 50 lessons learned which will aid future space station developers in the area of automation and controls.

  16. Effector-Triggered Self-Replication in Coupled Subsystems.

    Science.gov (United States)

    Komáromy, Dávid; Tezcan, Meniz; Schaeffer, Gaël; Marić, Ivana; Otto, Sijbren

    2017-11-13

    In living systems, processes like genome duplication and cell division are carefully synchronized through subsystem coupling. If we are to create life de novo, similar control over essential processes such as self-replication needs to be developed. Here we report that coupling two dynamic combinatorial subsystems, featuring two separate building blocks, enables effector-mediated control over self-replication. The subsystem based on the first building block shows only self-replication, whereas that based on the second one is solely responsive to a specific external effector molecule. Mixing the subsystems arrests replication until the effector molecule is added, resulting in the formation of a host-effector complex and the liberation of the building block that subsequently engages in self-replication. The onset, rate and extent of self-replication are controlled by the amount of effector present. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. The Classroom Performance System (CPS): Effects on student participation, attendance, and achievement in multicultural anatomy and physiology classes at South Texas College

    Science.gov (United States)

    Termos, Mohamad Hani

    2011-12-01

    The Classroom Performance System (CPS) is an instructional technology tool that increases student performance and addresses different learning styles. Instructional technologies are used to promote active learning; however, the issue of student embarrassment in a multicultural setting has not been addressed. This study assessed the effect of the CPS on student participation, attendance, and achievement in multicultural college-level anatomy and physiology classes at South Texas College, where the first spoken language is not English. A quantitative method and quasi-experimental design were employed, and comparative statistical methods and pre-post tests were used to collect the data. Participants were college students, and the sections of study were selected by convenience sampling. Participation was 100% during most of the lectures held in the experimental sections, whereas the participation rate did not rise above 68% in the control group. Attendance was significantly higher in the CPS sections than in the control group, as shown by t-tests. The experimental sections had a higher increase in pre-post test scores, and student averages on lecture exams increased at a higher rate compared to the control group. Therefore, the CPS increased student participation, attendance, and achievement in multicultural anatomy and physiology classes. The CPS can be studied in other settings where the first spoken language is English, or in other programs, such as special education programs. Additionally, other variables can be studied and other methodologies can be employed.

  18. Crystallization and preliminary crystallographic analysis of the bacterial capsule assembly-regulating tyrosine phosphatases Wzb of Escherichia coli and Cps4B of Streptococcus pneumoniae

    International Nuclear Information System (INIS)

    Huang, Hexian; Hagelueken, Gregor; Whitfield, Chris; Naismith, James H.

    2009-01-01

    The crystallization of two bacterial tyrosine phosphatases, which belong to different enzyme families despite their ability to catalyse identical reactions, is reported. Bacterial tyrosine kinases and their cognate phosphatases are key players in the regulation of capsule assembly and thus are important virulence determinants of these bacteria. Examples of the kinase/phosphatase pairing are found in Gram-negative bacteria such as Escherichia coli (Wzc and Wzb) and in Gram-positive bacteria such as Streptococcus pneumoniae (CpsCD and CpsB). Although Wzb and Cps4B are both predicted to dephosphorylate the C-terminal tyrosine cluster of their cognate tyrosine kinase, they appear on the basis of protein sequence to belong to quite different enzyme classes. The recombinant purified proteins Cps4B of S. pneumoniae TIGR4 and Wzb of E. coli K-30 have been crystallized. Wzb crystals belonged to space-group family P3x21 and diffracted to 2.7 Å resolution. Crystal form I of Cps4B belonged to space-group family P4x212 and diffracted to 2.8 Å resolution; crystal form II belonged to space group P212121 and diffracted to 1.9 Å resolution.

  19. Comparison of classification algorithms for various methods of preprocessing radar images of the MSTAR base

    Science.gov (United States)

    Borodinov, A. A.; Myasnikov, V. V.

    2018-04-01

    The present work is devoted to comparing the accuracy of known classification algorithms in the task of recognizing local objects in radar images under various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image by the method of image moments and by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost and random forest. Principal component analysis is used to reduce the dimensionality. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.
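
    The comparison setup can be reproduced in outline with scikit-learn: PCA for dimensionality reduction feeding each of the four classifiers. The MSTAR chips must be obtained separately, so random arrays stand in here, and the speckle filtering and orientation normalization steps are omitted:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.svm import SVC
      from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier

      X = np.random.rand(200, 64 * 64)    # stand-in for 64x64 SAR image chips
      y = np.random.randint(0, 3, 200)    # stand-in class labels

      for clf in (DecisionTreeClassifier(), SVC(), AdaBoostClassifier(),
                  RandomForestClassifier()):
          pipe = make_pipeline(PCA(n_components=30), clf)
          print(type(clf).__name__, cross_val_score(pipe, X, y, cv=5).mean())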

  20. Measurement system as a subsystem of the quality management system

    OpenAIRE

    Ľubica Floreková; Ján Terpák; Marcela Čarnogurská

    2006-01-01

    Each measurement system and control principle must be based on certain facts about the system's behaviour (what), operation (how) and structure (why). Each system is divided into subsystems that provide an input for the next subsystem. For each system the starting point is important, that is, the system characteristics, the collection of data, its hierarchy and the distribution of processes. A measurement system (based on chapter 8 of the standard ISO 9001:2000 Quality management system, requirem...

  1. Opto-mechanical subsystem with temperature compensation through isothermal design

    Science.gov (United States)

    Goodwin, F. E. (Inventor)

    1977-01-01

    An opto-mechanical subsystem for supporting a laser structure which minimizes changes in the alignment of the laser optics in response to temperature variations is described. Both the optical and mechanical structural components of the system are formed of the same material, preferably beryllium, which is selected for its high mechanical strength and good thermal conductivity. All mechanical and optical components are mounted and assembled to provide thorough thermal coupling throughout the subsystem, preventing the development of temperature gradients.

  2. An Algorithm for Integrated Subsystem Embodiment and System Synthesis

    Science.gov (United States)

    Lewis, Kemper

    1997-01-01

    Consider the statement, 'A system has two coupled subsystems, one of which dominates the design process. Each subsystem consists of discrete and continuous variables, and is solved using sequential analysis and solution.' To address this type of statement in the design of complex systems, three steps are required, namely, the embodiment of the statement in terms of entities on a computer, the mathematical formulation of subsystem models, and the resulting solution and system synthesis. In complex system decomposition, the subsystems are not isolated, self-supporting entities. Information such as constraints, goals, and design variables may be shared between entities. But many times in engineering problems, full communication and cooperation do not exist, information is incomplete, or one subsystem may dominate the design. Additionally, these engineering problems give rise to mathematical models involving nonlinear functions of both discrete and continuous design variables. In this dissertation an algorithm is developed to handle these types of scenarios for the domain-independent integration of subsystem embodiment, coordination, and system synthesis using constructs from Decision-Based Design, Game Theory, and Multidisciplinary Design Optimization. Implementation of the concept in this dissertation involves testing of the hypotheses using example problems and a motivating case study involving the design of a subsonic passenger aircraft.

  3. An analytical model for an input/output-subsystem

    International Nuclear Information System (INIS)

    Roemgens, J.

    1983-05-01

    An input/output subsystem of one or several computers is formed by the external memory units and the peripheral units of a computer system. For these subsystems, mathematical models are established that take into account the special properties of the I/O subsystems, in order to avoid planning errors and to allow for predictions of the capacity of such systems. Here an analytical model is presented for the magnetic discs of an I/O subsystem, using analytical methods for the individual waiting queues or waiting-queue networks. Only I/O subsystems of IBM computer configurations are considered, which can be controlled by the MVS operating system. After a description of the hardware and software components of these I/O systems, possible solutions from the literature are presented and discussed with respect to their applicability to IBM I/O subsystems. Based on these models, a special scheme is developed which combines the advantages of the literature models and in part avoids their disadvantages. (orig./RW)

  4. Extending the Scope of the Resource Admission Control Subsystem (RACS) in IP multimedia subsystem using cognitive radios

    CSIR Research Space (South Africa)

    Muwonge, BK

    2008-04-01

    Full Text Available ... is greatly increased, and resource reservation and QoS management by the RACS is also greatly increased. Index Terms—Traffic Engineering; Cross Layer; Cognitive Radio; IP Multimedia Subsystem (IMS). I. INTRODUCTION: The IP Multimedia Subsystem (IMS) is seen as the answer to the much talked-about convergence of data and telecommunication services. The original IMS design was by the 3rd Generation Partnership Project (3GPP) for delivering IP multimedia services to end users, using telecommunication...

  5. Utility of the CPS+EG staging system in hormone receptor-positive, human epidermal growth factor receptor 2-negative breast cancer treated with neoadjuvant chemotherapy.

    Science.gov (United States)

    Marmé, Frederik; Lederer, Bianca; Blohmer, Jens-Uwe; Costa, Serban Dan; Denkert, Carsten; Eidtmann, Holger; Gerber, Bernd; Hanusch, Claus; Hilfrich, Jörn; Huober, Jens; Jackisch, Christian; Kümmel, Sherko; Loibl, Sibylle; Paepke, Stefan; Untch, Michael; von Minckwitz, Gunter; Schneeweiss, Andreas

    2016-01-01

    Pathologic complete response after neoadjuvant chemotherapy (NACT) correlates with overall survival (OS) in primary breast cancer. A recently described staging system based on pre-treatment clinical stage (CS), final pathological stage (PS), estrogen receptor (ER) status and nuclear grade (NG) leads to a refined estimation of prognosis in unselected patients. Its performance in luminal type breast cancers has not been determined. This study investigates the clinical utility of this CPS+EG score when restricted to hormone receptor-positive (HR+)/human epidermal growth factor receptor 2-negative (HER2-) patients and compares the results to a cohort of unselected patients. The CPS+EG score was calculated for 6637 unselected patients and 2454 patients with HR+/HER2- tumours who received anthracycline/taxane-based NACT within 8 prospective German trials. Five-year disease-free survival (DFS) and OS were 75.6% and 84.1% for the unselected cohort and 80.6% and 87.8% for the HR+/HER2- subgroup, respectively. The CPS+EG system distinguished different prognostic groups with 5-year DFS ranging from 0% to 91%. The CPS+EG system leads to an improved categorisation of patients by outcome compared to CS, PS, ER or NG alone. When applying the CPS+EG score to the HR+/HER2- subgroup, a shift to lower scores was observed compared to the overall population, but 5-year DFS and OS for the individual scores were identical to that observed in the overall population. In HR+/HER2- patients, the CPS+EG staging system retains its ability to facilitate a refined stratification of patients according to outcome. It can help to select candidates for post-neoadjuvant clinical trials in luminal breast cancer. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA

    DEFF Research Database (Denmark)

    Kjær-Nielsen, Anders; Jensen, Lars Baunegaard With; Sørensen, Anders Stengaard

    2008-01-01

    In this paper a low-level vision processing node for use in existing IEEE 1394 camera setups is presented. The processing node is a small embedded system that utilizes an FPGA to perform stereo vision preprocessing at rates limited by the bandwidth of IEEE 1394a (400 Mbit/s). The system is used...

  7. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    Science.gov (United States)

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing, and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and the phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to the variance of the signal.
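
    The Fano factor analysis mentioned above is easy to sketch: for a photocount series, F(T) is the variance-to-mean ratio of counts aggregated in windows of size T, and it stays near 1 for a stationary Poisson process. The window sizes here are arbitrary illustrative choices:

      import numpy as np

      def fano_factor(counts, window):
          n = len(counts) // window
          sums = counts[: n * window].reshape(n, window).sum(axis=1)
          return sums.var() / sums.mean()

      rng = np.random.default_rng(2)
      signal = rng.poisson(lam=3.0, size=10_000)   # stationary Poisson data
      print([round(fano_factor(signal, w), 3) for w in (10, 100, 1000)])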

  8. Scene matching based on non-linear pre-processing on reference image and sensed image

    Institute of Scientific and Technical Information of China (English)

    Zhong Sheng; Zhang Tianxu; Sang Nong

    2005-01-01

    To solve the heterogeneous image scene matching problem, a non-linear pre-processing method applied to the original images before intensity-based correlation is proposed. The results show that the probability of correct matching is raised greatly, and the effect is especially remarkable for low-S/N image pairs.

  9. Data pre-processing: a case study in predicting student's retention in ...

    African Journals Online (AJOL)

    dataset with features that are ready for the data mining task. The study also proposed a process model and suggestions that can be applied to support more comprehensible tools for the educational domain end user. Subsequently, the data pre-processing becomes more efficient for predicting student's retention in ...

  10. Summary of ENDF/B Pre-Processing Codes June 1983

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1983-06-01

    This is the summary documentation for the 1983 version of the ENDF/B Pre-Processing Codes LINEAR, RECENT, SIGMA1, GROUPIE, EVALPLOT, MERGER, DICTION, COMPLOT, CONVERT. This summary documentation is merely a copy of the comment cards that appear at the beginning of each programme; these comment cards always reflect the latest status of input options, etc.

  11. Evaluation of Microarray Preprocessing Algorithms Based on Concordance with RT-PCR in Clinical Samples

    DEFF Research Database (Denmark)

    Hansen, Kasper Lage; Szallasi, Zoltan Imre; Eklund, Aron Charles

    2009-01-01

    evaluated consistency using the Pearson correlation between measurements obtained on the two platforms. Also, we introduce the log-ratio discrepancy as a more relevant measure of discordance between gene expression platforms. Of nine preprocessing algorithms tested, PLIER+16 produced expression values...

  12. Pre-processing data using wavelet transform and PCA based on ...

    Indian Academy of Sciences (India)

    Abazar Solgi

    2017-07-14

    Jul 14, 2017 ... Pre-processing data using wavelet transform and PCA based on support vector regression and gene expression programming for river flow simulation. Abazar Solgi, Amir Pourhaghi, Ramin Bahmani and Heidar Zarei, Department of Water Resources Engineering, Shahid Chamran University of ...

  13. Preprocessing for Optimization of Probabilistic-Logic Models for Sequence Analysis

    DEFF Research Database (Denmark)

    Christiansen, Henning; Lassen, Ole Torp

    2009-01-01

    and approximation are needed. The first steps are taken towards a methodology for optimizing such models by approximations using auxiliary models for preprocessing or splitting them into submodels. Evaluation of such approximating models is challenging as authoritative test data may be sparse. On the other hand...

  14. Data preprocessing methods of FT-NIR spectral data for the classification cooking oil

    Science.gov (United States)

    Ruah, Mas Ezatul Nadia Mohd; Rasaruddin, Nor Fazila; Fong, Sim Siong; Jaafar, Mohd Zuli

    2014-12-01

    This recent work describes data pre-processing methods for FT-NIR spectroscopy datasets of cooking oil and its quality parameters using chemometrics. Pre-processing of near-infrared (NIR) spectral data has become an integral part of chemometrics modelling. Hence, this work is dedicated to investigating the utility and effectiveness of pre-processing algorithms, namely row scaling, column scaling and single scaling with Standard Normal Variate (SNV). The combinations of these scaling methods have an impact on exploratory analysis and classification via Principal Component Analysis (PCA) plots. The samples were divided into palm oil and non-palm cooking oil. The classification model was built using FT-NIR cooking oil spectra in absorbance mode in the range of 4000 cm-1 to 14000 cm-1. A Savitzky-Golay derivative was applied before developing the classification model. The data were then separated into a training set and a test set using the Duplex method. The training-set size of each class was kept equal to 2/3 of the size of the smallest class. The t-statistic was then employed as a variable selection method in order to select which variables are significant for the classification models. The evaluation of the data pre-processing considered the modified silhouette width (mSW), PCA plots and the percentage correctly classified (%CC). The results show that different data pre-processing strategies result in substantially different model performance. The effects of the several pre-processing methods, i.e. row scaling, column standardisation and single scaling with Standard Normal Variate, are indicated by mSW and %CC. With a two-PC model, all five classifiers gave high %CC except Quadratic Distance Analysis.
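
    Two of the steps named above, a Savitzky-Golay derivative and Standard Normal Variate (SNV) scaling of each spectrum, can be sketched as follows; the window length, polynomial order and the ordering of the two steps are assumptions, as the abstract does not state them:

      import numpy as np
      from scipy.signal import savgol_filter

      def snv(X):
          """Scale each spectrum (row) to zero mean and unit variance."""
          return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

      def preprocess_spectra(X):
          """X: samples x wavenumbers matrix of absorbance spectra."""
          deriv = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)
          return snv(deriv)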

  15. Value of Distributed Preprocessing of Biomass Feedstocks to a Bioenergy Industry

    Energy Technology Data Exchange (ETDEWEB)

    Christopher T Wright

    2006-07-01

    Biomass preprocessing is one of the primary operations in the feedstock assembly system and the front end of a biorefinery. Its purpose is to chop, grind, or otherwise format the biomass into a suitable feedstock for conversion to ethanol and other bioproducts. Many variables such as equipment cost and efficiency, and feedstock moisture content, particle size, bulk density, compressibility, and flowability affect the location and implementation of this unit operation. Previous conceptual designs place this operation at the front end of the biorefinery. However, data are presented that show distributed preprocessing at the field side or in a fixed preprocessing facility can provide significant cost benefits by producing a higher-value feedstock with improved handling, transporting, and merchandising potential. In addition, data supporting the preferential deconstruction of feedstock materials due to their bio-composite structure identify the potential for significant improvements in equipment efficiencies and compositional quality upgrades. These data are collected from full-scale low- and high-capacity hammermill grinders with various screen sizes. Multiple feedstock varieties with a range of moisture values were used in the preprocessing tests. The comparative values of the different grinding configurations, feedstock varieties, and moisture levels are assessed through post-grinding analysis of the different particle fractions separated with a medium-scale forage particle separator and a Rototap separator. The results show that distributed preprocessing produces a material that has bulk flowable properties and fractionation benefits that can improve the ease of transporting, handling and conveying the material to the biorefinery and improve the biochemical and thermochemical conversion processes.

  16. Relative effects of statistical preprocessing and postprocessing on a regional hydrological ensemble prediction system

    Science.gov (United States)

    Sharma, Sanjib; Siddique, Ridwan; Reed, Seann; Ahnert, Peter; Mendoza, Pablo; Mejia, Alfonso

    2018-03-01

    The relative roles of statistical weather preprocessing and streamflow postprocessing in hydrological ensemble forecasting at short- to medium-range forecast lead times (days 1-7) are investigated. For this purpose, a regional hydrologic ensemble prediction system (RHEPS) is developed and implemented. The RHEPS comprises the following components: (i) hydrometeorological observations (multisensor precipitation estimates, gridded surface temperature, and gauged streamflow); (ii) weather ensemble forecasts (precipitation and near-surface temperature) from the National Centers for Environmental Prediction 11-member Global Ensemble Forecast System Reforecast version 2 (GEFSRv2); (iii) NOAA's Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM); (iv) heteroscedastic censored logistic regression (HCLR) as the statistical preprocessor; (v) two statistical postprocessors, an autoregressive model with a single exogenous variable (ARX(1,1)) and quantile regression (QR); and (vi) a comprehensive verification strategy. To implement the RHEPS, 1- to 7-day weather forecasts from the GEFSRv2 are used to force HL-RDHM and generate raw ensemble streamflow forecasts. Forecasting experiments are conducted in four nested basins in the US Middle Atlantic region, ranging in size from 381 to 12 362 km2. Results show that the HCLR-preprocessed ensemble precipitation forecasts have greater skill than the raw forecasts. These improvements are more noticeable in the warm season at the longer lead times (> 3 days). Both postprocessors, ARX(1,1) and QR, show gains in skill relative to the raw ensemble streamflow forecasts, particularly in the cool season, but QR outperforms ARX(1,1). The scenarios that implement preprocessing and postprocessing separately tend to perform similarly, although the postprocessing-alone scenario is often more effective. The scenario involving both preprocessing and postprocessing consistently outperforms the other scenarios. In some cases
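
    The quantile regression (QR) postprocessing idea can be sketched with statsmodels: regress observed streamflow on the raw ensemble-mean forecast at several quantile levels, then use the fitted relations to dress future raw forecasts. The predictors, quantile levels and synthetic data below are illustrative simplifications of the study's setup:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      raw_mean = rng.gamma(2.0, 50.0, size=500)      # raw ensemble-mean forecasts
      obs = 0.8 * raw_mean + rng.normal(0.0, 10.0 + 0.1 * raw_mean)  # synthetic obs

      X = sm.add_constant(raw_mean)
      models = {q: sm.QuantReg(obs, X).fit(q=q) for q in (0.1, 0.5, 0.9)}

      new_raw = sm.add_constant(np.array([60.0, 120.0]))
      calibrated = {q: m.predict(new_raw) for q, m in models.items()}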

  17. Reproducible cancer biomarker discovery in SELDI-TOF MS using different pre-processing algorithms.

    Directory of Open Access Journals (Sweden)

    Jinfeng Zou

    Full Text Available BACKGROUND: There has been much interest in differentiating diseased and normal samples using biomarkers derived from mass spectrometry (MS studies. However, biomarker identification for specific diseases has been hindered by irreproducibility. Specifically, a peak profile extracted from a dataset for biomarker identification depends on a data pre-processing algorithm. Until now, no widely accepted agreement has been reached. RESULTS: In this paper, we investigated the consistency of biomarker identification using differentially expressed (DE peaks from peak profiles produced by three widely used average spectrum-dependent pre-processing algorithms based on SELDI-TOF MS data for prostate and breast cancers. Our results revealed two important factors that affect the consistency of DE peak identification using different algorithms. One factor is that some DE peaks selected from one peak profile were not detected as peaks in other profiles, and the second factor is that the statistical power of identifying DE peaks in large peak profiles with many peaks may be low due to the large scale of the tests and small number of samples. Furthermore, we demonstrated that the DE peak detection power in large profiles could be improved by the stratified false discovery rate (FDR control approach and that the reproducibility of DE peak detection could thereby be increased. CONCLUSIONS: Comparing and evaluating pre-processing algorithms in terms of reproducibility can elucidate the relationship among different algorithms and also help in selecting a pre-processing algorithm. The DE peaks selected from small peak profiles with few peaks for a dataset tend to be reproducibly detected in large peak profiles, which suggests that a suitable pre-processing algorithm should be able to produce peaks sufficient for identifying useful and reproducible biomarkers.
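
    The stratified false discovery rate control mentioned in the results can be sketched as a Benjamini-Hochberg procedure applied separately within each stratum of peaks; how peaks are grouped into strata is study-specific and assumed given here:

      import numpy as np

      def benjamini_hochberg(pvals, alpha=0.05):
          """Boolean mask of hypotheses rejected at FDR level alpha."""
          p = np.asarray(pvals, dtype=float)
          order = np.argsort(p)
          thresh = alpha * np.arange(1, len(p) + 1) / len(p)
          passed = p[order] <= thresh
          k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
          rejected = np.zeros(len(p), dtype=bool)
          rejected[order[:k]] = True           # reject the k smallest p-values
          return rejected

      def stratified_fdr(pvals, strata, alpha=0.05):
          pvals, strata = np.asarray(pvals, dtype=float), np.asarray(strata)
          rejected = np.zeros(len(pvals), dtype=bool)
          for s in np.unique(strata):          # control FDR within each stratum
              idx = np.where(strata == s)[0]
              rejected[idx] = benjamini_hochberg(pvals[idx], alpha)
          return rejected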

  18. Double-Shell Tank (DST) Monitor and Control Subsystem Specification

    International Nuclear Information System (INIS)

    BAFUS, R.R.

    2000-01-01

    This specification establishes the interface and performance requirements and provides references to the requisite codes and standards to be applied during design of the Double-Shell Tank (DST) Monitor and Control Subsystem that supports the first phase of Waste Feed Delivery. The DST Monitor and Control Subsystem consists of the new and existing equipment that will be used to provide tank farm operators with integrated local monitoring and control of the DST systems to support Waste Feed Delivery (WFD). New equipment will provide automatic control and safety interlocks where required and will give operators visibility into the status of DST subsystem operations (e.g., DST mixer pump operation and DST waste transfers) and the ability to manually control specified DST functions as necessary. This specification is intended to be the basis for new projects/installations (W-521, etc.). This specification is not intended to retroactively affect previously established project design criteria without specific direction by the program

  19. Preprototype vapor compression distillation subsystem. [recovering potable water from wastewater

    Science.gov (United States)

    Ellis, G. S.; Wynveen, R. A.; Schubert, F. H.

    1979-01-01

    A three-person-capacity preprototype vapor compression distillation subsystem for recovering potable water from wastewater aboard spacecraft was designed, assembled, and tested. The major components of the subsystem are: (1) a distillation unit which includes a compressor, centrifuge, central shaft, and outer shell; (2) a purge pump; (3) a liquids pump; (4) a post-treat cartridge; (5) a recycle/filter tank; (6) an evaporator high-liquid-level sensor; and (7) the product water conductivity monitor. Computer-based control/monitor instrumentation carries out operating-mode change sequences, monitors and displays subsystem parameters, maintains intramode controls, and stores and displays fault detection information. The mechanical hardware occupies 0.467 m3, requires 171 W of electrical power, and has a dry weight of 143 kg. The subsystem recovers potable water at a rate of 1.59 kg/hr, which is equivalent to a duty cycle of approximately 30% for a crew of three. The product water has no foul taste or odor. Continued development of the subsystem is recommended for reclaiming water for human consumption as well as for flash evaporator heat rejection, urinal flushing, washing, and other on-board water requirements.

  20. The Main Subsystems Involved in Defining the Quality Management System in a Hospital

    Directory of Open Access Journals (Sweden)

    Dobrea Valentina Alina

    2010-06-01

    Full Text Available The hospital is the most important organization in the health field, so hospitals must improve quality in all of the activities they deploy. A very suitable way to demonstrate a hospital's commitment to the quality of health services is certification of its quality management system according to ISO 9001:2000. To understand the architecture of the hospital quality management system, it is necessary to decompose the system into subsystems and analyze each separately: the managerial subsystem, the human subsystem, the social subsystem, the technical subsystem, and the informative subsystem. The relationships among these subsystems lead to the continuous improvement of quality in health services.

  1. Mathematical modeling of control subsystems for CELSS: Application to diet

    Science.gov (United States)

    Waleh, Ahmad; Nguyen, Thoi K.; Kanevsky, Valery

    1991-01-01

    The dynamic control of a Closed Ecological Life Support System (CELSS) in a closed space habitat is of critical importance. The development of a practical method of control is also a necessary step for the selection and design of realistic subsystems and processors for a CELSS. Diet is one of the dynamic factors that strongly influences, and is influenced by, the operational states of all major CELSS subsystems. Solutions to the problems of designing and maintaining a stable diet must be obtained from well-characterized expert subsystems. A general mathematical model that forms the basis of an expert control program for a CELSS is described. The formulation is expressed in terms of a complete set of time-dependent canonical variables. The system representation is dynamic and includes time-dependent storage buffers. The details of the algorithm are described, and steady-state results of applying the method to representative diets made from wheat, potato, and soybean are presented.
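
    A minimal sketch of the kind of time-dependent storage buffer such a model includes: a mass balance dB/dt = inflow(t) - outflow(t) integrated forward in time. The harvest profile, diet demand, and initial store below are hypothetical values for illustration, not the authors' parameters.

        import numpy as np

        # A CELSS food buffer fed by periodic crop harvests and drained by a
        # steady crew diet, integrated with forward Euler.
        dt, days = 1.0, 120
        t = np.arange(0, days, dt)
        buffer_kg = np.empty_like(t)
        buffer_kg[0] = 50.0                                # hypothetical initial store
        harvest = 2.0 + 0.5 * np.sin(2 * np.pi * t / 30)   # harvest inflow, kg/day
        diet_demand = 1.9                                  # crew consumption, kg/day
        for k in range(1, len(t)):
            buffer_kg[k] = buffer_kg[k-1] + dt * (harvest[k-1] - diet_demand)
        print(f"buffer range over {days} days: "
              f"{buffer_kg.min():.1f}-{buffer_kg.max():.1f} kg")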

  2. Double Shell Tank (DST) Transfer Piping Subsystem Specification

    International Nuclear Information System (INIS)

    GRAVES, C.E.

    2000-01-01

    This specification establishes the performance requirements and provides references to the requisite codes and standards to be applied during design of the Double-Shell Tank (DST) Transfer Piping Subsystem that supports the first phase of Waste Feed Delivery. This subsystem transfers waste between transfer-associated structures (pits) and to the River Protection Project (RPP) Privatization Contractor Facility, where it will be processed into an immobilized waste form. This specification is intended to be the basis for new projects/installations (W-521, etc.). It is not intended to retroactively affect previously established project design criteria without specific direction by the program.

  3. The complete Heyting algebra of subsystems and contextuality

    International Nuclear Information System (INIS)

    Vourdas, A.

    2013-01-01

    The finite set of subsystems of a finite quantum system with variables in Z(n) is studied as a Heyting algebra. The physical meaning of the logical connectives is discussed. It is shown that disjunction of subsystems is a more general concept than superposition. Consequently, the quantum probabilities related to commuting projectors in the subsystems are incompatible with associativity of the join in the Heyting algebra, unless the variables belong to the same chain. This leads to contextuality, which in the present formalism has the chains in the Heyting algebra as contexts. Logical Bell inequalities, which contain “Heyting factors,” are discussed. The formalism is also applied to the infinite set of all finite quantum systems, which is appropriately enlarged in order to become a complete Heyting algebra.

  4. Embedded Thermal Control for Subsystems for Next Generation Spacecraft Applications

    Science.gov (United States)

    Didion, Jeffrey R.

    2015-01-01

    Thermal Fluids and Analysis Workshop, Silver Spring MD NCTS 21070-15. NASA, the Defense Department and commercial interests are actively engaged in developing miniaturized spacecraft systems and scientific instruments to leverage smaller, cheaper spacecraft form factors such as CubeSats. This paper outlines research and development efforts among Goddard Space Flight Center personnel and its several partners to develop innovative embedded thermal control subsystems. The embedded thermal control subsystem is a cross-cutting enabling technology that integrates advanced manufacturing techniques to develop multifunctional intelligent structures that reduce the Size, Weight and Power (SWaP) consumption of both the thermal control subsystem and the overall spacecraft. Embedded thermal control subsystems permit heat acquisition and rejection at higher temperatures than state-of-the-art systems by employing both advanced heat transfer equipment (integrated heat exchangers) and high-heat-transfer phenomena. The Goddard Space Flight Center Thermal Engineering Branch has active investigations seeking to characterize advanced thermal control systems for near-term spacecraft missions. The embedded thermal control subsystem development effort consists of fundamental research as well as development of breadboard and prototype hardware and spaceflight validation efforts. This paper outlines relevant fundamental investigations of micro-scale heat transfer and electrically driven liquid film boiling. The hardware development efforts focus upon silicon-based high-heat-flux applications (electronic chips, power electronics, etc.) and multifunctional structures. Flight validation efforts include variable-gravity campaigns and a proposed CubeSat-based flight demonstration of a breadboard embedded thermal control system. The CubeSat investigation is a technology demonstration that will characterize, over a long-term low Earth orbit mission, a breadboard embedded thermal subsystem and its individual components.

  5. Ground test facility for nuclear testing of space reactor subsystems

    International Nuclear Information System (INIS)

    Quapp, W.J.; Watts, K.D.

    1985-01-01

    Two major reactor facilities at the INEL have been identified as easily adaptable for supporting the nuclear testing of the SP-100 reactor subsystem. They are the Engineering Test Reactor (ETR) and the Loss of Fluid Test Reactor (LOFT). In addition, there are machine shops, analytical laboratories, hot cells, and the supporting services (fire protection, safety, security, medical, waste management, etc.) necessary for conducting a nuclear test program. This paper presents the conceptual approach for modifying these reactor facilities to serve as the ground engineering test facility for the SP-100 nuclear subsystem. 4 figs

  6. Coexistence of uniquely ergodic subsystems of interval mapping

    International Nuclear Information System (INIS)

    Ye Xiangdong.

    1991-10-01

    The purpose of this paper is to show that uniquely ergodic subsystems of interval mappings coexist in the same way that minimal sets do. To do this we introduce some notation in section 2. In section 3 we define the D-function of a uniquely ergodic system and show its basic properties. We prove the coexistence of uniquely ergodic subsystems of interval mappings in section 4. Lastly, we give examples of uniquely ergodic systems with given D-functions in section 5. 27 refs

  7. Integrated flight/propulsion control - Subsystem specifications for performance

    Science.gov (United States)

    Neighbors, W. K.; Rock, Stephen M.

    1993-01-01

    A procedure is presented for calculating multiple subsystem specifications given a number of performance requirements on the integrated system. This procedure applies to problems where the control design must be performed in a partitioned manner. It is based on a structured singular value analysis, and generates specifications as magnitude bounds on subsystem uncertainties. The performance requirements should be provided in the form of bounds on transfer functions of the integrated system. This form allows the expression of model following, command tracking, and disturbance rejection requirements. The procedure is demonstrated on a STOVL aircraft design.
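
    A minimal sketch of checking a performance requirement expressed as a magnitude bound on a transfer function of the integrated system, of the form the procedure assumes as input. The second-order closed-loop transfer function and the bound of 2 are illustrative, not taken from the paper.

        import numpy as np

        # T(s) = 1/(s^2 + 0.8 s + 1) must stay below |T| <= 2 on the grid.
        w = np.logspace(-2, 2, 500)                 # frequency grid, rad/s
        s = 1j * w
        T = 1.0 / (s**2 + 0.8 * s + 1.0)            # closed-loop transfer function
        bound = 2.0                                 # performance bound on |T|
        ok = np.all(np.abs(T) <= bound)
        print("specification met:", ok, "| peak |T| =", np.abs(T).max().round(2))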

  8. Optomechanical design of TMT NFIRAOS Subsystems at INO

    Science.gov (United States)

    Lamontagne, Frédéric; Desnoyers, Nichola; Grenier, Martin; Cottin, Pierre; Leclerc, Mélanie; Martin, Olivier; Buteau-Vaillancourt, Louis; Boucher, Marc-André; Nash, Reston; Lardière, Olivier; Andersen, David; Atwood, Jenny; Hill, Alexis; Byrnes, Peter W. G.; Herriot, Glen; Fitzsimmons, Joeleff; Véran, Jean-Pierre

    2017-08-01

    The adaptive optics system for the Thirty Meter Telescope (TMT) is the Narrow-Field InfraRed Adaptive Optics System (NFIRAOS). Recently, INO has been involved in the optomechanical design of several subsystems of NFIRAOS, including the Instrument Selection Mirror (ISM), the NFIRAOS Beamsplitters (NBS), and the NFIRAOS Source Simulator system (NSS) comprising the Focal Plane Mask (FPM), the Laser Guide Star (LGS) sources, and the Natural Guide Star (NGS) sources. This paper presents an overview of these subsystems and the optomechanical design approaches used to meet the optical performance requirements under environmental constraints.

  9. Software Testbed for Developing and Evaluating Integrated Autonomous Subsystems

    Science.gov (United States)

    Ong, James; Remolina, Emilio; Prompt, Axel; Robinson, Peter; Sweet, Adam; Nishikawa, David

    2015-01-01

    To implement fault tolerant autonomy in future space systems, it will be necessary to integrate planning, adaptive control, and state estimation subsystems. However, integrating these subsystems is difficult, time-consuming, and error-prone. This paper describes Intelliface/ADAPT, a software testbed that helps researchers develop and test alternative strategies for integrating planning, execution, and diagnosis subsystems more quickly and easily. The testbed's architecture, graphical data displays, and implementations of the integrated subsystems support easy plug and play of alternate components to support research and development in fault-tolerant control of autonomous vehicles and operations support systems. Intelliface/ADAPT controls NASA's Advanced Diagnostics and Prognostics Testbed (ADAPT), which comprises batteries, electrical loads (fans, pumps, and lights), relays, circuit breakers, inverters, and sensors. During plan execution, an experimenter can inject faults into the ADAPT testbed by tripping circuit breakers, changing fan speed settings, and closing valves to restrict fluid flow. The diagnostic subsystem, based on NASA's Hybrid Diagnosis Engine (HyDE), detects and isolates these faults to determine the new state of the plant, ADAPT. Intelliface/ADAPT then updates its model of the ADAPT system's resources and determines whether the current plan can be executed using the reduced resources. If not, the planning subsystem generates a new plan that reschedules tasks, reconfigures ADAPT, and reassigns the use of ADAPT resources as needed to work around the fault. The resource model, planning domain model, and planning goals are expressed using NASA's Action Notation Modeling Language (ANML). Parts of the ANML model are generated automatically, and other parts are constructed by hand using the Planning Model Integrated Development Environment, a visual Eclipse-based IDE that accelerates ANML model development. Because native ANML planners are currently

  10. Novel Design Aspects of the Space Technology 5 Mechanical Subsystem

    Science.gov (United States)

    Rossoni, Peter; McGill, William

    2003-01-01

    This paper describes several novel design elements of the Space Technology 5 (ST5) spacecraft mechanical subsystem. The spacecraft structure itself takes a significant step in integrating electronics into the primary structure. The deployment system restrains the spacecraft during launch and imparts a predetermined spin rate upon release from its secondary payload accommodations. The deployable instrument boom incorporates traditional as well as new techniques to achieve light weight and stiffness. Analysis and test techniques used to validate these technologies are described. Numerous design choices were necessitated by the compact spacecraft size and strict mechanical subsystem requirements.

  11. Definition of an arcjet propulsion sub-system

    International Nuclear Information System (INIS)

    Price, T.W.

    1989-01-01

    An engineering flight demonstration of a 100 kWe Space Reactor Power System is planned for the mid to late 1990s. An arcjet-based propulsion subsystem will be included on the flight demonstration as a secondary experiment. Two studies, sponsored by the Key Technologies Directorate of the SDI Organization and managed by the Jet Propulsion Laboratory, are currently under way to define that propulsion subsystem. The principal tasks of those contracts and the plans for two later phases, an experimental verification of the concept and a flight qualification/delivery of a flight unit, are described. 9 refs

  12. Seismic Safety Margins Research Program. Phase 1. Project V. Structural sub-system response: subsystem response review

    International Nuclear Information System (INIS)

    Fogelquist, J.; Kaul, M.K.; Koppe, R.; Tagart, S.W. Jr.; Thailer, H.; Uffer, R.

    1980-03-01

    This project is directed toward a portion of the Seismic Safety Margins Research Program which includes one link in the seismic methodology chain. The link addressed here is the structural subsystem dynamic response which consists of those components and systems whose behavior is often determined decoupled from the major structural response. Typically the mathematical model utilized for the major structural response will include only the mass effects of the subsystem and the main model is used to produce the support motion inputs for subsystem seismic qualification. The main questions addressed in this report have to do with the seismic response uncertainty of safety-related components or equipment whose seismic qualification is performed by (a) analysis, (b) tests, or (c) combinations of analysis and tests, and where the seismic input is assumed to have no uncertainty

  13. Input data preprocessing method for exchange rate forecasting via neural network

    Directory of Open Access Journals (Sweden)

    Antić Dragan S.

    2014-01-01

    Full Text Available The aim of this paper is to present a method for selecting and preprocessing neural network input parameters. The purpose of the network is to forecast foreign exchange rates using artificial intelligence. Two data sets are formed for two different economic systems. Each system is represented by six categories with 70 economic parameters, which are used in the analysis. Reduction of these parameters within each category was performed using the principal component analysis method. Component interdependencies were established, relations between them were formed, and the newly formed relations were used to create the input vectors of a neural network. A multilayer feed-forward neural network was formed and trained using batch training. Finally, simulation results are presented, and it is concluded that the proposed input data preparation method is an effective way to preprocess neural network data. [Projects of the Ministry of Science of the Republic of Serbia, no. TR 35005, no. III 43007 and no. III 44006]
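
    A minimal sketch of the per-category reduction step described above: principal component analysis applied within each category of economic indicators, with the component scores concatenated into the network input vector. The six categories, the indicator counts, and the two retained components are illustrative stand-ins, not the paper's configuration.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        # Hypothetical categories: 6 groups of 12 indicators over 500 days.
        categories = {f"category_{i}": rng.normal(size=(500, 12)) for i in range(6)}

        inputs = []
        for name, X in categories.items():
            pca = PCA(n_components=2)          # keep 2 components per category
            inputs.append(pca.fit_transform(X))
        X_net = np.hstack(inputs)              # 500 samples x 12 network inputs
        print(X_net.shape)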

  14. Parallel finite elements with domain decomposition and its pre-processing

    International Nuclear Information System (INIS)

    Yoshida, A.; Yagawa, G.; Hamada, S.

    1993-01-01

    This paper describes a parallel finite element analysis using a domain decomposition method, and the pre-processing required for the parallel calculation. Computer simulations are about to replace experiments in various fields, and the scale of the models to be simulated tends to be extremely large. On the other hand, the computational environment has changed drastically in recent years; in particular, parallel processing on massively parallel computers or computer networks is considered a promising technique. In order to achieve high efficiency in such parallel computation environments, large task granularity and a well-balanced workload distribution are key issues. It is also important to reduce the cost of pre-processing in such parallel FEM. From this point of view, the authors developed a domain decomposition FEM with an automatic and dynamic task-allocation mechanism and an automatic mesh generation/domain subdivision system for it. (author)

  15. Protein from preprocessed waste activated sludge as a nutritional supplement in chicken feed.

    Science.gov (United States)

    Chirwa, Evans M N; Lebitso, Moses T

    2014-01-01

    Five groups of broiler chickens were raised on feed in which single-cell protein from preprocessed waste activated sludge (pWAS) was substituted for fishmeal in mass ratios of 0:100, 25:75, 50:50, 75:25, and 100:0 (pWAS:fishmeal). Forty chickens per batch were evaluated for growth rate, mortality rate, and feed conversion efficiency (η). The initial mass gain rate, mortality rate, and initial and operational cost analyses showed that protein from pWAS could successfully replace the commercial feed supplements, with a significant cost saving and without adversely affecting the health of the birds. The chickens raised on preprocessed WAS weighed 19% more than those raised on fishmeal protein supplement over a 45-day test period. Growing chickens on pWAS translated into a 46% cost saving due to the fast growth rate and minimal death losses before maturity.

  16. Application of preprocessing filtering on Decision Tree C4.5 and rough set theory

    Science.gov (United States)

    Chan, Joseph C. C.; Lin, Tsau Y.

    2001-03-01

    This paper compares two artificial intelligence methods, the C4.5 decision tree and Rough Set Theory, on stock market data. The Decision Tree C4.5 is reviewed together with Rough Set Theory. An enhanced window application is developed to facilitate pre-processing filtering by introducing feature (attribute) transformations, which allow users to input formulas and create new attributes. The application also produces three varieties of data sets using delaying, averaging, and summation. The results demonstrate the improvement that pre-processing with feature (attribute) transformations brings to Decision Tree C4.5. Moreover, the comparison between Decision Tree C4.5 and Rough Set Theory is based on clarity, automation, accuracy, dimensionality, raw data, and speed, and is supported by the rule sets generated by both algorithms on three different sets of data.
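
    A minimal sketch of the three transformation families the application produces (delaying, averaging, and summation), here applied to a hypothetical daily closing-price column with pandas; the column name and the three-day window are illustrative.

        import pandas as pd

        prices = pd.DataFrame({"close": [10.0, 10.5, 10.2, 10.8, 11.1, 10.9, 11.4]})
        prices["close_lag3"] = prices["close"].shift(3)            # delaying
        prices["close_avg3"] = prices["close"].rolling(3).mean()   # averaging
        prices["close_sum3"] = prices["close"].rolling(3).sum()    # summation
        print(prices.dropna())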

  17. Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines.

    Science.gov (United States)

    del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J; Raboso, Mariano

    2015-06-17

    Drawing on the results of an acoustic biometric system based on an MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters, and finally classifies them with a Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering; segmentation, based on a Gaussian Mixture Model (GMM), to separate the person from the background; masking, to reduce the dimensions of the images; and binarization, to reduce the size of each image. An analysis of the classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden, and the storage requirements.
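
    A minimal sketch of two of the stages described above, GMM-based segmentation of the person from the background followed by a linear SVM classifier, run on synthetic images. The image size, blob positions, and train/test split are illustrative, and the real system's spatial filtering and masking steps are omitted.

        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)

        def segment(img):
            """Two-component GMM on pixel intensities -> binary foreground mask."""
            gmm = GaussianMixture(n_components=2, random_state=0).fit(img.reshape(-1, 1))
            fg = np.argmax(gmm.means_.ravel())        # brighter component = person
            return (gmm.predict(img.reshape(-1, 1)) == fg).reshape(img.shape)

        def make_image(cls):
            """Fake 16x16 'acoustic image': class-dependent bright blob on noise."""
            img = rng.normal(0.2, 0.05, (16, 16))
            rows = slice(4, 10) if cls == 0 else slice(8, 14)
            img[rows, 5:11] += 0.8
            return img

        X = np.array([segment(make_image(c)).ravel() for c in (0, 1) * 40], dtype=float)
        y = np.array([0, 1] * 40)
        clf = LinearSVC(dual=False).fit(X[:60], y[:60])
        print("held-out accuracy:", clf.score(X[60:], y[60:]))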

  18. ENDF/B Pre-Processing Codes: Implementing and testing on a Personal Computer

    International Nuclear Information System (INIS)

    McLaughlin, P.K.

    1987-05-01

    This document describes the contents of the diskettes containing the ENDF/B Pre-Processing codes by D.E. Cullen, and example data for use in implementing and testing these codes on a Personal Computer of the type IBM-PC/AT. Upon request the codes are available from the IAEA Nuclear Data Section, free of charge, on a series of 7 diskettes. (author)

  19. The Influence of Preprocessing Steps on Graph Theory Measures Derived from Resting State fMRI.

    Science.gov (United States)

    Gargouri, Fatma; Kallel, Fathi; Delphine, Sebastien; Ben Hamida, Ahmed; Lehéricy, Stéphane; Valabregue, Romain

    2018-01-01

    Resting state functional MRI (rs-fMRI) is an imaging technique that allows the spontaneous activity of the brain to be measured. Measures of functional connectivity highly depend on the quality of the BOLD signal data processing. In this study, our aim was to study the influence of preprocessing steps and their order of application on small-world topology and their efficiency in resting state fMRI data analysis using graph theory. We applied the most standard preprocessing steps: slice-timing, realign, smoothing, filtering, and the tCompCor method. In particular, we were interested in how preprocessing can retain the small-world economic properties and how to maximize the local and global efficiency of a network while minimizing the cost. Tests that we conducted in 54 healthy subjects showed that the choice and ordering of preprocessing steps impacted the graph measures. We found that the csr (where we applied realignment, smoothing, and tCompCor as a final step) and the scr (where we applied realignment, tCompCor and smoothing as a final step) strategies had the highest mean values of global efficiency (eg). Furthermore, we found that the fscr strategy (where we applied realignment, tCompCor, smoothing, and filtering as a final step), had the highest mean local efficiency (el) values. These results confirm that the graph theory measures of functional connectivity depend on the ordering of the processing steps, with the best results being obtained using smoothing and tCompCor as the final steps for global efficiency with additional filtering for local efficiency.

  20. A clinical evaluation of the RNCA study using Fourier filtering as a preprocessing method

    Energy Technology Data Exchange (ETDEWEB)

    Robeson, W.; Alcan, K.E.; Graham, M.C.; Palestro, C.; Oliver, F.H.; Benua, R.S.

    1984-06-01

    Forty-one patients (25 male, 16 female) were studied by Radionuclide Cardioangiography (RNCA) in our institution. There were 42 rest studies and 24 stress studies (66 studies total). Sixteen patients were normal, 15 had ASHD, seven had a cardiomyopathy, and three had left-sided valvular regurgitation. Each study was preprocessed using both the standard nine-point smoothing method and Fourier filtering. Amplitude and phase images were also generated. Both preprocessing methods were compared with respect to image quality, border definition, reliability and reproducibility of the LVEF, and cine wall motion interpretation. Image quality and border definition were judged superior by the consensus of two independent observers in 65 of 66 studies (98%) using Fourier filtered data. The LVEF differed between the two processes by greater than 0.05 in 17 of 66 studies (26%) including five studies in which the LVEF could not be determined using nine-point smoothed data. LV wall motion was normal by both techniques in all control patients by cine analysis. However, cine wall motion analysis using Fourier filtered data demonstrated additional abnormalities in 17 of 25 studies (68%) in the ASHD group, including three uninterpretable studies using nine-point smoothed data. In the cardiomyopathy/valvular heart disease group, ten of 18 studies (56%) had additional wall motion abnormalities using Fourier filtered data (including four uninterpretable studies using nine-point smoothed data). We conclude that Fourier filtering is superior to the nine-point smooth preprocessing method now in general use in terms of image quality, border definition, generation of an LVEF, and cine wall motion analysis. The advent of the array processor makes routine preprocessing by Fourier filtering a feasible technologic advance in the development of the RNCA study.
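
    A minimal sketch contrasting the two preprocessing options on a synthetic gated blood-pool frame: a 3x3 nine-point spatial smooth versus a 2D Fourier low-pass filter. The phantom geometry, noise level, and frequency cutoff are illustrative assumptions, not the study's parameters.

        import numpy as np

        rng = np.random.default_rng(1)
        yy, xx = np.mgrid[:64, :64]
        img = np.zeros((64, 64))
        img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 100.0   # "LV" blood pool
        img = img + rng.poisson(10, img.shape)                   # counting noise

        # Nine-point smooth: average each pixel with its 8 neighbours.
        pad = np.pad(img, 1, mode="edge")
        smooth9 = sum(pad[i:i+64, j:j+64] for i in range(3) for j in range(3)) / 9

        # Fourier filter: zero out spatial frequencies above a cutoff radius.
        F = np.fft.fftshift(np.fft.fft2(img))
        r2 = (yy - 32) ** 2 + (xx - 32) ** 2                     # DC at (32, 32)
        F[r2 > 12 ** 2] = 0
        fourier = np.fft.ifft2(np.fft.ifftshift(F)).real
        print("residual std: smooth9 =", smooth9.std().round(1),
              "| fourier =", fourier.std().round(1))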

  1. Review of Data Preprocessing Methods for Sign Language Recognition Systems based on Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Zorins Aleksejs

    2016-12-01

    Full Text Available The article presents an introductory analysis of a research topic highly relevant for the Latvian deaf community: the development of a Latvian Sign Language recognition system. More specifically, data preprocessing methods are discussed, and several approaches are shown, with a focus on systems based on artificial neural networks, which are among the most successful solutions for the sign language recognition task.

  2. Evaluation of a Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    Science.gov (United States)

    Buyens, Wim; van Dijk, Bas; Moonen, Marc; Wouters, Jan

    2018-01-01

    Although for most cochlear implant (CI) users good speech understanding is reached (at least in quiet environments), the perception and the appraisal of music are generally unsatisfactory. The improvement in music appraisal was evaluated in CI participants by using a stereo music preprocessing scheme implemented on a take-home device, in a comfortable listening environment. The preprocessing allowed adjusting the balance among vocals/bass/drums and other instruments, and was evaluated for different genres of music. The correlation between the preferred settings and the participants' speech and pitch detection performance was investigated. During the initial visit preceding the take-home test, the participants' speech-in-noise perception and pitch detection performance were measured, and a questionnaire about their music involvement was completed. The take-home device was provided, including the stereo music preprocessing scheme and seven playlists with six songs each. The participants were asked to adjust the balance by means of a turning wheel to make the music sound most enjoyable, and to repeat this three times for all songs. Twelve postlingually deafened CI users participated in the study. The data were collected by means of a take-home device, which preserved all the preferred settings for the different songs. Statistical analysis was done with a Friedman test (with post hoc Wilcoxon signed-rank test) to check the effect of "Genre." The correlations were investigated with Pearson's and Spearman's correlation coefficients. All participants preferred a balance significantly different from the original balance. Differences across participants were observed which could not be explained by perceptual abilities. An effect of "Genre" was found, showing significantly smaller preferred deviation from the original balance for Golden Oldies compared to the other genres. The stereo music preprocessing scheme showed an improvement in music appraisal with complex music and

  3. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. A proposed application and sample results of hyperspectral image analysis are also presented.

  4. A clinical evaluation of the RNCA study using Fourier filtering as a preprocessing method

    International Nuclear Information System (INIS)

    Robeson, W.; Alcan, K.E.; Graham, M.C.; Palestro, C.; Oliver, F.H.; Benua, R.S.

    1984-01-01

    Forty-one patients (25 male, 16 female) were studied by Radionuclide Cardioangiography (RNCA) in our institution. There were 42 rest studies and 24 stress studies (66 studies total). Sixteen patients were normal, 15 had ASHD, seven had a cardiomyopathy, and three had left-sided valvular regurgitation. Each study was preprocessed using both the standard nine-point smoothing method and Fourier filtering. Amplitude and phase images were also generated. Both preprocessing methods were compared with respect to image quality, border definition, reliability and reproducibility of the LVEF, and cine wall motion interpretation. Image quality and border definition were judged superior by the consensus of two independent observers in 65 of 66 studies (98%) using Fourier filtered data. The LVEF differed between the two processes by greater than 0.05 in 17 of 66 studies (26%) including five studies in which the LVEF could not be determined using nine-point smoothed data. LV wall motion was normal by both techniques in all control patients by cine analysis. However, cine wall motion analysis using Fourier filtered data demonstrated additional abnormalities in 17 of 25 studies (68%) in the ASHD group, including three uninterpretable studies using nine-point smoothed data. In the cardiomyopathy/valvular heart disease group, ten of 18 studies (56%) had additional wall motion abnormalities using Fourier filtered data (including four uninterpretable studies using nine-point smoothed data). We conclude that Fourier filtering is superior to the nine-point smooth preprocessing method now in general use in terms of image quality, border definition, generation of an LVEF, and cine wall motion analysis. The advent of the array processor makes routine preprocessing by Fourier filtering a feasible technologic advance in the development of the RNCA study

  5. The Influence of Preprocessing Steps on Graph Theory Measures Derived from Resting State fMRI

    Directory of Open Access Journals (Sweden)

    Fatma Gargouri

    2018-02-01

    Full Text Available Resting state functional MRI (rs-fMRI) is an imaging technique that allows the spontaneous activity of the brain to be measured. Measures of functional connectivity highly depend on the quality of the BOLD signal data processing. In this study, our aim was to study the influence of preprocessing steps and their order of application on small-world topology and their efficiency in resting state fMRI data analysis using graph theory. We applied the most standard preprocessing steps: slice-timing, realign, smoothing, filtering, and the tCompCor method. In particular, we were interested in how preprocessing can retain the small-world economic properties and how to maximize the local and global efficiency of a network while minimizing the cost. Tests that we conducted in 54 healthy subjects showed that the choice and ordering of preprocessing steps impacted the graph measures. We found that the csr (where we applied realignment, smoothing, and tCompCor as a final step) and the scr (where we applied realignment, tCompCor and smoothing as a final step) strategies had the highest mean values of global efficiency (eg). Furthermore, we found that the fscr strategy (where we applied realignment, tCompCor, smoothing, and filtering as a final step) had the highest mean local efficiency (el) values. These results confirm that the graph theory measures of functional connectivity depend on the ordering of the processing steps, with the best results being obtained using smoothing and tCompCor as the final steps for global efficiency with additional filtering for local efficiency.

  7. Cascade Distillation Subsystem Development: Progress Toward a Distillation Comparison Test

    Science.gov (United States)

    Callahan, M. R.; Lubman, A.; Pickering, Karen D.

    2009-01-01

    Recovery of potable water from wastewater is essential for the success of long-duration manned missions to the Moon and Mars. Honeywell International and a team from NASA Johnson Space Center (JSC) are developing a wastewater processing subsystem that is based on centrifugal vacuum distillation. The wastewater processor, referred to as the Cascade Distillation Subsystem (CDS), utilizes an innovative and efficient multistage thermodynamic process to produce purified water. The rotary centrifugal design of the system also provides gas/liquid phase separation and liquid transport under microgravity conditions. A five-stage subsystem unit has been designed, built, delivered, and integrated into the NASA JSC Advanced Water Recovery Systems Development Facility for performance testing. A major test objective of the project is to demonstrate the advancement of the CDS technology from the breadboard level to a subsystem-level unit. An initial round of CDS performance testing was completed in fiscal year (FY) 2008. Based on FY08 testing, the system is now in development to support an Exploration Life Support (ELS) Project distillation comparison test expected to begin in early 2009. As part of the project objectives planned for FY09, the system will be reconfigured to support the ELS comparison test. The CDS will then be challenged with a series of human-generated waste streams representative of those anticipated for a lunar outpost. This paper provides a description of the CDS technology, a status of the current project activities, and data on the system's performance to date.

  8. Mark 4A DSN receiver-exciter and transmitter subsystems

    Science.gov (United States)

    Wick, M. R.

    1986-01-01

    The present configuration of the Mark 4A DSN Receiver-Exciter and Transmitter Subsystems is described. Functional requirements and key characteristics are given to show the differences in the capabilities required by the Networks Consolidation task for combined High Earth Orbiter and Deep Space Network tracking support.

  9. Effector-Triggered Self-Replication in Coupled Subsystems

    NARCIS (Netherlands)

    Komáromy, Dávid; Tezcan, Meniz; Schaeffer, Gaël; Marić, Ivana; Otto, Sijbren

    2017-01-01

    In living systems, processes like genome duplication and cell division are carefully synchronized through subsystem coupling. If we are to create life de novo, similar control over essential processes such as self-replication needs to be developed. Here we report that coupling two dynamic

  10. Computer Simulation of the Circulation Subsystem of a Library

    Science.gov (United States)

    Shaw, W. M., Jr.

    1975-01-01

    When circulation data are used as input parameters for a computer simulation of a library's circulation subsystem, the results of the simulation provide information on book availability and delays. The model may be used to simulate alternative loan policies. (Author/LS)

  11. Double-Shell Tank (DST) Diluent and Flush Subsystem Specification

    International Nuclear Information System (INIS)

    GRAVES, C.E.

    2000-01-01

    The Double-Shell Tank (DST) Diluent and Flush Subsystem is intended to support Waste Feed Delivery. The DST Diluent and Flush Subsystem specification describes the relationship of this system with the DST System, describes the functions that must be performed by the system, and establishes the performance requirements to be applied to the design of the system. It also provides references for the requisite codes and standards. The DST Diluent and Flush Subsystem will treat the waste to make waste transfer more favorable. This will be accomplished by diluting the waste, dissolving the soluble portion of the waste, and flushing waste residuals from the transfer line. The Diluent and Flush Subsystem will consist of the following: the Diluent and Flush Station(s), where chemicals will be off-loaded, temporarily stored, mixed as necessary, heated, and metered to the delivery system; a piping delivery system to deliver the chemicals to the appropriate valve or pump pit; and associated support structures. This specification is intended to be the basis for new projects/installations. It is not intended to retroactively affect previously established project design criteria without specific direction by the program.

  12. A shell-model calculation in terms of correlated subsystems

    International Nuclear Information System (INIS)

    Boisson, J.P.; Silvestre-Brac, B.

    1979-01-01

    A method for solving the shell-model equations in terms of a basis which includes correlated subsystems is presented. It is shown that the method allows drastic truncations of the basis to be made. The corresponding calculations are easy to perform and can be carried out rapidly

  13. Compliance with NRC subsystem requirements in the repository licensing process

    International Nuclear Information System (INIS)

    Minwalla, H.

    1994-01-01

    Section 121 of the Nuclear Waste Policy Act of 1982 requires the Nuclear Regulatory Commission (Commission) to issue technical requirements and criteria, for the use of a system of multiple barriers in the design of the repository, that are not inconsistent with any comparable standard promulgated by the Environmental Protection Agency (EPA). The Administrator of the EPA is required to promulgate generally applicable standards for protection of the general environment from offsite releases from radioactive material in repositories. The Commission's regulations pertaining to geologic repositories are provided in 10 CFR part 60. The Commission has provided in 10 CFR 60.112 the overall post-closure system performance objective which is used to demonstrate compliance with the EPA high-level waste (HLW) disposal standard. In addition, the Commission has provided, in 10 CFR 60.113, subsystem performance requirements for substantially complete containment, fractional release rate, and groundwater travel time; however, none of these subsystem performance requirements have a causal technical nexus with the EPA HLW disposal standard. This paper examines the issue of compliance with the conflicting dual regulatory role of subsystem performance requirements in the repository licensing process and recommends several approaches that would appropriately define the role of subsystem performance requirements in the repository licensing process

  14. Charactering lidar optical subsystem using four quadrants method

    Science.gov (United States)

    Tian, Xiaomin; Liu, Dong; Xu, Jiwei; Wang, Zhenzhu; Wang, Bangxin; Wu, Decheng; Zhong, Zhiqing; Xie, Chenbo; Wang, Yingjian

    2018-02-01

    Lidar is a kind of active optical remote sensing instrument that can be used to sound the atmosphere with high spatial and temporal resolution. Many atmospheric parameters can be retrieved by applying different inversion algorithms to the lidar backscatter signal. The basic setup of a lidar consists of a transmitter and a receiver. To ensure the quality of the lidar signal data, the lidar must be calibrated before being used to measure atmospheric variables. It is significant to characterize and analyze the lidar optical subsystem, because a well-aligned optical subsystem contributes to high-quality signal data. We pay close attention to the telecover test to characterize and analyze the lidar optical subsystem. The telecover test, also called the four-quadrants method, consists in dividing the telescope aperture into four quadrants. When the optical subsystem of a lidar is well configured, the normalized signals from the four quadrants will agree with each other to some level. Testing our WARL-II lidar by the four-quadrants method, we find the signals of the four quadrants basically consistent with each other both in the near range and in the far range. In detail, however, the signals in the near range show some slight distinctions resulting from the overlap function, and some signal differences are induced by atmospheric instability.
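
    A minimal sketch of a telecover comparison: each quadrant's profile is normalized over a far-range interval and the relative deviation from the quadrant mean is inspected. The synthetic overlap profiles and the 3 km normalization boundary are illustrative, not WARL-II values.

        import numpy as np

        rng = np.random.default_rng(0)
        r = np.linspace(0.1, 10.0, 200)                    # range, km

        def quadrant(shift):
            """Synthetic quadrant profile; misalignment shifts the overlap."""
            overlap = 1 / (1 + np.exp(-(r - 0.8 - shift) / 0.15))
            return overlap * np.exp(-r / 4) / r**2 + rng.normal(0, 1e-5, r.size)

        quads = np.array([quadrant(s) for s in (0.0, 0.02, -0.01, 0.03)])
        far = r > 3.0                                      # normalization interval
        norm = quads / quads[:, far].mean(axis=1, keepdims=True)
        dev = (norm - norm.mean(axis=0)) / norm.mean(axis=0)
        print("max relative deviation, far range:",
              np.abs(dev[:, far]).max().round(3))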

  15. Supervised pre-processing approaches in multiple class variables classification for fish recruitment forecasting

    KAUST Repository

    Fernandes, José Antonio

    2013-02-01

    A multi-species approach to fisheries management requires taking into account the interactions between species in order to improve recruitment forecasting of the fish species. Recent advances in Bayesian networks allow the learning of models with several interrelated variables to be forecasted simultaneously. These models are known as multi-dimensional Bayesian network classifiers (MDBNs). Pre-processing steps are critical for the posterior learning of the model in these kinds of domains. Therefore, in the present study, a set of 'state-of-the-art' uni-dimensional pre-processing methods, within the categories of missing data imputation, feature discretization and feature subset selection, are adapted to be used with MDBNs. A framework that includes the proposed multi-dimensional supervised pre-processing methods, coupled with a MDBN classifier, is tested with synthetic datasets and the real domain of fish recruitment forecasting. Correct simultaneous forecasting of three fish species (anchovy, sardine and hake) is doubled (from 17.3% to 29.5%) using the multi-dimensional approach in comparison to mono-species models. The probability assessments also show a large improvement, reducing the average error (estimated by means of the Brier score) from 0.35 to 0.27. Finally, these results are also superior to forecasting the species by pairs.

  16. Preprocessing with Photoshop Software on Microscopic Images of A549 Cells in Epithelial-Mesenchymal Transition.

    Science.gov (United States)

    Ren, Zhou-Xin; Yu, Hai-Bin; Shen, Jun-Ling; Li, Ya; Li, Jian-Sheng

    2015-06-01

    To establish a preprocessing method for cell morphometry in microscopic images of A549 cells in epithelial-mesenchymal transition (EMT), Adobe Photoshop CS2 (Adobe Systems, Inc.) was used for preprocessing the images. First, all images were processed for size uniformity and high distinguishability between the cell and background areas. Then, a blank image of the same size was established with grids, and the cross points of the grids were marked in a distinct color. The blank image was merged into a processed image. In the merged images, the cells overlying one or more cross points were chosen, and the cell areas were then enclosed and filled in a distinct color. Except for the chosen cellular areas, all areas were changed into a uniform hue. Three observers quantified the roundness of cells in the images with the image preprocessing (IPP) method or without it (Controls). Furthermore, one observer measured the roundness three times with each of the two methods. The results obtained with IPP and Controls were compared for repeatability and reproducibility. Compared with the Control method, the IPP method resulted, among the three observers, in a higher number and a higher percentage of identically chosen cells in an image. The relative average deviation values of roundness, whether for three observers or one observer, were significantly higher in Controls than in IPPs. With preprocessing in Photoshop, the choice of a cell from an image was more objective, regular, and accurate, increasing the reproducibility and repeatability of morphometry of A549 cells in epithelial-mesenchymal transition.
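
    A minimal sketch of the roundness measure the observers quantified, computed as 4*pi*area/perimeter^2 (equal to 1.0 for a perfect circle), using an ellipse as a stand-in for a segmented A549 cell; scikit-image is assumed for the region measurements.

        import numpy as np
        from skimage.draw import ellipse
        from skimage.measure import label, regionprops

        img = np.zeros((200, 200), dtype=np.uint8)
        rr, cc = ellipse(100, 100, 40, 25)       # an elongated, EMT-like shape
        img[rr, cc] = 1

        region = regionprops(label(img))[0]
        roundness = 4 * np.pi * region.area / region.perimeter ** 2
        print(f"roundness = {roundness:.2f}")    # < 1: shape deviates from a circle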

  17. Characterizing the continuously acquired cardiovascular time series during hemodialysis, using median hybrid filter preprocessing noise reduction

    Directory of Open Access Journals (Sweden)

    Wilson S

    2015-01-01

    Full Text Available Abstract: The clinical characterization of cardiovascular dynamics during hemodialysis (HD) has important pathophysiological implications in terms of diagnostic, cardiovascular risk assessment, and treatment efficacy perspectives. Currently the diagnosis of significant intradialytic systolic blood pressure (SBP) changes among HD patients is imprecise and opportunistic, reliant upon the presence of hypotensive symptoms in conjunction with coincident but isolated noninvasive brachial cuff blood pressure (NIBP) readings. Considering hemodynamic variables as a time series makes a continuous recording approach more desirable than intermittent measures; however, in the clinical environment, the data signal is susceptible to corruption due to both impulsive and Gaussian-type noise. Signal preprocessing is an attractive solution to this problem. Prospectively collected continuous noninvasive SBP data over the short-break intradialytic period in ten patients was preprocessed using a novel median hybrid filter (MHF) algorithm and compared with 50 time-coincident pairs of intradialytic NIBP measures from routine HD practice. The median hybrid preprocessing technique for continuously acquired cardiovascular data yielded a dynamic regression without significant noise and artifact, suitable for high-level profiling of time-dependent SBP behavior. Signal accuracy is highly comparable with standard NIBP measurement, with the added clinical benefit of dynamic real-time hemodynamic information.

  18. Characterizing the continuously acquired cardiovascular time series during hemodialysis, using median hybrid filter preprocessing noise reduction.

    Science.gov (United States)

    Wilson, Scott; Bowyer, Andrea; Harrap, Stephen B

    2015-01-01

    The clinical characterization of cardiovascular dynamics during hemodialysis (HD) has important pathophysiological implications in terms of diagnostic, cardiovascular risk assessment, and treatment efficacy perspectives. Currently the diagnosis of significant intradialytic systolic blood pressure (SBP) changes among HD patients is imprecise and opportunistic, reliant upon the presence of hypotensive symptoms in conjunction with coincident but isolated noninvasive brachial cuff blood pressure (NIBP) readings. Considering hemodynamic variables as a time series makes a continuous recording approach more desirable than intermittent measures; however, in the clinical environment, the data signal is susceptible to corruption due to both impulsive and Gaussian-type noise. Signal preprocessing is an attractive solution to this problem. Prospectively collected continuous noninvasive SBP data over the short-break intradialytic period in ten patients was preprocessed using a novel median hybrid filter (MHF) algorithm and compared with 50 time-coincident pairs of intradialytic NIBP measures from routine HD practice. The median hybrid preprocessing technique for continuously acquired cardiovascular data yielded a dynamic regression without significant noise and artifact, suitable for high-level profiling of time-dependent SBP behavior. Signal accuracy is highly comparable with standard NIBP measurement, with the added clinical benefit of dynamic real-time hemodynamic information.
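
    A minimal sketch of a median hybrid filter of the kind described: each output sample is the median of the left half-window mean, the current sample, and the right half-window mean, which suppresses impulsive artifact while following slow SBP trends. The window length and the synthetic trace are illustrative assumptions, not the authors' algorithm parameters.

        import numpy as np

        def median_hybrid_filter(x, half=5):
            """Median of (left mean, current sample, right mean) at each point."""
            x = np.asarray(x, dtype=float)
            y = x.copy()
            for i in range(half, len(x) - half):
                left = x[i - half:i].mean()
                right = x[i + 1:i + 1 + half].mean()
                y[i] = np.median([left, x[i], right])
            return y

        # Synthetic intradialytic SBP trace: slow decline + noise + cuff artifact.
        rng = np.random.default_rng(2)
        t = np.arange(240)                              # minutes
        sbp = 150 - 0.08 * t + rng.normal(0, 2, t.size)
        sbp[[60, 61, 150]] += 40                        # impulsive artifact
        clean = median_hybrid_filter(sbp)
        print("artifact at t=60 before/after:",
              round(sbp[60], 1), round(clean[60], 1))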

  19. Validation of DWI pre-processing procedures for reliable differentiation between human brain gliomas.

    Science.gov (United States)

    Vellmer, Sebastian; Tonoyan, Aram S; Suter, Dieter; Pronin, Igor N; Maximov, Ivan I

    2018-02-01

    Diffusion magnetic resonance imaging (dMRI) is a powerful tool in clinical applications, in particular, in oncology screening. dMRI demonstrated its benefit and efficiency in the localisation and detection of different types of human brain tumours. Clinical dMRI data suffer from multiple artefacts such as motion and eddy-current distortions, contamination by noise, outliers etc. In order to increase the image quality of the derived diffusion scalar metrics and the accuracy of the subsequent data analysis, various pre-processing approaches are actively developed and used. In the present work we assess the effect of different pre-processing procedures such as a noise correction, different smoothing algorithms and spatial interpolation of raw diffusion data, with respect to the accuracy of brain glioma differentiation. As a set of sensitive biomarkers of the glioma malignancy grades we chose the derived scalar metrics from diffusion and kurtosis tensor imaging as well as the neurite orientation dispersion and density imaging (NODDI) biophysical model. Our results show that the application of noise correction, anisotropic diffusion filtering, and cubic-order spline interpolation resulted in the highest sensitivity and specificity for glioma malignancy grading. Thus, these pre-processing steps are recommended for the statistical analysis in brain tumour studies.

  20. [Study of near infrared spectral preprocessing and wavelength selection methods for endometrial cancer tissue].

    Science.gov (United States)

    Zhao, Li-Ting; Xiang, Yu-Hong; Dai, Yin-Mei; Zhang, Zhuo-Yong

    2010-04-01

    Near infrared spectroscopy was applied to measure tissue slices of endometrial tissues and collect their spectra. A total of 154 spectra were obtained from 154 samples. The numbers of normal, hyperplasia, and malignant samples were 36, 60, and 58, respectively. Original near infrared spectra are composed of many variables and contain interference, including instrument errors and physical effects such as particle size and light scatter. In order to reduce these influences, the original spectra should be treated with spectral preprocessing methods to compress variables and extract useful information, so the methods of spectral preprocessing and wavelength selection play an important role in the near infrared spectroscopy technique. In the present paper the raw spectra were processed using various preprocessing methods, including first derivative, multiplicative scatter correction, the Savitzky-Golay first derivative algorithm, standard normal variate, smoothing, and moving-window median. The standard deviation was used to select the optimal spectral region of 4 000-6 000 cm(-1). Then principal component analysis was used for classification. Principal component analysis results showed that the three types of samples could be discriminated completely, and the accuracy almost reached 100%. This study demonstrated that near infrared spectroscopy technology combined with chemometrics methods could be a fast, efficient, and novel means to diagnose cancer. The proposed methods would be a promising and significant diagnostic technique for early stage cancer.
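
    A minimal sketch of two of the preprocessing methods named above, standard normal variate followed by a Savitzky-Golay first derivative, applied to spectra stored row-wise; the synthetic data and the window/polynomial settings are illustrative, not the paper's choices.

        import numpy as np
        from scipy.signal import savgol_filter

        rng = np.random.default_rng(0)
        spectra = rng.normal(1.0, 0.1, (154, 600)).cumsum(axis=1)  # fake absorbances

        # Standard normal variate: per-spectrum centering and scaling, which
        # suppresses multiplicative scatter effects.
        snv = (spectra - spectra.mean(axis=1, keepdims=True)) \
              / spectra.std(axis=1, keepdims=True)

        # Savitzky-Golay first derivative: removes baseline offsets while smoothing.
        sg1 = savgol_filter(snv, window_length=15, polyorder=2, deriv=1, axis=1)
        print(sg1.shape)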

  1. Data pre-processing for web log mining: Case study of commercial bank website usage analysis

    Directory of Open Access Journals (Sweden)

    Jozef Kapusta

    2013-01-01

    Full Text Available We use data cleaning, integration, reduction and data conversion methods at the pre-processing stage of data analysis. Data pre-processing techniques improve the overall quality of the patterns mined. The paper describes the use of standard pre-processing methods for preparing data of a commercial bank website, in the form of a log file obtained from the web server. Data cleaning, the simplest step of data pre-processing, is non-trivial here because the analysed content is highly specific. We had to deal with frequent changes of the content and even frequent changes of the structure; regular changes in the structure make use of the sitemap impossible. We present approaches for dealing with this problem, and we were able to create the sitemap dynamically, based solely on the content of the log file. In this case study, we also examined just one part of the website rather than performing the standard analysis of an entire website, as we did not have access to all log files for security reasons. As a result, the traditional practices had to be adapted for this special case. Analysing just a small fraction of the website resulted in short session times for regular visitors, and we were not able to use the recommended methods to determine the optimal value of the session timeout. Therefore, in this paper we propose new methods based on outlier identification for raising the accuracy of the session length.
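
    A minimal sketch of log-file sessionization with a fixed inactivity timeout; the paper instead derives the timeout from outlier identification, and the field names and the 30-minute default here are illustrative.

        import pandas as pd

        log = pd.DataFrame({
            "ip": ["1.2.3.4", "1.2.3.4", "1.2.3.4", "5.6.7.8"],
            "time": pd.to_datetime([
                "2013-01-10 10:00:00", "2013-01-10 10:05:00",
                "2013-01-10 11:30:00", "2013-01-10 10:01:00"]),
        })
        log = log.sort_values(["ip", "time"])
        gap = log.groupby("ip")["time"].diff()          # time since previous hit
        timeout = pd.Timedelta(minutes=30)
        log["session"] = (gap.isna() | (gap > timeout)).cumsum()  # new id on gap
        print(log)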

  2. Characterization of the power and efficiency of Stirling engine subsystems

    International Nuclear Information System (INIS)

    García, D.; González, M.A.; Prieto, J.I.; Herrero, S.; López, S.; Mesonero, I.; Villasante, C.

    2014-01-01

    Highlights: • We review experimental data from a V160 engine developed for cogeneration. • We also investigate the V161 solar engine. • The possible margin of improvement is evaluated for each subsystem. • The procedure is based on similarity models and thermodynamic models. • The procedure may be of general interest for other prototypes. - Abstract: The development of systems based on Stirling machines is limited by the lack of data about the performance of the various subsystems that are located between the input and output power sections. The measurement of some of the variables used to characterise these internal subsystems presents difficulties, particularly in the working gas circuit and the drive mechanism, which causes experimental reports to rarely be comprehensive enough for analysing the whole performance of the machine. In this article, we review experimental data from a V160 engine developed for cogeneration to evaluate the general validity; we also investigate one of the most successful prototypes used in dish-Stirling systems, the V161 engine, for which a seemingly small mechanical efficiency value has been recently predicted. The procedure described in this article allows the possible margin of improvement to be evaluated for each subsystem. The procedure is based on similarity models, which have been previously developed through experimental data from very different prototypes. Thermodynamic models for the gas circuit are also considered. Deduced characteristic curves show that both prototypes have an advanced degree of development as evidenced by relatively high efficiencies for each subsystem. The analyses are examples that demonstrate the qualities of dimensionless numbers in representing physical phenomena with maximum generality and physical meaning

  3. Thresholding: A Pixel-Level Image Processing Methodology Preprocessing Technique for an OCR System for the Brahmi Script

    Directory of Open Access Journals (Sweden)

    H. K. Anasuya Devi

    2006-12-01

    Full Text Available In this paper we study the methodology employed for preprocessing the archaeological images. We present the various algorithms used in the low-level processing stage of image analysis for Optical Character Recognition System for Brahmi Script. The image preprocessing technique covered in this paper is thresholding. We also try to analyze the results obtained by the pixel-level processing algorithms.
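
    A minimal sketch of pixel-level thresholding as an OCR preprocessing step, using Otsu's global threshold from scikit-image; the input file name is hypothetical and the paper's exact thresholding algorithms may differ.

    ```python
    import numpy as np
    from skimage import color, filters, io

    image = io.imread("inscription.png")       # hypothetical scanned inscription
    gray = color.rgb2gray(image)               # intensities scaled to [0, 1]
    t = filters.threshold_otsu(gray)           # data-driven global threshold
    binary = gray > t                          # foreground / background mask
    io.imsave("inscription_binary.png", (binary * 255).astype(np.uint8))
    ```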

  4. Optimal preprocessing of serum and urine metabolomic data fusion for staging prostate cancer through design of experiment

    International Nuclear Information System (INIS)

    Zheng, Hong; Cai, Aimin; Zhou, Qi; Xu, Pengtao; Zhao, Liangcai; Li, Chen; Dong, Baijun; Gao, Hongchang

    2017-01-01

    Accurate classification of cancer stages will achieve precision treatment for cancer. Metabolomics presents biological phenotypes at the metabolite level and holds great potential for cancer classification. Since metabolomic data can be obtained from different samples or analytical techniques, data fusion has been applied to improve classification accuracy. Data preprocessing is an essential step during metabolomic data analysis. Therefore, we developed an innovative optimization method to select a proper data preprocessing strategy for metabolomic data fusion using a design of experiment approach for improving the classification of prostate cancer (PCa) stages. In this study, urine and serum samples were collected from participants at five phases of PCa and analyzed using a ¹H NMR-based metabolomic approach. Partial least squares-discriminant analysis (PLS-DA) was used as the classification model and its performance was assessed by goodness of fit (R²) and predictive ability (Q²). Results show that data preprocessing significantly affects classification performance and depends on data properties. Using the fused metabolomic data from urine and serum, the PLS-DA model with the optimal data preprocessing (R² = 0.729, Q² = 0.504, P < 0.0001) can effectively improve model performance and achieve a better classification result for PCa stages as compared with that without data preprocessing (R² = 0.139, Q² = 0.006, P = 0.450). Therefore, we propose that metabolomic data fusion integrated with an optimal data preprocessing strategy can significantly improve the classification of cancer stages for precision treatment. - Highlights: • NMR metabolomic analysis of body fluids can be used for staging prostate cancer. • Data preprocessing is an essential step for metabolomic analysis. • Data fusion improves information recovery for cancer classification. • Design of experiment achieves optimal preprocessing of metabolomic data fusion.
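
    For illustration, PLS-DA can be emulated as PLS regression on a dummy-coded class matrix, with R² computed on the training fit and Q² from cross-validated predictions. This sketch uses synthetic data in place of the fused urine/serum measurements:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 30))            # stand-in fused urine + serum features
    y = np.repeat(np.arange(5), 10)          # five phases of PCa
    Y = np.eye(5)[y]                         # dummy-coded response for PLS-DA

    pls = PLSRegression(n_components=3).fit(X, Y)
    tss = ((Y - Y.mean(axis=0)) ** 2).sum()
    R2 = 1.0 - ((Y - pls.predict(X)) ** 2).sum() / tss   # goodness of fit

    press = 0.0                              # predictive ability via cross-validation
    for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        m = PLSRegression(n_components=3).fit(X[tr], Y[tr])
        press += ((Y[te] - m.predict(X[te])) ** 2).sum()
    Q2 = 1.0 - press / tss
    print(f"R2 = {R2:.3f}, Q2 = {Q2:.3f}")
    ```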

  5. E-cigarette use and smoking reduction or cessation in the 2010/2011 TUS-CPS longitudinal cohort

    Directory of Open Access Journals (Sweden)

    Yuyan Shi

    2016-10-01

    Full Text Available Abstract Background Electronic cigarettes (e-cigarettes) are heavily marketed and widely perceived as helpful for quitting or reducing smoking intensity. We test whether ever-use of e-cigarettes among early adopters was associated with: (1) increased cigarette smoking cessation; and (2) reduced cigarette consumption. Methods A representative cohort of U.S. smokers (N = 2454) from the 2010 Tobacco Use Supplement to the Current Population Survey (TUS-CPS) was re-interviewed 1 year later. Outcomes were smoking cessation for 30+ days and change in cigarette consumption at follow-up. E-cigarette use was categorized as for cessation purposes or for another reason. Multivariate regression was used to adjust for demographics and baseline cigarette dependence level. Results In 2011, an estimated 12 % of adult U.S. smokers had ever used e-cigarettes, and 41 % of these reported use to help quit smoking. Smokers who had used e-cigarettes for cessation were less likely to be quit for 30+ days at follow-up, compared to never-users who tried to quit (11.1 % vs 21.6 %; ORadj = 0.44, 95 % CI = 0.2-0.8). Among heavier smokers at baseline (15+ cigarettes per day (CPD)), ever-use of e-cigarettes was not associated with change in smoking consumption. Lighter smokers (<15 CPD) who had ever used e-cigarettes for quitting had stable consumption, while increased consumption was observed among all other lighter smokers, although this difference was not statistically significant. Conclusions Among early adopters, ever-use of first generation e-cigarettes to aid quitting cigarette smoking was not associated with improved cessation or with reduced consumption, even among heavier smokers.

  6. E-cigarette use and smoking reduction or cessation in the 2010/2011 TUS-CPS longitudinal cohort.

    Science.gov (United States)

    Shi, Yuyan; Pierce, John P; White, Martha; Vijayaraghavan, Maya; Compton, Wilson; Conway, Kevin; Hartman, Anne M; Messer, Karen

    2016-10-21

    Electronic cigarettes (e-cigarettes) are heavily marketed and widely perceived as helpful for quitting or reducing smoking intensity. We test whether ever-use of e-cigarettes among early adopters was associated with: 1) increased cigarette smoking cessation; and 2) reduced cigarette consumption. A representative cohort of U.S. smokers (N = 2454) from the 2010 Tobacco Use Supplement to the Current Population Survey (TUS-CPS) was re-interviewed 1 year later. Outcomes were smoking cessation for 30+ days and change in cigarette consumption at follow-up. E-cigarette use was categorized as for cessation purposes or for another reason. Multivariate regression was used to adjust for demographics and baseline cigarette dependence level. In 2011, an estimated 12 % of adult U.S. smokers had ever used e-cigarettes, and 41 % of these reported use to help quit smoking. Smokers who had used e-cigarettes for cessation were less likely to be quit for 30+ days at follow-up, compared to never-users who tried to quit (11.1 % vs 21.6 %; ORadj = 0.44, 95 % CI = 0.2-0.8). Among heavier smokers at baseline (15+ cigarettes per day (CPD)), ever-use of e-cigarettes was not associated with change in smoking consumption. Lighter smokers (<15 CPD) who had ever used e-cigarettes for quitting had stable consumption, while increased consumption was observed among all other lighter smokers, although this difference was not statistically significant. Among early adopters, ever-use of first generation e-cigarettes to aid quitting cigarette smoking was not associated with improved cessation or with reduced consumption, even among heavier smokers.
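
    The adjusted odds ratio reported above is the kind of quantity produced by a multivariate logistic regression. The following sketch, on entirely synthetic data with hypothetical covariates, shows how an ORadj and its 95 % CI are obtained:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "quit30": rng.integers(0, 2, 500),         # quit for 30+ days at follow-up
        "ecig_cessation": rng.integers(0, 2, 500), # ever used e-cigarettes to quit
        "age": rng.integers(18, 80, 500),
        "dependence": rng.normal(size=500),        # baseline dependence score
    })

    model = smf.logit("quit30 ~ ecig_cessation + age + dependence", data=df).fit(disp=0)
    or_adj = np.exp(model.params["ecig_cessation"])
    lo, hi = np.exp(model.conf_int().loc["ecig_cessation"])
    print(f"ORadj = {or_adj:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
    ```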

  7. Phase variable expression of a single phage receptor in Campylobacter jejuni NCTC12662 influences sensitivity toward several diverse CPS-dependent phages

    DEFF Research Database (Denmark)

    Gencay, Yilmaz Emre; Sørensen, Martine C.H.; Wenzel, Cory Q.

    2018-01-01

    Campylobacter jejuni NCTC12662 is sensitive to infection by many Campylobacter bacteriophages. Here we used this strain to investigate the molecular mechanism behind phage resistance development when exposed to a single phage and demonstrate how phase variable expression of one surface component...... influences phage sensitivity toward many diverse C. jejuni phages. When C. jejuni NCTC12662 was exposed to phage F207 overnight, 25% of the bacterial cells were able to grow on a lawn of phage F207, suggesting that resistance develops at a high frequency. One resistant variant, 12662R, was further...... characterized and shown to be an adsorption mutant. Plaque assays using our large phage collection showed that seven out of 36 diverse capsular polysaccharide (CPS)-dependent phages could not infect 12662R, whereas the remaining phages formed plaques on 12662R with reduced efficiencies. Analysis of the CPS

  8. Optimal production scheduling for energy efficiency improvement in biofuel feedstock preprocessing considering work-in-process particle separation

    International Nuclear Information System (INIS)

    Li, Lin; Sun, Zeyi; Yao, Xufeng; Wang, Donghai

    2016-01-01

    Biofuel is considered a promising alternative to traditional liquid transportation fuels. The large-scale substitution of biofuel can greatly enhance global energy security and mitigate greenhouse gas emissions. One major concern with the broad adoption of biofuel is the intensive energy consumption of biofuel manufacturing. This paper focuses on the energy efficiency improvement of biofuel feedstock preprocessing, a major process of cellulosic biofuel manufacturing. An improved scheme of feedstock preprocessing considering work-in-process particle separation is introduced to reduce energy waste and improve energy efficiency. A scheduling model based on the improved scheme is also developed to identify an optimal production schedule that minimizes the energy consumption of the feedstock preprocessing under a production target constraint; a simplified sketch of such a model is given below. A numerical case study is used to illustrate the effectiveness of the proposed method. The research outcome is expected to improve the energy efficiency and enhance the environmental sustainability of biomass feedstock preprocessing. - Highlights: • A novel method to schedule production in the biofuel feedstock preprocessing process. • A systems modeling approach is used. • Capable of optimizing preprocessing to reduce energy waste and improve energy efficiency. • A numerical case is used to illustrate the effectiveness of the method. • Energy consumption per unit production can be significantly reduced.
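
    The paper's scheduling model is not given in the abstract; as a stand-in, the following sketch casts energy-aware scheduling as a small linear program, choosing hourly utilization levels to meet a production target at minimum energy. All rates and consumption figures are hypothetical:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    hours = 8
    rate = np.array([1.2, 1.0, 1.1, 0.9, 1.0, 1.2, 1.1, 1.0])   # tons/hour at full load
    energy = np.array([80, 95, 90, 70, 85, 100, 90, 75])        # kWh/hour at full load
    target = 6.0                                                # tons to produce

    res = linprog(c=energy,                       # minimize total energy use
                  A_ub=[-rate], b_ub=[-target],   # rate . x >= production target
                  bounds=[(0, 1)] * hours)        # utilization x_t in [0, 1]
    print("utilization:", res.x.round(2), "| total kWh:", round(res.fun, 1))
    ```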

  9. Discrete pre-processing step effects in registration-based pipelines, a preliminary volumetric study on T1-weighted images.

    Science.gov (United States)

    Muncy, Nathan M; Hedges-Muncy, Ariana M; Kirwan, C Brock

    2017-01-01

    Pre-processing MRI scans prior to performing volumetric analyses is common practice in MRI studies. As pre-processing steps adjust the voxel intensities, the space in which the scan exists, and the amount of data in the scan, it is possible that these steps affect the volumetric output. To date, studies have compared between and not within pipelines, so the impact of each step is unknown. This study aims to quantify the effects of pre-processing steps on volumetric measures in T1-weighted scans within a single pipeline. Our hypothesis was that pre-processing steps would significantly impact ROI volume estimations. One hundred fifteen participants from the OASIS dataset were used, each contributing three scans. All scans were then pre-processed using a step-wise pipeline. Bilateral hippocampus, putamen, and middle temporal gyrus volume estimations were assessed following each successive step, and all data were processed by the same pipeline 5 times. Repeated-measures analyses tested for main effects of pipeline step, scan-rescan (for MRI scanner consistency) and repeated pipeline runs (for algorithmic consistency). A main effect of pipeline step was detected and, interestingly, an interaction between pipeline step and ROI exists. No effect of either scan-rescan or repeated pipeline run was detected. We then supply a correction for the noise introduced into the data by pre-processing.

  10. National Ignition Facility subsystem design requirements optical mounts SSDR 1.4.4

    International Nuclear Information System (INIS)

    Richardson, M.

    1996-01-01

    This SSDR establishes the performance, design, development and test requirements for the NIF Beam Transport Optomechanical Subsystems. The Optomechanical Subsystems include the mounts for the beam transport mirrors LM1-LM8, the polarizer mount, and the spatial filter lens mounts

  11. Double Shell Tank (DST) Transfer Pump Subsystem Specification

    International Nuclear Information System (INIS)

    GRAVES, C.E.

    2001-01-01

    This specification establishes the performance requirements and provides the references to the requisite codes and standards to be applied during the design of the Double-Shell Tank (DST) Transfer Pump Subsystem that supports the first phase of waste feed delivery (WFD). The DST Transfer Pump Subsystem consists of a pump for supernatant and/or slurry transfer for the DSTs that will be retrieved during the Phase 1 WFD operations. This system is used to transfer low-activity waste (LAW) and high-level waste (HLW) to designated DST staging tanks. It also will deliver blended LAW and HLW feed from these staging tanks to the River Protection Project (RPP) Waste Treatment Plant where it will be processed into an immobilized waste form. This specification is intended to be the basis for new projects/installations (W-521, etc.). This specification is not intended to retroactively affect previously established project design criteria without specific direction by the program

  12. Recovering Intrinsic Fragmental Vibrations Using the Generalized Subsystem Vibrational Analysis.

    Science.gov (United States)

    Tao, Yunwen; Tian, Chuan; Verma, Niraj; Zou, Wenli; Wang, Chao; Cremer, Dieter; Kraka, Elfi

    2018-05-08

    Normal vibrational modes are generally delocalized over the molecular system, which makes it difficult to assign certain vibrations to specific fragments or functional groups. We introduce a new approach, the Generalized Subsystem Vibrational Analysis (GSVA), to extract the intrinsic fragmental vibrations of any fragment/subsystem from the whole system via the evaluation of the corresponding effective Hessian matrix. The retention of the curvature information with regard to the potential energy surface for the effective Hessian matrix endows our approach with a concrete physical basis and enables the normal vibrational modes of different molecular systems to be legitimately comparable. Furthermore, the intrinsic fragmental vibrations act as a new link between the Konkoli-Cremer local vibrational modes and the normal vibrational modes.
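
    A common way to fold the curvature of the environment into a fragment block of the Hessian, so that the fragment modes retain potential-energy-surface information, is a Schur complement (the environment relaxed at fixed fragment displacement). The sketch below illustrates this generic construction; it is not necessarily the exact effective-Hessian definition used in GSVA:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, nf = 9, 3                                  # total and fragment coordinates
    A = rng.normal(size=(n, n))
    H = A @ A.T                                   # stand-in symmetric positive-definite Hessian

    f, e = slice(0, nf), slice(nf, n)             # fragment / environment partition
    H_eff = H[f, f] - H[f, e] @ np.linalg.solve(H[e, e], H[e, f])

    evals, modes = np.linalg.eigh(H_eff)          # fragment-local vibrational analysis
    print("effective fragment eigenvalues:", evals.round(3))
    ```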

  13. Measurement system as a subsystem of the quality management system

    Directory of Open Access Journals (Sweden)

    Ľubica Floreková

    2006-12-01

    Full Text Available Each measurement system and control principle must be based on certain facts about the system's behaviour (what), operation (how) and structure (why). Each system is divided into subsystems that provide input for the next subsystem. For each system the starting point is important, that is, the system characteristics, the collection of data, its hierarchy and the distribution of processes. A measurement system (based on chapter 8 of the standard ISO 9001:2000 Quality management system - Requirements) defines the measurement, analysis and improvement for each organization in order to demonstrate the conformity of products, to guarantee the conformity of the quality management system, and to continuously improve the effectiveness, efficiency and economy of the quality management system.

  14. Subsystem software for TSTA [Tritium Systems Test Assembly

    International Nuclear Information System (INIS)

    Mann, L.W.; Claborn, G.W.; Nielson, C.W.

    1987-01-01

    The Subsystem Control Software at the Tritium System Test Assembly (TSTA) must control sophisticated chemical processes through the physical operation of valves, motor controllers, gas sampling devices, thermocouples, pressure transducers, and similar devices. Such control software has to be capable of passing stringent quality assurance (QA) criteria to provide for the safe handling of significant amounts of tritium on a routine basis. Since many of the chemical processes and physical components are experimental, the control software has to be flexible enough to allow for a trial-and-error learning curve, but must still protect the environment and personnel from exposure to unsafe levels of radiation. The software at TSTA is implemented in several levels, as described in a preceding paper in these proceedings, on which this paper depends for understanding. The top level is the Subsystem Control level

  15. LightNVM: The Linux Open-Channel SSD Subsystem

    DEFF Research Database (Denmark)

    Bjørling, Matias; Gonzalez, Javier; Bonnet, Philippe

    2017-01-01

    resource utilization. We propose that SSD management trade-offs should be handled through Open-Channel SSDs, a new class of SSDs that give hosts control over their internals. We present our experience building LightNVM, the Linux Open-Channel SSD subsystem. We introduce a new Physical Page Address I...... to limit read latency variability and that it can be customized to achieve predictable I/O latencies....

  16. CAMAC subsystem and user context utilities in ngdp framework

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2010-01-01

    The ngdp framework advanced topics are described. Namely, we consider work with CAMAC hardware, 'selfflow' nodes for the data acquisition systems with the As-Soon-As-Possible policy, ng_mm(4) as an alternative to ng_socket(4), the control subsystem, user context utilities, events representation for the ROOT package, test and debug nodes, possible advancements for netgraph(4), etc. It is shown that the ngdp is suitable for building lightweight DAQ systems to handle CAMAC

  17. Subsystem for control of isotope production with linear electron accelerator

    CERN Document Server

    Karasyov, S P; Uvarov, V L

    2001-01-01

    In this report the high-current LINAC subsystem for diagnostics and monitoring of the basic technological parameters of isotope production (energy flux of Bremsstrahlung photons and absorbed dose in the target, target activity, temperature and consumption of the water cooling the converter and target) is described. The parallel printer port (LPT) of a personal computer is proposed for use as an interface with the measurement channels.

  18. Photovoltaic subsystem optimization and design tradeoff study. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Stolte, W.J.

    1982-03-01

    Tradeoffs and subsystem choices are examined in photovoltaic array subfield design, power-conditioning sizing and selection, roof- and ground-mounted structure installation, energy loss, operating voltage, power conditioning cost, and subfield size. Line- and self-commutated power conditioning options are analyzed to determine the most cost-effective technology in the megawatt power range. Methods for reducing field installation of flat panels and roof mounting of intermediate load centers are discussed, including the cost of retrofit installations.

  19. Attitude Control Subsystem for the Advanced Communications Technology Satellite

    Science.gov (United States)

    Hewston, Alan W.; Mitchell, Kent A.; Sawicki, Jerzy T.

    1996-01-01

    This paper provides an overview of the on-orbit operation of the Attitude Control Subsystem (ACS) for the Advanced Communications Technology Satellite (ACTS). The three ACTS control axes are defined, including the means for sensing attitude and determining the pointing errors. The desired pointing requirements for various modes of control as well as the disturbance torques that oppose the control are identified. Finally, the hardware actuators and control loops utilized to reduce the attitude error are described.

  20. Radioisotope thermoelectric generator transportation system subsystem 143 software development plan

    International Nuclear Information System (INIS)

    King, D.A.

    1994-01-01

    This plan describes the activities to be performed and the controls to be applied to the process of specifying, developing, and qualifying the data acquisition software for the Radioisotope Thermoelectric Generator (RTG) Transportation System Subsystem 143 Instrumentation and Data Acquisition System (IDAS). This plan will serve as a software quality assurance plan, a verification and validation (V and V) plan, and a configuration management plan

  1. FireSignal application Node for subsystem control

    International Nuclear Information System (INIS)

    Duarte, A.S.; Santos, B.; Pereira, T.; Carvalho, B.B.; Fernandes, H.; Neto, A.; Janky, F.; Cahyna, P.; Pisacka, J.; Hron, M.

    2010-01-01

    Modern fusion experiments require the presence of several subsystems, responsible for the different parameters involved in the operation of the machine. With the migration from the pre-programmed to the real-time control paradigm, their integration in Control, Data Acquisition, and Communication (CODAC) systems became an important issue, as this implies not only the connection to a main central coordination system, but also communications with related diagnostics and actuators. A subsystem for the control and operation of the vacuum, gas injection and baking was developed and installed in the COMPASS tokamak. These tasks are performed by dsPIC microcontrollers that receive commands from a hub computer and send information regarding the status of the operation. Communications use the serial protocol RS-232 over fibre optics. Java software with an intuitive graphical user interface for controlling and monitoring the subsystem was developed and installed on a hub computer. In order to allow operators to perform these tasks remotely as well as locally, it was integrated into the FireSignal system. Taking advantage of FireSignal features, it was possible to provide the users with not only the same functionality as the local application but also a similar user interface. An independent FireSignal Java Node bridges the central server and the control application. This design makes it possible to easily reuse the Node for other subsystems or to integrate the vacuum slow control into other CODAC systems. The complete system, with local and remote control, has been installed successfully on COMPASS and has been in operation since April this year.

  2. Frozen density embedding with non-integer subsystems' particle numbers.

    Science.gov (United States)

    Fabiano, Eduardo; Laricchia, Savio; Della Sala, Fabio

    2014-03-21

    We extend the frozen density embedding theory to non-integer subsystem particle numbers. Different features of this formulation are discussed, with special concern for approximate embedding calculations. In particular, we highlight the relation between the non-integer particle-number partition scheme and the resulting embedding errors. Finally, we provide a discussion of the implications of the present theory for the derivative discontinuity issue and the calculation of chemical reactivity descriptors.

  3. Stability of subsystem solutions in agent-based models

    Science.gov (United States)

    Perc, Matjaž

    2018-01-01

    The fact that relatively simple entities, such as particles or neurons, or even ants or bees or humans, give rise to fascinatingly complex behaviour when interacting in large numbers is the hallmark of complex systems science. Agent-based models are frequently employed for modelling and obtaining a predictive understanding of complex systems. Since the sheer number of equations that describe the behaviour of an entire agent-based model often makes it impossible to solve such models exactly, Monte Carlo simulation methods must be used for the analysis. However, unlike pairwise interactions among particles that typically govern solid-state physics systems, interactions among agents that describe systems in biology, sociology or the humanities often involve group interactions, and they also involve a larger number of possible states even for the most simplified description of reality. This begets the question: when can we be certain that an observed simulation outcome of an agent-based model is actually stable and valid in the large system-size limit? The latter is key for the correct determination of phase transitions between different stable solutions, and for the understanding of the underlying microscopic processes that led to these phase transitions. We show that a satisfactory answer can only be obtained by means of a complete stability analysis of subsystem solutions. A subsystem solution can be formed by any subset of all possible agent states. The winner between two subsystem solutions can be determined by the average moving direction of the invasion front that separates them, yet it is crucial that the competing subsystem solutions are characterised by a proper composition and spatiotemporal structure before the competition starts. We use the spatial public goods game with diverse tolerance as an example, but the approach has relevance for a wide variety of agent-based models.
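
    The idea of letting two subsystem solutions compete across an interface can be illustrated with a drastically simplified Monte Carlo sketch: a two-strategy donation game stands in for the public goods game with diverse tolerance, each half of the lattice is prepared in one solution, and the average moving direction of the invasion front shows up as a drift of the strategy fractions:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    L, K = 64, 0.1                          # lattice size, imitation noise
    b, c = 1.8, 1.0                         # donation-game benefit and cost
    payoff = np.array([[b - c, -c],         # A meets A, A meets B
                       [b, 0.0]])           # B meets A, B meets B
    grid = np.zeros((L, L), dtype=int)
    grid[:, L // 2:] = 1                    # left half: solution A, right half: B
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def site_payoff(x, y):
        s = grid[x, y]
        return sum(payoff[s, grid[(x + dx) % L, (y + dy) % L]] for dx, dy in nbrs)

    for _ in range(50 * L * L):             # elementary Monte Carlo steps
        x, y = rng.integers(L, size=2)
        dx, dy = nbrs[rng.integers(4)]
        nx, ny = (x + dx) % L, (y + dy) % L
        if grid[x, y] != grid[nx, ny]:      # only the interface can move
            delta = (site_payoff(x, y) - site_payoff(nx, ny)) / K
            if rng.random() < 1.0 / (1.0 + np.exp(np.clip(delta, -50, 50))):
                grid[x, y] = grid[nx, ny]   # adopt the neighbour's strategy
    print("fraction of solution A:", 1.0 - grid.mean())
    ```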

  4. Subsystem for control of isotope production with linear electron accelerator

    International Nuclear Information System (INIS)

    Karasyov, S.P.; Pomatsalyuk, R.I.; Uvarov, V.L.

    2001-01-01

    In this report the high-current LINAC subsystem for diagnostics and monitoring of the basic technological parameters of isotope production (energy flux of Bremsstrahlung photons and absorbed dose in the target, target activity, temperature and consumption of the water cooling the converter and target) is described. The parallel printer port (LPT) of a personal computer is proposed for use as an interface with the measurement channels

  5. Functional Analysis for Double Shell Tank (DST) Subsystems

    International Nuclear Information System (INIS)

    SMITH, D.F.

    2000-01-01

    This functional analysis identifies the hierarchy and describes the subsystem functions that support the Double-Shell Tank (DST) System described in HNF-SD-WM-TRD-007, System Specification for the Double-Shell Tank System. Because of the uncertainty associated with the need for upgrades of the existing catch tanks supporting the Waste Feed Delivery (WFD) mission, catch tank functions are not addressed in this document. The functions identified herein are applicable to the Phase 1 WFD mission only

  6. FireSignal Application Node for Subsystem Control

    Energy Technology Data Exchange (ETDEWEB)

    Duarte, A.; Santos, B.; Pereira, T.; Carvalho, B.; Fernandes, H. [Instituto de Plasmas e Fusao Nuclear - Instituto Superior Tecnico, Lisbon (Portugal); Cahyna, P.; Pisacka, J.; Hron, M. [Institute of Plasma Physics AS CR, Association EURATOM/IPP.CR, Prague (Czech Republic)

    2009-07-01

    Modern fusion experiments require the presence of several sub-systems, responsible for the different parameters involved in the operation of the machine. With the migration from the pre-programmed to the real-time control paradigm, their integration in Control, Data Acquisition, and Communication (CODAC) systems became an important issue, as this implies not only the connection to a main central coordination system, but also communications with related diagnostics and actuators. A sub-system for the control and operation of the vacuum, gas injection and baking was developed and installed in the COMPASS tokamak. These tasks are performed by 'dsPIC' micro-controllers that receive commands from a computer and send information regarding the status of the operation. Communications are done in the serial protocol RS-232 through fibre optics at speeds up to 1 Mbaud. Java software with an intuitive graphical user interface for controlling and monitoring the sub-system was developed and installed in a hub computer. In order to allow operators to perform these tasks remotely besides locally, this was integrated into the FireSignal system. Taking advantage of FireSignal features, it was possible to provide the users with not only the same functionalities of the local application but also a similar user interface. An independent FireSignal Java node bridges the central server and the control application. This design makes it possible to easily reuse the node for other subsystems or to integrate the vacuum slow control into the other CODAC systems. This document is composed of an abstract and a poster. (authors)

  7. Design of nanophotonic circuits for autonomous subsystem quantum error correction

    Energy Technology Data Exchange (ETDEWEB)

    Kerckhoff, J; Pavlichin, D S; Chalabi, H; Mabuchi, H, E-mail: jkerc@stanford.edu [Edward L Ginzton Laboratory, Stanford University, Stanford, CA 94305 (United States)

    2011-05-15

    We reapply our approach to designing nanophotonic quantum memories in order to formulate an optical network that autonomously protects a single logical qubit against arbitrary single-qubit errors. Emulating the nine-qubit Bacon-Shor subsystem code, the network replaces the traditionally discrete syndrome measurement and correction steps by continuous, time-independent optical interactions and coherent feedback of unitarily processed optical fields.

  8. SING-dialogue subsystem for graphical representation of one-dimensional array contents

    International Nuclear Information System (INIS)

    Karlov, A.A.; Kirilov, A.S.

    1979-01-01

    General principles of the organization and the main features of a dialogue subsystem for the graphical representation of one-dimensional array contents are considered. The subsystem was developed for the remote display station of the JINR BESM-6 computer. Some examples of using the subsystem for drawing curves and histograms are given. The subsystem is developed according to the requirements of modern dialogue systems: it is "open" for extension and could be installed on other computers

  9. EARLINET Single Calculus Chain - technical - Part 1: Pre-processing of raw lidar data

    Science.gov (United States)

    D'Amico, Giuseppe; Amodeo, Aldo; Mattis, Ina; Freudenthaler, Volker; Pappalardo, Gelsomina

    2016-02-01

    In this paper we describe an automatic tool for the pre-processing of aerosol lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of ELPP, particular attention has been paid to make the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of ELPP is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of ELPP. The whole SCC has been tested with the same synthetic data sets, which were used for the EARLINET algorithm inter-comparison exercise. ELPP has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.
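
    The following Python sketch illustrates, on synthetic photon counts, three of the pre-processing steps ELPP automates: non-paralyzable dead-time correction, background subtraction from far-range bins, and trigger-delay correction. Parameter values are hypothetical and instrument-specific in practice:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    raw = rng.poisson(5.0, size=4000).astype(float)  # stand-in photon counts per bin
    bin_width = 50e-9                                # s, range-bin duration
    tau = 4e-9                                       # s, detector dead time
    delay_bins = 2                                   # bins, from trigger-delay calibration

    rate = raw / bin_width                           # counts -> count rate (Hz)
    rate = rate / (1.0 - rate * tau)                 # non-paralyzable dead-time correction
    background = rate[-500:].mean()                  # estimated from far-range bins
    signal = rate - background                       # background subtraction
    signal = np.roll(signal, -delay_bins)            # align bin 0 with the laser pulse
    signal[-delay_bins:] = np.nan                    # bins wrapped by the shift are invalid
    ```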

  10. Cloudy Solar Software - Enhanced Capabilities for Finding, Pre-processing, and Visualizing Solar Data

    Science.gov (United States)

    Istvan Etesi, Laszlo; Tolbert, K.; Schwartz, R.; Zarro, D.; Dennis, B.; Csillaghy, A.

    2010-05-01

    In our project "Extending the Virtual Solar Observatory (VSO)" we have combined some of the features available in Solar Software (SSW) to produce an integrated environment for data analysis, supporting the complete workflow from data location, retrieval, preparation, and analysis to creating publication-quality figures. Our goal is an integrated analysis experience in IDL, easy-to-use but flexible enough to allow more sophisticated procedures such as multi-instrument analysis. To that end, we have made the transition from a locally oriented setting where all the analysis is done on the user's computer, to an extended analysis environment where IDL has access to services available on the Internet. We have implemented a form of Cloud Computing that uses the VSO search and a new data retrieval and pre-processing server (PrepServer) that provides remote execution of instrument-specific data preparation. We have incorporated the interfaces to the VSO search and the PrepServer into an IDL widget (SHOW_SYNOP) that provides user-friendly searching and downloading of raw solar data and optionally sends search results for pre-processing to the PrepServer prior to downloading the data. The raw and pre-processed data can be displayed with our plotting suite, PLOTMAN, which can handle different data types (light curves, images, and spectra) and perform basic data operations such as zooming, image overlays, solar rotation, etc. PLOTMAN is highly configurable and suited for visual data analysis and for creating publishable figures. PLOTMAN and SHOW_SYNOP work hand-in-hand for a convenient working environment. Our environment supports a growing number of solar instruments that currently includes RHESSI, SOHO/EIT, TRACE, SECCHI/EUVI, HINODE/XRT, and HINODE/EIS.

  11. Static Feed Water Electrolysis Subsystem Testing and Component Development

    Science.gov (United States)

    Koszenski, E. P.; Schubert, F. H.; Burke, K. A.

    1983-01-01

    A program was carried out to develop and test advanced electrochemical cells/modules and critical electromechanical components for a static feed (alkaline electrolyte) water electrolysis oxygen generation subsystem. The accomplishments were refurbishment of a previously developed subsystem and successful demonstration for a total of 2980 hours of normal operation; achievement of sustained one-person level oxygen generation performance with state-of-the-art cell voltages averaging 1.61 V at 191 ASF for an operating temperature of 128F (equivalent to 1.51V when normalized to 180F); endurance testing and demonstration of reliable performance of the three-fluid pressure controller for 8650 hours; design and development of a fluid control assembly for this subsystem and demonstration of its performance; development and demonstration at the single cell and module levels of a unitized core composite cell that provides expanded differential pressure tolerance capability; fabrication and evaluation of a feed water electrolyte elimination five-cell module; and successful demonstration of an electrolysis module pressurization technique that can be used in place of nitrogen gas during the standby mode of operation to maintain system pressure and differential pressures.

  12. Quantitative risk analysis of a space shuttle subsystem

    International Nuclear Information System (INIS)

    Frank, M.V.

    1989-01-01

    This paper reports that in an attempt to investigate methods for risk management other than qualitative analysis techniques, NASA has funded pilot study quantitative risk analyses for space shuttle subsystems. The authors performed one such study of two shuttle subsystems with McDonnell Douglas Astronautics Company. The subsystems were the auxiliary power units (APU) on the orbiter, and the hydraulic power units on the solid rocket booster. The technology and results of the APU study are presented in this paper. Drawing from a rich in-flight database as well as from a wealth of tests and analyses, the study quantitatively assessed the risk of APU-initiated scenarios on the shuttle during all phases of a flight mission. Damage states of interest were loss of crew/vehicle, aborted mission, and launch scrub. A quantitative risk analysis approach to deciding on important items for risk management was contrasted with the current NASA failure mode and effects analysis/critical item list approach

  13. Double Shell Tank (DST) Transfer Pump Subsystem Specification

    International Nuclear Information System (INIS)

    LESHIKAR, G.A.

    2000-01-01

    This specification establishes the performance requirements and provides references to the requisite codes and standards to be applied during the design of the Double-Shell Tank (DST) Transfer Pump Subsystem, which supports the first phase of Waste Feed Delivery (WFD). The DST Transfer Pump Subsystem consists of a pump for supernatant and/or slurry transfer for the DSTs that will be retrieved during the Phase 1 WFD operations. This system is used to transfer low-activity waste (LAW) and high-level waste (HLW) to designated DST staging tanks. It also will deliver blended LAW and HLW feed from these staging tanks to the River Protection Project (RPP) Privatization Contractor facility where it will be processed into an immobilized waste form. This specification is intended to be the basis for new projects/installations (W-521, etc.). This specification is not intended to retroactively affect previously established project design criteria without specific direction by the program

  14. Fast randomized point location without preprocessing in two- and three-dimensional Delaunay triangulations

    Energy Technology Data Exchange (ETDEWEB)

    Muecke, E.P.; Saias, I.; Zhu, B.

    1996-05-01

    This paper studies the point location problem in Delaunay triangulations without preprocessing and additional storage. The proposed procedure finds the query point simply by walking through the triangulation, after selecting a good starting point by random sampling. The analysis generalizes and extends a recent result for d = 2 dimensions by proving that this procedure takes expected time close to O(n^(1/(d+1))) for point location in Delaunay triangulations of n random points in d = 3 dimensions. Empirical results in both two and three dimensions show that this procedure is efficient in practice.
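
    A sketch of the jump-and-walk idea: sample roughly n^(1/(d+1)) vertices, jump to the sample vertex nearest the query, then walk from simplex to simplex toward the query. scipy's Delaunay structure is used here purely for illustration (its find_simplex serves as a check), and no anti-cycling safeguard is included:

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(4)
    d, n = 3, 2000
    pts = rng.random((n, d))
    tri = Delaunay(pts)
    q = rng.random(d)                              # query point

    k = int(n ** (1.0 / (d + 1))) + 1              # sample size ~ n^(1/(d+1))
    sample = rng.choice(n, size=k, replace=False)
    nearest = sample[np.argmin(((pts[sample] - q) ** 2).sum(axis=1))]
    s = tri.vertex_to_simplex[nearest]             # jump: a simplex at that vertex

    while s != -1:                                 # walk toward the query
        T = tri.transform[s]
        b = T[:d] @ (q - T[d])
        bary = np.append(b, 1.0 - b.sum())         # barycentric coordinates of q
        if (bary >= -1e-12).all():                 # q lies inside this simplex
            break
        s = tri.neighbors[s][np.argmin(bary)]      # cross the most-violated facet

    print("located simplex:", s, "| scipy check:", tri.find_simplex(q))
    ```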

  15. Preprocessing Raw Data in Clinical Medicine for a Data Mining Purpose

    Directory of Open Access Journals (Sweden)

    Peterková Andrea

    2016-12-01

    Full Text Available Dealing with data from the field of medicine is nowadays highly topical and difficult. On a global scale, a large amount of medical data is produced on an everyday basis. For the purpose of our research, we understand medical data as data about patients, such as results from laboratory analyses, results from screening examinations (CT, ECHO) and clinical parameters. This data is usually in a raw format, difficult to understand, non-standard and not suitable for further processing or analysis. This paper aims to describe a possible method of data preparation and preprocessing of such raw medical data into a form where further analysis algorithms can be applied.

  16. Classification-based comparison of pre-processing methods for interpretation of mass spectrometry generated clinical datasets

    Directory of Open Access Journals (Sweden)

    Hoefsloot Huub CJ

    2009-05-01

    Full Text Available Abstract Background Mass spectrometry is increasingly being used to discover proteins or protein profiles associated with disease. Experimental design of mass-spectrometry studies has come under close scrutiny and the importance of strict protocols for sample collection is now understood. However, the question of how best to process the large quantities of data generated is still unanswered. Main challenges for the analysis are the choice of proper pre-processing and classification methods. While these two issues have been investigated in isolation, we propose to use the classification of patient samples as a clinically relevant benchmark for the evaluation of pre-processing methods. Results Two in-house generated clinical SELDI-TOF MS datasets are used in this study as an example of high throughput mass-spectrometry data. We perform a systematic comparison of two commonly used pre-processing methods as implemented in Ciphergen ProteinChip Software and in the Cromwell package. With respect to reproducibility, Ciphergen and Cromwell pre-processing are largely comparable. We find that the overlap between peaks detected by either Ciphergen ProteinChip Software or Cromwell is large. This is especially the case for the more stringent peak detection settings. Moreover, similarity of the estimated intensities between matched peaks is high. We evaluate the pre-processing methods using five different classification methods. Classification is done in a double cross-validation protocol using repeated random sampling to obtain an unbiased estimate of classification accuracy. No pre-processing method significantly outperforms the other for all peak detection settings evaluated. Conclusion We use classification of patient samples as a clinically relevant benchmark for the evaluation of pre-processing methods. Both pre-processing methods lead to similar classification results on an ovarian cancer and a Gaucher disease dataset. However, the settings for pre-processing
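
    The double cross-validation protocol mentioned above can be sketched with nested cross-validation: an inner loop tunes the classifier, and an outer loop estimates accuracy on held-out samples. Synthetic data replaces the SELDI-TOF peak intensities:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=120, n_features=60, random_state=0)
    inner = StratifiedKFold(5, shuffle=True, random_state=1)
    outer = StratifiedKFold(5, shuffle=True, random_state=2)

    clf = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=inner)   # inner loop: tuning
    acc = cross_val_score(clf, X, y, cv=outer)                 # outer loop: unbiased estimate
    print(f"accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
    ```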

  17. Predicting Speech Intelligibility with a Multiple Speech Subsystems Approach in Children with Cerebral Palsy

    Science.gov (United States)

    Lee, Jimin; Hustad, Katherine C.; Weismer, Gary

    2014-01-01

    Purpose: Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method: Nine acoustic variables reflecting different subsystems, and…

  18. Fiscal 1974 research report. General research on hydrogen energy subsystems; 1974 nendo suiso riyo subsystem sogoteki kento hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1975-03-01

    Based on the contract research 'General research on hydrogen energy subsystems and their peripheral technologies' with the Agency of Industrial Science and Technology, each of 7 organizations including Denki Kagaku Kyokai (Electrochemical Association) promoted research on, respectively, hydrogen energy subsystems, combustion, fuel cells, car engines, aircraft engines, gas turbines and chemical energy. This report summarizes the research results of the former of the 2 committees on hydrogen energy and peripheral technologies promoted by Denki Kagaku Kyokai. The first part describes the merits, demerits, domestic and overseas R and D states, technical problems, and future research issues for every use form of hydrogen. This part also outlines the short-, medium- and long-term prospects for the use of hydrogen and oxygen energy, and describes the overall future research issues. The second part summarizes the content of each committee report. Although for details the original reports of each committee should be read, this report is useful for obtaining an outline of the utilization of hydrogen energy. (NEDO)

  19. On DESTINY Science Instrument Electrical and Electronics Subsystem Framework

    Science.gov (United States)

    Kizhner, Semion; Benford, Dominic J.; Lauer, Tod R.

    2009-01-01

    Future space missions are going to require large focal planes with many sensing arrays and hundreds of millions of pixels, all read out at high data rates. This will place unique demands on the electrical and electronics (EE) subsystem design and it will be critically important to have high technology readiness level (TRL) EE concepts ready to support such missions. One such mission is the Joint Dark Energy Mission (JDEM), charged with making precise measurements of the expansion rate of the universe to reveal vital clues about the nature of dark energy - a hypothetical form of energy that permeates all of space and tends to increase the rate of the expansion. One of three JDEM concept studies - the Dark Energy Space Telescope (DESTINY) - was conducted in 2008 at NASA's Goddard Space Flight Center (GSFC) in Greenbelt, Maryland. This paper presents the EE subsystem framework, which evolved from the DESTINY science instrument study. It describes the main challenges and implementation concepts related to the design of an EE subsystem featuring multiple focal planes populated with dozens of large arrays and millions of pixels. The focal planes are passively cooled to cryogenic temperatures (below 140 K). The sensor mosaic is controlled by a large number of Readout Integrated Circuits and Application Specific Integrated Circuits - the ROICs/ASICs - in near proximity to their sensor focal planes. The ASICs, in turn, are serviced by a set of "warm" EE subsystem boxes performing Field Programmable Gate Array (FPGA) based digital signal processing (DSP) computations of complex algorithms, such as the sampling-up-the-ramp algorithm (SUTR), over large volumes of fast data streams. The SUTR boxes are supported by the Instrument Control/Command and Data Handling box (ICDH Primary and Backup boxes) for lossless data compression, command and low volume telemetry handling, power conversion and for communications with the spacecraft. The paper outlines how the JDEM DESTINY concept

  20. Effects of Preprocessing on Multi-Direction Properties of Aluminum Alloy Cold-Spray Deposits

    Science.gov (United States)

    Rokni, M. R.; Nardi, A. T.; Champagne, V. K.; Nutt, S. R.

    2018-05-01

    The effects of powder preprocessing (degassing at 400 °C for 6 h) on microstructure and mechanical properties of 5056 aluminum deposits produced by high-pressure cold spray were investigated. To investigate directionality of the mechanical properties, microtensile coupons were excised from different directions of the deposit, i.e., longitudinal, short transverse, long transverse, and diagonal and then tested. The results were compared to properties of wrought 5056 and the coating deposited with as-received 5056 Al powder and correlated with the observed microstructures. Preprocessing softened the particles and eliminated the pores within them, resulting in more extensive and uniform deformation upon impact with the substrate and with underlying deposited material. Microstructural characterization and finite element simulation indicated that upon particle impact, the peripheral regions experienced more extensive deformation and higher temperatures than the central contact zone. This led to more recrystallization and stronger bonding at peripheral regions relative to the contact zone area and yielded superior properties in the longitudinal direction compared with the short transverse direction. Fractography revealed that crack propagation takes place along the particle-particle interfaces in the transverse directions (caused by insufficient bonding and recrystallization), whereas through the deposited particles, fracture is dominant in the longitudinal direction.

  1. A review of blood sample handling and pre-processing for metabolomics studies.

    Science.gov (United States)

    Hernandes, Vinicius Veri; Barbas, Coral; Dudzik, Danuta

    2017-09-01

    Metabolomics has been found to be applicable to a wide range of clinical studies, bringing a new era for improving clinical diagnostics, early disease detection, therapy prediction and treatment efficiency monitoring. A major challenge in metabolomics, particularly untargeted studies, is the extremely diverse and complex nature of biological specimens. Despite great advances in the field there still exist fundamental needs for considering pre-analytical variability that can introduce bias to the subsequent analytical process, decrease the reliability of the results and, moreover, confound final research outcomes. Many researchers are mainly focused on the instrumental aspects of the biomarker discovery process, and sample-related variables sometimes seem to be overlooked. To bridge the gap, critical information and standardized protocols regarding experimental design and sample handling and pre-processing are highly desired. Characterization of the range of variation among sample collection methods is necessary to prevent misinterpretation of results and to ensure that observed differences are not due to an experimental bias caused by inconsistencies in sample processing. Herein, a systematic discussion of pre-analytical variables affecting metabolomics studies based on blood-derived samples is performed. Furthermore, we provide a set of recommendations concerning experimental design, collection, pre-processing procedures and storage conditions as a practical review that can guide and serve for the standardization of protocols and reduction of undesirable variation. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.

  3. Statistical Downscaling Output GCM Modeling with Continuum Regression and Pre-Processing PCA Approach

    Directory of Open Access Journals (Sweden)

    Sutikno Sutikno

    2010-08-01

    Full Text Available One of the climate models used to predict climatic conditions is the Global Circulation Model (GCM). A GCM is a computer-based model consisting of different equations; it uses numerical, deterministic equations that follow the laws of physics. GCMs are a main tool for predicting climate and weather, and they are also used as a primary information source for assessing the effects of climate change. The Statistical Downscaling (SD) technique is used to bridge the large-scale GCM with the small scale of the study area. GCM output is spatial and temporal data in which spatial correlation between different grid points of a single domain is likely to occur. Such multicollinearity problems require pre-processing of the predictor variables X. Continuum Regression (CR) with PCA pre-processing is an alternative for SD modelling; an illustrative sketch of the PCA step is given below. CR is a method developed by Stone and Brooks (1990); it is a generalization of Ordinary Least Squares (OLS), Principal Component Regression (PCR) and Partial Least Squares (PLS), used to overcome multicollinearity problems. Data processing for the stations in Ambon, Pontianak, Losarang, Indramayu and Yuntinyuat shows that, in terms of RMSEP and predictive R2 values in the 8x8 and 12x12 domains, the CR method produces better results than PCR and PLS.
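
    Continuum regression itself is not available in common Python libraries, but the PCA pre-processing step, combined with regression on the retained components (the PCR limit of the CR family), can be sketched as follows on synthetic gridded predictors:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(5)
    X = rng.normal(size=(240, 64))            # e.g. 20 years x 12 months, 8x8 GCM grid
    X[:, 1:] += 0.9 * X[:, :1]                # strong correlation between grid cells
    y = X[:, :4].mean(axis=1) + 0.1 * rng.normal(size=240)   # station-scale predictand

    model = make_pipeline(PCA(n_components=5), LinearRegression())
    rmsep = np.sqrt(-cross_val_score(model, X, y, cv=5,
                                     scoring="neg_mean_squared_error").mean())
    print(f"cross-validated RMSEP: {rmsep:.3f}")
    ```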

  4. ITSG-Grace2016 data preprocessing methodologies revisited: impact of using Level-1A data products

    Science.gov (United States)

    Klinger, Beate; Mayer-Gürr, Torsten

    2017-04-01

    For the ITSG-Grace2016 release, the gravity field recovery is based on the use of official GRACE (Gravity Recovery and Climate Experiment) Level-1B data products, generated by the Jet Propulsion Laboratory (JPL). Before gravity field recovery, the Level-1B instrument data are preprocessed. This data preprocessing step includes the combination of Level-1B star camera (SCA1B) and angular acceleration (ACC1B) data for an improved attitude determination (sensor fusion), instrument data screening and ACC1B data calibration. Based on a Level-1A test dataset, provided for individual month throughout the GRACE period by the Center of Space Research at the University of Texas at Austin (UTCSR), the impact of using Level-1A instead of Level-1B data products within the ITSG-Grace2016 processing chain is analyzed. We discuss (1) the attitude determination through an optimal combination of SCA1A and ACC1A data using our sensor fusion approach, (2) the impact of the new attitude product on temporal gravity field solutions, and (3) possible benefits of using Level-1A data for instrument data screening and calibration. As the GRACE mission is currently reaching its end-of-life, the presented work aims not only at a better understanding of GRACE science data to reduce the impact of possible error sources on the gravity field recovery, but it also aims at preparing Level-1A data handling capabilities for the GRACE Follow-On mission.

  5. Automated pre-processing and multivariate vibrational spectra analysis software for rapid results in clinical settings

    Science.gov (United States)

    Bhattacharjee, T.; Kumar, P.; Fillipe, L.

    2018-02-01

    Vibrational spectroscopy, especially FTIR and Raman, has shown enormous potential in disease diagnosis, especially in cancers. Its potential for detecting varied pathological conditions is regularly reported. However, to prove its applicability in clinics, large multi-center, multi-national studies need to be undertaken, and these will result in enormous amounts of data. A parallel effort to develop analytical methods, including user-friendly software that can quickly pre-process data and subject them to the required multivariate analysis, is warranted in order to obtain results in real time. This study reports a MATLAB-based script that can automatically import data, preprocess spectra (interpolation, derivatives, normalization) and then carry out Principal Component Analysis (PCA) followed by Linear Discriminant Analysis (LDA) of the first 10 PCs, all with a single click; a sketch of this analysis chain is given below. The software has been verified on data obtained from cell lines, animal models, and in vivo patient datasets, and gives results comparable to the Minitab 16 software. The software can import a variety of file extensions (.asc, .txt, .xls, and many others). Options to ignore noisy data, plot all possible graphs with PCA factors 1 to 5, and save loading factors, confusion matrices and other parameters are also present. The software can provide results for a dataset of 300 spectra within 0.01 s. We believe that the software will be vital not only in clinical trials using vibrational spectroscopic data, but also in obtaining rapid results when these tools get translated into clinics.
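
    The reported analysis chain (derivative and normalization pre-processing, then PCA followed by LDA on the first 10 PCs) is re-created below in Python for illustration; the original software is MATLAB-based, the interpolation step is omitted, and the spectra here are synthetic:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(8)
    spectra = rng.normal(size=(60, 400))                 # stand-in spectra
    labels = np.repeat([0, 1, 2], 20)                    # three hypothetical classes
    spectra[labels == 1] += 0.3 * np.sin(np.linspace(0, 6, 400))

    deriv = np.gradient(spectra, axis=1)                 # first derivative
    norm = deriv / np.linalg.norm(deriv, axis=1, keepdims=True)   # vector normalization

    scores = PCA(n_components=10).fit_transform(norm)    # first 10 PCs
    lda = LinearDiscriminantAnalysis().fit(scores, labels)
    print("training accuracy:", lda.score(scores, labels))
    ```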

  6. 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors.

    Science.gov (United States)

    Narayanaswamy, Arunachalam; Wang, Yu; Roysam, Badrinath

    2011-09-01

    The accuracy and reliability of automated neurite tracing systems is ultimately limited by image quality as reflected in the signal-to-noise ratio, contrast, and image variability. This paper describes a novel combination of image processing methods that operate on images of neurites captured by confocal and widefield microscopy, and produce synthetic images that are better suited to automated tracing. The algorithms are based on the curvelet transform (for denoising curvilinear structures and local orientation estimation), perceptual grouping by scalar voting (for elimination of non-tubular structures and improvement of neurite continuity while preserving branch points), adaptive focus detection, and depth estimation (for handling widefield images without deconvolution). The proposed methods are fast, and capable of handling large images. Their ability to handle images of unlimited size derives from automated tiling of large images along the lateral dimension, and processing of 3-D images one optical slice at a time. Their speed derives in part from the fact that the core computations are formulated in terms of the Fast Fourier Transform (FFT), and in part from parallel computation on multi-core computers. The methods are simple to apply to new images since they require very few adjustable parameters, all of which are intuitive. Examples of pre-processing DIADEM Challenge images are used to illustrate improved automated tracing resulting from our pre-processing methods.

  7. Safe and sensible preprocessing and baseline correction of pupil-size data.

    Science.gov (United States)

    Mathôt, Sebastiaan; Fabius, Jasper; Van Heusden, Elle; Van der Stigchel, Stefan

    2018-02-01

    Measurement of pupil size (pupillometry) has recently gained renewed interest from psychologists, but there is little agreement on how pupil-size data is best analyzed. Here we focus on one aspect of pupillometric analyses: baseline correction, i.e., analyzing changes in pupil size relative to a baseline period. Baseline correction is useful in experiments that investigate the effect of some experimental manipulation on pupil size. In such experiments, baseline correction improves statistical power by taking into account random fluctuations in pupil size over time. However, we show that baseline correction can also distort data if unrealistically small pupil sizes are recorded during the baseline period, which can easily occur due to eye blinks, data loss, or other distortions. Divisive baseline correction (corrected pupil size = pupil size/baseline) is affected more strongly by such distortions than subtractive baseline correction (corrected pupil size = pupil size - baseline). We discuss the role of baseline correction as a part of preprocessing of pupillometric data, and make five recommendations: (1) before baseline correction, perform data preprocessing to mark missing and invalid data, but assume that some distortions will remain in the data; (2) use subtractive baseline correction; (3) visually compare your corrected and uncorrected data; (4) be wary of pupil-size effects that emerge faster than the latency of the pupillary response allows (within ±220 ms after the manipulation that induces the effect); and (5) remove trials on which baseline pupil size is unrealistically small (indicative of blinks and other distortions).
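
    A minimal NumPy sketch of the recommended subtractive baseline correction (with divisive correction available for comparison) might look as follows; the trace, the baseline window, and the plausibility threshold in the last step are illustrative values, not the authors' parameters.

    ```python
    import numpy as np

    def baseline_correct(trial, baseline_window, mode="subtractive"):
        """Correct one trial's pupil trace against its baseline period."""
        baseline = np.nanmedian(trial[baseline_window])  # robust to blinks
        return trial - baseline if mode == "subtractive" else trial / baseline

    # 50 baseline samples followed by a response, in arbitrary units.
    trace = np.concatenate([np.full(50, 1200.0), np.full(200, 1300.0)])
    corrected = baseline_correct(trace, slice(0, 50))

    # Recommendation 5: drop trials with implausibly small baselines.
    if np.nanmedian(trace[:50]) < 200:   # hypothetical plausibility cutoff
        corrected = None                 # discard this trial
    ```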

  8. The image registration of Fourier-Mellin based on the combination of projection and gradient preprocessing

    Directory of Open Access Journals (Sweden)

    D. Gao

    2017-09-01

    Image registration is one of the most important applications in the field of image processing. The Fourier-Mellin transform method is widely used because of its high precision and good robustness to changes in light and shade, partial occlusion, noise, and so on. However, this method cannot obtain a unique cross-power-spectrum pulse function for non-parallel image pairs, and for some image pairs no pulse function can be obtained at all. In this paper, an image registration method based on the Fourier-Mellin transformation with projection-gradient preprocessing is proposed. According to the projection conformational equation, the method calculates the image projection transformation matrix to correct the tilted image; then, gradient preprocessing and the Fourier-Mellin transformation are performed on the corrected image to obtain the registration parameters. The experimental results show that the method makes Fourier-Mellin image registration applicable not only to parallel image pairs but also to non-parallel image pairs, and that a better registration result can be obtained.
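
    For readers unfamiliar with the Fourier-Mellin idea, the scikit-image sketch below recovers rotation (and scale) by phase-correlating log-polar warps of the two FFT magnitude spectra. It omits the paper's projection and gradient preprocessing; the test image, the upsample factor, and the sign convention of the recovered angle are assumptions that may need adjusting for a given setup.

    ```python
    # Rotation/scale estimation in the Fourier-Mellin spirit:
    # phase-correlate log-polar warps of the FFT magnitude spectra.
    import numpy as np
    from skimage.data import camera
    from skimage.transform import rotate, warp_polar
    from skimage.registration import phase_cross_correlation

    ref = camera().astype(float)
    mov = rotate(ref, 12)                        # known 12 degree rotation

    def fft_mag(img):
        return np.abs(np.fft.fftshift(np.fft.fft2(img)))

    radius = min(ref.shape) // 2
    ref_lp = warp_polar(fft_mag(ref), radius=radius, scaling="log")
    mov_lp = warp_polar(fft_mag(mov), radius=radius, scaling="log")

    shifts, _, _ = phase_cross_correlation(ref_lp, mov_lp, upsample_factor=10)
    angle = shifts[0] * 360 / ref_lp.shape[0]    # rows of the warp are angles
    scale = np.exp(shifts[1] * np.log(radius) / ref_lp.shape[1])
    print(f"recovered rotation ~ {angle:.1f} deg, scale ~ {scale:.2f}")
    ```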

  9. Applying Enhancement Filters in the Pre-processing of Images of Lymphoma

    International Nuclear Information System (INIS)

    Silva, Sérgio Henrique; Do Nascimento, Marcelo Zanchetta; Neves, Leandro Alves; Batista, Valério Ramos

    2015-01-01

    Lymphoma is a type of cancer that affects the immune system, and is classified as Hodgkin or non-Hodgkin. It is one of the ten most common cancers worldwide, accounting for three to four percent of all malignant neoplasms diagnosed. Our work presents a study of some filters devoted to enhancing images of lymphoma at the pre-processing step. Here the enhancement is useful for removing noise from the digital images. We have analysed the noise caused by different sources, such as room vibration, scraps and defocusing, in the following classes of lymphoma: follicular, mantle cell and B-cell chronic lymphocytic leukemia. The Gaussian, Median and Mean-Shift filters were applied to different colour models (RGB, Lab and HSV). Afterwards, we performed a quantitative analysis of the images by means of the Structural Similarity Index in order to evaluate the similarity between the images. In all cases we obtained a certainty of at least 75%, which rises to 99% if one considers only HSV. We conclude that HSV is an important choice of colour model for pre-processing histological images of lymphoma, because it yields the best enhancement.
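
    The comparison the authors perform (filter an image, then score it against the original with the Structural Similarity Index) can be sketched in Python with scikit-image; the random image stands in for a histology tile, and the filter settings are illustrative rather than the paper's.

    ```python
    # Compare enhancement filters on the V channel of an HSV image via SSIM.
    import numpy as np
    from skimage.color import rgb2hsv
    from skimage.filters import gaussian, median
    from skimage.metrics import structural_similarity as ssim

    rgb = np.random.rand(128, 128, 3)        # stand-in for a histology tile
    v = rgb2hsv(rgb)[..., 2]                 # value channel of HSV

    for name, filtered in [("gaussian", gaussian(v, sigma=1)),
                           ("median", median(v, np.ones((3, 3))))]:
        print(name, ssim(v, filtered, data_range=1.0))
    ```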

  10. Automated cleaning and pre-processing of immunoglobulin gene sequences from high-throughput sequencing

    Directory of Open Access Journals (Sweden)

    Miri Michaeli

    2012-12-01

    High-throughput sequencing (HTS) yields tens of thousands to millions of sequences that require a large amount of pre-processing work to clean various artifacts. Such cleaning cannot be performed manually. Existing programs are not suitable for immunoglobulin (Ig) genes, which are variable and often highly mutated. This paper describes Ig-HTS-Cleaner (Ig High Throughput Sequencing Cleaner), a program containing a simple cleaning procedure that successfully deals with pre-processing of Ig sequences derived from HTS, and Ig-Indel-Identifier (Ig Insertion-Deletion Identifier), a program for identifying legitimate and artifact insertions and/or deletions (indels). Our programs were designed for analyzing Ig gene sequences obtained by 454 sequencing, but they are applicable to all types of sequences and sequencing platforms. Ig-HTS-Cleaner and Ig-Indel-Identifier have been implemented in Java and saved as executable JAR files, supported on Linux and MS Windows. No special requirements are needed in order to run the programs, except for correctly constructing the input files as explained in the text. The programs' performance has been tested and validated on real and simulated data sets.

  11. A Technical Review on Biomass Processing: Densification, Preprocessing, Modeling and Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Jaya Shankar Tumuluru; Christopher T. Wright

    2010-06-01

    It is now a well-acclaimed fact that burning fossil fuels and deforestation are major contributors to climate change. Biomass from plants can serve as an alternative renewable and carbon-neutral raw material for the production of bioenergy. Low densities of 40–60 kg/m3 for lignocellulosic and 200–400 kg/m3 for woody biomass limit their application for energy purposes. Prior to use in energy applications these materials need to be densified. The densified biomass can have bulk densities over 10 times that of the raw material, helping to significantly reduce technical limitations associated with storage, loading and transportation. Pelleting, briquetting, and extrusion processing are commonly used methods for densification. The aim of the present research is to develop a comprehensive review of biomass processing that includes densification, preprocessing, modeling and optimization. The specific objectives include carrying out a technical review on (a) mechanisms of particle bonding during densification; (b) methods of densification including extrusion, briquetting, pelleting, and agglomeration; (c) effects of process and feedstock variables and biomass biochemical composition on densification; (d) effects of preprocessing such as grinding, preheating, steam explosion, and torrefaction on biomass quality and binding characteristics; (e) models for understanding the compression characteristics; and (f) procedures for response surface modeling and optimization.

  12. Preprocessing of A-scan GPR data based on energy features

    Science.gov (United States)

    Dogan, Mesut; Turhan-Sayan, Gonul

    2016-05-01

    There is an increasing demand for noninvasive real-time detection and classification of buried objects in various civil and military applications. The problem of detection and annihilation of landmines is particularly important due to strong safety concerns. The requirement for a fast real-time decision process is as important as the requirements for high detection rates and low false alarm rates. In this paper, we introduce and demonstrate a computationally simple, time-efficient, energy-based preprocessing approach that can be used in ground penetrating radar (GPR) applications to eliminate reflections from the air-ground boundary and to locate the buried objects, simultaneously, in one easy step. The instantaneous power signals, the total energy values and the cumulative energy curves are extracted from the A-scan GPR data. The cumulative energy curves, in particular, are shown to be useful for detecting the presence and location of buried objects in a fast and simple way while preserving the spectral content of the original A-scan data for further steps of physics-based target classification. The proposed method is demonstrated using GPR data collected at the outdoor test lanes of IPA Defense, Ankara. Cylindrically shaped plastic containers, half-filled with ammonium nitrate including metal pins, were buried in fine-medium sand to simulate landmines. The results of this pilot study are highly promising and motivate further research on the use of energy-based preprocessing features in the landmine detection problem.
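
    A minimal NumPy version of the described energy features, computed on a synthetic A-scan, could look like this; the pulse shape, noise level, and the 0.5 threshold on the cumulative curve are assumptions for illustration.

    ```python
    # Energy features from one A-scan: instantaneous power, total energy,
    # and the normalized cumulative energy curve used to flag a target.
    import numpy as np

    t = np.linspace(0, 20e-9, 1024)                  # 20 ns time window
    ascan = np.exp(-((t - 8e-9) / 0.5e-9) ** 2)      # mock target reflection
    ascan += 0.05 * np.random.randn(t.size)          # measurement noise

    power = ascan ** 2                               # instantaneous power
    total_energy = power.sum()
    cumulative = np.cumsum(power) / total_energy     # normalized curve

    # A steep rise in the cumulative curve marks the reflection's arrival.
    arrival_index = np.argmax(cumulative > 0.5)
    print(t[arrival_index])
    ```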

  13. Change detection using landsat time series: A review of frequencies, preprocessing, algorithms, and applications

    Science.gov (United States)

    Zhu, Zhe

    2017-08-01

    The free and open access to all archived Landsat images in 2008 has completely changed the way Landsat data are used. Many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series: frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of the Landsat time series used. We reviewed a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divided all change detection algorithms into six categories: thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of the different algorithms, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial, were analyzed. Moreover, some of the widely used change detection algorithms were also discussed. Finally, we reviewed different change detection applications by dividing them into two categories: change target and change agent detection.

  14. The effects of pre-processing strategies in sentiment analysis of online movie reviews

    Science.gov (United States)

    Zin, Harnani Mat; Mustapha, Norwati; Murad, Masrah Azrifah Azmi; Sharef, Nurfadhlina Mohd

    2017-10-01

    With the ever-increasing number of internet applications and social networking sites, people nowadays can easily express their feelings towards any products and services. These online reviews act as an important source for further analysis and improved decision making. The reviews are mostly unstructured by nature and thus need processing, such as sentiment analysis and classification, to provide meaningful information for future uses. In text analysis tasks, the appropriate selection of words/features has a huge impact on the effectiveness of the classifier. Thus, this paper explores the effect of pre-processing strategies on the sentiment analysis of online movie reviews. Supervised machine learning was used to classify the reviews: a support vector machine (SVM) with linear and non-linear kernels was considered as the classifier. The performance of the classifier is critically examined based on precision, recall, f-measure, and accuracy. Two different feature representations were used, term frequency and term frequency-inverse document frequency. Results show that the pre-processing strategies have a significant impact on the classification process.
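
    A compact scikit-learn rendition of the described setup (simple pre-processing via lowercasing and stop-word removal, TF-IDF features, linear SVM) is sketched below on a toy corpus; the reviews and labels are invented for illustration and do not reproduce the paper's dataset or exact pre-processing strategies.

    ```python
    # TF-IDF + linear SVM review classifier with minimal pre-processing.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    reviews = ["A moving, brilliantly acted film",
               "Dull plot and wooden acting",
               "An instant classic",
               "Two hours I will never get back"]
    labels = [1, 0, 1, 0]                 # 1 = positive, 0 = negative

    clf = make_pipeline(TfidfVectorizer(lowercase=True, stop_words="english"),
                        LinearSVC())
    clf.fit(reviews, labels)
    print(clf.predict(["wooden, dull and forgettable"]))
    ```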

  15. Data Acquisition and Preprocessing in Studies on Humans: What Is Not Taught in Statistics Classes?

    Science.gov (United States)

    Zhu, Yeyi; Hernandez, Ladia M; Mueller, Peter; Dong, Yongquan; Forman, Michele R

    2013-01-01

    The aim of this paper is to address issues in research that may be missing from statistics classes and important for (bio-)statistics students. In the context of a case study, we discuss data acquisition and preprocessing steps that fill the gap between research questions posed by subject matter scientists and statistical methodology for formal inference. Issues include participant recruitment, data collection training and standardization, variable coding, data review and verification, data cleaning and editing, and documentation. Despite the critical importance of these details in research, most of these issues are rarely discussed in an applied statistics program. One reason for the lack of more formal training is the difficulty in addressing the many challenges that can possibly arise in the course of a study in a systematic way. This article can help to bridge this gap between research questions and formal statistical inference by using an illustrative case study for a discussion. We hope that reading and discussing this paper and practicing data preprocessing exercises will sensitize statistics students to these important issues and achieve optimal conduct, quality control, analysis, and interpretation of a study.

  16. National Ignition Facility, subsystem design requirements beam control and laser diagnostics SSDR 1.7

    International Nuclear Information System (INIS)

    Bliss, E.

    1996-01-01

    This Subsystem Design Requirement document is a development specification that establishes the performance, design, development, and test requirements for the Alignment subsystem (WBS 1.7.1), Beam Diagnostics (WBS 1.7.2), and the Wavefront Control subsystem (WBS 1.7.3) of the NIF Laser System (WBS 1.3). These three subsystems are collectively referred to as the Beam Control & Laser Diagnostics Subsystem. The NIF is a multi-pass, 192-beam, high-power, neodymium-glass laser that meets requirements set forth in the NIF SDR 002 (Laser System). 3 figs., 3 tabs

  17. Autonomous navigation - The ARMMS concept. [Autonomous Redundancy and Maintenance Management Subsystem

    Science.gov (United States)

    Wood, L. J.; Jones, J. B.; Mease, K. D.; Kwok, J. H.; Goltz, G. L.; Kechichian, J. A.

    1984-01-01

    A conceptual design is outlined for the navigation subsystem of the Autonomous Redundancy and Maintenance Management Subsystem (ARMMS). The principal function of this navigation subsystem is to maintain the spacecraft over a specified equatorial longitude to within ±3 deg. In addition, the navigation subsystem must detect and correct internal faults. It comprises elements for a navigation executive and for orbit determination, trajectory, maneuver planning, and maneuver command. Each of these elements is described. The navigation subsystem is to be used in the DSCS III spacecraft.

  18. Systems and methods for an integrated electrical sub-system powered by wind energy

    Science.gov (United States)

    Liu, Yan [Ballston Lake, NY; Garces, Luis Jose [Niskayuna, NY

    2008-06-24

    Various embodiments relate to systems and methods related to an integrated electrically-powered sub-system and wind power system including a wind power source, an electrically-powered sub-system coupled to and at least partially powered by the wind power source, the electrically-powered sub-system being coupled to the wind power source through power converters, and a supervisory controller coupled to the wind power source and the electrically-powered sub-system to monitor and manage the integrated electrically-powered sub-system and wind power system.

  19. Comparison of planar images and SPECT with Bayesian preprocessing for the demonstration of facial anatomy and craniomandibular disorders

    International Nuclear Information System (INIS)

    Kircos, L.T.; Ortendahl, D.A.; Hattner, R.S.; Faulkner, D.; Taylor, R.L.

    1984-01-01

    Craniomandibular disorders involving the facial anatomy may be difficult to demonstrate in planar images. Although bone scanning is generally more sensitive than radiography, facial bone anatomy is complex, and focal areas of increased or decreased radiotracer may become obscured by overlapping structures in planar images. Thus SPECT appears ideally suited to examination of the facial skeleton. A series of patients with craniomandibular disorders of unknown origin were imaged using 20 mCi Tc-99m MDP. Planar and SPECT (Siemens 7500 ZLC Orbiter) images were obtained four hours after injection. The SPECT images were reconstructed with a filtered back-projection algorithm. In order to improve image contrast and resolution in SPECT images, the rotation views were pre-processed with a Bayesian deblurring algorithm which has previously been shown to offer improved contrast and resolution in planar images. SPECT images using the pre-processed rotation views were obtained and compared to the SPECT images without pre-processing and to the planar images. TMJ arthropathy involving either the glenoid fossa or the mandibular condyle, orthopedic changes involving the mandible or maxilla, localized dental pathosis, as well as changes in structures peripheral to the facial skeleton were identified. Bayesian pre-processed SPECT depicted the facial skeleton more clearly and demonstrated the bony changes associated with craniomandibular disorders more obviously than either planar images or SPECT without pre-processing

  20. Development of the CsI Calorimeter Subsystem for AMEGO

    Science.gov (United States)

    Grove, J. Eric; Woolf, Richard; Johnson, W. Neil; Phlips, Bernard

    2018-01-01

    We report on the development of the thallium-doped cesium iodide (CsI:Tl) calorimeter subsystem for the All-Sky Medium-Energy Gamma-ray Observatory (AMEGO). The CsI calorimeter is one of the three main subsystems that comprise the AMEGO instrument suite; the others are the double-sided silicon strip detector (DSSD) tracker/converter and a cadmium zinc telluride (CZT) calorimeter. Similar to the LAT instrument on Fermi, the hodoscopic calorimeter consists of orthogonally layered CsI bars. Unlike the LAT, which uses PIN photodiodes, the scintillation light readout from each end of the CsI bar is done with recently developed large-area silicon photomultiplier (SiPM) arrays. We currently have an APRA program to develop the calorimeter technology for a larger, future space-based gamma-ray observatory. Under this program, we are building and testing a prototype calorimeter consisting of 24 CsI bars (16.7 mm x 16.7 mm x 100 mm) arranged in 4 layers with 6 bars per layer. The ends of each bar are read out with a 2 x 2 array of 6 mm x 6 mm SensL J series SiPMs. Signal readout and processing is done with the IDEAS SIPHRA (IDE3380) ASIC. Performance testing of this prototype will be done with laboratory sources, a beam test, and a balloon flight in conjunction with the other subsystems led by NASA GSFC. Additionally, we will test 16.7 mm x 16.7 mm x 450 mm CsI bars with SiPM readout to understand the performance of longer bars in advance of developing the full instrument. Acknowledgement: This work was sponsored by the Chief of Naval Research (CNR) and NASA-APRA (NNH15ZDA001N-APRA).

  1. RF communications subsystem for the Radiation Belt Storm Probes mission

    Science.gov (United States)

    Srinivasan, Dipak K.; Artis, David; Baker, Ben; Stilwell, Robert; Wallis, Robert

    2009-12-01

    The NASA Radiation Belt Storm Probes (RBSP) mission, currently in Phase B, is a two-spacecraft, Earth-orbiting mission, which will launch in 2012. The spacecraft's S-band radio frequency (RF) telecommunications subsystem has three primary functions: provide spacecraft command capability, provide spacecraft telemetry and science data return, and provide accurate Doppler data for navigation. The primary communications link to the ground is via the Johns Hopkins University Applied Physics Laboratory's (JHU/APL) 18 m dish, with secondary links to the NASA 13 m Ground Network and the Tracking and Data Relay Spacecraft System (TDRSS) in single-access mode. The on-board RF subsystem features the APL-built coherent transceiver and in-house builds of a solid-state power amplifier and conical bifilar helix broad-beam antennas. The coherent transceiver provides coherency digitally, and controls the downlink data rate and encoding within its field-programmable gate array (FPGA). The transceiver also provides a critical command decoder (CCD) function, which is used to protect against box-level upsets in the C&DH subsystem. Because RBSP is a spin-stabilized mission, the antennas must be symmetric about the spin axis. Two broad-beam antennas point along both ends of the spin axis, providing communication coverage from boresight to 70°. An RF splitter excites both antennas; therefore, the mission is designed such that no communications are required close to 90° from the spin axis due to the interferometer effect from the two antennas. To maximize the total downlink volume from the spacecraft, the CCSDS File Delivery Protocol (CFDP) has been baselined for the RBSP mission. During real-time ground contacts with the APL ground station, downlinked files are checked for errors. Handshaking between flight and ground CFDP software results in requests to retransmit only the file fragments lost due to dropouts. This allows minimization of RF link margins, thereby maximizing data rate and

  2. Electric and hybrid vehicle environmental control subsystem study

    Science.gov (United States)

    Heitner, K. L.

    1980-01-01

    An environmental control subsystem (ECS) in electric and hybrid vehicles is studied. A combination of a combustion heater and a gasoline engine (Otto cycle) driven vapor compression air conditioner is selected. The combustion heater, the small gasoline engine, and the vapor compression air conditioner are commercially available. These technologies have good cost and performance characteristics. The cost of this ECS is relatively close to the cost of current ECSs. Its effect on the vehicle's propulsion battery is minimal, and the ECS size and weight do not have a significant impact on the vehicle's range.

  3. Electric and hybrid vehicles environmental control subsystem study

    Science.gov (United States)

    1981-01-01

    An environmental control subsystem (ECS) in the passenger compartment of electric and hybrid vehicles is studied. Various methods of obtaining the desired temperature control for the battery pack are also studied. The functional requirements of the ECS equipment are defined. Following categorization by methodology, technology availability and risk, all viable ECS concepts are evaluated. Each is assessed independently for benefits versus risk, as well as for its feasibility for short, intermediate and long term product development. Selection of the preferred concept is made against these requirements, as well as the study's major goal of providing safe, highly efficient and thermally comfortable ECS equipment.

  4. The New York Public Library Automated Book Catalog Subsystem

    Directory of Open Access Journals (Sweden)

    S. Michael Malinconico

    1973-03-01

    A comprehensive automated bibliographic control system has been developed by the New York Public Library. This system is unique in its use of an automated authority system and highly sophisticated machine filing algorithms. The primary aim was the rigorous control of established forms and their cross-reference structure. The original impetus for creation of the system, and its most highly visible product, is a photocomposed book catalog. The book catalog subsystem supplies automatic punctuation of condensed entries and is able to produce cumulation/supplement book catalogs in installments without loss of control of the cross-referencing structure.

  5. Recent developments for the Large Binocular Telescope Guiding Control Subsystem

    Science.gov (United States)

    Golota, T.; De La Peña, M. D.; Biddick, C.; Lesser, M.; Leibold, T.; Miller, D.; Meeks, R.; Hahn, T.; Storm, J.; Sargent, T.; Summers, D.; Hill, J.; Kraus, J.; Hooper, S.; Fisher, D.

    2014-07-01

    The Large Binocular Telescope (LBT) has eight Acquisition, Guiding, and wavefront Sensing Units (AGw units). They provide guiding and wavefront sensing capability at eight different locations at both direct and bent Gregorian focal stations. Recent additions of focal stations for PEPSI and MODS instruments doubled the number of focal stations in use including respective motion, camera controller server computers, and software infrastructure communicating with Guiding Control Subsystem (GCS). This paper describes the improvements made to the LBT GCS and explains how these changes have led to better maintainability and contributed to increased reliability. This paper also discusses the current GCS status and reviews potential upgrades to further improve its performance.

  6. Architecture of the software for LAMOST fiber positioning subsystem

    Science.gov (United States)

    Peng, Xiaobo; Xing, Xiaozheng; Hu, Hongzhuan; Zhai, Chao; Li, Weimin

    2004-09-01

    The architecture of the software which controls the LAMOST fiber positioning subsystem is described. The software is composed of two parts: a main control program on a computer and a unit controller program in the ROM of an MCS51 single-chip microcomputer. The functions of the software include: Client/Server model establishment, observation planning, collision handling, data transmission, pulse generation, CCD control, image capture and processing, and data analysis. Particular attention is paid to the ways in which different parts of the software communicate. Software techniques for multithreading, socket programming, Microsoft Windows message response, and serial communications are also discussed.

  7. Analysis of subsystems in wavelength-division-multiplexing networks

    DEFF Research Database (Denmark)

    Liu, Fenghai

    2001-01-01

    Wavelength division multiplexing (WDM) technology together with optical amplification has created a new era for optical communication. Transmission capacity is greatly increased by adding more and more wavelength channels into a single fiber, as well as by increasing the line rate of each channel...... in semiconductor optical amplifiers (SOAs), and dispersion managed fiber sections. New subsystems are also proposed in the thesis: a modular 2×2 multiwavelength cross-connect using wavelength switching blocks, a wavelength converter based on cross phase modulation in a semiconductor modulator, a wavelength...

  8. Information measuring subsystem oil pumping station “Parabel”

    Directory of Open Access Journals (Sweden)

    Nyashina Galina S.

    2014-01-01

    The information-measuring subsystem of the oil pumping station (OPS) “Parabel”, located on the “Alexandrov-Anzhero” section of the trunk pipeline (OJSC “AK Transneft”), is described. It was developed on the basis of modern microprocessor equipment and automation tools, as well as high-speed digital data channels. This simple solution meets the requirements set out in the guidance document “Automation and remote control of trunk pipelines. General provisions” (RD-35.240.0000-KTN-207-08).

  9. Reactor Subsystem Simulation for Nuclear Hybrid Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Shannon Bragg-Sitton; J. Michael Doster; Alan Rominger

    2012-09-01

    Preliminary system models have been developed by Idaho National Laboratory researchers and are currently being enhanced to assess integrated system performance given multiple sources (e.g., nuclear + wind) and multiple applications (i.e., electricity + process heat). Initial efforts to integrate a Fortran-based simulation of a small modular reactor (SMR) with the balance of plant model have been completed in FY12. This initial effort takes advantage of an existing SMR model developed at North Carolina State University to provide initial integrated system simulation for a relatively low cost. The SMR subsystem simulation details are discussed in this report.

  10. Data Management Applications for the Service Preparation Subsystem

    Science.gov (United States)

    Luong, Ivy P.; Chang, George W.; Bui, Tung; Allen, Christopher; Malhotra, Shantanu; Chen, Fannie C.; Bui, Bach X.; Gutheinz, Sandy C.; Kim, Rachel Y.; Zendejas, Silvino C.

    2009-01-01

    These software applications provide intuitive User Interfaces (UIs) with a consistent look and feel for interaction with, and control of, the Service Preparation Subsystem (SPS). The elements of the UIs described here are the File Manager, Mission Manager, and Log Monitor applications. All UIs provide access to add/delete/update data entities in a complex database schema without requiring technical expertise on the part of the end users. These applications allow for safe, validated, catalogued input of data. Also, the software has been designed in multiple, coherent layers to promote ease of code maintenance and reuse in addition to reducing testing and accelerating maturity.

  11. Nuclear data for fusion: Validation of typical pre-processing methods for radiation transport calculations

    International Nuclear Information System (INIS)

    Hutton, T.; Sublet, J.C.; Morgan, L.; Leadbeater, T.W.

    2015-01-01

    Highlights: • We quantify the effect of processing nuclear data from ENDF to ACE format. • We consider the differences between fission and fusion angular distributions. • C-nat(n,el) at 2.0 MeV has a 0.6% deviation between original and processed data. • Fe-56(n,el) at 14.1 MeV has a 11.0% deviation between original and processed data. • Processed data do not accurately depict ENDF distributions for fusion energies. - Abstract: Nuclear data form the basis of the radiation transport codes used to design and simulate the behaviour of nuclear facilities, such as the ITER and DEMO fusion reactors. Typically these data and codes are biased towards fission and high-energy physics applications yet are still applied to fusion problems. With increasing interest in fusion applications, the lack of fusion-specific codes and relevant data libraries is becoming increasingly apparent. Industry-standard radiation transport codes require pre-processing of the evaluated data libraries prior to use in simulation. Historically these methods focus on speed of simulation at the cost of accurate data representation. For legacy applications this has not been a major concern, but current fusion needs differ significantly. Pre-processing reconstructs the differential and double-differential interaction cross sections with a coarse binned structure, or more recently as a tabulated cumulative distribution function. This work looks at the validity of applying these processing methods to data used in fusion-specific calculations in comparison to fission. The relative effects of applying this pre-processing mechanism to both fission- and fusion-relevant reaction channels are demonstrated, and as such the poor representation of these distributions for the fusion energy regime. For the C-nat(n,el) reaction at 2.0 MeV, the binned differential cross section deviates from the original data by 0.6% on average. For the Fe-56(n,el) reaction at 14.1 MeV, the deviation increases to 11.0%. We conclude that the processed data do not accurately depict the ENDF distributions at fusion energies.

  12. Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction

    Directory of Open Access Journals (Sweden)

    Jenessa Lancaster

    2018-02-01

    Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated the generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify the optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains, a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm3, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm3, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm3 and 4 mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, best performance was achieved when applying the entire Bayesian
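
    The optimization loop itself is easy to sketch. The snippet below uses scikit-optimize's gp_minimize as a stand-in (the abstract does not specify this library), with a synthetic error surface in place of the real cross-validated age-prediction pipeline; the parameter bounds and the location of the optimum are illustrative assumptions.

    ```python
    # Bayesian optimization over (voxel size, smoothing kernel), in the
    # spirit of the paper; the objective is a synthetic stand-in for the
    # cross-validated MAE of an age-prediction model on preprocessed scans.
    import numpy as np
    from skopt import gp_minimize

    def objective(params):
        voxel, fwhm = params
        # Pretend MAE surface with an optimum near voxel=3.7, fwhm=3.7.
        return (voxel - 3.7) ** 2 + (fwhm - 3.7) ** 2 + np.random.normal(0, 0.1)

    result = gp_minimize(objective, dimensions=[(1.0, 12.0), (0.0, 8.0)],
                         n_calls=30, random_state=0)
    print(result.x, result.fun)   # best (voxel, fwhm) found and its score
    ```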

  13. A simpler method of preprocessing MALDI-TOF MS data for differential biomarker analysis: stem cell and melanoma cancer studies

    Directory of Open Access Journals (Sweden)

    Tong Dong L

    2011-09-01

    Introduction: Raw spectral data from matrix-assisted laser desorption/ionisation time-of-flight (MALDI-TOF) with MS profiling techniques usually contain complex information not readily providing biological insight into disease. The association of identified features within raw data to a known peptide is extremely difficult. Data preprocessing to remove uncertainty characteristics in the data is normally required before performing any further analysis. This study proposes an alternative yet simple solution to preprocess raw MALDI-TOF-MS data for identification of candidate marker ions. Two in-house MALDI-TOF-MS data sets from two different sample sources (melanoma serum and cord blood plasma) are used in our study. Method: Raw MS spectral profiles were preprocessed using the proposed approach to identify peak regions in the spectra. The preprocessed data were then analysed using bespoke machine learning algorithms for data reduction and ion selection. Using the selected ions, an ANN-based predictive model was constructed to examine the predictive power of these ions for classification. Results: Our model identified 10 candidate marker ions for both data sets. These ion panels achieved over 90% classification accuracy on blind validation data. Receiver operating characteristic analysis was performed and the area under the curve for the melanoma and cord blood classifiers was 0.991 and 0.986, respectively. Conclusion: The results suggest that our data preprocessing technique removes unwanted characteristics of the raw data while preserving the predictive components of the data. Ion identification analysis can be carried out using MALDI-TOF-MS data with the proposed data preprocessing technique coupled with bespoke algorithms for data reduction and ion selection.
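
    The core of such peak-region preprocessing can be approximated with SciPy's find_peaks on a synthetic spectrum, as below; the height and distance thresholds and the mock ion positions are assumptions rather than the authors' settings.

    ```python
    # Locating peak regions in a synthetic mass spectrum; a simple
    # stand-in for the paper's peak-region preprocessing step.
    import numpy as np
    from scipy.signal import find_peaks

    mz = np.linspace(2000, 20000, 5000)          # m/z axis
    spectrum = np.random.rand(5000) * 5          # baseline noise
    spectrum[[800, 2400, 4100]] += 100           # three mock marker ions

    peaks, props = find_peaks(spectrum, height=20, distance=50)
    print(mz[peaks])                             # candidate ion positions
    ```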

  14. Evaluation of the robustness of the preprocessing technique improving reversible compressibility of CT images: Tested on various CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Chang Ho; Kim, Bohyoung; Gu, Bon Seung; Lee, Jong Min [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Kim, Kil Joong [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea and Department of Radiation Applied Life Science, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 110-799 (Korea, Republic of); Lee, Kyoung Ho [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea and Institute of Radiation Medicine, Seoul National University Medical Research Center, and Clinical Research Institute, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 110-744 (Korea, Republic of); Kim, Tae Ki [Medical Information Center, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of)

    2013-10-15

    Purpose: To modify the previously proposed preprocessing technique improving the compressibility of computed tomography (CT) images so that it covers the diversity of three-dimensional configurations of different body parts, and to evaluate the robustness of the technique in terms of segmentation correctness and increase in reversible compression ratio (CR) for various CT examinations. Methods: This study had institutional review board approval with waiver of informed patient consent. A preprocessing technique was previously proposed to improve the compressibility of CT images by replacing pixel values outside the body region with a constant value, thereby maximizing data redundancy. Since the technique was developed with only chest CT images in mind, the authors modified the segmentation method to cover the diversity of three-dimensional configurations of different body parts. The modified version was evaluated as follows. In 368 randomly selected CT examinations (352 787 images), each image was preprocessed using the modified preprocessing technique. Radiologists visually confirmed whether the segmented region covered the body region. The images with and without the preprocessing were reversibly compressed using Joint Photographic Experts Group (JPEG), JPEG2000 two-dimensional (2D), and JPEG2000 three-dimensional (3D) compressions. The percentage increase in CR per examination (CR_I) was measured. Results: The rate of correct segmentation was 100.0% (95% CI: 99.9%, 100.0%) for all the examinations. The medians of CR_I were 26.1% (95% CI: 24.9%, 27.1%), 40.2% (38.5%, 41.1%), and 34.5% (32.7%, 36.2%) in JPEG, JPEG2000 2D, and JPEG2000 3D, respectively. Conclusions: In various CT examinations, the modified preprocessing technique can increase the CR by 25% or more without concern about degradation of diagnostic information.
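
    The essence of the technique (segment the body, then overwrite everything outside it with one constant value so that lossless coders see long redundant runs) can be sketched as follows; the -500 HU threshold, the morphology steps, the background constant, and the random stand-in image are illustrative, not the authors' algorithm.

    ```python
    # Replace pixels outside a crude "body" mask with a constant to raise
    # data redundancy before lossless compression.
    import numpy as np
    from scipy import ndimage

    ct = np.random.randint(-1000, 1000, size=(512, 512)).astype(np.int16)  # HU
    body = ndimage.binary_fill_holes(ndimage.binary_opening(ct > -500))

    lab, n = ndimage.label(body)
    if n:                                # keep the largest connected component
        sizes = ndimage.sum(body, lab, range(1, n + 1))
        body = lab == (1 + int(np.argmax(sizes)))

    ct_pre = np.where(body, ct, np.int16(-1000))   # constant background value
    ```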

  15. Revealing electronic open quantum systems with subsystem TDDFT

    Science.gov (United States)

    Krishtal, Alisa; Pavanello, Michele

    2016-03-01

    Open quantum systems (OQSs) are perhaps the most realistic systems one can approach through simulations. In recent years, describing OQSs with Density Functional Theory (DFT) has been a prominent avenue of research with most approaches based on a density matrix partitioning in conjunction with an ad-hoc description of system-bath interactions. We propose a different theoretical approach to OQSs based on partitioning of the electron density. Employing the machinery of subsystem DFT (and its time-dependent extension), we provide a novel way of isolating and analyzing the various terms contributing to the coupling between the system and the surrounding bath. To illustrate the theory, we provide numerical simulations on a toy system (a molecular dimer) and on a condensed phase system (solvated excimer). The simulations show that non-Markovian dynamics in the electronic system-bath interactions are important in chemical applications. For instance, we show that the superexchange mechanism of transport in donor-bridge-acceptor systems is a non-Markovian interaction between the donor-acceptor (OQS) with the bridge (bath) which is fully characterized by real-time subsystem time-dependent DFT.

  16. Lunar Advanced Volatile Analysis Subsystem: Pressure Transducer Trade Study

    Science.gov (United States)

    Kang, Edward Shinuk

    2017-01-01

    In Situ Resource Utilization (ISRU) is a key factor in paving the way for the future of human space exploration. The ability to harvest resources on foreign astronomical objects to produce consumables and propellant offers potential reduction in mission cost and risk. Through previous missions, the existence of water ice at the poles of the moon has been identified, however the feasibility of water extraction for resources remains unanswered. The Resource Prospector (RP) mission is currently in development to provide ground truth, and will enable us to characterize the distribution of water at one of the lunar poles. Regolith & Environment Science and Oxygen & Lunar Volatile Extraction (RESOLVE) is the primary payload on RP that will be used in conjunction with a rover. RESOLVE contains multiple instruments for systematically identifying the presence of water. The main process involves the use of two systems within RESOLVE: the Oxygen Volatile Extraction Node (OVEN) and Lunar Advanced Volatile Analysis (LAVA). Within the LAVA subsystem, there are multiple calculations that depend on accurate pressure readings. One of the most important instances where pressure transducers (PT) are used is for calculating the number of moles in a gas transfer from the OVEN subsystem. As a critical component of the main process, a mixture of custom and commercial off the shelf (COTS) PTs are currently being tested in the expected operating environment to eventually down select an option for integrated testing in the LAVA engineering test unit (ETU).

  17. Principles of control for decoherence-free subsystems.

    Science.gov (United States)

    Cappellaro, P; Hodges, J S; Havel, T F; Cory, D G

    2006-07-28

    Decoherence-free subsystems (DFSs) are a powerful means of protecting quantum information against noise with known symmetry properties. Although Hamiltonians that can implement a universal set of logic gates on DFS encoded qubits without ever leaving the protected subsystem theoretically exist, the natural Hamiltonians that are available in specific implementations do not necessarily have this property. Here we describe some of the principles that can be used in such cases to operate on encoded qubits without losing the protection offered by the DFSs. In particular, we show how dynamical decoupling can be used to control decoherence during the unavoidable excursions outside of the DFS. By means of cumulant expansions, we show how the fidelity of quantum gates implemented by this method on a simple two physical qubit DFS depends on the correlation time of the noise responsible for decoherence. We further show by means of numerical simulations how our previously introduced "strongly modulating pulses" for NMR quantum information processing can permit high-fidelity operations on multiple DFS encoded qubits in practice, provided that the rate at which the system can be modulated is fast compared to the correlation time of the noise. The principles thereby illustrated are expected to be broadly applicable to many implementations of quantum information processors based on DFS encoded qubits.

  19. Image Processing of Welding Procedure Specification and Pre-process program development for Finite Element Modelling

    International Nuclear Information System (INIS)

    Kim, K. S.; Lee, H. J.

    2009-11-01

    The PRE-WELD program, which automatically generates the input file for finite element analysis of 2D butt welding at a dissimilar metal weld part, was developed. This program is a pre-processor for the FEM code used to analyze residual stress at welding parts. Even if users do not have detailed knowledge of FEM modelling, they can easily create the ABAQUS input by entering the shape data of the welding part and the weld current and voltage among the welding parameters. By using the PRE-WELD program, we can greatly save the time and effort of preparing the ABAQUS input for residual stress analysis at welding parts, and produce an exact input without human error.
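
    A toy version of such a pre-processor, emitting the start of an input deck from a few user-supplied welding parameters, is sketched below in Python; the keyword lines and the write_weld_input helper are hypothetical and far simpler than the real PRE-WELD output.

    ```python
    # Sketch of how a pre-processor might emit an ABAQUS-style input deck
    # from weld parameters; illustrative only, not the PRE-WELD program.
    def write_weld_input(path, current_A, voltage_V, width_mm):
        lines = [
            "*HEADING",
            f"** 2D butt weld, I={current_A} A, U={voltage_V} V, "
            f"groove width={width_mm} mm",
            "*NODE",
            " 1, 0.0, 0.0",
            f" 2, {width_mm / 2:.3f}, 0.0",
            "** ... element, material and weld-pass definitions would follow",
        ]
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")

    write_weld_input("butt_weld.inp", current_A=180, voltage_V=24, width_mm=8.0)
    ```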

  20. Joint preprocesser-based detector for cooperative networks with limited hardware processing capability

    KAUST Repository

    Abuzaid, Abdulrahman I.

    2015-02-01

    In this letter, a joint detector for cooperative communication networks is proposed for the case where the destination has limited hardware processing capability. The transmitter sends its symbols with the help of L relays. As the destination has limited hardware, only U out of L signals are processed and the energy of the remaining relays is lost. To solve this problem, a joint preprocessing-based detector is proposed, which operates on the principle of minimizing the symbol error rate (SER). For a realistic assessment, pilot-symbol-aided channel estimation is incorporated into the proposed detector. From our simulations, it can be observed that the proposed detector achieves the same SER performance as the maximum likelihood (ML) detector with all participating relays. Additionally, it outperforms selection combining (SC), the channel shortening (CS) scheme, and reduced-rank techniques when using the same U. The proposed scheme has low computational complexity.

  1. Flexible high-speed FASTBUS master for data read-out and preprocessing

    International Nuclear Information System (INIS)

    Wurz, A.; Manner, R.

    1990-01-01

    This paper describes a single-slot FASTBUS master module. It can be used for read-out and preprocessing of data that are read out from FASTBUS modules, e.g., an ADC system. The module consists of a 25 MHz, 32-bit MC68030 processor with cache memory and memory management, an MC68882 floating-point coprocessor, 4 MBytes of main memory, and FASTBUS master and slave interfaces. In addition, a DMA controller for read-out of FASTBUS data is provided. The processor allows I/O via serial ports, a 16-bit parallel port, and a transputer link. Additional interfaces are planned. The main memory is multi-ported and can be accessed directly by the CPU, the FASTBUS, and external masters via the high-speed local bus that is accessible by way of a connector. The FASTBUS interface supports most of the standard operations in master and slave mode.

  2. Combined principal component preprocessing and n-tuple neural networks for improved classification

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Linneberg, Christian

    2000-01-01

    We present a combined principal component analysis/neural network scheme for classification. The data used to illustrate the method consist of spectral fluorescence recordings from seven different production facilities, and the task is to relate an unknown sample to one of these seven factories. The data are first preprocessed by performing an individual principal component analysis on each of the seven groups of data. The components found are then used for classifying the data, but instead of making a single multiclass classifier, we follow the idea of turning a multiclass problem into a number of two-class problems. For each possible pair of classes we further apply a transformation to the calculated principal components in order to increase the separation between the classes. Finally we apply the so-called n-tuple neural network to the transformed data in order to give the classification.

  3. The development of the intrinsic functional connectivity of default network subsystems from age 3 to 5.

    Science.gov (United States)

    Xiao, Yaqiong; Zhai, Hongchang; Friederici, Angela D; Jia, Fucang

    2016-03-01

    In recent years, research on human functional brain imaging using resting-state fMRI techniques has become increasingly prevalent. The term "default mode" was proposed to describe a baseline or default state of the brain during rest. Recent studies suggested that the default mode network (DMN) is comprised of two functionally distinct subsystems: a dorsal-medial prefrontal cortex (DMPFC) subsystem involved in self-oriented cognition (i.e., theory of mind) and a medial temporal lobe (MTL) subsystem engaged in memory and scene construction; both subsystems interact with the anterior medial prefrontal cortex (aMPFC) and posterior cingulate cortex (PCC) as the core regions of the DMN. The present study explored the development of the DMN core regions and these two subsystems in both hemispheres in 3- to 5-year-old children. The analysis of intrinsic activity showed strong developmental changes in both subsystems; significant changes were found specifically in the MTL subsystem, but not in the DMPFC subsystem, implying distinct developmental trajectories for the DMN subsystems. We found stronger interactions between the DMPFC and MTL subsystems in 5-year-olds, particularly in the left subsystems, supporting the development of environmental adaptation and relatively complex mental activities. The results also indicate stronger right-hemispheric lateralization at age 3, which then changes as bilateral development gradually increases through to age 5, suggesting that hemispheric dominance in the DMN subsystems changes with age. The present results provide primary evidence for the development of DMN subsystems in early life, which might be closely related to the development of social cognition in childhood.

  4. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    Science.gov (United States)

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the

  5. A base composition analysis of natural patterns for the preprocessing of metagenome sequences.

    Science.gov (United States)

    Bonham-Carter, Oliver; Ali, Hesham; Bastola, Dhundy

    2013-01-01

    On the pretext that sequence reads and contigs often exhibit the same kinds of base usage that is also observed in the sequences from which they are derived, we offer a base composition analysis tool. Our tool uses these natural patterns to determine relatedness across sequence data. We introduce spectrum sets (sets of motifs) which are permutations of bacterial restriction sites and the base composition analysis framework to measure their proportional content in sequence data. We suggest that this framework will increase the efficiency during the pre-processing stages of metagenome sequencing and assembly projects. Our method is able to differentiate organisms and their reads or contigs. The framework shows how to successfully determine the relatedness between these reads or contigs by comparison of base composition. In particular, we show that two types of organismal-sequence data are fundamentally different by analyzing their spectrum set motif proportions (coverage). By the application of one of the four possible spectrum sets, encompassing all known restriction sites, we provide the evidence to claim that each set has a different ability to differentiate sequence data. Furthermore, we show that the spectrum set selection having relevance to one organism, but not to the others of the data set, will greatly improve performance of sequence differentiation even if the fragment size of the read, contig or sequence is not lengthy. We show the proof of concept of our method by its application to ten trials of two or three freshly selected sequence fragments (reads and contigs) for each experiment across the six organisms of our set. Here we describe a novel and computationally effective pre-processing step for metagenome sequencing and assembly tasks. Furthermore, our base composition method has applications in phylogeny where it can be used to infer evolutionary distances between organisms based on the notion that related organisms often have much conserved code.
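
    The proportional-content measurement at the heart of the framework can be illustrated with a small Python function; the coverage helper, the toy sequence, and the use of all permutations of a single EcoRI site as the "spectrum set" are simplifying assumptions, not the authors' exact construction.

    ```python
    # Fraction of a sequence covered by a motif set built from the
    # permutations of a restriction site; a toy "spectrum set" coverage.
    from itertools import permutations

    def coverage(seq, motifs):
        hit = [False] * len(seq)
        for m in motifs:
            start = seq.find(m)
            while start != -1:          # mark every occurrence of the motif
                hit[start:start + len(m)] = [True] * len(m)
                start = seq.find(m, start + 1)
        return sum(hit) / len(seq)

    site = "GAATTC"                                       # EcoRI site
    spectrum = {"".join(p) for p in permutations(site)}   # permuted motifs
    print(coverage("ATGAATTCGGAATCTTAAGC", spectrum))
    ```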

  6. chipPCR: an R package to pre-process raw data of amplification curves.

    Science.gov (United States)

    Rödiger, Stefan; Burdukiewicz, Michał; Schierack, Peter

    2015-09-01

    Both the quantitative real-time polymerase chain reaction (qPCR) and quantitative isothermal amplification (qIA) are standard methods for nucleic acid quantification. Numerous real-time read-out technologies have been developed. Despite the continuous interest in amplification-based techniques, there are only few tools for pre-processing of amplification data. However, a transparent tool for precise control of raw data is indispensable in several scenarios, for example, during the development of new instruments. chipPCR is an R package for the pre-processing and quality analysis of raw data of amplification curves. The package takes advantage of R's S4 object model and offers an extensible environment. chipPCR contains tools for raw data exploration: normalization, baselining, imputation of missing values, a powerful wrapper for amplification curve smoothing and a function to detect the start and end of an amplification curve. The capabilities of the software are enhanced by the implementation of algorithms unavailable in R, such as a 5-point stencil for derivative interpolation. Simulation tools, statistical tests, plots for data quality management, amplification efficiency/quantification cycle calculation, and datasets from qPCR and qIA experiments are part of the package. Core functionalities are integrated in GUIs (web-based and standalone shiny applications), thus streamlining analysis and report generation. Availability: http://cran.r-project.org/web/packages/chipPCR. Source code: https://github.com/michbur/chipPCR. Contact: stefan.roediger@b-tu.de. Supplementary data are available at Bioinformatics online.

  7. Internet use during childhood and the ecological techno-subsystem

    Directory of Open Access Journals (Sweden)

    Genevieve Marie Johnson

    2008-12-01

    Full Text Available Research findings suggest both positive and negative developmental consequences of Internet use during childhood (e.g., playing video games has been associated with enhanced visual skills as well as increased aggression). Several studies have concluded that environmental factors mediate the developmental impact of childhood online behaviour. From an ecological perspective, we propose the techno-subsystem, a dimension of the microsystem (i.e., immediate environments). The techno-subsystem includes child interaction with both living (e.g., peers) and nonliving (e.g., hardware) elements of communication, information, and recreation technologies in direct environments. By emphasizing the role of technology in child development, the ecological techno-subsystem encourages holistic exploration of the developmental consequences of Internet use (and future technological advances) during childhood.

  8. A development and integration analysis of commercial and in-house control subsystems

    International Nuclear Information System (INIS)

    Moore, D.M.; Dalesio, L.R.

    1998-01-01

    The acquisition and integration of commercial automation and control subsystems in physics research is becoming more common. It is presumed that these systems present lower risk and less cost. This paper studies four subsystems used in the Accelerator Production of Tritium (APT) Low Energy Demonstration Accelerator (LEDA) at the Los Alamos National Laboratory (LANL). The radio frequency quadrupole (RFQ) resonance-control cooling subsystem (RCCS), the high-power RF subsystem and the RFQ vacuum subsystem were outsourced; the low-level RF (LLRF) subsystem was developed in-house. Based on the authors' experience, a careful evaluation of the costs and risks in acquisition, implementation, integration, and maintenance associated with these approaches is given.

  9. Designing RF control subsystems using the VXIbus standard

    International Nuclear Information System (INIS)

    Stepp, J.D.; Vong, F.C.; Bridges, J.F.

    1993-01-01

    Various components are being designed to control the RF system of the 7-GeV Advanced Photon Source (APS). The associated control electronics (phase shifters, amplitude modulators, phase detectors, automatic tuning control, and local feedback control) are designed as modular cards with multiple channels for ease of replacement as well as for compact design. Various specifications of the VXIbus are listed and the method used to simplify the design of the control subsystem is shown. A commercial VXI interface board was used to speed the design cycle. Required manpower and actual task times are included. A discussion of the computer architecture and software development of the device drivers, which allowed computer control from a VME processor located in a remote crate operating under the Experimental Physics and Industrial Control System (EPICS) software, is also presented

  10. LACIE Phase 1 Classification and Mensuration Subsystem (CAMS) rework experiment

    Science.gov (United States)

    Chhikara, R. S.; Hsu, E. M.; Liszcz, C. J.

    1976-01-01

    An experiment was designed to test the ability of the Classification and Mensuration Subsystem rework operations to improve wheat proportion estimates for segments that had been processed previously. Sites selected for the experiment included three in Kansas and three in Texas, with the remaining five distributed in Montana and North and South Dakota. The acquisition dates were selected to be representative of imagery available in actual operations. No more than one acquisition per biophase was used, and biophases were determined by actual crop calendars. All sites were worked by each of four Analyst-Interpreter/Data Processing Analyst Teams who reviewed the initial processing of each segment and accepted or reworked it for an estimate of the proportion of small grains in the segment. Classification results, acquisitions and classification errors and performance results between CAMS regular and ITS rework are tabulated.

  11. The precision segmented reflectors: Moderate mission figure control subsystem

    Science.gov (United States)

    Sevaston, G.; Redding, D.; Lau, K.; Breckenridge, W.; Levine, B.; Nerheim, N.; Sirlin, S.; Kadogawa, H.

    1991-01-01

    A system concept for a space based segmented reflector telescope figure control subsystem is described. The concept employs a two phase architecture in which figure initialization and figure maintenance are independent functions. Figure initialization is accomplished by image sharpening using natural reference targets. Figure maintenance is performed by monitoring the relative positions and alignments of the telescope components using an optical truss. Actuation is achieved using precision positioners. Computer simulation results of figure initialization by pairwise segment coalignment/cophasing and simulated annealing are presented along with figure maintenance results using a wavefront error regulation algorithm. Both functions are shown to perform at acceptable levels for the class of submillimeter telescopes that are serving as the focus of this technology development effort. Component breadboard work as well as plans for a system testbed are discussed.

  12. The Earth Observing System AM Spacecraft - Thermal Control Subsystem

    Science.gov (United States)

    Chalmers, D.; Fredley, J.; Scott, C.

    1993-01-01

    Mission requirements for the EOS-AM Spacecraft intended to monitor global changes of the entire earth system are considered. The spacecraft is based on an instrument set containing the Advanced Spaceborne Thermal Emission and Reflection radiometer (ASTER), Clouds and Earth's Radiant Energy System (CERES), Multiangle Imaging Spectro-Radiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS), and Measurements of Pollution in the Troposphere (MOPITT). Emphasis is placed on the design, analysis, development, and verification plans for the unique EOS-AM Thermal Control Subsystem (TCS) aimed at providing the required environments for all the onboard equipment in a densely packed layout. The TCS design maximizes the use of proven thermal design techniques and materials, in conjunction with a capillary pumped two-phase heat transport system for instrument thermal control.

  13. FireSignal application Node for subsystem control

    Czech Academy of Sciences Publication Activity Database

    Duarte, A.S.; Santos, B.; Pereira, T.; Carvalho, B.B.; Fernandes, H.; Neto, A.; Janky, Filip; Cahyna, Pavel; Písačka, Jan; Hron, Martin

    2010-01-01

    Vol. 85, 3-4 (2010), pp. 496-499. ISSN 0920-3796. [7th IAEA Technical Meeting on Control, Data Acquisition and Remote Participation for Fusion Research, Aix-en-Provence, 15.06.2009-19.06.2009] Institutional research plan: CEZ:AV0Z20430508 Keywords: Subsystems * CODAC * FireSignal * Java * Remote operation Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 1.143, year: 2010 http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V3C-4YYGPR8-4&_user=6542793&_coverDate=07%2F31%2F2010&_rdoc=1&_fmt=high&_orig=search&_origin=search&_sort=d&_docanchor=&view=c&_acct=C000070123&_version=1&_urlVersion=0&_userid=6542793&md5=899631b6e2f4d05b21b04bde3cfb8e65&searchtype=a

  14. Mid Infrared Instrument cooler subsystem test facility overview

    Science.gov (United States)

    Moore, B.; Zan, J.; Hannah, B.; Chui, T.; Penanen, K.; Weilert, M.

    2017-12-01

    The Cryocooler for the Mid Infrared Instrument (MIRI) on the James Webb Space Telescope (JWST) provides cooling at 6.2K on the instrument interface. The cooler system design has been incrementally documented in previous publications [1][2][3][4][5]. It has components that traverse three primary thermal regions on JWST: Region 1, approximated by 40K; Region 2, approximated by 100K; and Region 3, which is at the allowable flight temperatures for the spacecraft bus. However, there are several sub-regions that exist in the transition between primary regions and at the heat reject interfaces of the Cooler Compressor Assembly (CCA) and Cooler Control Electronics Assembly (CCEA). The design and performance of the test facility to provide a flight representative thermal environment for acceptance testing and characterization of the complete MIRI cooler subsystem are presented.

  15. Designing a New Raster Sub-System for GRASS-7

    Directory of Open Access Journals (Sweden)

    Martin Hruby

    2012-03-01

    Full Text Available The paper deals with the design of a new raster sub-system intended for modern GIS systems, open to client-server operation, database connection and a strong application interface (API). Motivation for such a design comes from the current state of the API in GRASS 6. If found attractive, the design presented here and its implementation (referred to as RG7) may be integrated into the future new generation of the GRASS Geographical Information System, versions 7-8. The paper describes in detail the concept of raster tiling, the computer storage of rasters and the basic raster access procedures. Finally, the paper presents a simple benchmarking experiment on random read access to raster files imported from the Spearfish dataset, comparing the early implementation of RG7 with the current implementation of rasters in GRASS 6. The experiment shows RG7 to be significantly faster than GRASS 6 in random read access to large raster files.

  16. An inverter/controller subsystem optimized for photovoltaic applications

    Science.gov (United States)

    Pickrell, R. L.; Merrill, W. C.; Osullivan, G.

    1978-01-01

    Conversion of solar array dc power to ac power stimulated the specification, design, and simulation testing of an inverter/controller subsystem tailored to the photovoltaic power source characteristics. This paper discusses the optimization of the inverter/controller design as part of an overall Photovoltaic Power System (PPS) designed for maximum energy extraction from the solar array. The special design requirements for the inverter/controller include: (1) a power system controller (PSC) to control continuously the solar array operating point at the maximum power level based on variable solar insolation and cell temperatures; and (2) an inverter designed for high efficiency at rated load and low losses at light loadings to conserve energy. It must be capable of operating connected to the utility line at a level set by an external controller (PSC).

  17. Progress report for the scintillator plate calorimeter subsystem

    International Nuclear Information System (INIS)

    1990-01-01

    This report covers the work completed in FY90 by ANL staff and those of Westinghouse STC and BICRON Corporation, under subcontract to ANL, towards the design of a compensating calorimeter based on the use of scintillator plate as the sensitive medium. It is presented as five task sections dealing respectively with mechanical design; simulation studies; optical system design; electronics development; and development of rad-hard plastic scintillator and wavelength shifter; followed by a summary. The work carried out by the University of Tennessee under a subcontract from ANL is reported separately. Finally, as the principal institution with responsibility for the overall management of this subsystem effort, the summary here reports the conclusions resulting from the work of the collaboration and their impact on our proposed direction of effort in FY91. This proposal, for obvious reasons, is given separately

  18. Fragmented network subsystem with traffic filtering for microkernel environment

    Directory of Open Access Journals (Sweden)

    Anna Urievna Budkina

    2016-06-01

    Full Text Available The TCP/IP stack in a microkernel operating system executes in user space, which requires the development of a distributed network infrastructure within a single software environment. Its functions are the organization of interaction between the components of the stack and different processes, as well as the organization of filtering mechanisms and routing of internal network traffic. The use of architectural approaches applicable in monolithic-modular systems is impossible, because the network stack is not a shareable component of the system. As a consequence, the microkernel environment requires the development of a special network subsystem. In this work we provide an overview of the major concepts of network architectures in microkernel environments. We also present our own architecture, which supports filtering of internal network traffic, and evaluate it by developing a high-performance key-value store.

  19. The Evaluation of Preprocessing Choices in Single-Subject BOLD fMRI Using NPAIRS Performance Metrics

    DEFF Research Database (Denmark)

    LaConte, Stephen; Rottenberg, David; Strother, Stephen

    2003-01-01

    to obtain cross-validation-based model performance estimates of prediction accuracy and global reproducibility for various degrees of model complexity. We rely on the concept of an analysis chain meta-model in which all parameters of the preprocessing steps along with the final statistical model are treated...

  20. The recursive combination filter approach of pre-processing for the estimation of standard deviation of RR series.

    Science.gov (United States)

    Mishra, Alok; Swati, D

    2015-09-01

    Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of the cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals called ectopic beats, artefacts or noise, which mask the true behaviour of heart rate variability. In this paper, we propose a combination filter of a recursive impulse rejection filter and a recursive 20% filter, with recursive application and a preference for replacement over removal of abnormal beats, to improve the pre-processing of the inter-beat intervals. We have tested this novel recursive combination method, with median replacement, to estimate the standard deviation of normal-to-normal (SDNN) beat intervals of congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses in detail the improvement in pre-processing over single use of the impulse rejection filter and over removal of abnormal beats, for the estimation of SDNN and the Poincaré plot descriptors (SD1, SD2, and SD1/SD2). We have found the 22 ms value of SDNN and the 36 ms value of the SD2 descriptor of the Poincaré plot to be clinical indicators in discriminating normal cases from CHF cases. The pre-processing is also useful in the calculation of the Lyapunov exponent, a nonlinear index: Lyapunov exponents calculated after the proposed pre-processing change in a way that follows the notion of less complex behaviour in diseased states.
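
    A minimal sketch of the proposed combination idea, with replacement preferred over removal and recursive application until convergence; the window size and both thresholds below are our assumptions, not the paper's values.

    ```python
    import numpy as np

    def combination_filter(rr, window=11, dev_thresh=0.3, rel_thresh=0.2, max_iter=10):
        """Recursive combination-filter sketch for RR-interval pre-processing.
        A beat is flagged by the impulse-rejection rule (large deviation from
        the local median) or by the 20% rule (large change versus the previous
        beat); flagged beats are replaced by the local median rather than
        removed, and the pass repeats until no beat changes."""
        rr = np.asarray(rr, dtype=float).copy()
        half = window // 2
        for _ in range(max_iter):
            changed = False
            for i in range(len(rr)):
                lo, hi = max(0, i - half), min(len(rr), i + half + 1)
                med = np.median(rr[lo:hi])
                impulse = abs(rr[i] - med) > dev_thresh * med
                rel = i > 0 and abs(rr[i] - rr[i - 1]) > rel_thresh * rr[i - 1]
                if impulse or rel:
                    rr[i] = med
                    changed = True
            if not changed:
                break
        return rr

    rr = np.array([0.80, 0.82, 0.81, 1.60, 0.79, 0.80, 0.41, 0.82])  # two ectopics
    clean = combination_filter(rr)
    print("SDNN before/after: %.1f / %.1f ms" % (rr.std() * 1e3, clean.std() * 1e3))
    ```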

  1. Preprocessing of 18F-DMFP-PET Data Based on Hidden Markov Random Fields and the Gaussian Distribution

    Directory of Open Access Journals (Sweden)

    Fermín Segovia

    2017-10-01

    Full Text Available 18F-DMFP-PET is an emerging neuroimaging modality used to diagnose Parkinson's disease (PD) that allows us to examine postsynaptic dopamine D2/3 receptors. Like other neuroimaging modalities used for PD diagnosis, most of the total intensity of 18F-DMFP-PET images is concentrated in the striatum. However, other regions can also be useful for diagnostic purposes. An appropriate delimitation of the regions of interest contained in 18F-DMFP-PET data is crucial to improve the automatic diagnosis of PD. In this manuscript we propose a novel methodology to preprocess 18F-DMFP-PET data that improves the accuracy of computer aided diagnosis systems for PD. First, the data were segmented using an algorithm based on Hidden Markov Random Field. As a result, each neuroimage was divided into 4 maps according to the intensity and the neighborhood of the voxels. The maps were then individually normalized so that the shape of their histograms could be modeled by a Gaussian distribution with equal parameters for all the neuroimages. This approach was evaluated using a dataset with neuroimaging data from 87 parkinsonian patients. After these preprocessing steps, a Support Vector Machine classifier was used to separate idiopathic and non-idiopathic PD. Data preprocessed by the proposed method provided higher accuracy results than the ones preprocessed with previous approaches.
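
    A minimal numpy sketch of the per-map normalization step described above, assuming the segmentation labels are already available; the plain affine rescaling to common Gaussian parameters, and the target values, are illustrative simplifications of the histogram modeling.

    ```python
    import numpy as np

    def normalize_maps(image, labels, target_mu=100.0, target_sigma=15.0):
        """Rescale voxel intensities within each segmented map so that every
        map, in every neuroimage, shares the same Gaussian parameters
        (mu, sigma). Target values are illustrative, not the study's."""
        out = image.astype(float).copy()
        for lab in np.unique(labels):
            mask = labels == lab
            mu, sigma = out[mask].mean(), out[mask].std()
            if sigma > 0:
                out[mask] = (out[mask] - mu) / sigma * target_sigma + target_mu
        return out

    rng = np.random.default_rng(0)
    img = rng.gamma(2.0, 50.0, size=(16, 16, 16))   # synthetic intensities
    seg = rng.integers(0, 4, size=img.shape)        # 4 maps, as in the paper
    norm = normalize_maps(img, seg)
    print(norm[seg == 2].mean(), norm[seg == 2].std())  # ~100, ~15
    ```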

  2. On image pre-processing for PIV of single- and two-phase flows over reflecting objects

    NARCIS (Netherlands)

    Deen, N.G.; Willems, P.; van Sint Annaland, M.; Kuipers, J.A.M.; Lammertink, Rob G.H.; Kemperman, Antonius J.B.; Wessling, Matthias; van der Meer, Walterus Gijsbertus Joseph

    2010-01-01

    A novel image pre-processing scheme for PIV of single- and two-phase flows over reflecting objects, which does not require the use of additional hardware, is discussed. The approach for single-phase flow consists of image normalization and intensity stretching followed by background subtraction.

  3. Conversation on data mining strategies in LC-MS untargeted metabolomics: pre-processing and pre-treatment steps

    CSIR Research Space (South Africa)

    Tugizimana, F

    2016-11-01

    Full Text Available Using a liquid chromatography-mass spectrometry (LC-MS)-based untargeted metabolomic dataset, this study explored the influence of collection parameters in the data pre-processing step, of scaling and data transformation on the statistical models generated, and of feature selection thereafter. Data obtained in positive mode...

  4. CudaPre3D: An Alternative Preprocessing Algorithm for Accelerating 3D Convex Hull Computation on the GPU

    Directory of Open Access Journals (Sweden)

    MEI, G.

    2015-05-01

    Full Text Available In the calculation of convex hulls for point sets, a preprocessing procedure that filters the input points by discarding non-extreme points is commonly used to improve computational efficiency. We previously proposed a quite straightforward preprocessing approach for accelerating 2D convex hull computation on the GPU. In this paper, we extend that algorithm to 3D cases. The basic ideas behind the two preprocessing algorithms are similar: first, several groups of extreme points are found according to the original set of input points and several rotated versions of the input set; then, a convex polyhedron is created using the found extreme points; and finally those interior points lying inside the formed convex polyhedron are discarded. Experimental results show that, when employing the proposed preprocessing algorithm, speedups of about 4x on average and 5x to 6x in the best cases are achieved over the cases where the proposed approach is not used. In addition, more than 95 percent of the input points can be discarded in most experimental tests.
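
    CudaPre3D itself runs on the GPU; as a rough CPU illustration of the same preprocessing idea, the sketch below finds extreme points along a set of random directions (standing in for the rotated copies of the input set), builds a convex polyhedron from them, and discards interior points. The direction count is an assumption.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay, ConvexHull

    def preprocess_points(points, n_dirs=16, seed=0):
        """Discard points lying strictly inside the convex polyhedron spanned
        by extreme points found along n_dirs directions; such points can
        never appear on the final convex hull."""
        rng = np.random.default_rng(seed)
        dirs = rng.normal(size=(n_dirs, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        proj = points @ dirs.T
        extreme_idx = np.unique(np.concatenate([proj.argmax(axis=0),
                                                proj.argmin(axis=0)]))
        tri = Delaunay(points[extreme_idx])
        keep = tri.find_simplex(points) < 0   # outside the polyhedron
        keep[extreme_idx] = True              # keep the extreme points too
        return points[keep]

    pts = np.random.default_rng(1).normal(size=(100000, 3))
    kept = preprocess_points(pts)
    print("discarded %.1f%% of points" % (100 * (1 - len(kept) / len(pts))))
    hull = ConvexHull(kept)  # final hull computed on the reduced set
    ```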

  5. Implementation of amplifiers, control and safety subsystems of radiofrequency system of VINCY Cyclotron

    International Nuclear Information System (INIS)

    Drndarevic, V.; Obradovic, M.; Samardic, B.; Djuric, B.; Bojovic, B.; Trajic, M.I.; Golubicic, Z.; Smiljakovic, V.

    1996-01-01

    The concept and design of the power amplifiers, control subsystem and safety subsystems for the RF system of the VINCY cyclotron are described. The power amplifier subsystem consists of two amplifiers of 30 kW nominal power that operate in class B or class C. High stability of the voltage amplitude, 5×10⁻⁴, and a phase stability between the two resonators better than ±0.5° in the range from 16.5 to 31 MHz are provided by the RF control subsystem. An autonomous safety system serves to protect staff from high voltage and to protect equipment from damage. (author)

  6. Subsystem response determination for the US NRC Seismic Safety Margins Research Program

    International Nuclear Information System (INIS)

    Johnson, J.J.

    1979-01-01

    The initial portion of the task described deals with a definition of the state-of-the-art of seismic qualification methods for subsystems. To facilitate treatment of this broad class of subsystems, three classifications have been identified: multiply supported subsystems (e.g., piping systems); mechanical components (e.g., valves, pumps, control rod drives, hydraulic systems, etc.); and electrical components (e.g., electrical control panels). Descriptions of the available analysis and/or testing techniques for the above classifications are sought. The results of this assessment will be applied to the development of structural subsystem transfer functions

  7. Local control station for development, testing and maintenance of mirror fusion facility subsystem controls

    International Nuclear Information System (INIS)

    Ables, E.; Kelly, M.F.

    1985-01-01

    A Local Control Station (LCS) was designed and built to provide a simplified and easily configurable means of controlling any Mirror Fusion Test Facility (MFTF-B) subsystem for the purpose of development, testing and maintenance of the subsystem. All MFTF-B subsystems incorporate at least one Local Control Computer (LCC) that is connected to and accepts high level commands from one of the Supervisory Control and Diagnostic System (SCDS) computers. The LCS connects directly to the LCC in place of SCDS. The LCS communicates with the subsystem hardware using the same SCDS commands that the local control computer recognizes, and as such requires no special configuration of the LCC

  8. THE ANALYSIS OF BEEF CATTLE SUBSYSTEM AGRIBUSINESS IMPLEMENTATION IN CENTRAL JAVA PROVINCE, INDONESIA

    Directory of Open Access Journals (Sweden)

    T. Ekowati

    2014-10-01

    Full Text Available The study aimed to analyze the implementation of subsystem agribusiness on beef cattle farming in Central Java. Five districts (Rembang, Blora, Grobogan, Boyolali and Wonogiri) were purposively chosen based on the value of the Location Quotient (LQ). The study was conducted using the quota sampling method; forty respondents from each district were chosen randomly. Data were analyzed through a Structural Equation Model (SEM). The results showed that each subsystem agribusiness had an adequate potential score: 0.693, 0.721, 0.684, 0.626, and 0.691 for the up-stream subsystem, on-farm, down-stream subsystem, marketing and supporting institution, respectively. The results showed that the SEM model was feasible, with Chi-Square value = 0.952, RMSEA = 0.000, probability = 0.621 and TLI = 1.126. The significant results of the Critical Ratio (CR) were: up-stream subsystem to on-farm agribusiness; on-farm subsystem to down-stream agribusiness; down-stream subsystem to the farmer's income; marketing subsystem to up-stream agribusiness; and supporting institution to the marketing subsystem and down-stream agribusiness. The research concluded that the implementation of the beef cattle subsystem agribusiness had an adequate index and a positive effect on the beef cattle agribusiness.

  9. Acquiring and preprocessing leaf images for automated plant identification: understanding the tradeoff between effort and information gain

    Directory of Open Access Journals (Sweden)

    Michael Rzanny

    2017-11-01

    Full Text Available Background: Automated species identification is a long-term research subject. Contrary to flowers and fruits, leaves are available throughout most of the year. Offering margin and texture to characterize a species, they are the most studied organ for automated identification. Substantially matured machine learning techniques generate the need for more training data (i.e., leaf images). Researchers as well as enthusiasts miss guidance on how to acquire suitable training images in an efficient way. Methods: In this paper, we systematically study nine image types and three preprocessing strategies. Image types vary in terms of in-situ image recording conditions: perspective, illumination, and background, while the preprocessing strategies compare non-preprocessed, cropped, and segmented images to each other. Per image type-preprocessing combination, we also quantify the manual effort required for their implementation. We extract image features using a convolutional neural network, classify species using the resulting feature vectors and discuss classification accuracy in relation to the required effort per combination. Results: The most effective, non-destructive way to record herbaceous leaves is to take an image of the leaf's top side. We yield the highest classification accuracy using destructive back light images, i.e., holding the plucked leaf against the sky for image acquisition. Cropping the image to the leaf's boundary substantially improves accuracy, while precise segmentation yields similar accuracy at a substantially higher effort. The permanent use or disuse of a flash light has negligible effects. Imaging the typically stronger textured backside of a leaf does not result in higher accuracy, but notably increases the acquisition cost. Conclusions: In conclusion, the way in which leaf images are acquired and preprocessed does have a substantial effect on the accuracy of the classifier trained on them. For the first time, this

  10. Automatic pre-processing for an object-oriented distributed hydrological model using GRASS-GIS

    Science.gov (United States)

    Sanzana, P.; Jankowfsky, S.; Branger, F.; Braud, I.; Vargas, X.; Hitschfeld, N.

    2012-04-01

    Landscapes are very heterogeneous, and this heterogeneity impacts the hydrological processes occurring in catchments, especially in the modeling of peri-urban catchments. Hydrological Response Units (HRUs), resulting from the intersection of different maps, such as land use, soil types and geology, and of flow networks, allow the representation of these elements in an explicit way, preserving the natural and artificial contours of the different layers. These HRUs are used as the model mesh in some distributed object-oriented hydrological models, allowing the application of a topologically oriented approach. The connectivity between polygons and polylines provides a detailed representation of the water balance and overland flow in these distributed hydrological models, based on irregular hydro-landscape units. When computing fluxes between these HRUs, geometrical parameters, such as the distance between the centroid of the HRU and the river network, or the length of the perimeter, can impact the realism of the calculated overland, sub-surface and groundwater fluxes. Therefore, it is necessary to process the original model mesh in order to avoid these numerical problems. We present an automatic pre-processing implemented in the open-source GRASS-GIS software, for which several Python scripts were developed and some already available algorithms, such as the Triangle software, were used. First, some scripts were developed to improve the topology of the various elements, such as snapping the river network to the closest contours. When data are derived from remote sensing, such as vegetation areas, their perimeters have lots of right angles, which were smoothed. Second, the algorithms more particularly address badly shaped elements of the model mesh, such as polygons with narrow shapes, markedly irregular contours and/or a centroid outside of the polygon. To identify these elements we used shape descriptors. The convexity index was considered the best descriptor to identify them with a threshold
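
    As an illustration of the shape screening described, the short sketch below uses the shapely package to compute the convexity index (polygon area divided by convex-hull area) and flag badly shaped units; the threshold value is an assumption, not the paper's tuned value.

    ```python
    from shapely.geometry import Polygon

    def convexity_index(poly):
        """Convexity index = area / area of the convex hull; 1.0 for convex
        shapes, lower for narrow or strongly indented HRU polygons."""
        return poly.area / poly.convex_hull.area

    # A compact square versus an L-shaped (concave) unit.
    square = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
    l_shape = Polygon([(0, 0), (4, 0), (4, 1), (1, 1), (1, 4), (0, 4)])

    THRESHOLD = 0.8  # assumption; the paper selects this value empirically
    for name, p in [("square", square), ("L-shape", l_shape)]:
        ci = convexity_index(p)
        print(name, round(ci, 2), "rework" if ci < THRESHOLD else "ok")
    ```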

  11. Pre-processing, registration and selection of adaptive optics corrected retinal images.

    Science.gov (United States)

    Ramaswamy, Gomathy; Devaney, Nicholas

    2013-07-01

    In this paper, the aim is to demonstrate enhanced processing of sequences of fundus images obtained using a commercial AO flood illumination system. The purpose of the work is to (1) correct for uneven illumination at the retina, (2) automatically select the best quality images and (3) precisely register the best images. Adaptive optics corrected retinal images are pre-processed to correct uneven illumination using different methods: subtracting or dividing by the average filtered image, homomorphic filtering and a wavelet based approach. These images are evaluated to measure the image quality using various parameters, including sharpness, variance, power spectrum kurtosis and contrast. We have carried out the registration in two stages: a coarse stage using cross-correlation, followed by fine registration using two approaches: parabolic interpolation on the peak of the cross-correlation and maximum-likelihood estimation. The angle of rotation of the images is measured using a combination of peak tracking and Procrustes transformation. We have found that a wavelet approach (Daubechies 4 wavelet at 6th level decomposition) provides good illumination correction, with clear improvement in image sharpness and contrast. The assessment of image quality using a 'Designer metric' works well when compared to visual evaluation, although it is highly correlated with other metrics. In image registration, sub-pixel translation measured using parabolic interpolation on the peak of the cross-correlation function and maximum-likelihood estimation are found to give very similar results (RMS difference 0.047 pixels). We have confirmed that correcting rotation of the images provides a significant improvement, especially at the edges of the image. We observed that selecting the better quality frames (e.g. the best 75% of images) for image registration gives improved resolution, at the expense of poorer signal-to-noise. The sharpness map of the registered and de-rotated images shows increased
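
    The sub-pixel refinement step lends itself to a compact illustration. The sketch below, a minimal numpy version under our own simplifications (FFT-based cross-correlation, per-axis three-point parabola fit on the correlation peak), recovers the translation between two frames; it is not the authors' implementation.

    ```python
    import numpy as np

    def subpixel_shift(ref, img):
        """Estimate the (row, col) translation of img relative to ref: coarse
        integer shift from the peak of the FFT-based cross-correlation,
        refined per axis by fitting a parabola through the peak and its two
        neighbours (vertex offset = (cm - cp) / (2 * (cm - 2*c0 + cp)))."""
        corr = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        shift = []
        for axis in range(2):
            step = np.eye(2, dtype=int)[axis]
            c0 = corr[peak]
            cm = corr[tuple((np.array(peak) - step) % corr.shape)]
            cp = corr[tuple((np.array(peak) + step) % corr.shape)]
            denom = cm - 2 * c0 + cp
            frac = 0.0 if denom == 0 else (cm - cp) / (2 * denom)
            s = peak[axis] + frac
            if s > corr.shape[axis] / 2:   # unwrap the circular shift
                s -= corr.shape[axis]
            shift.append(s)
        return tuple(shift)

    rng = np.random.default_rng(0)
    ref = rng.normal(size=(64, 64))
    img = np.roll(ref, (3, -5), axis=(0, 1))  # known test displacement
    print(subpixel_shift(ref, img))           # approximately (3.0, -5.0)
    ```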

  12. Japan Meteorological Agency/Meteorological Research Institute-Coupled Prediction System version 2 (JMA/MRI-CPS2): atmosphere-land-ocean-sea ice coupled prediction system for operational seasonal forecasting

    Science.gov (United States)

    Takaya, Yuhei; Hirahara, Shoji; Yasuda, Tamaki; Matsueda, Satoko; Toyoda, Takahiro; Fujii, Yosuke; Sugimoto, Hiroyuki; Matsukawa, Chihiro; Ishikawa, Ichiro; Mori, Hirotoshi; Nagasawa, Ryoji; Kubo, Yutaro; Adachi, Noriyuki; Yamanaka, Goro; Kuragano, Tsurane; Shimpo, Akihiko; Maeda, Shuhei; Ose, Tomoaki

    2018-02-01

    This paper describes the Japan Meteorological Agency/Meteorological Research Institute-Coupled Prediction System version 2 (JMA/MRI-CPS2), which was put into operation in June 2015 for the purpose of performing seasonal predictions. JMA/MRI-CPS2 has various upgrades from its predecessor, JMA/MRI-CPS1, including improved resolution and physics in its atmospheric and oceanic components, introduction of an interactive sea-ice model and realistic initialization of its land component. Verification of extensive re-forecasts covering a 30-year period (1981-2010) demonstrates that JMA/MRI-CPS2 possesses improved seasonal predictive skills for both atmospheric and oceanic interannual variability as well as key coupled variability such as the El Niño-Southern Oscillation (ENSO). For ENSO prediction, the new system better represents the forecast uncertainty and transition/duration of ENSO phases. Our analysis suggests that the enhanced predictive skills are attributable to incremental improvements resulting from all of the changes, as is apparent in the beneficial effects of sea-ice coupling and land initialization on 2-m temperature predictions. JMA/MRI-CPS2 is capable of reasonably representing the seasonal cycle and secular trends of sea ice. The sea-ice coupling remarkably enhances the predictive capability for the Arctic 2-m temperature, indicating the importance of this factor, particularly for seasonal predictions in the Arctic region.

  13. Tools and Databases of the KOMICS Web Portal for Preprocessing, Mining, and Dissemination of Metabolomics Data

    Directory of Open Access Journals (Sweden)

    Nozomu Sakurai

    2014-01-01

    Full Text Available A metabolome—the collection of comprehensive quantitative data on metabolites in an organism—has been increasingly utilized for applications such as data-intensive systems biology, disease diagnostics, biomarker discovery, and assessment of food quality. A considerable number of tools and databases have been developed to date for the analysis of data generated by various combinations of chromatography and mass spectrometry. We report here a web portal named KOMICS (The Kazusa Metabolomics Portal), where the tools and databases that we developed are available for free to academic users. KOMICS includes the tools and databases for preprocessing, mining, visualization, and publication of metabolomics data. Improvements in the annotation of unknown metabolites and dissemination of comprehensive metabolomic data are the primary aims behind the development of this portal. For this purpose, PowerGet and FragmentAlign include a manual curation function for the results of metabolite feature alignments. A metadata-specific wiki-based database, Metabolonote, functions as a hub of web resources related to the submitters' work. This feature is expected to increase citation of the submitters' work, thereby promoting data publication. As an example of the practical use of KOMICS, a workflow for a study on Jatropha curcas is presented. The tools and databases available at KOMICS should contribute to enhanced production, interpretation, and utilization of metabolomic Big Data.

  14. A new approach to pre-processing digital image for wavelet-based watermark

    Science.gov (United States)

    Agreste, Santa; Andaloro, Guido

    2008-11-01

    The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio and text. It is therefore strategic to identify and develop stable, computationally cheap methods and numerical algorithms that allow us to address these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that resizes the original image to dimensions suitable for the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked image according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the method to be resistant against geometric, filtering, and StirMark attacks with a low rate of false alarm.
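
    A much-simplified sketch of wavelet-domain watermarking with the PyWavelets package: a key-seeded pseudo-random pattern is added to level-2 detail coefficients and later detected by correlation. The wavelet, the strength value, and the plain correlation detector are our assumptions; the paper's algorithm additionally weights the watermark by image features and uses a Neyman-Pearson detection criterion.

    ```python
    import numpy as np
    import pywt

    def embed_watermark(img, key=42, alpha=25.0):
        """Add a key-seeded pseudo-random pattern to the level-2 horizontal
        detail band of a 2-level Haar DWT. alpha controls strength and is
        chosen here only for illustration."""
        coeffs = pywt.wavedec2(img.astype(float), "haar", level=2)
        cH, cV, cD = coeffs[1]                     # level-2 details
        w = np.random.default_rng(key).standard_normal(cH.shape)
        coeffs[1] = (cH + alpha * w, cV, cD)
        return pywt.waverec2(coeffs, "haar"), w

    def detect_watermark(img, w, alpha=25.0):
        """Correlate the detail band against the known pattern; the output is
        near 1 when the watermark is present and near 0 otherwise."""
        cH = pywt.wavedec2(img.astype(float), "haar", level=2)[1][0]
        return float(np.sum(cH * w) / (alpha * np.sum(w * w)))

    img = np.random.default_rng(0).uniform(0, 255, size=(256, 256))
    marked, w = embed_watermark(img)
    print(detect_watermark(marked, w))   # close to 1
    print(detect_watermark(img, w))      # close to 0
    ```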

  15. An Advanced Pre-Processing Pipeline to Improve Automated Photogrammetric Reconstructions of Architectural Scenes

    Directory of Open Access Journals (Sweden)

    Marco Gaiani

    2016-02-01

    Full Text Available Automated image-based 3D reconstruction methods are increasingly flooding our 3D modeling applications. Fully automated solutions give the impression that from a sample of randomly acquired images we can derive quite impressive visual 3D models. Although the level of automation is reaching very high standards, image quality is a fundamental prerequisite for producing successful and photo-realistic 3D products, in particular when dealing with large datasets of images. This article presents an efficient pipeline based on color enhancement, image denoising, color-to-gray conversion and image content enrichment. The pipeline stems from an analysis of various state-of-the-art algorithms and aims to adjust the most promising methods, giving solutions to typical failure causes. The evaluation shows how effective image pre-processing that considers the entire image dataset can improve the automated orientation procedure and dense 3D point cloud reconstruction, even in the case of poor texture scenarios.

  16. A comparative analysis of pre-processing techniques in colour retinal images

    International Nuclear Information System (INIS)

    Salvatelli, A; Bizai, G; Barbosa, G; Drozdowicz, B; Delrieux, C

    2007-01-01

    Diabetic retinopathy (DR) is a chronic disease of the ocular retina, which most of the time is only discovered when the disease is at an advanced stage and most of the damage is irreversible. For that reason, early diagnosis is paramount for avoiding the most severe consequences of the DR, of which complete blindness is not uncommon. Unsupervised or supervised image processing of retinal images emerges as a feasible tool for this diagnosis. The preprocessing stages are the key for any further assessment, since these images exhibit several defects, including non uniform illumination, sampling noise, uneven contrast due to pigmentation loss during sampling, and many others. Any feasible diagnosis system should work with images where these defects were compensated. In this work we analyze and test several correction techniques. Non uniform illumination is compensated using morphology and homomorphic filtering; uneven contrast is compensated using morphology and local enhancement. We tested our processing stages using Fuzzy C-Means, and local Hurst (self correlation) coefficient for unsupervised segmentation of the abnormal blood vessels. The results over a standard set of DR images are more than promising.

  17. Voice preprocessing system incorporating a real-time spectrum analyzer with programmable switched-capacitor filters

    Science.gov (United States)

    Knapp, G.

    1984-01-01

    As part of a speaker verification program for BISS (Base Installation Security System), a test system is being designed with a flexible preprocessing system for the evaluation of voice spectrum/verification algorithm related problems. The main part of this report covers the design, construction, and testing of a voice analyzer with 16 integrating real-time frequency channels ranging from 300 Hz to 3 kHz. The bandpass filter response of each channel is programmable by NMOS switched capacitor quad filter arrays. Presently, the accuracy of these units is limited to a moderate precision by the finite steps of programming. However, repeatability of characteristics between filter units and sections seems to be excellent for the implemented fourth-order Butterworth bandpass responses. We obtained a 0.1 dB linearity error of signal detection and measured a signal-to-noise ratio of approximately 70 dB. The preprocessing system discussed includes preemphasis filter design, gain normalizer design, and data acquisition system design as well as test results.

  18. Robust preprocessing for stimulus-based functional MRI of the moving fetus.

    Science.gov (United States)

    You, Wonsang; Evangelou, Iordanis E; Zun, Zungho; Andescavage, Nickie; Limperopoulos, Catherine

    2016-04-01

    Fetal motion manifests as signal degradation and image artifact in the acquired time series of blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) studies. We present a robust preprocessing pipeline to specifically address fetal and placental motion-induced artifacts in stimulus-based fMRI with slowly cycled block design in the living fetus. In the proposed pipeline, motion correction is optimized to the experimental paradigm, and it is performed separately in each phase as well as in each region of interest (ROI), recognizing that each phase and organ experiences different types of motion. To obtain the averaged BOLD signals for each ROI, both misaligned volumes and noisy voxels are automatically detected and excluded, and the missing data are then imputed by statistical estimation based on local polynomial smoothing. Our experimental results demonstrate that the proposed pipeline was effective in mitigating the motion-induced artifacts in stimulus-based fMRI data of the fetal brain and placenta.

  19. The PREP Pipeline: Standardized preprocessing for large-scale EEG analysis

    Directory of Open Access Journals (Sweden)

    Nima Bigdely-Shamlo

    2015-06-01

    Full Text Available The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode/.
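
    A toy numpy contrast between ordinary average referencing and a robust variant in the spirit of PREP; the one-pass noisy-channel detection and the z-score threshold are simplifications of PREP's iterative detection and interpolation scheme.

    ```python
    import numpy as np

    def average_reference(eeg):
        """Ordinary average reference: subtract the channel mean at each
        sample. A single noisy channel leaks into every referenced channel."""
        return eeg - eeg.mean(axis=0, keepdims=True)

    def robust_reference(eeg, z_thresh=5.0):
        """Detect noisy channels via a robust z-score of channel amplitude
        (median absolute value, MAD-scaled), then reference against the mean
        of the remaining good channels only."""
        amp = np.median(np.abs(eeg), axis=1)
        mad = np.median(np.abs(amp - np.median(amp)))
        z = (amp - np.median(amp)) / (1.4826 * mad)
        good = np.abs(z) < z_thresh
        return eeg - eeg[good].mean(axis=0, keepdims=True), good

    rng = np.random.default_rng(0)
    eeg = rng.normal(0, 10, size=(32, 5000))   # 32 channels, microvolts
    eeg[7] += rng.normal(0, 400, size=5000)    # one very noisy channel
    referenced, good = robust_reference(eeg)
    print("channels kept for the reference:", good.sum())  # 31 of 32
    ```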

  20. Improving the performance of streamflow forecasting model using data-preprocessing technique in Dungun River Basin

    Science.gov (United States)

    Khai Tiu, Ervin Shan; Huang, Yuk Feng; Ling, Lloyd

    2018-03-01

    An accurate streamflow forecasting model is important for the development of flood mitigation plans and to ensure sustainable development of a river basin. This study adopted the Variational Mode Decomposition (VMD) data-preprocessing technique to process and denoise the rainfall data before feeding it into the Support Vector Machine (SVM) streamflow forecasting model, in order to improve the performance of the selected model. Rainfall data and river water level data for the period 1996-2016 were used for this purpose. Homogeneity tests (Standard Normal Homogeneity Test, the Buishand Range Test, the Pettitt Test and the Von Neumann Ratio Test) and normality tests (Shapiro-Wilk Test, Anderson-Darling Test, Lilliefors Test and Jarque-Bera Test) were carried out on the rainfall series. Homogeneous and non-normally distributed data were found in all the stations, respectively. From the recorded rainfall data, it was observed that the Dungun River Basin receives higher monthly rainfall from November to February, during the Northeast Monsoon. Thus, the monthly and seasonal rainfall series of this monsoon are the main focus of this research, as floods usually happen during the Northeast Monsoon period. The predicted water levels from the SVM model were assessed against the observed water levels using non-parametric statistical tests (Biased Method, Kendall's Tau B Test and Spearman's Rho Test).

  1. A data preprocessing strategy for metabolomics to reduce the mask effect in data analysis

    Directory of Open Access Journals (Sweden)

    Jun Yang

    2015-02-01

    Full Text Available Metabolomics is a booming research field. Its success highly relies on the discovery of differential metabolites by comparing different data sets (for example, patients vs. controls). One of the challenges is that differences in the low-abundance metabolites between groups are often masked by the high variation of the abundant metabolites. In order to address this challenge, a novel data preprocessing strategy consisting of 3 steps was proposed in this study. In step 1, a 'modified 80%' rule was used to reduce the effect of missing values; in step 2, unit-variance and Pareto scaling methods were used to reduce the mask effect from the abundant metabolites; in step 3, in order to fix the adverse effect of scaling, stability information of the variables, deduced from the intensity information and the class information, was used to assign suitable weights to the variables. When applied to an LC/MS-based metabolomics dataset from a chronic hepatitis B patient study and two simulated datasets, the mask effect was found to be partially eliminated and several new low-abundance differential metabolites were rescued.
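
    A small numpy sketch of the scaling in step 2: unit-variance scaling divides each metabolite by its standard deviation, Pareto scaling by the square root of it, which shrinks the dominance of abundant, high-variance metabolites less aggressively. The weighting of step 3 is omitted here.

    ```python
    import numpy as np

    def scale(X, method="pareto"):
        """Column-wise scaling of a samples-x-metabolites matrix: mean-center,
        then divide by sd ("uv") or by sqrt(sd) ("pareto")."""
        mu = X.mean(axis=0)
        sd = X.std(axis=0, ddof=1)
        denom = sd if method == "uv" else np.sqrt(sd)
        return (X - mu) / np.where(denom == 0, 1, denom)

    rng = np.random.default_rng(0)
    abundant = rng.normal(1e6, 1e5, size=(20, 3))  # high-abundance metabolites
    low = rng.normal(1e3, 2e2, size=(20, 5))       # low-abundance metabolites
    X = np.hstack([abundant, low])
    print(scale(X, "uv").std(axis=0, ddof=1))      # all ~1: mask effect reduced
    ```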

  2. An Application for Data Preprocessing and Models Extractions in Web Usage Mining

    Directory of Open Access Journals (Sweden)

    Claudia Elena DINUCA

    2011-11-01

    Full Text Available Web servers worldwide generate a vast amount of information on web users' browsing activities. Several researchers have studied these so-called clickstream or web access log data to better understand and characterize web users. The goal of this application is to analyze user behaviour by mining enriched web access log data. With the continued growth and proliferation of e-commerce, Web services, and Web-based information systems, the volumes of clickstream and user data collected by Web-based organizations in their daily operations have reached astronomical proportions. This information can be exploited in various ways, such as enhancing the effectiveness of websites or developing directed web marketing campaigns. The discovered patterns are usually represented as collections of pages, objects, or resources that are frequently accessed by groups of users with common needs or interests. In this paper we focus on how the application for data preprocessing and for extracting different data models from web log data was implemented, using association rules as a data mining technique to extract potentially useful knowledge from web usage data. We find different navigation-pattern data models by analysing the log files of the website. I implemented the application in Java using the NetBeans IDE. For exemplification, I used the log file data from a commercial web site, www.nice-layouts.com.
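
    The paper's application is written in Java; as a language-agnostic illustration of a typical web-log preprocessing step, the Python sketch below reconstructs sessions from raw log entries. The 30-minute inactivity timeout is a common convention, not a value taken from the paper.

    ```python
    from datetime import datetime, timedelta

    def sessionize(entries, timeout=timedelta(minutes=30)):
        """Group (visitor, timestamp, url) log entries into sessions: same
        visitor, with gaps between consecutive requests below the timeout."""
        sessions = {}
        for visitor, ts, url in sorted(entries, key=lambda e: (e[0], e[1])):
            visits = sessions.setdefault(visitor, [[]])
            if visits[-1] and ts - visits[-1][-1][0] > timeout:
                visits.append([])            # inactivity gap: new session
            visits[-1].append((ts, url))
        return sessions

    log = [
        ("10.0.0.1", datetime(2011, 11, 1, 9, 0), "/index.html"),
        ("10.0.0.1", datetime(2011, 11, 1, 9, 5), "/products.html"),
        ("10.0.0.1", datetime(2011, 11, 1, 11, 0), "/index.html"),  # new session
        ("10.0.0.2", datetime(2011, 11, 1, 9, 2), "/index.html"),
    ]
    for visitor, visits in sessionize(log).items():
        print(visitor, [len(s) for s in visits])  # 10.0.0.1 -> [2, 1]
    ```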

  3. A comparative analysis of pre-processing techniques in colour retinal images

    Energy Technology Data Exchange (ETDEWEB)

    Salvatelli, A [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Bizai, G [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Barbosa, G [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Drozdowicz, B [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Delrieux, C [Electric and Computing Engineering Department, Universidad Nacional del Sur, Alem 1253, BahIa Blanca, (Partially funded by SECyT-UNS) (Argentina)], E-mail: claudio@acm.org

    2007-11-15

    Diabetic retinopathy (DR) is a chronic disease of the ocular retina, which most of the time is only discovered when the disease is at an advanced stage and most of the damage is irreversible. For that reason, early diagnosis is paramount for avoiding the most severe consequences of the DR, of which complete blindness is not uncommon. Unsupervised or supervised image processing of retinal images emerges as a feasible tool for this diagnosis. The preprocessing stages are the key for any further assessment, since these images exhibit several defects, including non uniform illumination, sampling noise, uneven contrast due to pigmentation loss during sampling, and many others. Any feasible diagnosis system should work with images where these defects were compensated. In this work we analyze and test several correction techniques. Non uniform illumination is compensated using morphology and homomorphic filtering; uneven contrast is compensated using morphology and local enhancement. We tested our processing stages using Fuzzy C-Means, and local Hurst (self correlation) coefficient for unsupervised segmentation of the abnormal blood vessels. The results over a standard set of DR images are more than promising.

  4. Tools and databases of the KOMICS web portal for preprocessing, mining, and dissemination of metabolomics data.

    Science.gov (United States)

    Sakurai, Nozomu; Ara, Takeshi; Enomoto, Mitsuo; Motegi, Takeshi; Morishita, Yoshihiko; Kurabayashi, Atsushi; Iijima, Yoko; Ogata, Yoshiyuki; Nakajima, Daisuke; Suzuki, Hideyuki; Shibata, Daisuke

    2014-01-01

    A metabolome--the collection of comprehensive quantitative data on metabolites in an organism--has been increasingly utilized for applications such as data-intensive systems biology, disease diagnostics, biomarker discovery, and assessment of food quality. A considerable number of tools and databases have been developed to date for the analysis of data generated by various combinations of chromatography and mass spectrometry. We report here a web portal named KOMICS (The Kazusa Metabolomics Portal), where the tools and databases that we developed are available for free to academic users. KOMICS includes the tools and databases for preprocessing, mining, visualization, and publication of metabolomics data. Improvements in the annotation of unknown metabolites and dissemination of comprehensive metabolomic data are the primary aims behind the development of this portal. For this purpose, PowerGet and FragmentAlign include a manual curation function for the results of metabolite feature alignments. A metadata-specific wiki-based database, Metabolonote, functions as a hub of web resources related to the submitters' work. This feature is expected to increase citation of the submitters' work, thereby promoting data publication. As an example of the practical use of KOMICS, a workflow for a study on Jatropha curcas is presented. The tools and databases available at KOMICS should contribute to enhanced production, interpretation, and utilization of metabolomic Big Data.

  5. The PREP pipeline: standardized preprocessing for large-scale EEG analysis.

    Science.gov (United States)

    Bigdely-Shamlo, Nima; Mullen, Tim; Kothe, Christian; Su, Kyung-Min; Robbins, Kay A

    2015-01-01

    The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode.

  6. Constant time distance queries in planar unweighted graphs with subquadratic preprocessing time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, C.

    2013-01-01

    Let G be an n-vertex planar, undirected, and unweighted graph. It was stated as open problems whether the Wiener index, defined as the sum of all-pairs shortest path distances, and the diameter of G can be computed in o(n²) time. We show that both problems can be solved in O(n² log log n / log n) time with O(n) space. The techniques that we apply allow us to build, within the same time bound, an oracle for exact distance queries in G. More generally, for any parameter S ∈ [(log n / log log n)², n^{2/5}], distance queries can be answered in O(√S · log S / log n) time per query with O(n²/√S) preprocessing time and space requirement. With respect to running time, this is better than previous algorithms when log S = o(log n). All algorithms have linear space requirement. Our results generalize to a larger class of graphs, including those with a fixed excluded minor.
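
    For readers unfamiliar with the quantity being accelerated, here is a small self-contained sketch (our own, not from the paper) that computes the Wiener index of an unweighted graph by one BFS per vertex; this is the straightforward computation that the o(n²) result improves upon for planar graphs.

    ```python
    from collections import deque

    def wiener_index(adj):
        """Sum of all-pairs shortest path distances, via one BFS per vertex.
        adj maps each vertex to the list of its neighbours."""
        total = 0
        for s in adj:
            dist = {s: 0}
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            total += sum(dist.values())
        return total // 2   # each unordered pair was counted twice

    # 4-cycle: six pairs with distances 1, 1, 1, 1, 2, 2 -> Wiener index 8.
    cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    print(wiener_index(cycle4))
    ```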

  7. Fast data preprocessing for chromatographic fingerprints of tomato cell wall polysaccharides using chemometric methods.

    Science.gov (United States)

    Quéméner, Bernard; Bertrand, Dominique; Marty, Isabelle; Causse, Mathilde; Lahaye, Marc

    2007-02-02

    The variability in the chemistry of cell wall polysaccharides in the pericarp tissue of red-ripe tomato fruit (Solanum lycopersicon Mill.) was characterized by chemical methods and by enzymatic degradations coupled to high performance anion exchange chromatography (HPAEC) and mass spectrometry analysis. The large-fruited line Levovil (LEV), carrying introgressed chromosome fragments from the cherry tomato line Cervil (CER) on chromosome 4 (LC4), chromosome 9 (LC9), or chromosomes 1, 2, 4 and 9 (LCX), and containing quantitative trait loci (QTLs) for texture traits, was studied. In order to differentiate cell wall polysaccharide modifications in the tomato fruit collection by multivariate analysis, chromatograms were corrected for baseline drift and for shifts in component elution time using an approach derived from image analysis and mathematical morphology. The baseline was first corrected using a "moving window" approach, while the peak-matching method developed was based upon the location of peaks as local maxima within a window of a definite size. The fast chromatographic data preprocessing proposed was a prerequisite for the different chemometric treatments applied herein, such as analysis of variance and principal component analysis. Applied to the tomato collection, the combined enzymatic degradations and HPAEC analyses revealed that the firm LCX and CER genotypes showed a higher proportion of glucuronoxylans and pectic arabinan side chains, while the mealy LC9 genotype showed the highest content of pectic galactan side chains. QTLs on tomato chromosomes 1, 2, 4 and 9 contain important genes controlling glucuronoxylan and pectic neutral side chain biosynthesis and/or metabolism.
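
    A compact sketch of the two pre-processing steps named above: a moving-window (morphological opening) baseline estimate subtracted from the trace, followed by peak matching as local maxima within a window of a definite size. The window sizes and the peak-height cutoff are assumptions to be tuned per chromatographic method.

    ```python
    import numpy as np
    from scipy.ndimage import minimum_filter1d, maximum_filter1d
    from scipy.signal import find_peaks

    def preprocess_chromatogram(y, baseline_window=101, peak_window=9):
        """Baseline-correct a trace with a 1D morphological opening
        (erosion = moving minimum, then dilation = moving maximum), and
        locate peaks as local maxima separated by at least peak_window."""
        baseline = maximum_filter1d(minimum_filter1d(y, baseline_window),
                                    baseline_window)
        corrected = y - baseline
        peaks, _ = find_peaks(corrected, height=0.1, distance=peak_window)
        return corrected, peaks

    t = np.linspace(0, 30, 3000)                       # 30 min, 100 pts/min
    drift = 0.02 * t                                   # slow baseline drift
    signal = sum(np.exp(-0.5 * ((t - c) / 0.15) ** 2) for c in (5, 12, 21))
    corrected, peaks = preprocess_chromatogram(signal + drift)
    print("matched peaks at", np.round(t[peaks], 1), "min")  # ~[5, 12, 21]
    ```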

  8. Min st-cut oracle for planar graphs with near-linear preprocessing time

    DEFF Research Database (Denmark)

    Borradaile, Glencora; Sankowski, Piotr; Wulff-Nilsen, Christian

    2010-01-01

    For an undirected n-vertex planar graph G with non-negative edge-weights, we consider the following type of query: given two vertices s and t in G, what is the weight of a min st-cut in G? We show how to answer such queries in constant time with O(n log⁵ n) preprocessing time and O(n log n) space. We use a Gomory-Hu tree to represent all the pairwise min st-cuts implicitly. Previously, no subquadratic time algorithm was known for this problem. Our oracle can be extended to report the min st-cuts in time proportional to their size. Since all-pairs min st-cut and the minimum cycle basis are dual problems in planar graphs, we also obtain an implicit representation of a minimum cycle basis in O(n log⁵ n) time and O(n log n) space, and an explicit representation with additional O(C) time and space, where C is the size of the basis. To obtain our results, we require that shortest paths be unique...

  9. Preliminary Design of Monitoring and Control Subsystem for GNSS Ground Station

    Directory of Open Access Journals (Sweden)

    Seongkyun Jeong

    2008-06-01

    GNSS (Global Navigation Satellite System) Ground Stations monitor navigation satellite signals, analyze navigation results, and upload correction information to the satellites. The GNSS Ground Station is considered a main building block of the GNSS infrastructure and is applied in various fields. ETRI (Electronics and Telecommunications Research Institute) is developing the Monitoring and Control subsystem, which is a subsystem of the GNSS Ground Station. The Monitoring and Control subsystem acquires GPS and Galileo satellite signals and provides signal monitoring data to the GNSS control center. In this paper, the configurations of the GNSS Ground Station and the Monitoring and Control subsystem are introduced and the preliminary design of the Monitoring and Control subsystem is presented. The Monitoring and Control subsystem consists of a data acquisition module, a data formatting and archiving module, a data error correction module, a navigation solution determination module, an independent quality monitoring module, and a system operation and maintenance module. The design process uses the UML (Unified Modeling Language) method, a standard for developing software, and consists of use-case modeling, domain design, software structure design, and user interface structure design. The preliminary design of the Monitoring and Control subsystem enhances the operation capability of the GNSS Ground Station and serves as basic material for the detailed design of the subsystem.

  10. Subsystem for processing, storage and editing of the results of the chamber film scanning

    International Nuclear Information System (INIS)

    Balgansurehn, Ya.; Dirner, A.; Ivanov, V.G.

    1987-01-01

    A subsystem which is an element of the highly automated system for film data processing and is intended for handling scanning information is described. The subsystem consists of routines that allow the user to create, edit and print the file of scanning results, both in batch and in interactive mode, on the CDC-6500 computer

  11. Approaches and Tools Used to Teach the Computer Input/Output Subsystem: A Survey

    Science.gov (United States)

    Larraza-Mendiluze, Edurne; Garay-Vitoria, Nestor

    2015-01-01

    This paper surveys how the computer input/output (I/O) subsystem is taught in introductory undergraduate courses. It is important to study the educational process of the computer I/O subsystem because, in the curricula recommendations, it is considered a core topic in the area of knowledge of computer architecture and organization (CAO). It is…

  12. Development and testing of the data automation subsystem for the Mariner Mars 1971 spacecraft

    Science.gov (United States)

    1971-01-01

    The data automation subsystem, designed and built as part of the Mariner Mars 1971 program, sequences and controls the science instruments and formats all science data. A description of the subsystem with emphasis on major changes relative to Mariner Mars 1969 is presented. In addition, the complete test phase is described.

  13. THE INFLUENCE OF AGRIBUSINESS SUBSYSTEM ON BEEF CATTLE FATTENING FARMS PROFIT IN CENTRAL JAVA

    Directory of Open Access Journals (Sweden)

    E. Rianto

    2012-06-01

    This study aimed: (i) to describe the implementation of the agribusiness subsystems and agribusiness planning in beef cattle fattening; (ii) to calculate the profit of beef cattle farming; and (iii) to analyze the effect of agribusiness subsystem implementation and agribusiness planning on beef cattle fattening profit. The study was carried out using a survey method, with feedlot farmers as the elementary units. The sample was determined by the Purposive Quota Sampling Method and comprised 112 respondents spread across five regencies, namely Blora, Rembang, Grobogan, Wonogiri, and Boyolali. Data were collected from primary and secondary sources. The data analysis used quantitative descriptive and inferential statistical methods, including scoring, financial analysis, and multiple linear regression. The results showed that: (i) the implementation of the agribusiness subsystems (including the preproduction subsystem, marketing, and agribusiness support services) and agribusiness planning fell into the 'not so good' category, while the cattle farming subsystem was in the 'moderate' category; (ii) the average farming scale was 2.95 head of cattle per feedlot farmer, with a profit of IDR 1,044,719 per fattening period of 6.68 months (equivalent to IDR 156,395 per month); (iii) the agribusiness subsystems and agribusiness planning taken simultaneously had a significant impact on feedlot farmer profit, while partially only the preproduction subsystem and the agribusiness support services subsystem had a significant impact.

  14. THE INFLUENCE OF AGRIBUSINESS SUBSYSTEM ON BEEF CATTLE FATTENING FARM’S PROFIT IN CENTRAL JAVA

    Directory of Open Access Journals (Sweden)

    E. Prasetyo

    2014-10-01

    This study aimed: (i) to describe the implementation of the agribusiness subsystems and agribusiness planning in beef cattle fattening; (ii) to calculate the profit of beef cattle farming; and (iii) to analyze the effect of agribusiness subsystem implementation and agribusiness planning on beef cattle fattening profit. The study was carried out using a survey method, with feedlot farmers as the elementary units. The sample was determined by the Purposive Quota Sampling Method and comprised 112 respondents spread across five regencies, namely Blora, Rembang, Grobogan, Wonogiri, and Boyolali. Data were collected from primary and secondary sources. The data analysis used quantitative descriptive and inferential statistical methods, including scoring, financial analysis, and multiple linear regression. The results showed that: (i) the implementation of the agribusiness subsystems (including the preproduction subsystem, marketing, and agribusiness support services) and agribusiness planning fell into the 'not so good' category, while the cattle farming subsystem was in the 'moderate' category; (ii) the average farming scale was 2.95 head of cattle per feedlot farmer, with a profit of IDR 1,044,719 per fattening period of 6.68 months (equivalent to IDR 156,395 per month); (iii) the agribusiness subsystems and agribusiness planning taken simultaneously had a significant impact on feedlot farmer profit, while partially only the preproduction subsystem and the agribusiness support services subsystem had a significant impact.

  15. BEHAVE: fire behavior prediction and fuel modeling system-BURN Subsystem, part 1

    Science.gov (United States)

    Patricia L. Andrews

    1986-01-01

    Describes BURN Subsystem, Part 1, the operational fire behavior prediction subsystem of the BEHAVE fire behavior prediction and fuel modeling system. The manual covers operation of the computer program, assumptions of the mathematical models used in the calculations, and application of the predictions.

  16. [Financing, organization, costs and services performance of the Argentinean health sub-systems].

    Science.gov (United States)

    Yavich, Natalia; Báscolo, Ernesto Pablo; Haggerty, Jeannie

    2016-01-01

    To analyze the relationship of health system financing and services organization models with costs and health services performance in each of Rosario's health sub-systems. The financing and organization models were characterized using secondary data. Costs were calculated using the WHO/SHA methodology. Healthcare quality was measured by a household survey (n=822). Public subsystem: vertically integrated funding and primary healthcare as a leading strategy to provide services produced low costs and individual-oriented healthcare, but with weak accessibility and comprehensiveness. Private subsystem: contractual integration and weak regulatory and coordination mechanisms produced effects opposed to those of the public sub-system. Social security: contractual integration and strong regulatory and coordination mechanisms contributed to intermediate costs and overall high performance. The financing and services organization model of each subsystem had a strong and heterogeneous influence on costs and health services performance.

  17. Independent operation by subsystems: Strategic behavior for the Brazilian electricity sector

    International Nuclear Information System (INIS)

    Guido Tapia Carpio, Lucio; Olimpio Pereira, Amaro

    2006-01-01

    This article describes the competitive strategies of the subsystems in the Brazilian electricity sector. The objective is to present a model in which the operation of each subsystem is managed independently. As the subsystems correspond to the country's geographic regions, the adoption of this model creates conditions for each region to develop according to its own peculiarities. The decision-making process is described based on Game Theory. As such, the players or operators of each subsystem carry out their strategies based on the quantities produced, which results in a Nash-Cournot equilibrium. In this model, the importance of proper transmission line dimensioning is highlighted: it determines the level of competition among subsystems and allows for optimization of the whole system without requiring arrangements for managing congestion of the energy transportation grid. The model was programmed in FORTRAN, using IBM's Optimization Subroutine Library (OSL) package.
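
    For intuition on the Nash-Cournot equilibrium invoked here, a textbook two-player linear-demand example (illustrative only; the paper's model covers several subsystems with transmission constraints): with inverse demand P = a - b(q_1 + q_2) and constant marginal cost c, each operator's best response and the symmetric equilibrium are

      \[
      q_i^*(q_j) = \frac{a - c - b\,q_j}{2b}, \qquad q_1^* = q_2^* = \frac{a - c}{3b}.
      \]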

  18. Preliminary analysis of a membrane-based atmosphere-control subsystem

    Science.gov (United States)

    Mccray, Scott B.; Newbold, David D.; Ray, Rod; Ogle, Kathryn

    1993-01-01

    Controlled ecological life support systems will require subsystems for maintaining the concentrations of atmospheric gases within acceptable ranges in human habitat chambers and plant growth chambers. The goal of this work was to develop a membrane-based atmosphere control (MBAC) subsystem that allows the controlled exchange of atmospheric components (e.g., oxygen, carbon dioxide, and water vapor) between these chambers. The MBAC subsystem promises to offer a simple, non-energy-intensive method to separate, store and exchange atmospheric components, producing optimal concentrations of components in each chamber. In this paper, the results of a preliminary analysis of the MBAC subsystem for control of oxygen and nitrogen are presented. Additionally, the MBAC subsystem and its operation are described.

  19. Default Mode Network Subsystems are Differentially Disrupted in Posttraumatic Stress Disorder.

    Science.gov (United States)

    Miller, Danielle R; Hayes, Scott M; Hayes, Jasmeet P; Spielberg, Jeffrey M; Lafleche, Ginette; Verfaellie, Mieke

    2017-05-01

    Posttraumatic stress disorder (PTSD) is a psychiatric disorder characterized by debilitating re-experiencing, avoidance, and hyperarousal symptoms following trauma exposure. Recent evidence suggests that individuals with PTSD show disrupted functional connectivity in the default mode network, an intrinsic network that consists of a midline core, a medial temporal lobe (MTL) subsystem, and a dorsomedial prefrontal cortex (dMPFC) subsystem. The present study examined whether functional connectivity in these subsystems is differentially disrupted in PTSD. Sixty-nine returning war Veterans with PTSD and 44 trauma-exposed Veterans without PTSD underwent resting state functional MRI (rs-fMRI). To examine functional connectivity, seeds were placed in the core hubs of the default mode network, namely the posterior cingulate cortex (PCC) and anterior medial PFC (aMPFC), and in each subsystem. Compared to controls, individuals with PTSD had reduced functional connectivity between the PCC and the hippocampus, a region of the MTL subsystem. Groups did not differ in connectivity between the PCC and dMPFC subsystem or between the aMPFC and any region within either subsystem. In the PTSD group, connectivity between the PCC and hippocampus was negatively associated with avoidance/numbing symptoms. Examination of the MTL and dMPFC subsystems revealed reduced anticorrelation between the ventromedial PFC (vMPFC) seed of the MTL subsystem and the dorsal anterior cingulate cortex in the PTSD group. Our results suggest that selective alterations in functional connectivity in the MTL subsystem of the default mode network in PTSD may be an important factor in PTSD pathology and symptomatology.

  20. Advanced Space Suit Portable Life Support Subsystem Packaging Design

    Science.gov (United States)

    Howe, Robert; Diep, Chuong; Barnett, Bob; Thomas, Gretchen; Rouen, Michael; Kobus, Jack

    2006-01-01

    This paper discusses the Portable Life Support Subsystem (PLSS) packaging design work done by NASA and Hamilton Sundstrand in support of three future space missions: Lunar, Mars and zero-g. The goal is to seek ways to reduce the weight of PLSS packaging and, at the same time, develop a packaging scheme that would make PLSS technology changes less costly than the current packaging methods allow. This study builds on the results of NASA's in-house 1998 study, which resulted in the "Flex PLSS" concept. For this study the present EMU schematic (low earth orbit) was used so that the work team could concentrate on the packaging. The Flex PLSS packaging is required to protect, connect, and hold the PLSS and its components together internally and externally, while providing access to PLSS components internally for maintenance and for technology change without extensive redesign impact. The goal of this study was twofold: 1. bring the advanced space suit integrated Flex PLSS concept from its current state of development to a preliminary design level and build a proof-of-concept mockup of the proposed design; and 2. "design" a design process which accommodates both the initial Flex PLSS design and the package modifications required to accommodate new technology.

  1. Upgrade of ESO's FIERA CCD Controller and PULPO Subsystem

    Science.gov (United States)

    Reyes-Moreno, J.; Geimer, C.; Balestra, A.; Haddad, N.

    An overview of FIERA is presented with emphasis on its recent upgrade to PCI. The PCI board hosts two DSPs, one for real time control of the camera and another for on-the-fly processing of the incoming video data. In addition, the board is able to make DMA transfers, to synchronize to other boards alike, to be synchronized by a TIM bus and to control PULPO via RS232. The design is based on the IOP480 chip from PLX, for which we have developed a device driver for both Solaris and Linux. One computer is able to host more than one board and therefore can control an array of FIERA detector electronics. PULPO is a multifunctional subsystem widely used at ESO for the housekeeping of CCD cryostat heads and for shutter control. The upgrade of PULPO is based on an embedded PC running Linux. The upgraded PULPO is able to handle 29 temperature sensors, control 8 heaters and one shutter, read out one vacuum sensor and log any combination of parameters.

  2. The magnetic diagnostics subsystem of the LISA Technology Package

    Energy Technology Data Exchange (ETDEWEB)

    Diaz-Aguilo, M; Garcia-Berro, E [Departament de Fisica Aplicada, Universitat Politecnica de Catalunya, c/Esteve Terrades, 5, 08860 Castelldefels (Spain); Lobo, A; Mateos, N; Sanjuan, J, E-mail: marc.diaz.aguilo@fa.upc.ed [Institut d' Estudis Espacials de Catalunya, c/Gran Capita 2-4, Edif. Nexus 104, 08034 Barcelona (Spain)

    2010-05-01

    The Magnetic Diagnostics Subsystem of the LISA Technology Package (LTP) on board the LISA Pathfinder (LPF) spacecraft includes a set of four tri-axial fluxgate magnetometers, intended to measure with high precision the magnetic field at the positions they occupy. However, their readouts do not provide a direct measurement of the magnetic field at the positions of the test masses. Therefore, an interpolation method must be implemented to obtain this information. However, such interpolation process faces serious difficulties. Indeed, the size of the interpolation region is excessive for a linear interpolation to be reliable, and the number of magnetometer channels does not provide sufficient data to go beyond that poor approximation. Recent research points to a possible alternative to address the magnetic interpolation problem by means of neural network algorithms. The key point of this approach is the ability neural networks have to learn from suitable training data representing the magnetic field behaviour. Despite the large distance to the test masses and the insufficient magnetic readings, artificial neural networks are able to significantly reduce the estimation error to acceptable levels. The learning efficiency can be best improved by making use of data obtained from on-ground measurements prior to mission launch in all relevant satellite locations and under real operation conditions. Reliable information on that appears to be essential for a meaningful assessment of magnetic noise in the LTP.

  3. The magnetic diagnostics subsystem of the LISA Technology Package

    International Nuclear Information System (INIS)

    Diaz-Aguilo, M; Garcia-Berro, E; Lobo, A; Mateos, N; Sanjuan, J

    2010-01-01

    The Magnetic Diagnostics Subsystem of the LISA Technology Package (LTP) on board the LISA Pathfinder (LPF) spacecraft includes a set of four tri-axial fluxgate magnetometers, intended to measure with high precision the magnetic field at the positions they occupy. However, their readouts do not provide a direct measurement of the magnetic field at the positions of the test masses. Therefore, an interpolation method must be implemented to obtain this information. However, such interpolation process faces serious difficulties. Indeed, the size of the interpolation region is excessive for a linear interpolation to be reliable, and the number of magnetometer channels does not provide sufficient data to go beyond that poor approximation. Recent research points to a possible alternative to address the magnetic interpolation problem by means of neural network algorithms. The key point of this approach is the ability neural networks have to learn from suitable training data representing the magnetic field behaviour. Despite the large distance to the test masses and the insufficient magnetic readings, artificial neural networks are able to significantly reduce the estimation error to acceptable levels. The learning efficiency can be best improved by making use of data obtained from on-ground measurements prior to mission launch in all relevant satellite locations and under real operation conditions. Reliable information on that appears to be essential for a meaningful assessment of magnetic noise in the LTP.
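
    The two records above describe the same study; a minimal sketch of the interpolation idea in Python with scikit-learn follows. Everything in it is assumed for illustration: a point-dipole stand-in for the spacecraft field, four fixed magnetometer positions, and a small MLP mapping the 12 readings to the field at the test-mass position.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      MAGNETOMETERS = np.array([[1., 0, 0], [-1., 0, 0], [0, 1., 0], [0, -1., 0]])
      TEST_MASS = np.array([0., 0., 0.])

      def dipole_field(pos, src, m):
          """Field of a point dipole with moment m at source position src."""
          r = pos - src
          d = np.linalg.norm(r)
          return 3 * r * np.dot(m, r) / d**5 - m / d**3

      def sample():
          """One training example: random dipole -> (4 readings, field at test mass)."""
          direction = rng.normal(size=3)
          src = rng.uniform(2.0, 4.0) * direction / np.linalg.norm(direction)
          m = rng.normal(size=3)
          x = np.concatenate([dipole_field(p, src, m) for p in MAGNETOMETERS])
          y = dipole_field(TEST_MASS, src, m)
          return x, y

      X, Y = map(np.array, zip(*[sample() for _ in range(5000)]))
      net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
      net.fit(X, Y)          # learn readings -> field at the test-mass position
      x0, y0 = sample()
      print(net.predict([x0])[0], y0)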

  4. Separate valuation subsystems for delay and effort decision costs.

    Science.gov (United States)

    Prévost, Charlotte; Pessiglione, Mathias; Météreau, Elise; Cléry-Melin, Marie-Laure; Dreher, Jean-Claude

    2010-10-20

    Decision making consists of choosing among available options on the basis of a valuation of their potential costs and benefits. Most theoretical models of decision making in behavioral economics, psychology, and computer science propose that the desirability of outcomes expected from alternative options can be quantified by utility functions. These utility functions allow a decision maker to assign subjective values to each option under consideration by weighting the likely benefits and costs resulting from an action and to select the one with the highest subjective value. Here, we used model-based neuroimaging to test whether the human brain uses separate valuation systems for rewards (erotic stimuli) associated with different types of costs, namely, delay and effort. We show that humans devalue rewards associated with physical effort in a strikingly similar fashion to rewards associated with delays, and that a single computational model derived from economics theory can account for the behavior observed in both delay discounting and effort discounting. However, our neuroimaging data reveal that the human brain uses distinct valuation subsystems for different types of costs, reflecting in opposite fashion delayed rewards and future energetic expenses. The ventral striatum and the ventromedial prefrontal cortex represent the increasing subjective value of delayed rewards, whereas a distinct network, composed of the anterior cingulate cortex and the anterior insula, represents the decreasing value of the effortful option, coding the expected expense of energy. Together, these data demonstrate that the valuation processes underlying different types of costs can be fractionated at the cerebral level.
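
    The abstract reports that a single functional family fits both cost types; the canonical hyperbolic discounting form, with an analogous effort term, illustrates this (generic parameterization, not the authors' fitted model):

      \[
      SV_{\mathrm{delay}} = \frac{R}{1 + k_D\,D}, \qquad SV_{\mathrm{effort}} = \frac{R}{1 + k_E\,E},
      \]

    where R is the reward magnitude, D the delay, E the effort, and k_D, k_E are individual discount rates.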

  5. VERIFICATION OF THE SENTINEL-4 FOCAL PLANE SUBSYSTEM

    Directory of Open Access Journals (Sweden)

    C. Williges

    2017-05-01

    The Sentinel-4 payload is a multi-spectral camera system which is designed to monitor atmospheric conditions over Europe. The German Aerospace Center (DLR) in Berlin, Germany conducted the verification campaign of the Focal Plane Subsystem (FPS) on behalf of Airbus Defense and Space GmbH, Ottobrunn, Germany. The FPS consists, inter alia, of two Focal Plane Assemblies (FPAs), one for the UV-VIS spectral range (305 nm … 500 nm), the second for NIR (750 nm … 775 nm). In this publication, we will present in detail the opto-mechanical laboratory set-up of the verification campaign of the Sentinel-4 Qualification Model (QM), which will also be used for the upcoming Flight Model (FM) verification. The test campaign consists mainly of radiometric tests performed with an integrating sphere as a homogeneous light source. The FPAs have to be operated at 215 K ± 5 K, making it necessary to use a thermal vacuum chamber (TVC) for the tests. This publication focuses on the challenge of remotely illuminating both Sentinel-4 detectors as well as a reference detector homogeneously over a distance of approximately 1 m from outside the TVC. Furthermore, selected test analyses and results will be presented, showing that the Sentinel-4 FPS meets its specifications.

  6. Anatomy, histochemistry and immunohistochemistry of the olfactory subsystems in mice

    Directory of Open Access Journals (Sweden)

    Arthur William Barrios

    2014-07-01

    The four regions of the murine nasal cavity featuring olfactory neurons were studied anatomically and by labelling with lectins and relevant antibodies, with a view to establishing criteria for the identification of olfactory subsystems that are readily applicable to other mammals. In the main olfactory epithelium and the septal organ the olfactory sensory neurons (OSNs) are embedded in quasi-stratified columnar epithelium; vomeronasal OSNs are embedded in the epithelium lining the medial interior wall of the vomeronasal duct and do not make contact with the mucosa of the main nasal cavity; and in Grüneberg's ganglion a small isolated population of OSNs lies adjacent to, but not within, the epithelium. With the exception of Grüneberg's ganglion, all the tissues expressing olfactory marker protein (OMP) (the above four nasal territories, the vomeronasal and main olfactory nerves, and the main and accessory olfactory bulbs) are also labelled by Lycopersicum esculentum agglutinin, while Ulex europaeus agglutinin I labels all and only the tissues expressing Gi2 (the apical sensory neurons of the vomeronasal organ, their axons, and their glomerular destinations in the anterior accessory olfactory bulb). These staining patterns of UEA-I and LEA may facilitate the characterization of olfactory anatomy in other species. A 710-section atlas of the anatomy of the murine nasal cavity has been made available online.

  7. Quantum trajectory analysis of multimode subsystem-bath dynamics.

    Science.gov (United States)

    Wyatt, Robert E; Na, Kyungsun

    2002-01-01

    The dynamics of a swarm of quantum trajectories is investigated for systems involving the interaction of an active mode (the subsystem) with an M-mode harmonic reservoir (the bath). Equations of motion for the position, velocity, and action function for elements of the probability fluid are integrated in the Lagrangian (moving with the fluid) picture of quantum hydrodynamics. These fluid elements are coupled through the Bohm quantum potential and as a result evolve as a correlated ensemble. Wave function synthesis along the trajectories permits an exact description of the quantum dynamics for the evolving probability fluid. The approach is fully quantum mechanical and does not involve classical or semiclassical approximations. Computational results are presented for three systems involving the interaction of an active mode with M=1, 10, and 15 bath modes. These results include configuration space trajectory evolution, flux analysis of the evolving ensemble, wave function synthesis along trajectories, and energy partitioning along specific trajectories. These results demonstrate the feasibility of using a small number of quantum trajectories to obtain accurate quantum results on some types of open quantum systems that are not amenable to standard quantum approaches involving basis set expansions or Eulerian space-fixed grids.
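
    The Lagrangian equations of motion integrated for each fluid element take the standard quantum-hydrodynamic form (textbook Bohmian mechanics, stated here for orientation; the wavefunction is written in polar form \(\psi = R\,e^{iS/\hbar}\)):

      \[
      \frac{d\mathbf{x}}{dt} = \frac{\nabla S}{m}, \qquad
      m\frac{d\mathbf{v}}{dt} = -\nabla\big(V + Q\big), \qquad
      Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2} R}{R},
      \]

    where Q is the Bohm quantum potential that couples the fluid elements into a correlated ensemble.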

  8. Gauge subsystems, separability and robustness in autonomous quantum memories

    International Nuclear Information System (INIS)

    Sarma, Gopal; Mabuchi, Hideo

    2013-01-01

    Quantum error correction provides a fertile context for exploring the interplay of feedback control, microscopic physics and non-commutative probability. In this paper we deepen our understanding of this nexus through high-level analysis of a class of quantum memory models that we have previously proposed, which implement continuous-time versions of well-known stabilizer codes in autonomous nanophotonic circuits that require no external clocking or control. We demonstrate that the presence of the gauge subsystem in the nine-qubit Bacon–Shor code allows for a loss-tolerant layout of the corresponding nanophotonic circuit that substantially ameliorates the effects of optical propagation losses, argue that code separability allows for simplified restoration feedback protocols, and propose a modified fidelity metric for quantifying the performance of realistic quantum memories. Our treatment of these topics exploits the homogeneous modeling framework of autonomous nanophotonic circuits, but the key ideas translate to the traditional setting of discrete time, measurement-based quantum error correction.

  9. Solid Propulsion Systems, Subsystems, and Components Service Life Extension

    Science.gov (United States)

    Hundley, Nedra H.; Jones, Connor

    2011-01-01

    The service life extension of solid propulsion systems, subsystems, and components will be discussed based on the service life extension of the Space Transportation System Reusable Solid Rocket Motor (RSRM) and Booster Separation Motors (BSM). The RSRM is certified for an age life of five years. In the aftermath of the Columbia accident there were a number of motors that were approaching the end of their five year service life certification. The RSRM Project initiated an assessment to determine if the service life of these motors could be extended. With the advent of the Constellation Program, a flight test was proposed that would utilize one of the RSRMs which had been returned from the launch site due to the expiration of its five year service life certification and twelve surplus Chemical Systems Division BSMs which had exceeded their eight year service life. The RSRM age life tracking philosophy which establishes when the clock starts for age life tracking will be described. The role of the following activities in service life extension will be discussed: subscale testing, accelerated aging, dissecting full scale aged hardware, static testing full scale aged motors, data mining industry data, and using the fleet leader approach. The service life certification and extension of the BSMs will also be presented.

  10. Verification of the Sentinel-4 focal plane subsystem

    Science.gov (United States)

    Williges, Christian; Uhlig, Mathias; Hilbert, Stefan; Rossmann, Hannes; Buchwinkler, Kevin; Babben, Steffen; Sebastian, Ilse; Hohn, Rüdiger; Reulke, Ralf

    2017-09-01

    The Sentinel-4 payload is a multi-spectral camera system, designed to monitor atmospheric conditions over Europe from a geostationary orbit. The German Aerospace Center, DLR Berlin, conducted the verification campaign of the Focal Plane Subsystem (FPS) during the second half of 2016. The FPS consists of two Focal Plane Assemblies (FPAs), two Front End Electronics (FEEs), one Front End Support Electronic (FSE) and one Instrument Control Unit (ICU). The FPAs are designed for two spectral ranges: UV-VIS (305 nm - 500 nm) and NIR (750 nm - 775 nm). In this publication, we will present in detail the set-up of the verification campaign of the Sentinel-4 Qualification Model (QM). This set-up will also be used for the upcoming Flight Model (FM) verification, planned for early 2018. The FPAs have to be operated at 215 K +/- 5 K, making it necessary to use a thermal vacuum chamber (TVC) for the tests. The test campaign consists mainly of radiometric tests. This publication focuses on the challenge of remotely illuminating both Sentinel-4 detectors as well as a reference detector homogeneously over a distance of approximately 1 m from outside the TVC. Selected test analyses and results will be presented.

  11. lop-DWI: A Novel Scheme for Pre-Processing of Diffusion-Weighted Images in the Gradient Direction Domain.

    Science.gov (United States)

    Sepehrband, Farshid; Choupan, Jeiran; Caruyer, Emmanuel; Kurniawan, Nyoman D; Gal, Yaniv; Tieng, Quang M; McMahon, Katie L; Vegh, Viktor; Reutens, David C; Yang, Zhengyi

    2014-01-01

    We describe and evaluate a pre-processing method based on a periodic spiral sampling of diffusion-gradient directions for high angular resolution diffusion magnetic resonance imaging. Our pre-processing method incorporates prior knowledge about the acquired diffusion-weighted signal, facilitating noise reduction. Periodic spiral sampling of gradient direction encodings results in an acquired signal in each voxel that is pseudo-periodic with characteristics that allow separation of low-frequency signal from high-frequency noise. Consequently, it enhances local reconstruction of the orientation distribution function used to define fiber tracks in the brain. Denoising with periodic spiral sampling was tested using synthetic data and in vivo human brain images. The signal-to-noise ratio and the accuracy of local reconstruction of fiber tracks were significantly improved using our method.
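
    A minimal sketch of the separation step described above, under the assumption that spiral-ordered gradient encodings make the per-voxel signal pseudo-periodic: the low-frequency part is kept and the high-frequency noise is discarded with a spectral cut (signal model and cut-off are illustrative).

      import numpy as np

      rng = np.random.default_rng(1)
      n = 128                                             # diffusion-gradient directions
      k = np.arange(n)
      clean = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * k / n)   # pseudo-periodic DW signal
      noisy = clean + 0.2 * rng.normal(size=n)

      spec = np.fft.rfft(noisy)
      cutoff = 8                                          # keep only low frequencies
      spec[cutoff:] = 0.0
      denoised = np.fft.irfft(spec, n=n)

      print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())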

  12. PRACTICAL RECOMMENDATIONS OF DATA PREPROCESSING AND GEOSPATIAL MEASURES FOR OPTIMIZING THE NEUROLOGICAL AND OTHER PEDIATRIC EMERGENCIES MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Ionela MANIU

    2017-08-01

    Time management, optimal and timely determination of emergency severity, and optimal use of available human and material resources are crucial aspects of emergency services. The analysis and preprocessing of real data from the emergency services can be considered a starting point for achieving these optimizations. The benefit of this approach is that it exposes more useful structures to data modelling algorithms, which consequently reduces overfitting and improves accuracy. This paper aims to offer practical recommendations for data preprocessing measures, including feature selection and discretization of numeric attributes regarding age, duration of the case, season, period, week period (workday, weekend) and geospatial location of neurological and other pediatric emergencies. An analytical, retrospective study was conducted on a sample consisting of 933 pediatric cases from UPU-SMURD Sibiu, covering the period 01.01.2014 – 27.02.2017.
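
    A sketch of the recommended discretization steps using pandas; the bin edges, labels and season mapping are illustrative assumptions, not the study's published choices.

      import pandas as pd

      cases = pd.DataFrame({
          "age_years": [0.5, 3, 9, 15, 17],
          "month": [1, 4, 7, 10, 12],
          "weekday": [0, 5, 6, 2, 3],          # Monday = 0
      })

      # Discretize age into pediatric groups (edges are illustrative)
      cases["age_group"] = pd.cut(cases["age_years"],
                                  bins=[0, 1, 5, 12, 18],
                                  labels=["infant", "toddler", "child", "adolescent"])

      # Derive season and week period from the raw date attributes
      season = {12: "winter", 1: "winter", 2: "winter", 3: "spring", 4: "spring",
                5: "spring", 6: "summer", 7: "summer", 8: "summer",
                9: "autumn", 10: "autumn", 11: "autumn"}
      cases["season"] = cases["month"].map(season)
      cases["week_period"] = cases["weekday"].map(lambda d: "weekend" if d >= 5 else "workday")
      print(cases)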

  13. Novel low-power ultrasound digital preprocessing architecture for wireless display.

    Science.gov (United States)

    Levesque, Philippe; Sawan, Mohamad

    2010-03-01

    A complete hardware-based ultrasound preprocessing unit (PPU) is presented as an alternative to available power-hungry devices. Intended to expand ultrasonic applications, the proposed unit allows replacement of the cable of the ultrasonic probe by a wireless link to transfer data from the probe to a remote monitor. The digital back-end architecture of this PPU is fully pipelined, which permits sampling of ultrasonic signals at a frequency equal to the field-programmable gate array-based system clock, up to 100 MHz. Experimental results show that the proposed processing unit has excellent performance, an equivalent 53.15 Dhrystone 2.1 MIPS/MHz (DMIPS/MHz), compared with other software-based architectures that allow a maximum of 1.6 DMIPS/MHz. In addition, an adaptive subsampling method is proposed to operate the pixel compressor, which allows real-time image zooming and, by removing high-frequency noise, enhances the lateral and axial resolutions by 25% and 33%, respectively. Real-time images, acquired from a reference phantom, validated the feasibility of the proposed architecture. For a display rate of 15 frames per second and a 5-MHz single-element piezoelectric transducer, the proposed digital PPU requires a dynamic power of only 242 mW, which represents around 20% of the best available software-based system. Furthermore, composed of the ultrasound processor and the image interpolation unit, the digital processing core of the PPU presents good power-performance ratios of 26 DMIPS/mW and 43.9 DMIPS/mW at 20-MHz and 100-MHz sample frequencies, respectively.

  14. Joint Preprocesser-Based Detectors for One-Way and Two-Way Cooperative Communication Networks

    KAUST Repository

    Abuzaid, Abdulrahman I.

    2014-05-01

    Efficient receiver designs for cooperative communication networks are becoming increasingly important. In previous work, cooperative networks communicated with the use of L relays. As the receiver is constrained, channel shortening and reduced-rank techniques were employed to design the preprocessing matrix that reduces the length of the received vector from L to U. In the first part of the work, a receiver structure is proposed which combines our proposed threshold selection criteria with the joint iterative optimization (JIO) algorithm that is based on the mean square error (MSE). Our receiver assists in determining the optimal U. Furthermore, this receiver provides the freedom to choose U for each frame depending on the tolerable difference allowed for MSE. Our study and simulation results show that by choosing an appropriate threshold, it is possible to gain in terms of complexity savings while having no or minimal effect on the BER performance of the system. Furthermore, the effect of channel estimation on the performance of the cooperative system is investigated. In the second part of the work, a joint preprocessor-based detector for cooperative communication networks is proposed for one-way and two-way relaying. This joint preprocessor-based detector operates on the principles of minimizing the symbol error rate (SER) instead of minimizing MSE. For a realistic assessment, pilot symbols are used to estimate the channel. From our simulations, it can be observed that our proposed detector achieves the same SER performance as that of the maximum likelihood (ML) detector with all participating relays. Additionally, our detector outperforms selection combining (SC), channel shortening (CS) scheme and reduced-rank techniques when using the same U. Finally, our proposed scheme has the lowest computational complexity.

  15. Hardware Design and Implementation of a Wavelet De-Noising Procedure for Medical Signal Preprocessing

    Directory of Open Access Journals (Sweden)

    Szi-Wen Chen

    2015-10-01

    In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction for medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuit. We also proposed a novel adaptive thresholding scheme and incorporated it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction could be further validated in actual practice. Simulation experiment results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory performance for noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at 200 MHz.

  16. Hardware design and implementation of a wavelet de-noising procedure for medical signal preprocessing.

    Science.gov (United States)

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-10-16

    In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction for medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuit. We also proposed a novel adaptive thresholding scheme and incorporated it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction could be further validated in actual practice. Simulation experiment results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory performance for noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at 200 MHz.
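
    The two records above describe the same DWT/thresholding/IDWT chain. A software sketch of that chain with PyWavelets follows; the universal threshold used here is a common textbook choice, standing in for the authors' adaptive scheme.

      import numpy as np
      import pywt

      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 1024)
      ecg_like = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.02)
      noisy = ecg_like + 0.1 * rng.normal(size=t.size)

      coeffs = pywt.wavedec(noisy, "db4", level=5)          # DWT module
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate (MAD)
      thr = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
      coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
      denoised = pywt.waverec(coeffs, "db4")                # IDWT module

      print(np.abs(noisy - ecg_like).mean(), np.abs(denoised - ecg_like).mean())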

  17. Preprocessing of gravity gradients at the GOCE high-level processing facility

    Science.gov (United States)

    Bouman, Johannes; Rispens, Sietse; Gruber, Thomas; Koop, Radboud; Schrama, Ernst; Visser, Pieter; Tscherning, Carl Christian; Veicherts, Martin

    2009-07-01

    One of the products derived from the gravity field and steady-state ocean circulation explorer (GOCE) observations are the gravity gradients. These gravity gradients are provided in the gradiometer reference frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. To use these gravity gradients for application in Earth sciences and gravity field analysis, additional preprocessing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information, and error assessment. The temporal gravity gradient corrections consist of tidal and nontidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection, the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low-degree gravity field model as well as gravity gradient scale factors. Both methods allow gravity gradient scale factors to be estimated down to the 10^-3 level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10^-2 level with this method.
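
    A compact sketch of the outlier screening described above: subtract a local median (a robust high-pass filter against the 1/f error) and flag samples via the median absolute deviation. Window length, threshold and the synthetic trace are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import median_filter

      rng = np.random.default_rng(2)
      grad = np.cumsum(rng.normal(size=5000)) * 1e-3   # synthetic 1/f-like gradient trace
      grad[[500, 2500, 4000]] += 5.0                   # injected outliers

      resid = grad - median_filter(grad, size=101)     # local median = robust high-pass
      mad = np.median(np.abs(resid - np.median(resid)))
      outliers = np.abs(resid) > 5 * 1.4826 * mad      # 1.4826 scales MAD to sigma
      print(np.flatnonzero(outliers))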

  18. Impact of functional MRI data preprocessing pipeline on default-mode network detectability in patients with disorders of consciousness

    Directory of Open Access Journals (Sweden)

    Adrian Andronache

    2013-08-01

    An emerging application of resting-state functional MRI is the study of patients with disorders of consciousness (DoC), where the integrity of default-mode network (DMN) activity is associated with the clinical level of preservation of consciousness. Due to the inherent inability to follow verbal instructions, arousal induced by scanning noise, and postural pain, these patients tend to exhibit substantial levels of movement. This results in spurious, non-neural fluctuations of the blood-oxygen level-dependent (BOLD) signal, which impair the evaluation of residual functional connectivity. Here, the effect of data preprocessing choices on the detectability of the DMN was systematically evaluated in a representative cohort of 30 clinically and etiologically heterogeneous DoC patients and 33 healthy controls. Starting from a standard preprocessing pipeline, additional steps were gradually inserted, namely band-pass filtering, removal of co-variance with the movement vectors, removal of co-variance with the global brain parenchyma signal, rejection of realignment outlier volumes, and ventricle masking. Both independent-component analysis (ICA) and seed-based analysis (SBA) were performed, and DMN detectability was assessed quantitatively as well as visually. The results of the present study strongly show that the detection of DMN activity in the sub-optimal fMRI series acquired from DoC patients is contingent on the use of adequate filtering steps. ICA and SBA are affected differently but give convergent findings for high-grade preprocessing. We propose that future studies in this area should adopt the described preprocessing procedures as a minimum standard to reduce the probability of wrongly inferring that DMN activity is absent.
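
    One of the inserted steps, removal of co-variance with the movement vectors, amounts to regressing the realignment parameters out of every voxel time series. A bare numpy sketch under assumed array shapes (a stand-in, not the pipeline actually used in the study):

      import numpy as np

      rng = np.random.default_rng(3)
      n_vol, n_vox = 200, 1000
      motion = rng.normal(size=(n_vol, 6))           # 6 realignment parameters per volume
      bold = rng.normal(size=(n_vol, n_vox)) + motion @ rng.normal(size=(6, n_vox))

      X = np.column_stack([np.ones(n_vol), motion])  # design: intercept + motion vectors
      beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
      cleaned = bold - X @ beta                      # residuals = motion-regressed BOLD
      print(cleaned.shape)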

  19. Integrated fMRI Preprocessing Framework Using Extended Kalman Filter for Estimation of Slice-Wise Motion

    OpenAIRE

    Basile Pinsard; Arnaud Boutin; Julien Doyon; Habib Benali

    2018-01-01

    Functional MRI acquisition is sensitive to subjects' motion that cannot be fully constrained. Therefore, signal corrections have to be applied a posteriori in order to mitigate the complex interactions between changing tissue localization and magnetic fields, gradients and readouts. To circumvent the limitations of current preprocessing strategies, we developed an integrated method that corrects motion and spatial low-frequency intensity fluctuations at the level of each slice in order to better fit ...

  20. TargetSearch - a Bioconductor package for the efficient preprocessing of GC-MS metabolite profiling data

    Science.gov (United States)

    2009-01-01

    Background Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. Results We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. Conclusions TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data. PMID:20015393

  1. TargetSearch - a Bioconductor package for the efficient preprocessing of GC-MS metabolite profiling data

    Directory of Open Access Journals (Sweden)

    Lisec Jan

    2009-12-01

    Background Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. Results We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. Conclusions TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data.

  2. TargetSearch--a Bioconductor package for the efficient preprocessing of GC-MS metabolite profiling data.

    Science.gov (United States)

    Cuadros-Inostroza, Alvaro; Caldana, Camila; Redestig, Henning; Kusano, Miyako; Lisec, Jan; Peña-Cortés, Hugo; Willmitzer, Lothar; Hannah, Matthew A

    2009-12-16

    Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data.
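
    The three records above describe the same Bioconductor package. TargetSearch itself is an R package; the retention-time-index idea it builds on can be sketched generically in Python: anchor standards with known indices define a piecewise-linear map from observed retention times to indices, so metabolites are matched by index rather than raw time (a conceptual sketch, not the package's iterative algorithm).

      import numpy as np

      # Known retention-index standards (e.g., n-alkanes) and their observed times
      standard_ri = np.array([1000, 1200, 1400, 1600, 1800])
      standard_rt = np.array([4.9, 7.2, 9.6, 12.1, 14.4])   # minutes, in this run

      def to_ri(rt):
          """Piecewise-linear retention-time -> retention-index calibration."""
          return np.interp(rt, standard_rt, standard_ri)

      # A metabolite peak observed at 8.3 min is searched by index, not raw time,
      # making the lookup robust to run-to-run drift in retention times.
      print(to_ri(8.3))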

  3. THE EFFECT OF DECOMPOSITION METHOD AS DATA PREPROCESSING ON NEURAL NETWORKS MODEL FOR FORECASTING TREND AND SEASONAL TIME SERIES

    Directory of Open Access Journals (Sweden)

    Subanar Subanar

    2006-01-01

    Recently, one of the central topics for the neural networks (NN) community has been the issue of data preprocessing in the use of NN. In this paper, we investigate this topic, particularly the effect of the Decomposition method as data preprocessing and the use of NN for effectively modeling time series with both trend and seasonal patterns. The limited empirical studies on seasonal time series forecasting with neural networks show that some find neural networks able to model seasonality directly, so that prior deseasonalization is not necessary, while others conclude just the opposite. In this research, we study in particular the effectiveness of data preprocessing, including detrending and deseasonalization by applying the Decomposition method, on NN modeling and forecasting performance. We use two kinds of data: simulated and real. The simulated data feature multiplicative trend and seasonality patterns. The results are compared to those obtained from classical time series models. Our results show that a combination of detrending and deseasonalization by applying the Decomposition method is an effective data preprocessing step in the use of NN for forecasting trend and seasonal time series.
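
    A sketch of the preprocessing under study, using classical decomposition from statsmodels: trend and seasonal components are estimated and divided out of a multiplicative series before it would be handed to a neural network (the data are synthetic; the paper's simulation design is assumed only in outline).

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.seasonal import seasonal_decompose

      rng = np.random.default_rng(4)
      idx = pd.date_range("2000-01", periods=120, freq="MS")
      trend = np.linspace(100, 300, 120)
      seasonal = 1 + 0.3 * np.sin(2 * np.pi * np.arange(120) / 12)
      series = pd.Series(trend * seasonal * rng.lognormal(0, 0.02, 120), index=idx)

      dec = seasonal_decompose(series, model="multiplicative", period=12)
      preprocessed = series / (dec.trend * dec.seasonal)   # detrended + deseasonalized
      # NaNs at both ends come from the centered moving-average trend estimate
      print(preprocessed.dropna().head())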

  4. Improved Dynamic Modeling of the Cascade Distillation Subsystem and Integration with Models of Other Water Recovery Subsystems

    Science.gov (United States)

    Perry, Bruce; Anderson, Molly

    2015-01-01

    The Cascade Distillation Subsystem (CDS) is a rotary multistage distiller being developed to serve as the primary processor for wastewater recovery during long-duration space missions. The CDS could be integrated with a system similar to the International Space Station (ISS) Water Processor Assembly (WPA) to form a complete Water Recovery System (WRS) for future missions. Independent chemical process simulations with varying levels of detail have previously been developed using Aspen Custom Modeler (ACM) to aid in the analysis of the CDS and several WPA components. The existing CDS simulation could not model behavior during thermal startup and lacked detailed analysis of several key internal processes, including heat transfer between stages. The first part of this paper describes modifications to the ACM model of the CDS that improve its capabilities and the accuracy of its predictions. Notably, the modified version of the model can accurately predict behavior during thermal startup for both NaCl solution and pretreated urine feeds. The model is used to predict how changing operating parameters and design features of the CDS affects its performance, and conclusions from these predictions are discussed. The second part of this paper describes the integration of the modified CDS model and the existing WPA component models into a single WRS model. The integrated model is used to demonstrate the effects that changes to one component can have on the dynamic behavior of the system as a whole.

  5. Performance Comparison of Several Pre-Processing Methods in a Hand Gesture Recognition System based on Nearest Neighbor for Different Background Conditions

    Directory of Open Access Journals (Sweden)

    Iwan Setyawan

    2012-12-01

    This paper presents a performance analysis and comparison of several pre-processing methods used in a hand gesture recognition system. The pre-processing methods are based on combinations of several image processing operations, namely edge detection, low-pass filtering, histogram equalization, thresholding and desaturation. The hand gesture recognition system is designed to classify an input image into one of six possible classes. The input images are taken under various background conditions. Our experiments showed that the best result is achieved when the pre-processing method consists of only a desaturation operation, achieving a classification accuracy of up to 83.15%.
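
    A toy rendition of the winning configuration, desaturation followed by a nearest-neighbor classifier, in plain numpy; the images and the six classes are synthetic placeholders, not the paper's data.

      import numpy as np

      rng = np.random.default_rng(5)

      def desaturate(img):
          """Pre-processing: average the RGB channels into one intensity plane."""
          return img.mean(axis=2)

      # Synthetic 'gesture' templates: 6 classes of 32x32 RGB images
      templates = rng.uniform(size=(6, 32, 32, 3))
      train = np.array([desaturate(t).ravel() for t in templates])

      def classify(img):
          """1-nearest-neighbor over desaturated, flattened images."""
          x = desaturate(img).ravel()
          return int(np.argmin(np.linalg.norm(train - x, axis=1)))

      query = templates[2] + 0.05 * rng.normal(size=(32, 32, 3))
      print(classify(query))  # -> 2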

  6. A Simple Estimation of Coupling Loss Factors for Two Flexible Subsystems Connected via Discrete Interfaces

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2016-01-01

    A simple formula is proposed to estimate the Statistical Energy Analysis (SEA) coupling loss factors (CLFs) for two flexible subsystems connected via discrete interfaces. First, the dynamic interactions between two discretely connected subsystems are described as a set of intermodal coupling stiffness terms. It is then found that if both subsystems have high modal density and the interface points all act independently, the intermodal dynamic couplings become dominated by only those between different subsystem mode sets. If ensemble- and frequency-averaged, the intermodal coupling stiffness terms simply reduce to a function of the characteristic dynamic properties of each subsystem and the subsystem mass, as well as the number of interface points. The results can thus be accommodated within the theoretical frame of conventional SEA theory to yield a simple CLF formula. Meanwhile, the approach allows the weak coupling region between the two SEA subsystems to be distinguished simply and explicitly. The consistency of, and difference between, the present technique and traditional wave-based SEA solutions are discussed. Finally, numerical examples are given to illustrate the good performance of the present technique.
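
    For orientation, the standard SEA power-balance relations into which any CLF estimate feeds (textbook SEA, not the paper's derived formula): the band-averaged power flow between subsystems 1 and 2 and the consistency (reciprocity) relation are

      \[
      P_{12} = \omega\big(\eta_{12} E_1 - \eta_{21} E_2\big), \qquad n_1\,\eta_{12} = n_2\,\eta_{21},
      \]

    where ω is the band centre frequency, E_i the subsystem energies, n_i the modal densities, and η_{12}, η_{21} the coupling loss factors.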

  7. Amygdala subsystems and control of feeding behavior by learned cues.

    Science.gov (United States)

    Petrovich, Gorica D; Gallagher, Michela

    2003-04-01

    A combination of behavioral studies and a neural systems analysis approach has proven fruitful in defining the role of the amygdala complex and associated circuits in fear conditioning. The evidence presented in this chapter suggests that this approach is also informative in the study of other adaptive functions that involve the amygdala. In this chapter we present a novel model to study learning in an appetitive context. Furthermore, we demonstrate that long-recognized connections between the amygdala and the hypothalamus play a crucial role in allowing learning to modulate feeding behavior. In the first part we describe a behavioral model for motivational learning. In this model a cue that acquires motivational properties through pairings with food delivery when an animal is hungry can override satiety and promote eating in sated rats. Next, we present evidence that a specific amygdala subsystem (basolateral area) is responsible for allowing such learned cues to control eating (override satiety and promote eating in sated rats). We also show that basolateral amygdala mediates these actions via connectivity with the lateral hypothalamus. Lastly, we present evidence that the amygdalohypothalamic system is specific for the control of eating by learned motivational cues, as it does not mediate another function that depends on intact basolateral amygdala, namely, the ability of a conditioned cue to support new learning based on its acquired value. Knowledge about neural systems through which food-associated cues specifically control feeding behavior provides a defined model for the study of learning. In addition, this model may be informative for understanding mechanisms of maladaptive aspects of learned control of eating that contribute to eating disorders and more moderate forms of overeating.

  8. Superconducting Super Collider silicon tracking subsystem research and development

    International Nuclear Information System (INIS)

    Miller, W.O.; Thompson, T.C.; Ziock, H.J.; Gamble, M.T.

    1990-12-01

    The Los Alamos National Laboratory Mechanical Engineering and Electronics Division has been investigating silicon-based elementary particle tracking device technology as part of the Superconducting Super Collider-sponsored silicon subsystem collaboration. Structural, materials, and thermal issues have been addressed. This paper explores detector structural integrity and stability, including detailed finite element models of the silicon wafer support and the predictive methods used in designing with advanced composite materials. The current design comprises a magnesium metal matrix composite (MMC) truss space frame that provides a sparse support structure for the complex array of silicon detectors. This design satisfies the 25-μm structural stability requirement in a 10-Mrad radiation environment without exceeding the stringent particle-interaction constraint of 2.5% of a radiation length. Materials studies have considered thermal expansion, elastic modulus, resistance to radiation and chemicals, and manufacturability of numerous candidate materials. Based on optimization of these parameters, the MMC space frame will possess a coefficient of thermal expansion (CTE) near zero to avoid thermally induced distortions, whereas the cooling rings, which support the silicon detectors and heat-pipe network, will probably be constructed of a graphite/epoxy composite whose CTE is engineered to match that of silicon. Results from radiation, chemical, and static loading tests are compared with analytical predictions and discussed. Electronic thermal loading and its efficient dissipation using heat-pipe cooling technology are discussed. Calculations and preliminary designs for a sprayed-on graphite wick structure are presented. A hydrocarbon such as butane appears to be a superior choice of heat-pipe working fluid based on cooling, handling, and safety criteria.
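
    To make the stability budget concrete, a back-of-the-envelope check of thermally induced length change, delta_L = alpha * L * delta_T, is sketched below; the member length, temperature swing, and CTE value are illustrative assumptions, not figures from the study.

        # Thermal distortion check: delta_L = alpha * L * delta_T.
        # All input values below are illustrative assumptions.
        ALPHA_MMC = 0.5e-6   # 1/K, near-zero CTE targeted for the MMC frame
        LENGTH = 1.0         # m, assumed structural member length
        DELTA_T = 10.0       # K, assumed temperature excursion

        delta_L = ALPHA_MMC * LENGTH * DELTA_T
        print(f"distortion: {delta_L * 1e6:.2f} um (budget: 25 um)")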

  9. MAIUS-1- Vehicle, Subsystems Design and Mission Operations

    Science.gov (United States)

    Stamminger, A.; Ettl, J.; Grosse, J.; Horschgen-Eggers, M.; Jung, W.; Kallenbach, A.; Raith, G.; Saedtler, W.; Seidel, S. T.; Turner, J.; Wittkamp, M.

    2015-09-01

    In November 2015, the DLR Mobile Rocket Base will launch the MAIUS-1 rocket vehicle at Esrange, Northern Sweden. The MAIUS-A experiment is a pathfinder atom-optics experiment. The scientific objective of the mission is the first creation of a Bose-Einstein Condensate in space and the performance of atom interferometry on a sounding rocket [3]. MAIUS-1 comprises a two-stage unguided solid-propellant VSB-30 rocket motor system. The vehicle consists of a Brazilian S31 motor as 1st stage, an S30 motor as 2nd stage, a conical motor adapter, a despin module, a payload adapter, the MAIUS-A experiment consisting of five experiment modules, an attitude control system module, a newly developed conical service system, and a two-stage recovery system including a nosecone. In contrast to usual payloads on VSB-30 rockets, the payload has a diameter of 500 mm due to constraints of the scientific experiment. Because of this change in design, a blunted nosecone is necessary to guarantee the required static stability during the ascent phase of the flight. This paper gives an overview of the subsystems built at DLR MORABA, especially the newly developed service system. It further describes the MAIUS-1 vehicle, the mission, and the unique requirements on operations: in addition to a usual microgravity environment, the MAIUS-1 payload requires attitude control to achieve a specified attitude with respect to the nadir vector.

  10. Age Differences in the Intrinsic Functional Connectivity of Default Network Subsystems

    Directory of Open Access Journals (Sweden)

    Karen Campbell

    2013-11-01

    Full Text Available Recent work suggests that the default mode network (DMN) includes two core regions, the ventromedial prefrontal cortex (vmPFC) and posterior cingulate cortex (PCC), and several unique subsystems that are functionally distinct. These include a medial temporal lobe (MTL) subsystem, active during remembering and future projection, and a dorsomedial PFC (dmPFC) subsystem, active during self-reference. The PCC has been further subdivided into ventral (vPCC) and dorsal (dPCC) regions that are more strongly connected with the DMN and cognitive control networks, respectively. The goal of this study was to examine age differences in resting state functional connectivity within these subsystems. After applying a rigorous procedure to reduce the effects of head motion, we used a multivariate technique to identify both common and unique patterns of functional connectivity in the MTL vs. the dmPFC, and in vPCC vs. dPCC. All four areas had robust functional connectivity with other DMN regions, and each also showed distinct connectivity patterns in both age groups. Young and older adults had equivalent functional connectivity in the MTL subsystem. Older adults showed weaker connectivity in the vPCC and dmPFC subsystems, particularly with other DMN areas, but stronger connectivity than younger adults in the dPCC subsystem, which included areas involved in cognitive control. Our data provide evidence for distinct subsystems involving DMN nodes, which are maintained with age. Nevertheless, there are age differences in the strength of functional connectivity within these subsystems, supporting prior evidence that DMN connectivity is particularly vulnerable to age, whereas connectivity involving cognitive control regions is relatively maintained. These results suggest an age difference in the integrated activity among brain networks that can have implications for cognition in older adults.

  11. Seismic Safety Margins Research Program. Phase I final report - Subsystem response (Project V)

    International Nuclear Information System (INIS)

    Shieh, L.C.; Chuang, T.Y.; O'Connell, W.J.

    1981-10-01

    This document reports on (1) the computation of the responses of subsystems, given the input subsystem support motion, for components and systems whose failure can lead to an accident sequence (radioactive release), and (2) the results of a sensitivity study undertaken to determine the contributions of the several links in the seismic methodology chain (SMC) - seismic input (SI), soil-structure interaction (SSI), structure response (STR), and subsystem response (SUB) - to the uncertainty in subsystem response. For the singly supported subsystems (e.g., pumps, turbines, electrical control panels, etc.), we used the spectral acceleration response of the structure at the point where the subsystem components were mounted. For the multiply supported subsystems, we developed 13 piping models of five safety-related systems, and then used the pseudostatic-mode method with multisupport input motion to compute the response parameters in terms of the parameters used in the fragility descriptions (i.e., peak resultant accelerations for valves and peak resultant moments for piping). Damping and frequency were varied to represent the sources of modeling and random uncertainty. Two codes were developed: a modified version of SAPIV, which assembles the piping supports into groups depending on each support's location relative to the attached structure, and SAPPAC, a stand-alone modular program from which the time-history analysis module is extracted. On the basis of our sensitivity study, we determined that the variability in the combined soil-structure interaction, structural response, and subsystem response areas contributes more to uncertainty in subsystem response than does the variability in the seismic input area, assuming an earthquake within the limited peak ground acceleration range, i.e., 0.15 to 0.30g. The seismic input variations were in terms of different earthquake time histories. (author)
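
    The fragility parameters mentioned here (peak resultant accelerations for valves, peak resultant moments for piping) can be extracted from component time histories as in the minimal sketch below; the three-component arrays are hypothetical stand-ins for the SAPIV/SAPPAC output.

        import numpy as np

        def peak_resultant(mx, my, mz):
            """Peak resultant of three orthogonal moment (or acceleration)
            time histories, as used in the fragility descriptions."""
            resultant = np.sqrt(np.asarray(mx)**2 + np.asarray(my)**2 + np.asarray(mz)**2)
            return resultant.max()

        # Hypothetical 3-component moment histories for one pipe node.
        t = np.linspace(0.0, 10.0, 1001)
        mx, my, mz = np.sin(2*t), 0.5*np.cos(3*t), 0.2*np.sin(5*t)
        print(f"peak resultant moment: {peak_resultant(mx, my, mz):.3f}")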

  12. A Conversation on Data Mining Strategies in LC-MS Untargeted Metabolomics: Pre-Processing and Pre-Treatment Steps

    Directory of Open Access Journals (Sweden)

    Fidele Tugizimana

    2016-11-01

    Full Text Available Untargeted metabolomic studies generate information-rich, high-dimensional, and complex datasets that remain challenging to handle and fully exploit. Despite the remarkable progress in the development of tools and algorithms, the “exhaustive” extraction of information from these metabolomic datasets is still a non-trivial undertaking. A conversation on data mining strategies for a maximal information extraction from metabolomic data is needed. Using a liquid chromatography-mass spectrometry (LC-MS)-based untargeted metabolomic dataset, this study explored the influence of collection parameters in the data pre-processing step, scaling and data transformation on the statistical models generated, and feature selection, thereafter. Data obtained in positive mode, generated from an LC-MS-based untargeted metabolomic study (sorghum plants responding dynamically to infection by a fungal pathogen), were used. Raw data were pre-processed with MarkerLynx™ software (Waters Corporation, Manchester, UK). Here, two parameters were varied: the intensity threshold (50–100 counts) and the mass tolerance (0.005–0.01 Da). After the pre-processing, the datasets were imported into SIMCA (Umetrics, Umeå, Sweden) for more data cleaning and statistical modeling. In addition, different scaling (unit variance, Pareto, etc.) and data transformation (log and power) methods were explored. The results showed that the pre-processing parameters (or algorithms) influence the output dataset with regard to the number of defined features. Furthermore, the study demonstrates that the pre-treatment of data prior to statistical modeling affects the subspace approximation outcome: e.g., the amount of variation in X-data that the model can explain and predict. The pre-processing and pre-treatment steps subsequently influence the number of statistically significant extracted/selected features (variables). Thus, as informed by the results, to maximize the value of untargeted metabolomic data
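
    The scaling and transformation choices discussed above are easy to state precisely; the sketch below implements unit-variance and Pareto scaling plus a log transform on a hypothetical feature matrix (samples x features), not on the sorghum dataset itself.

        import numpy as np

        def scale(X, method="pareto"):
            """Column-wise centering and scaling of a samples x features matrix."""
            Xc = X - X.mean(axis=0)
            s = X.std(axis=0, ddof=1)
            if method == "uv":          # unit-variance (autoscaling)
                return Xc / s
            if method == "pareto":      # Pareto: divide by sqrt of std dev
                return Xc / np.sqrt(s)
            raise ValueError(method)

        rng = np.random.default_rng(0)
        X = np.abs(rng.normal(size=(20, 100))) + 1.0  # hypothetical intensities
        X_log = np.log10(X)                           # log transformation
        X_par = scale(X_log, "pareto")                # then Pareto scaling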

  13. Work Plan for Updating Double Shell Tank (DST) Sub-System Specifications (TBR 120.020)

    International Nuclear Information System (INIS)

    GRENARD, C.E.

    1999-01-01

    The DST System stores waste from the processing of nuclear material at the Hanford Nuclear Reservation. The program to dispose of this waste has been divided into several phases, with Phase 1 being the demonstration of the waste disposal technology by a private contractor. Subsystem specifications are being prepared that provide requirements for the subsystems necessary for the continued safe storage of waste in the DST System and for the removal of selected waste for processing by the privatized facility during Phase 1. This document provides the detailed plans for updating the subsystem specifications developed during FY99.

  14. International Space Station Temperature and Humidity Control Subsystem Verification for Node 1

    Science.gov (United States)

    Williams, David E.

    2007-01-01

    The International Space Station (ISS) Node 1 Environmental Control and Life Support (ECLS) System is comprised of five subsystems: Atmosphere Control and Supply (ACS), Atmosphere Revitalization (AR), Fire Detection and Suppression (FDS), Temperature and Humidity Control (THC), and Water Recovery and Management (WRM). This paper provides a summary of the nominal operation of the Node 1 THC subsystem design, and discusses the detailed Element Verification methodologies utilized during the Qualification phase for nominal Node 1 THC subsystem operations.

  15. Rotation curve of the neutral-hydrogen subsystem in the galactic plane

    Energy Technology Data Exchange (ETDEWEB)

    Petrovskaia, I.V.

    1981-01-01

    Separate rotation curves of the neutral-hydrogen subsystem are obtained for the first and fourth quadrants of galactic longitude on the basis of radio observations in the 21-cm line. A method that uses the entire 21-cm line profile is applied to distances from the galactic center in the range from 0.36 to 1.00 times the distance of the sun. It is found that the motion of the neutral-hydrogen subsystem is not purely circular and that the subsystem rotates more slowly in the fourth quadrant than in the first.

  16. Spectrometer control subsystem with high level functionality for use at the National Synchrotron Light Source

    International Nuclear Information System (INIS)

    Alberi, J.L.; Stubblefield, F.W.

    1980-11-01

    We have developed a subsystem capable of controlling stepping motors in a wide variety of vuv and x-ray spectrometers to be used at the National Synchrotron Light Source. The subsystem is capable of controlling up to 15 motors with encoder readback and ramped acceleration/deceleration. Both absolute and incremental encoders may be used in any mixture. Function commands to the subsystem are communicated as ASCII characters over an asynchronous serial link using a well-defined protocol in decipherable English. Thus the unit can be controlled via write statements in a high-level language. Details of the hardware implementation are presented.
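
    Because the unit accepts plain ASCII commands over an asynchronous serial line, a controlling program can be only a few lines; the sketch below uses pySerial, and the port name, baud rate, and command syntax ("MOVE 3 1500") are invented for illustration, since the actual command protocol is defined by the subsystem's documentation.

        import serial  # pySerial

        # Port, baud rate and command syntax are illustrative assumptions;
        # the real protocol is defined by the subsystem's documentation.
        with serial.Serial("/dev/ttyS0", 9600, timeout=2.0) as link:
            link.write(b"MOVE 3 1500\r\n")   # e.g. step motor 3 by 1500 steps
            reply = link.readline()          # unit replies in ASCII
            print(reply.decode("ascii", errors="replace").strip())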

  17. Vacuum component subsystem of TV Thomson scattering system in JFT-2M

    International Nuclear Information System (INIS)

    Shiina, Tomio; Yamauchi, Toshihiko; Fujisawa, Atsushi; Hanawa, Osamu; Dimock, D.; Takahashi, Akira; Inomata, Shinji.

    1991-03-01

    The vacuum component subsystem, one of six subsystems in the TV Thomson scattering (TVTS) system for the JFT-2M tokamak, was completed under a US-Japan cooperative program. This subsystem is composed of top and bottom flanges, a side flange, a beam dump, a viewing dump and so on. These components are fitted to the existing 13-point Thomson scattering system as well as to the TVTS optics newly developed by the Princeton Plasma Physics Laboratory (PPPL) in the USA. A new feedback system for laser beam alignment was designed and developed. (author)

  18. A Probabilistic Approach to Network Event Formation from Pre-Processed Waveform Data

    Science.gov (United States)

    Kohl, B. C.; Given, J.

    2017-12-01

    The current state of the art for seismic event detection still largely depends on signal detection at individual sensor stations, including picking accurate arrival times and correctly identifying phases, and on fusion algorithms that associate individual signal detections to form event hypotheses. But increasing computational capability has enabled progress toward the objective of fully utilizing body-wave recordings in an integrated manner to detect events without the necessity of previously recorded ground truth events. In 2011-2012 Leidos (then SAIC) operated a seismic network to monitor activity associated with geothermal field operations in western Nevada. We developed a new association approach for detecting and quantifying events by probabilistically combining pre-processed waveform data to deal with noisy data and clutter at local distance ranges. The ProbDet algorithm maps continuous waveform data into continuous conditional probability traces, using a source model (e.g. Brune earthquake or Mueller-Murphy explosion) to map frequency content and an attenuation model to map amplitudes. Event detection and classification are accomplished by combining the conditional probabilities from the entire network using a Bayesian formulation. This approach was successful in producing a high-Pd, low-Pfa automated bulletin for a local network, and preliminary tests with regional and teleseismic data show that it has promise for global seismic and nuclear monitoring applications. The approach highlights several features that we believe are essential to achieving low-threshold automated event detection: it minimizes the use of individual seismic phase detections (in traditional techniques, errors in signal detection, timing, feature measurement and initial phase ID compound and propagate into errors in event formation); it has a formalized framework that utilizes information from non-detecting stations; and it has a formalized framework that utilizes source information, in
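
    The network-combination step can be illustrated with a toy Bayesian fusion of per-station conditional probabilities; the independence assumption and the numbers below are illustrative and are not taken from the ProbDet implementation.

        import numpy as np

        def combine(p_station, prior=1e-3):
            """Toy naive-Bayes fusion: each entry is a station's standalone
            probability that an event is present (against an implicit 0.5
            baseline); stations are assumed independent."""
            p = np.asarray(p_station, dtype=float)
            llr = np.sum(np.log(p / (1.0 - p)))             # summed log-likelihood ratios
            log_odds = llr + np.log(prior / (1.0 - prior))  # fold in the event prior
            return 1.0 / (1.0 + np.exp(-log_odds))          # posterior probability

        # Three stations favoring an event, one ambivalent; a low prior
        # keeps the network-level false-alarm rate down.
        print(combine([0.9, 0.8, 0.85, 0.5]))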

  19. PreP+07: improvements of a user friendly tool to preprocess and analyse microarray data

    Directory of Open Access Journals (Sweden)

    Claros M Gonzalo

    2009-01-01

    Full Text Available Abstract Background Nowadays, microarray gene expression analysis is a widely used technology that scientists handle but whose final interpretation usually requires the participation of a specialist. The need for this participation is due to the requirement of some background in statistics that most users lack or have only a vague notion of. Moreover, programming skills could also be essential to analyse these data. An interactive, easy-to-use application therefore seems necessary to help researchers extract full information from their data and analyse them in a simple, powerful and confident way. Results PreP+07 is a standalone Windows XP application that presents a friendly interface for spot filtration, inter- and intra-slide normalization, duplicate resolution, dye-swapping, error removal and statistical analyses. Additionally, it contains two unique implementations of procedures (double scan and Supervised Lowess), a complete set of graphical representations (MA plot, RG plot, QQ plot, PP plot, PN plot), and can deal with many data formats, such as tabulated text, GenePix GPR and ArrayPRO. PreP+07 performance has been compared with the equivalent functions in Bioconductor using a tomato chip with 13056 spots. The numbers of differentially expressed genes considering p-values coming from the PreP+07 and Bioconductor Limma packages were statistically identical when the data set was only normalized; however, a slight variability was observed when the data were both normalized and scaled. Conclusion The PreP+07 implementation provides a high degree of freedom in selecting and organizing a small set of widely used data processing protocols, and can handle many data formats. Its reliability has been proven, so that a laboratory researcher can perform statistical pre-processing of his/her microarray results and obtain a list of differentially expressed genes using PreP+07 without any programming skills. All of this gives support to scientists
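
    Of the graphical representations listed, the MA plot is the most common; the sketch below computes M and A values from two-channel intensities in the standard way (M = log2(R/G), A = (1/2)·log2(R·G)) on synthetic data, independently of the PreP+07 code.

        import numpy as np

        def ma_values(red, green):
            """Standard two-channel MA transform: M = log2(R/G),
            A = mean log2 intensity."""
            r, g = np.asarray(red, float), np.asarray(green, float)
            m = np.log2(r / g)
            a = 0.5 * np.log2(r * g)
            return m, a

        # Synthetic spot intensities for illustration only; 13056 matches
        # the tomato chip's spot count mentioned in the abstract.
        rng = np.random.default_rng(0)
        red = rng.lognormal(8.0, 1.0, 13056)
        green = rng.lognormal(8.0, 1.0, 13056)
        M, A = ma_values(red, green)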

  20. The Python Spectral Analysis Tool (PySAT) for Powerful, Flexible, and Easy Preprocessing and Machine Learning with Point Spectral Data

    Science.gov (United States)

    Anderson, R. B.; Finch, N.; Clegg, S. M.; Graff, T.; Morris, R. V.; Laura, J.

    2018-04-01

    The PySAT point spectra tool provides a flexible graphical interface, enabling scientists to apply a wide variety of preprocessing and machine learning methods to point spectral data, with an emphasis on multivariate regression.
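
    A minimal stand-in for the kind of multivariate regression PySAT emphasizes is partial least squares on point spectra, sketched below with scikit-learn on synthetic data; this is a generic example, not PySAT's actual API.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # Synthetic "spectra": 50 samples x 200 channels, plus a target
        # composition that depends on a few channels (illustration only).
        rng = np.random.default_rng(1)
        X = rng.normal(size=(50, 200))
        y = X[:, 10] * 2.0 - X[:, 50] + rng.normal(scale=0.1, size=50)

        pls = PLSRegression(n_components=5)
        pls.fit(X, y)
        print("R^2 on training data:", pls.score(X, y))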

  1. An optical scanning subsystem for a UAS-enabled hyperspectral radiometer

    Data.gov (United States)

    National Aeronautics and Space Administration — Hyperspectral radiometers will be integrated with an optical scanning subsystem to measure remote sensing reflectance spectra over the ocean.  The entire scanning...

  2. Modeling and Simulation of Satellite Subsystems for End-to-End Spacecraft Modeling

    National Research Council Canada - National Science Library

    Schum, William K; Doolittle, Christina M; Boyarko, George A

    2006-01-01

    During the past ten years, the Air Force Research Laboratory (AFRL) has been simultaneously developing high-fidelity spacecraft payload models as well as a robust distributed simulation environment for modeling spacecraft subsystems...

  3. A membrane-based subsystem for very high recoveries of spacecraft waste waters

    Science.gov (United States)

    Ray, Roderick J.; Retzlaff, Sandra E.; Radke-Mitchell, Lyn; Newbold, David D.; Price, Donald F.

    1986-01-01

    This paper describes the continued development of a membrane-based subsystem designed to recover up to 99.5 percent of the water from various spacecraft waste waters. Specifically discussed are: (1) the design and fabrication of an energy-efficient reverse-osmosis (RO) breadboard subsystem; (2) data showing the performance of this subsystem when operated on a synthetic wash-water solution - including the results of a 92-day test; and (3) the results of pasteurization studies, including the design and operation of an in-line pasteurizer. Also included in this paper is a discussion of the design and performance of a second RO stage. This second stage results in higher-purity product water at a minimal energy requirement and provides a substantial redundancy factor to this subsystem.

  4. Optimization of a thermoelectric generator subsystem for high temperature PEM fuel cell exhaust heat recovery

    DEFF Research Database (Denmark)

    Gao, Xin; Andreasen, Søren Juhl; Kær, Søren Knudsen

    2014-01-01

    In previous work, a thermoelectric (TE) exhaust heat recovery subsystem for a high temperature polymer electrolyte membrane (HT-PEM) fuel cell stack was developed and modeled. Numerical simulations were conducted and identified an optimized subsystem configuration and 4 types of compact heat exchangers with superior performance for further analysis. In this work, the on-design performances of the 4 heat exchangers are more thoroughly assessed on their corresponding optimized subsystem configurations. Afterward, their off-design performances are compared on the whole working range of the fuel cell system; the TE modules are now connected into branches. The procedures of designing and optimizing this TE exhaust heat recovery subsystem are drawn out. The contribution of TE exhaust heat recovery to the HT-PEM fuel cell power system is preliminarily concluded. Its feasibility is also discussed.

  5. Failure mode, effect and criticality analysis (FMECA) on mechanical subsystems of diesel generator at NPP

    International Nuclear Information System (INIS)

    Kim, Tae Woon; Singh, Brijendra; Sung, Tae Yong; Park, Jin Hee; Lee, Yoon Hwan

    1996-06-01

    Largely, the RCM approach can be divided into three phases: (1) functional failure analysis (FFA) on the selected system or subsystem; (2) failure mode, effect and criticality analysis (FMECA) to identify the impact of failures on plant safety or economics; and (3) logical tree analysis (LTA) to select appropriate preventive maintenance and surveillance tasks. This report presents FMECA results for six mechanical subsystems of the diesel generators of nuclear power plants: the Starting air, Lub oil, Governor, Jacket water cooling, Fuel, and Engine subsystems. Generic and plant-specific failure and maintenance records were reviewed to identify critical components/failure modes, and FMECA was performed for them. After reviewing the current preventive maintenance activities of Wolsung unit 1, draft RCM recommendations are developed. 6 tabs., 16 refs. (Author)
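
    Criticality ranking in an FMECA is often reduced to a risk priority number (severity x occurrence x detectability); the sketch below applies that generic scheme, not the report's own worksheets, to hypothetical diesel-generator failure modes.

        # Generic FMECA-style risk priority number: RPN = S * O * D.
        # Subsystems, failure modes, and scores are hypothetical illustrations.
        failure_modes = [
            ("Starting air", "air receiver leak",      7, 4, 3),
            ("Lub oil",      "oil pump fails to run",  8, 3, 2),
            ("Governor",     "setpoint drift",         5, 5, 4),
        ]

        # Rank failure modes by descending RPN.
        for subsystem, mode, sev, occ, det in sorted(
                failure_modes, key=lambda fm: fm[2] * fm[3] * fm[4], reverse=True):
            print(f"{subsystem:13s} {mode:25s} RPN = {sev * occ * det}")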

  6. General-purpose stepping motor-encoder positioning subsystem with standard asynchronous serial-line interface

    International Nuclear Information System (INIS)

    Stubblefield, F.W.; Alberi, J.L.

    1982-01-01

    A general-purpose mechanical positioning subsystem for open-loop control of experiment devices which have their positions established and read out by stepping motor-encoder combinations has been developed. The subsystem is to be used mainly for experiments to be conducted at the National Synchrotron Light Source at Brookhaven National Laboratory. The subsystem unit has been designed to be compatible with a wide variety of stepping motor and encoder types. The unit may be operated by any device capable of driving a standard RS-232-C asynchronous serial communication line. An informal survey has shown that several experiments at the Light Source will use one particular type of computer, operating system, and programming language. Accordingly, a library of subroutines compatible with this combination of computer system elements has been written to facilitate driving the positioning subsystem unit

  8. Six-man, self-contained carbon dioxide concentrator subsystem for Space Station Prototype (SSP) application

    Science.gov (United States)

    Kostell, G. D.; Schubert, F. H.; Shumar, J. W.; Hallick, T. M.; Jensen, F. C.

    1974-01-01

    A six-man, self-contained, electrochemical carbon dioxide concentrating subsystem for Space Station Prototype use was successfully designed, fabricated, and tested. A test program covering shakedown testing, design verification testing, and acceptance testing was successfully completed.

  9. Failure data analysis of the SuperHILAC radio frequency subsystem

    International Nuclear Information System (INIS)

    Chang, M.K.

    1978-12-01

    This report is a continuation of an earlier report by Liang, with emphasis now on the Radio Frequency subsystem and its components, using current and improved data. Liang's report stated that improvement in overall SuperHILAC availability, which must be very high for medical purposes, is best made by improving subsystems that are needed in all modes of operation. Two such subsystems were Radio Frequency (RF) and Other, with relatively low availabilities of 0.96 and 0.93, respectively. Since the subsystem Other is not well defined, the RF became the object of this investigation. It was hoped that the components of the RF would show properties that were obscured at the higher level. The analytic procedure of this report is essentially identical to that in the earlier report, except that an operating-period analysis is added.
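
    The motivating arithmetic is the series-availability product: assuming independent subsystems in series, RF at 0.96 and Other at 0.93 alone cap overall availability near 0.89, before any other subsystem losses are counted. A one-line check:

        # Series system: overall availability is the product of the parts.
        a_rf, a_other = 0.96, 0.93
        print(f"availability ceiling from RF and Other alone: {a_rf * a_other:.3f}")
        # -> 0.893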

  10. System Simulation by Recursive Feedback: Coupling a Set of Stand-Alone Subsystem Simulations

    Science.gov (United States)

    Nixon, D. D.

    2001-01-01

    Conventional construction of digital dynamic system simulations often involves collecting the differential equations that model each subsystem, arranging them into a standard form, and obtaining their numerical solution as a single coupled, total-system simultaneous set. Simulation by numerical coupling of independent stand-alone subsimulations is a fundamentally different approach that is attractive because, among other things, the architecture naturally facilitates high fidelity, broad scope, and discipline independence. Recursive feedback is defined and discussed as a candidate approach to multidiscipline dynamic system simulation by numerical coupling of self-contained, single-discipline subsystem simulations. A satellite motion example containing three subsystems (orbit dynamics, attitude dynamics, and aerodynamics) has been defined and constructed using this approach. Conventional solution methods are used in the subsystem simulations. Distributed and centralized implementations of coupling have been considered. Numerical results are evaluated by direct comparison with a standard total-system, simultaneous-solution approach.
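
    The coupling idea (each stand-alone subsystem simulation is re-run with the others' latest outputs until the exchanged variables stop changing) can be sketched as a fixed-point iteration; the two toy "subsystems" below are algebraic stand-ins for the orbit/attitude/aerodynamics simulations.

        # Toy recursive-feedback coupling of two stand-alone "subsystems":
        # each is solved on its own, then fed the other's latest output,
        # iterating to a self-consistent (fixed-point) solution.

        def subsystem_a(y_b):
            return 0.5 * y_b + 1.0   # stand-in for e.g. orbit dynamics

        def subsystem_b(y_a):
            return 0.3 * y_a - 2.0   # stand-in for e.g. aerodynamics

        y_a, y_b = 0.0, 0.0
        for iteration in range(100):
            y_a_new, y_b_new = subsystem_a(y_b), subsystem_b(y_a)
            if abs(y_a_new - y_a) + abs(y_b_new - y_b) < 1e-12:
                break
            y_a, y_b = y_a_new, y_b_new

        print(f"converged after {iteration} iterations: y_a={y_a:.6f}, y_b={y_b:.6f}")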

  11. Protective and control relays as coal-mine power-supply ACS subsystem

    Science.gov (United States)

    Kostin, V. N.; Minakova, T. E.

    2017-10-01

    The paper presents instantaneous selective short-circuit protection for the cabling of the underground part of a coal mine, together with central control algorithms, as a coal-mine power-supply ACS subsystem. In order to improve the reliability of the electricity supply and reduce mining equipment down-time, a dual-channel relay protection and central control system is proposed as a subsystem of the coal-mine power-supply automated control system (PS ACS).

  12. Assessment of airframe-subsystems synergy on overall aircraft performance in a Collaborative Design Environment.

    OpenAIRE

    Shiva Prakasha, Prajwal; Ciampa, Pier Davide

    2016-01-01

    A Collaborative Multidisciplinary Design Optimization (MDO) methodology is presented, which uses physics-based analysis to evaluate the correlations between the airframe design and its sub-systems integration from the early design process, and to exploit the synergies within a simultaneous optimization process. Further, the disciplinary analysis modules involved in the optimization task are located in different organizations. Hence, the Airframe and Subsystem design tools are integrated within...

  13. Double-Shell Tank (DST) Maintenance and Recovery Subsystem Definition Report

    International Nuclear Information System (INIS)

    SMITH, E.A.

    2000-01-01

    The description of the Double-Shell Tank (DST) Maintenance and Recovery Subsystem presented in this document was developed to establish its boundaries. The DST Maintenance and Recovery Subsystem consists of new and existing equipment and facilities used to provide tank farm operators logistic support and problem resolution for the DST System during operations. This support will include evaluating equipment status, performing preventive and corrective maintenance, developing work packages, managing spares and consumables, supplying tooling, and training maintenance and operations personnel

  14. Reliability investigation for the ECC subsystem of a 1300 MWe-PWR

    International Nuclear Information System (INIS)

    Lalovic, M.

    1983-01-01

    In this study, a fault-tree analysis is used for a reliability investigation of the Emergency Core Cooling Sub-system of a 1300 MWe pressurised water reactor. Basic assumptions of the study are a large break in the reactor coolant system and independence of the pseudo-components. A relatively high non-availability of the sub-system was calculated, and the critical components and minimum cut sets were determined. (author)
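
    The fault-tree arithmetic behind such a result is the rare-event approximation over minimal cut sets, Q ~ sum over cut sets of the product of the component unavailabilities; the unavailabilities and cut sets below are hypothetical, chosen only to show the mechanics.

        from math import prod

        # Hypothetical component unavailabilities (per demand).
        q = {"pump_A": 3e-3, "pump_B": 3e-3, "valve_V1": 1e-3, "power_bus": 5e-4}

        # Hypothetical minimal cut sets of the ECC fault tree.
        cut_sets = [("pump_A", "pump_B"), ("valve_V1",), ("power_bus",)]

        # Rare-event approximation: Q ~ sum over cut sets of product of q_i.
        Q = sum(prod(q[c] for c in cs) for cs in cut_sets)
        print(f"subsystem unavailability ~ {Q:.2e}")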

  15. Translation of Genotype to Phenotype by a Hierarchy of Cell Subsystems

    OpenAIRE

    Yu, Michael Ku; Kramer, Michael; Dutkowski, Janusz; Srivas, Rohith; Licon, Katherine; Kreisberg, Jason F.; Ng, Cherie T.; Krogan, Nevan; Sharan, Roded; Ideker, Trey

    2016-01-01

    Summary Accurately translating genotype to phenotype requires accounting for the functional impact of genetic variation at many biological scales. Here we present a strategy for genotype-phenotype reasoning based on existing knowledge of cellular subsystems. These subsystems and their hierarchical organization are defined by the Gene Ontology or a complementary ontology inferred directly from previously published datasets. Guided by the ontology's hierarchical structure, we organize genotype ...

  16. Status of pre-processing of waste electrical and electronic equipment in Germany and its influence on the recovery of gold.

    Science.gov (United States)

    Chancerel, Perrine; Bolland, Til; Rotter, Vera Susanne

    2011-03-01

    Waste electrical and electronic equipment (WEEE) contains gold in low concentrations that are nevertheless relevant from an environmental and economic point of view. After collection, WEEE is pre-processed in order to generate appropriate material fractions that are sent to the subsequent end-processing stages (recovery, reuse or disposal). The goal of this research is to quantify the overall recovery rates of pre-processing technologies used in Germany for the reference year 2007. To achieve this goal, facilities operating in Germany were listed and classified according to the technology they apply. Information on their processing capacity was gathered by evaluating statistical databases. Based on a literature review of experimental results for gold recovery rates of different pre-processing technologies, the German overall recovery rate of gold at the pre-processing level was quantified depending on the characteristics of the treated WEEE. The results reveal that, depending on the equipment group, pre-processing recovery rates of gold of 29 to 61% are achieved in Germany. Some practical recommendations to reduce the losses during pre-processing could be formulated. Defining mass-based recovery targets in the legislation does not set incentives to recover trace elements. Instead, the priorities for recycling could be defined based on other parameters, such as the environmental impacts of the materials. The implementation of measures to reduce the gold losses would also improve the recovery of several other non-ferrous metals, such as tin, nickel, and palladium.
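
    An overall figure of this kind is a capacity-weighted mean of technology-specific recovery rates; the sketch below shows the computation with invented mass shares and rates, not the study's actual data.

        # Capacity-weighted overall recovery rate across pre-processing
        # technologies. Shares and per-technology rates are invented.
        technologies = [
            ("manual dismantling", 0.30, 0.61),  # (name, mass share, Au recovery)
            ("shredder + sorting", 0.55, 0.35),
            ("mixed mechanical",   0.15, 0.29),
        ]

        overall = sum(share * rate for _, share, rate in technologies)
        print(f"overall gold recovery: {overall:.1%}")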

  17. Hornets Have It: A Conserved Olfactory Subsystem for Social Recognition in Hymenoptera?

    Directory of Open Access Journals (Sweden)

    Antoine Couto

    2017-06-01

    Full Text Available Eusocial Hymenoptera colonies are characterized by the presence of altruistic individuals, which rear their siblings instead of their own offspring. In the course of evolution, such sterile castes are thought to have emerged through the process of kin selection, altruistic traits being transmitted to the following generation if they benefit relatives. By allowing kinship recognition, the detection of cuticular hydrocarbons (CHCs) might be instrumental for kin selection. In carpenter ants, a female-specific olfactory subsystem processes CHC information through antennal detection by basiconic sensilla. It is still unclear whether other families of eusocial Hymenoptera use the same subsystem for sensing CHCs. Here, we examined the existence of such a subsystem in Vespidae (using the hornet Vespa velutina), a family in which eusociality emerged independently of ants. The antennae of both male and female hornets contain large basiconic sensilla. Sensory neurons from the large basiconic sensilla exclusively project to a conspicuous cluster of small glomeruli in the antennal lobe, with anatomical and immunoreactive features that are strikingly similar to those of the ant CHC-sensitive subsystem. Extracellular electrophysiological recordings further show that sensory neurons within hornet basiconic sensilla preferentially respond to CHCs. Although this subsystem is not female-specific in hornets, the observed similarities with the olfactory system of ants are striking. They suggest that the basiconic sensilla subsystem could be an ancestral trait, which may have played a key role in the advent of eusociality in these hymenopteran families by allowing kin recognition and the production of altruistic behaviors toward relatives.

  18. Exploring relationships of human-automation interaction consequences on pilots: uncovering subsystems.

    Science.gov (United States)

    Durso, Francis T; Stearman, Eric J; Morrow, Daniel G; Mosier, Kathleen L; Fischer, Ute; Pop, Vlad L; Feigh, Karen M

    2015-05-01

    We attempted to understand the latent structure underlying the systems pilots use to operate in situations involving human-automation interaction (HAI). HAI is an important characteristic of many modern work situations. Of course, the cognitive subsystems are not immediately apparent by observing a functioning system, but correlations between variables may reveal important relations. The current report examined pilot judgments of 11 HAI dimensions (e.g., Workload, Task Management, Stress/Nervousness, Monitoring Automation, and Cross-Checking Automation) across 48 scenarios that required airline pilots to interact with automation on the flight deck. We found three major clusters of the dimensions identifying subsystems on the flight deck: a workload subsystem, a management subsystem, and an awareness subsystem. Relationships characterized by simple correlations cohered in ways that suggested underlying subsystems consistent with those that had previously been theorized. Understanding the relationship among dimensions affecting HAI is an important aspect in determining how a new piece of automation designed to affect one dimension will affect other dimensions as well. © 2014, Human Factors and Ergonomics Society.
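
    Clusters of this kind are typically recovered from the dimension-by-dimension correlation matrix; the sketch below runs a standard hierarchical clustering on synthetic ratings with SciPy, using one minus the correlation as the distance, and is not the authors' own multivariate analysis.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        # Synthetic pilot ratings: 48 scenarios x 11 HAI dimensions.
        rng = np.random.default_rng(2)
        ratings = rng.normal(size=(48, 11))

        # Distance between dimensions: 1 - Pearson correlation.
        corr = np.corrcoef(ratings.T)                 # 11 x 11
        dist = 1.0 - corr[np.triu_indices(11, k=1)]   # condensed form

        tree = linkage(dist, method="average")
        labels = fcluster(tree, t=3, criterion="maxclust")
        print(labels)  # cluster assignment for each of the 11 dimensions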

  19. Cost-effective data storage/archival subsystem for functional PACS

    Science.gov (United States)

    Chen, Y. P.; Kim, Yongmin

    1993-09-01

    Not the least of the requirements of a workable PACS is the ability to store and archive vast amounts of information. A medium-size hospital will generate between 1 and 2 TBytes of data annually on a fully functional PACS. A high-speed image transmission network coupled with a comparably high-speed central data storage unit can make local memory and magnetic disks in the PACS workstations less critical and, in an extreme case, unnecessary. Under these circumstances, the capacity and performance of the central data storage subsystem and database are critical in determining the response time at the workstations, thus significantly affecting clinical acceptability. The central data storage subsystem not only needs to provide sufficient capacity to store about ten days' worth of images (five days' worth of new studies and, on average, one comparison study for each new study), but must also supply images to the requesting workstation in a timely fashion. The database must provide fast retrieval upon users' requests for images. This paper analyzes the advantages and disadvantages of multiple parallel-transfer disks versus RAID disks for the short-term central data storage subsystem, and of an optical disk jukebox versus a digital recorder tape subsystem for the long-term archive. Furthermore, an example of a high-performance, cost-effective storage subsystem that integrates RAID disks and a high-speed digital tape subsystem into a PACS data storage/archival unit is presented.
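
    The short-term sizing rule stated here (ten days of data: five days of new studies plus one comparison study per new study) reduces to simple arithmetic; the sketch below works the numbers for the 1-2 TByte annual figure, assuming data accrues evenly over the year.

        # Short-term storage sizing, assuming data accrues evenly over a year.
        annual_tb = 2.0                     # upper end of 1-2 TBytes/year
        per_day_gb = annual_tb * 1024 / 365

        new_days = 5                        # five days of new studies
        comparison_factor = 2.0             # one comparison study per new study
        short_term_gb = per_day_gb * new_days * comparison_factor
        print(f"short-term store: ~{short_term_gb:.0f} GB")  # ~56 GB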

  20. FDE-vdW: A van der Waals inclusive subsystem density-functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Kevorkyants, Ruslan; Pavanello, Michele, E-mail: m.pavanello@rutgers.edu [Department of Chemistry, Rutgers University, Newark, New Jersey 07102 (United States); Eshuis, Henk [Department of Chemistry and Biochemistry, Montclair State University, Montclair, New Jersey 07043 (United States)

    2014-07-28

    We present a formally exact van der Waals inclusive electronic structure theory, called FDE-vdW, based on the Frozen Density Embedding formulation of subsystem Density-Functional Theory. In subsystem DFT, the energy functional is composed of subsystem additive and non-additive terms. We show that an appropriate definition of the long-range correlation energy is given by the value of the non-additive correlation functional. This functional is evaluated using the fluctuation–dissipation theorem, aided by a formally exact decomposition of the response functions into subsystem contributions. FDE-vdW is derived in detail and several approximate schemes are proposed, which lead to practical implementations of the method. We show that FDE-vdW is Casimir-Polder consistent, i.e., it reduces to the generalized Casimir-Polder formula for asymptotic inter-subsystem separations. Pilot calculations of binding energies of 13 weakly bound complexes singled out from the S22 set show a dramatic improvement upon semilocal subsystem DFT, provided that an appropriate exchange functional is employed. The convergence of FDE-vdW with basis set size is discussed, as well as its dependence on the choice of associated density functional approximant.
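
    The central quantity, the non-additive correlation functional whose value is identified with the long-range correlation energy, has the standard subsystem-DFT form shown below for subsystem densities rho_I and total density rho = sum_I rho_I.

        E_c^{\mathrm{nadd}}[\{\rho_I\}] \;=\; E_c\!\Big[\sum_I \rho_I\Big] \;-\; \sum_I E_c[\rho_I]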