WorldWideScience

Sample records for run-time two-stage process

  1. Characterization of microbial community in the two-stage process for hydrogen and methane production from food waste

    Energy Technology Data Exchange (ETDEWEB)

    Chu, Chun-Feng [School of Environmental Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Ebie, Yoshitaka [National Institute for Environmental Studies, Tsukuba 305-8506 (Japan); Xu, Kai-Qin [National Institute for Environmental Studies, Tsukuba 305-8506 (Japan); State Key Laboratory of Water Resources and Hydropower Engineering Science, Wuhan University, Wuhan 430072 (China); Li, Yu-You [Department of Civil and Environmental Engineering, Tohoku University, Sendai 980-8579 (Japan); Inamori, Yuhei [Faculty of Symbiotic Systems Science, Fukushima University, Fukushima 960-1296 (Japan)

    2010-08-15

The structure of the microbial community in the two-stage process for H{sub 2} and CH{sub 4} production from food waste was investigated by a molecular biological approach. The process was a continuous combination of thermophilic acidogenic hydrogenesis and mesophilic (RUN1) or thermophilic (RUN2) methanogenesis with recirculation of the digested sludge. The two-phase process suggested in this study effectively separated H{sub 2}-producing bacteria from methanogenic archaea by optimization of design parameters such as pH, hydraulic retention time (HRT) and temperature. Greater microbial diversity was found in the thermophilic acidogenic hydrogenesis; Clostridium sp. strain Z6 and Thermoanaerobacterium thermosaccharolyticum were considered to be the dominant thermophilic H{sub 2}-producing bacteria. The hydrogenotrophic methanogens were inhibited in thermophilic methanogenesis, whereas archaeal rDNA levels were higher in the thermophilic methanogenesis than in the mesophilic methanogenesis. The yields of H{sub 2} and CH{sub 4} were in a similar range depending on the characteristics of the food waste, whereas effluent water quality indicators differed markedly between RUN1 and RUN2. The results indicated that hydrolysis and removal of food waste were higher in RUN2 than in RUN1. (author)

  2. Time Optimal Run-time Evaluation of Distributed Timing Constraints in Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.; Kristensen, C.H.

    1993-01-01

This paper considers run-time evaluation of an important class of constraints: timing constraints. These appear extensively in process control systems. Timing constraints are considered in distributed systems, i.e. systems consisting of multiple autonomous nodes......
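As a minimal illustration of what run-time evaluation of a timing constraint means (a sketch with hypothetical names; the abstract excerpt does not show the paper's actual method), an end-to-end deadline between two distributed events can be checked from their timestamps:

```python
def check_deadline(t_start: float, t_end: float, deadline: float) -> bool:
    """Return True if the observed latency between two distributed
    events satisfies the timing constraint (all times in seconds)."""
    return (t_end - t_start) <= deadline

# Evaluate a hypothetical 50 ms end-to-end constraint at run time
# over a log of (start, end) timestamp pairs.
events = [(0.000, 0.030), (0.100, 0.180)]
violations = []
for start, end in events:
    if not check_deadline(start, end, deadline=0.050):
        violations.append((start, end))
```

In a monitoring approach like the one the paper describes, such checks would be driven by system software observing the application, not by the application itself.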

  3. Hydrogen and methane production from household solid waste in the two-stage fermentation process

    DEFF Research Database (Denmark)

    Lui, D.; Liu, D.; Zeng, Raymond Jianxiong

    2006-01-01

A two-stage process combining hydrogen and methane production from household solid waste was demonstrated to work successfully. A yield of 43 mL H-2/g volatile solid (VS) added was obtained in the first, hydrogen-producing stage, and the methane production in the second stage was 500 mL CH4/g VS...... added. This figure was 21% higher than the methane yield from the one-stage process, which was run as a control. Sparging of the hydrogen reactor with methane gas resulted in a doubling of the hydrogen production. pH was observed to be a key factor affecting the fermentation pathway in the hydrogen production stage....... Furthermore, this study also provided direct evidence, in the dynamic fermentation process, that an increase in hydrogen production was reflected by an increase in the acetate-to-butyrate ratio in the liquid phase. (c) 2006 Elsevier Ltd. All rights reserved....

  4. Minimizing the Makespan for a Two-Stage Three-Machine Assembly Flow Shop Problem with the Sum-of-Processing-Time Based Learning Effect

    Directory of Open Access Journals (Sweden)

    Win-Chin Lin

    2018-01-01

Two-stage production processes and their applications appear in many production environments. Job processing times are usually assumed to be constant throughout the process. In fact, the learning effect accrued from repetitive work experience, which leads to a reduction of actual job processing times, exists in many production environments. However, the issue of learning effects is rarely addressed in solving two-stage assembly scheduling problems. Motivated by this observation, the author studies a two-stage three-machine assembly flow shop problem with a learning effect based on the sum of the processing times of already-processed jobs, minimizing the makespan criterion. Because this problem is proved to be NP-hard, a branch-and-bound method embedded with some developed dominance propositions and a lower bound is employed to search for optimal solutions. A cloud theory-based simulated annealing (CSA) algorithm and an iterated greedy (IG) algorithm with four different local search methods are used to find near-optimal solutions for small and large numbers of jobs. The performances of the adopted algorithms are then compared through computational experiments and nonparametric statistical analyses, including the Kruskal-Wallis test and a multiple comparison procedure.
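To make the setting concrete, the makespan of a two-stage three-machine assembly flow shop under one common sum-of-processing-time learning model can be sketched as follows (the exact learning function, the exponent, and the instance data are illustrative, not taken from the paper):

```python
def actual_time(p, done_sum, a=-0.1):
    """Actual processing time under a sum-of-processing-time learning
    effect: the more normal processing time a machine has already
    completed, the faster it runs. a <= 0 is a learning index."""
    return p * (1.0 + done_sum) ** a

def makespan(seq, t1, t2, t3, a=-0.1):
    """Makespan of job sequence seq. Stage 1: two parallel machines
    with normal times t1, t2 make components; stage 2: one assembly
    machine with normal times t3 joins them."""
    c1 = c2 = c3 = 0.0   # completion times of the three machines
    s1 = s2 = s3 = 0.0   # sums of normal times already processed
    for j in seq:
        c1 += actual_time(t1[j], s1, a); s1 += t1[j]
        c2 += actual_time(t2[j], s2, a); s2 += t2[j]
        # assembly starts once both components and the machine are free
        c3 = max(c1, c2, c3) + actual_time(t3[j], s3, a); s3 += t3[j]
    return c3
```

With a = 0 the model reduces to the classical assembly flow shop with constant processing times; a branch-and-bound or metaheuristic search would minimize this function over permutations of the jobs.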

  5. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition...... of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation...... and constraint evaluation are designed for the most interesting error types. These include: a) semantic errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...

  6. Safety, Liveness and Run-time Refinement for Modular Process-Aware Information Systems with Dynamic Sub Processes

    DEFF Research Database (Denmark)

    Debois, Søren; Hildebrandt, Thomas; Slaats, Tijs

    2015-01-01

We study modularity, run-time adaptation and refinement under safety and liveness constraints in event-based process models with dynamic sub-process instantiation. The study is part of a larger programme to provide semantically well-founded technologies for modelling, implementation...... and verification of flexible, run-time adaptable process-aware information systems, moved into practice via the Dynamic Condition Response (DCR) Graphs notation co-developed with our industrial partner. Our key contributions are: (1) a formal theory of dynamic sub-process instantiation for declarative, event......-based processes under safety and liveness constraints, given as the DCR* process language, equipped with a compositional operational semantics and conservatively extending the DCR Graphs notation; (2) an expressiveness analysis revealing that the DCR* process language is Turing-complete, while the fragment cor...

  7. Engineering analysis of the two-stage trifluoride precipitation process

    International Nuclear Information System (INIS)

Luerkens, D.W.

    1984-06-01

    An engineering analysis of two-stage trifluoride precipitation processes is developed. Precipitation kinetics are modeled using consecutive reactions to represent fluoride complexation. Material balances across the precipitators are used to model the time dependent concentration profiles of the main chemical species. The results of the engineering analysis are correlated with previous experimental work on plutonium trifluoride and cerium trifluoride

  8. A Formal Approach to Run-Time Evaluation of Real-Time Behaviour in Distributed Process Control Systems

    DEFF Research Database (Denmark)

    Kristensen, C.H.

This thesis advocates a formal approach to run-time evaluation of real-time behaviour in distributed process control systems, motivated by a growing interest in applying the increasingly popular formal methods in the application area of distributed process control systems. We propose to evaluate...... because the real-time aspects of distributed process control systems are considered to be among the hardest and most interesting to handle....

  9. Biogas production of Chicken Manure by Two-stage fermentation process

    Science.gov (United States)

    Liu, Xin Yuan; Wang, Jing Jing; Nie, Jia Min; Wu, Nan; Yang, Fang; Yang, Ren Jie

    2018-06-01

This paper reports a batch experiment on pre-acidification treatment and methane production from chicken manure by a two-stage anaerobic fermentation process. Results show that acetate was the main component of the volatile fatty acids produced at the end of the pre-acidification stage, accounting for 68% of the total amount. The daily biogas production went through three peak periods in the methane production stage; the methane content reached 60% in the second period and then slowly declined to 44.5% in the third period. The cumulative methane production was fitted by the modified Gompertz equation, and the kinetic parameters of methane production potential, maximum methane production rate and lag phase time were 345.2 ml, 0.948 ml/h and 343.5 h, respectively. A methane yield of 183 ml CH4/g-VS removed during the methane production stage and a VS removal efficiency of 52.7% for the whole fermentation process were achieved.
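The fit reported above can be reproduced directly from the abstract's three parameters, assuming the widely used form of the modified Gompertz model (the functional form and variable names are ours, not quoted from the paper):

```python
import math

def cumulative_methane(t, P=345.2, Rm=0.948, lam=343.5):
    """Modified Gompertz model: cumulative methane (mL) at time t (h).
    P   = methane production potential (mL)      [from the abstract]
    Rm  = maximum methane production rate (mL/h) [from the abstract]
    lam = lag phase time (h)                     [from the abstract]"""
    return P * math.exp(-math.exp(Rm * math.e / P * (lam - t) + 1.0))
```

The curve stays near zero through the lag phase, rises fastest around the inflection point, and saturates at P as t grows large.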

  10. Experiments for Multi-Stage Processes

    DEFF Research Database (Denmark)

    Tyssedal, John; Kulahci, Murat

    2015-01-01

    Multi-stage processes are very common in both process and manufacturing industries. In this article we present a methodology for designing experiments for multi-stage processes. Typically in these situations the design is expected to involve many factors from different stages. To minimize...... the required number of experimental runs, we suggest using mirror image pairs of experiments at each stage following the first. As the design criterion, we consider their projectivity and mainly focus on projectivity 3 designs. We provide the methodology for generating these designs for processes with any...
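The mirror-image-pair idea can be illustrated with a small sketch over runs coded at levels ±1 (this shows only the pairing of runs, not the authors' full projectivity-3 construction):

```python
def with_mirror_images(runs):
    """Follow each experimental run by its mirror image (all factor
    levels sign-flipped), as suggested for the stages after the first.
    Each run is a tuple of coded factor levels, +1 or -1."""
    out = []
    for run in runs:
        out.append(tuple(run))
        out.append(tuple(-x for x in run))
    return out
```

For example, `with_mirror_images([(1, -1, 1)])` yields the original run followed by its fold-over `(-1, 1, -1)`, so every main effect is balanced within the pair.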

  11. Kinetics of two-stage fermentation process for the production of hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Nath, Kaushik [Department of Chemical Engineering, G.H. Patel College of Engineering and Technology, Vallabh Vidyanagar 388 120, Gujarat (India); Muthukumar, Manoj; Kumar, Anish; Das, Debabrata [Fermentation Technology Laboratory, Department of Biotechnology, Indian Institute of Technology, Kharagpur 721302 (India)

    2008-02-15

The two-stage process described in the present work is a combination of dark and photofermentation in a sequential batch mode. In the first stage, glucose is fermented to acetate, CO{sub 2} and H{sub 2} in an anaerobic dark fermentation by Enterobacter cloacae DM11. This is followed by a second stage in which acetate is converted to H{sub 2} and CO{sub 2} in a photobioreactor by the photosynthetic bacterium Rhodobacter sphaeroides O.U. 001. The yield of hydrogen in the first stage was about 3.31 mol H{sub 2} (mol glucose){sup -1} (approximately 82% of theoretical) and that in the second stage was about 1.5-1.72 mol H{sub 2} (mol acetic acid){sup -1} (approximately 37-43% of theoretical). The overall yield of hydrogen in the two-stage process, considering glucose as the primary substrate, was found to be higher than in a single-stage process. A Monod model, with incorporation of a substrate inhibition term, was used to determine the growth kinetic parameters for the first stage. The values of the maximum specific growth rate ({mu}{sub max}) and K{sub s} (saturation constant) were 0.398 h{sup -1} and 5.509 g l{sup -1}, respectively, using glucose as substrate. The experimental substrate and biomass concentration profiles agreed well with the kinetic model predictions. A model based on the logistic equation was developed to describe the growth of R. sphaeroides O.U 001 in the second stage. The modified Gompertz equation was applied to estimate the hydrogen production potential, rate and lag phase time in a batch process for various initial concentrations of glucose, based on the cumulative hydrogen production curves. Both the curve fitting and statistical analysis showed that the equation was suitable to describe the progress of cumulative hydrogen production. (author)
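Using the reported kinetic constants, the first-stage specific growth rate can be sketched as follows (the abstract does not give the inhibition constant, so the optional Andrews-type term and its Ki value are illustrative only):

```python
def specific_growth_rate(S, mu_max=0.398, Ks=5.509, Ki=None):
    """Monod specific growth rate (1/h) at substrate concentration
    S (g/L). mu_max and Ks are the values reported for glucose;
    passing Ki adds a hypothetical Andrews-type substrate-inhibition
    term S^2/Ki to the denominator."""
    denom = Ks + S + (S * S / Ki if Ki is not None else 0.0)
    return mu_max * S / denom
```

At S = Ks the plain Monod rate equals mu_max/2; with an inhibition term the rate falls below the Monod value at high substrate concentrations.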

  12. Influence of capacity- and time-constrained intermediate storage in two-stage food production systems

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter; Gaalman, Gerard

    2007-01-01

In food processing, two-stage production systems with a batch processor in the first stage and packaging lines in the second stage are common and mostly separated by capacity- and time-constrained intermediate storage. This combination of constraints is common in practice, but the literature hardly...... pays any attention to this. In this paper, we show how various capacity and time constraints influence the performance of a specific two-stage system. We study the effects of several basic scheduling and sequencing rules in the presence of these constraints in order to learn the characteristics...... of systems like this. Contrary to common sense in operations management, the LPT rule is able to maximize the total production volume per day. Furthermore, we show that adding one tank has considerable effects. Finally, we conclude that the optimal setup frequency for batches in the first stage...

  13. Operation of a two-stage continuous fermentation process producing hydrogen and methane from artificial food wastes

    Energy Technology Data Exchange (ETDEWEB)

    Nagai, Kohki; Mizuno, Shiho; Umeda, Yoshito; Sakka, Makiko [Toho Gas Co., Ltd. (Japan); Osaka, Noriko [Tokyo Gas Co. Ltd. (Japan); Sakka, Kazuo [Mie Univ. (Japan)

    2010-07-01

An anaerobic two-stage continuous fermentation process with combined thermophilic hydrogenogenic and methanogenic stages (two-stage fermentation process) was applied to artificial food wastes on a laboratory scale. In this report, organic loading rate (OLR) conditions for hydrogen fermentation were optimized before operating the two-stage fermentation process. The OLR was set at 11.2, 24.3, 35.2, 45.6, 56.1, and 67.3 g-COD{sub cr} L{sup -1} day{sup -1} with a temperature of 60 °C, pH 5.5 and 5.0% total solids. As a result, approximately 1.8-2.0 mol-H{sub 2} mol-hexose{sup -1} was obtained at OLRs of 11.2-56.1 g-COD{sub cr} L{sup -1} day{sup -1}. In contrast, it was inferred that the hydrogen yield at the OLR of 67.3 g-COD{sub cr} L{sup -1} day{sup -1} decreased because of an increase in lactate concentration in the culture medium. The performance of the two-stage fermentation process was also evaluated over three months. The hydraulic retention time (HRT) of methane fermentation could be shortened to 5.0 days (under OLR 12.4 g-COD{sub cr} L{sup -1} day{sup -1} conditions) when the OLR of hydrogen fermentation was 44.0 g-COD{sub cr} L{sup -1} day{sup -1}, and the average gasification efficiency of the two-stage fermentation process was 81% at that time. (orig.)

  14. Run 2 Upgrades to the CMS Level-1 Calorimeter Trigger

    CERN Document Server

    Kreis, B.; Cavanaugh, R.; Mishra, K.; Rivera, R.; Uplegger, L.; Apanasevich, L.; Zhang, J.; Marrouche, J.; Wardle, N.; Aggleton, R.; Ball, F.; Brooke, J.; Newbold, D.; Paramesvaran, S.; Smith, D.; Baber, M.; Bundock, A.; Citron, M.; Elwood, A.; Hall, G.; Iles, G.; Laner, C.; Penning, B.; Rose, A.; Tapper, A.; Foudas, C.; Beaudette, F.; Cadamuro, L.; Mastrolorenzo, L.; Romanteau, T.; Sauvan, J.B.; Strebler, T.; Zabi, A.; Barbieri, R.; Cali, I.A.; Innocenti, G.M.; Lee, Y.J.; Roland, C.; Wyslouch, B.; Guilbaud, M.; Li, W.; Northup, M.; Tran, B.; Durkin, T.; Harder, K.; Harper, S.; Shepherd-Themistocleous, C.; Thea, A.; Williams, T.; Cepeda, M.; Dasu, S.; Dodd, L.; Forbes, R.; Gorski, T.; Klabbers, P.; Levine, A.; Ojalvo, I.; Ruggles, T.; Smith, N.; Smith, W.; Svetek, A.; Tikalsky, J.; Vicente, M.

    2016-01-21

    The CMS Level-1 calorimeter trigger is being upgraded in two stages to maintain performance as the LHC increases pile-up and instantaneous luminosity in its second run. In the first stage, improved algorithms including event-by-event pile-up corrections are used. New algorithms for heavy ion running have also been developed. In the second stage, higher granularity inputs and a time-multiplexed approach allow for improved position and energy resolution. Data processing in both stages of the upgrade is performed with new, Xilinx Virtex-7 based AMC cards.

  15. Is the continuous two-stage anaerobic digestion process well suited for all substrates?

    Science.gov (United States)

    Lindner, Jonas; Zielonka, Simon; Oechsner, Hans; Lemmer, Andreas

    2016-01-01

Two-stage anaerobic digestion systems are often considered advantageous compared to one-stage processes. Although process conditions and fermenter setups are well examined, overall substrate degradation in these systems is controversially discussed. Therefore, the aim of this study was to investigate how substrates with different fibre and sugar contents (hay/straw, maize silage, sugar beet) influence the degradation rate and methane production. Intermediates and gas compositions, as well as methane yields and VS degradation degrees, were recorded. The sugar beet substrate led to a greater pH-value drop (5.67) in the acidification reactor, which resulted in a six times higher hydrogen production than with the hay/straw substrate (pH-value drop 5.34). The yields achieved in the two-stage system showed a difference of 70.6% for the hay/straw substrate but only 7.8% for the sugar beet substrate. Two-stage systems therefore seem to be recommendable only for digesting sugar-rich substrates. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Two-stage process analysis using the process-based performance measurement framework and business process simulation

    NARCIS (Netherlands)

    Han, K.H.; Kang, J.G.; Song, M.S.

    2009-01-01

    Many enterprises have recently been pursuing process innovation or improvement to attain their performance goals. To align a business process with enterprise performances, this study proposes a two-stage process analysis for process (re)design that combines the process-based performance measurement

  17. Enhancement of bioenergy production from organic wastes by two-stage anaerobic hydrogen and methane production process

    DEFF Research Database (Denmark)

    Luo, Gang; Xie, Li; Zhou, Qi

    2011-01-01

The present study investigated a two-stage anaerobic hydrogen and methane process for increasing bioenergy production from organic wastes. A two-stage process with a hydraulic retention time (HRT) of 3 d for the hydrogen reactor and 12 d for the methane reactor obtained 11% more energy compared to a single......:12 to 1:14, 6.7% more energy could be obtained. Microbial community analysis indicated that the dominant bacterial species were different in the hydrogen reactors (Thermoanaerobacterium thermosaccharolyticum-like species) and methane reactors (Clostridium thermocellum-like species). The changes...

  18. Empirical study of classification process for two-stage turbo air classifier in series

    Science.gov (United States)

    Yu, Yuan; Liu, Jiaxiang; Li, Gang

    2013-05-01

Suitable process parameters for a two-stage turbo air classifier are important for obtaining ultrafine powder with a narrow particle-size distribution; however, little has been published internationally on the classification process for two-stage turbo air classifiers in series. The influence of the process parameters of a two-stage turbo air classifier in series on classification performance is empirically studied using aluminum oxide powders as the experimental material. The experimental results show the following: 1) When the rotor cage rotary speed of the first-stage classifier is increased from 2 300 r/min to 2 500 r/min with a constant rotor cage rotary speed of the second-stage classifier, classification precision is increased from 0.64 to 0.67. However, in this case, the final ultrafine powder yield is decreased from 79% to 74%, which means the classification precision and the final ultrafine powder yield can be regulated by adjusting the rotor cage rotary speed of the first-stage classifier. 2) When the rotor cage rotary speed of the second-stage classifier is increased from 2 500 r/min to 3 100 r/min with a constant rotor cage rotary speed of the first-stage classifier, the cut size is decreased from 13.16 μm to 8.76 μm, which means the cut size of the ultrafine powder can be regulated by adjusting the rotor cage rotary speed of the second-stage classifier. 3) When the feeding speed is increased from 35 kg/h to 50 kg/h, the "fish-hook" effect is strengthened, which decreases the ultrafine powder yield. 4) To weaken the "fish-hook" effect, equal wind speeds in the two stages or the combination of a high first-stage wind speed with a low second-stage wind speed should be selected. This empirical study provides a criterion for process parameter configuration of a two-stage or multi-stage classifier in series, offering a theoretical basis for practical production.

  19. Two-stage precipitation of neptunium (IV) oxalate

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1983-07-01

    Neptunium (IV) oxalate was precipitated using a two-stage precipitation system. A series of precipitation experiments was used to identify the significant process variables affecting precipitate characteristics. Process variables tested were input concentrations, solubility conditions in the first stage precipitator, precipitation temperatures, and residence time in the first stage precipitator. A procedure has been demonstrated that produces neptunium (IV) oxalate particles that filter well and readily calcine to the oxide

  20. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition...... and constraint evaluation are designed for the most interesting error types. These include: a) semantic errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...... of error detection methods includes a high-level software specification. This has the purpose of illustrating that the design can be used in practice....

  1. EnergyPlus Run Time Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components at sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from several perspectives to identify the key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. The paper provides recommendations for improving EnergyPlus run time from the modeler's perspective and for choosing adequate computing platforms. Suggestions for software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.

  2. The rearrangement process in a two-stage broadcast switching network

    DEFF Research Database (Denmark)

    Jacobsen, Søren B.

    1988-01-01

    The rearrangement process in the two-stage broadcast switching network presented by F.K. Hwang and G.W. Richards (ibid., vol.COM-33, no.10, p.1025-1035, Oct. 1985) is considered. By defining a certain function it is possible to calculate an upper bound on the number of connections to be moved...... during a rearrangement. When each inlet channel appears twice, the maximum number of connections to be moved is found. For a special class of inlet assignment patterns in the case of which each inlet channel appears three times, the maximum number of connections to be moved is also found. In the general...

  3. Run-time middleware to support real-time system scenarios

    NARCIS (Netherlands)

    Goossens, K.; Koedam, M.; Sinha, S.; Nelson, A.; Geilen, M.

    2015-01-01

    Systems on Chip (SOC) are powerful multiprocessor systems capable of running multiple independent applications, often with both real-time and non-real-time requirements. Scenarios exist at two levels: first, combinations of independent applications, and second, different states of a single

  4. Two-stage thermal/nonthermal waste treatment process

    International Nuclear Information System (INIS)

    Rosocha, L.A.; Anderson, G.K.; Coogan, J.J.; Kang, M.; Tennant, R.A.; Wantuck, P.J.

    1993-01-01

    An innovative waste treatment technology is being developed in Los Alamos to address the destruction of hazardous organic wastes. The technology described in this report uses two stages: a packed bed reactor (PBR) in the first stage to volatilize and/or combust liquid organics and a silent discharge plasma (SDP) reactor to remove entrained hazardous compounds in the off-gas to even lower levels. We have constructed pre-pilot-scale PBR-SDP apparatus and tested the two stages separately and in combined modes. These tests are described in the report

  5. Design Flow Instantiation for Run-Time Reconfigurable Systems: A Case Study

    Directory of Open Access Journals (Sweden)

    Yang Qu

    2007-12-01

Reconfigurable systems are a promising alternative for delivering both flexibility and performance at the same time. New reconfigurable technologies and technology-dependent tools have been developed, but a complete overview of the whole design flow for run-time reconfigurable systems has been missing. In this work, we present a design flow instantiation for such systems using a real-life application. The design flow is roughly divided into two parts: system level and implementation. At the system level, our support for hardware resource estimation and performance evaluation is applied. At the implementation level, technology-dependent tools are used to realize the run-time reconfiguration. The design case is part of a WCDMA decoder on a commercially available reconfigurable platform. The results show that using run-time reconfiguration can save over 40% area compared to a functionally equivalent fixed system and achieve a 30-times speedup in processing time compared to a functionally equivalent pure software design.

  6. Removal of Natural Organic Matter (NOM) in Peat Water from Wetland Area by Coagulation-Ultrafiltration Hybrid Process with Two-Stage Coagulation Pretreatment

    Directory of Open Access Journals (Sweden)

    Mahmud Mahmud

    2016-06-01

The primary problem encountered in the application of membrane technology is membrane fouling. To date, coagulation-ultrafiltration hybrid processes for drinking water treatment have been studied using one-stage coagulation. The goal of this research was to investigate the effect of two-stage coagulation as a pretreatment on the performance of the coagulation-ultrafiltration hybrid process for NOM removal from peat water. The coagulation process, with either one-stage or two-stage coagulation, was very good at removing the charged hydrophilic fraction, i.e., more than 98%. The NOM fractions of the peat water, ordered from the most easily removed by the two-stage and one-stage coagulation processes, were charged hydrophilic > strongly hydrophobic > weakly hydrophobic > neutral hydrophilic. The two-stage coagulation process removed UV254 and color slightly better than one-stage coagulation at the optimum coagulant dose. The neutral hydrophilic fraction of the peat water NOM was the fraction most responsible for UF membrane fouling. The two-stage coagulation process was better at removing the neutral hydrophilic fraction, while its removal of the charged hydrophilic, strongly hydrophobic and weakly hydrophobic fractions was similar to one-stage coagulation. The hybrid process with two-stage coagulation pretreatment, besides increasing the removal efficiency of UV254 and color, also reduced the fouling rate of the ultrafiltration membrane.

  8. Two-stage categorization in brand extension evaluation: electrophysiological time course evidence.

    Directory of Open Access Journals (Sweden)

    Qingguo Ma

    A brand name can be considered a mental category. Similarity-based categorization theory has been used to explain how consumers judge a new product as a member of a known brand, a process called brand extension evaluation. This event-related potential study, conducted in two experiments, found a two-stage categorization process reflected by the P2 and N400 components in brand extension evaluation. In experiment 1, a prime-probe paradigm presented pairs consisting of a brand name and a product name in three conditions, i.e., in-category extension, similar-category extension, and out-of-category extension. Although the task was unrelated to brand extension evaluation, P2 distinguished out-of-category extensions from similar-category and in-category ones, and N400 distinguished similar-category extensions from in-category ones. In experiment 2, a prime-probe paradigm with a related task was used, in which product names included subcategory and major-category product names. The N400 elicited by subcategory products was significantly more negative than that elicited by major-category products, with no salient difference in P2. We speculate that P2 reflects the early low-level and similarity-based processing of the first stage, whereas N400 reflects the late analytic and category-based processing of the second stage.

  9. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis.

    Science.gov (United States)

    Jamshidy, Ladan; Mozaffari, Hamid Reza; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression making is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of the one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by a standard method for full crowns with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was vertically determined in the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean value of the marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and overall. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage technique.

  10. On response time and cycle time distributions in a two-stage cyclic queue

    NARCIS (Netherlands)

    Boxma, O.J.; Donk, P.

    1982-01-01

    We consider a two-stage closed cyclic queueing model. For the case of an exponential server at each queue we derive the joint distribution of the successive response times of a customer at both queues, using a reversibility argument. This joint distribution turns out to have a product form.

  11. Accuracy versus run time in an adiabatic quantum search

    International Nuclear Information System (INIS)

    Rezakhani, A. T.; Pimachev, A. K.; Lidar, D. A.

    2010-01-01

    Adiabatic quantum algorithms are characterized by their run time and accuracy. The relation between the two is essential for quantifying adiabatic algorithmic performance yet is often poorly understood. We study the dynamics of a continuous time, adiabatic quantum search algorithm and find rigorous results relating the accuracy and the run time. Proceeding with estimates, we show that under fairly general circumstances the adiabatic algorithmic error exhibits a behavior with two discernible regimes: The error decreases exponentially for short times and then decreases polynomially for longer times. We show that the well-known quadratic speedup over classical search is associated only with the exponential error regime. We illustrate the results through examples of evolution paths derived by minimization of the adiabatic error. We also discuss specific strategies for controlling the adiabatic error and run time.
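
    The quadratic speedup discussed in this abstract can be made concrete with the textbook adiabatic Grover setup (a standard sketch under common assumptions, not the authors' derivation; here N is the size of the search space and epsilon the target error):

    ```latex
    % Interpolating Hamiltonian between the initial projector and the
    % problem Hamiltonian marking the target state |m>:
    H(s) = (1-s)\left(I - |\psi_0\rangle\langle\psi_0|\right)
         + s\left(I - |m\rangle\langle m|\right), \qquad s \in [0,1],
    \qquad |\psi_0\rangle = \frac{1}{\sqrt{N}}\sum_{x=0}^{N-1}|x\rangle .

    % The spectral gap is minimal at s = 1/2 and scales as
    g_{\min} = \frac{1}{\sqrt{N}} .

    % With a locally optimized schedule s(t), keeping the final error
    % below \varepsilon requires a run time of only
    T = O\!\left(\frac{\sqrt{N}}{\varepsilon}\right),
    % a quadratic speedup over the classical O(N) search.
    ```

    A naive linear schedule s(t) = t/T does not achieve this scaling; the speedup relies on slowing the interpolation only near the minimum gap, which is consistent with the abstract's point that the speedup is tied to the exponential-error regime.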

  12. LHCb computing in Run II and its evolution towards Run III

    CERN Document Server

    Falabella, Antonio

    2016-01-01

    This contribution reports on the experience of the LHCb computing team during LHC Run 2 and its preparation for Run 3. Furthermore a brief introduction to LHCbDIRAC, i.e. the tool to interface to the experiment's distributed computing resources for its data processing and data management operations, is given. Run 2, which started in 2015, has already seen several changes in the data processing workflows of the experiment, most notably the ability to align and calibrate the detector between two different stages of the data processing in the high level trigger farm, eliminating the need for a second pass processing of the data offline. In addition a fraction of the data is immediately reconstructed to its final physics format in the high level trigger, and only this format is exported from the experiment site for physics analysis. This concept has been successfully tested and will continue to be used for the rest of Run 2. Furthermore the distributed data processing has been improved with new concepts and techn...

  13. Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors

    Science.gov (United States)

    Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.

    1994-10-01

    This paper describes a scheduling method for hard real-time Digital Signal Processing (DSP) applications implemented on a multiprocessor. Due to the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Because static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic preemption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic preemption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile time that all timing requirements will be satisfied at run time. We show that our method performs at least as well as any static scheduling method. It also reduces the total amount of dynamic preemptions compared with run-time methods like deadline monotonic scheduling.

  14. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Ladan Jamshidy

    2016-01-01

    Introduction. One of the main steps of impression making is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of the one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by a standard method for full crowns with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was vertically determined in the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean value of the marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and overall. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage technique.

  15. A two-stage metal valorisation process from electric arc furnace dust (EAFD)

    Directory of Open Access Journals (Sweden)

    H. Issa

    2016-04-01

    This paper demonstrates the possibility of separate zinc and lead recovery from coal composite pellets, composed of EAFD and other synergetic iron-bearing wastes and by-products (mill scale, pyrite cinder, magnetite concentrate), through a two-stage process. The results show that in the first, low-temperature stage, performed in an electro-resistant furnace, removal of lead is enabled due to the presence of chlorides in the system. In the second stage, performed at higher temperatures in a Direct Current (DC) plasma furnace, valorisation of zinc is conducted. Using this process, several final products were obtained, including a higher-purity zinc oxide which, by its properties, corresponds to washed Waelz oxide.

  16. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution

    International Nuclear Information System (INIS)

    Gupta, M.K.

    1993-10-01

    In-Tank Precipitation is a process for removing radioactivity from the salt stored in the Waste Management Tank Farm at Savannah River. The process involves precipitation of cesium and potassium with sodium tetraphenylborate (STPB) and adsorption of strontium and actinides on insoluble sodium titanate (ST) particles. The purpose of this report is to provide the technical bases for the evaluation of the Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs and Quiet Time Runs Program. The primary objective of the filter/stripper test runs and quiet time runs program is to ensure that the facility will fulfill its design basis function prior to the introduction of radioactive feed. Risks associated with the program are identified and include personnel and environmental hazards associated with handling the chemical simulants; the presence of flammable materials; and the potential for damage to the permanent ITP and Tank Farm facilities. The risks, potential accident scenarios, and safeguards either in place or planned are discussed at length.

  17. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, M.K.

    1993-10-01

    In-Tank Precipitation is a process for removing radioactivity from the salt stored in the Waste Management Tank Farm at Savannah River. The process involves precipitation of cesium and potassium with sodium tetraphenylborate (STPB) and adsorption of strontium and actinides on insoluble sodium titanate (ST) particles. The purpose of this report is to provide the technical bases for the evaluation of the Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs and Quiet Time Runs Program. The primary objective of the filter/stripper test runs and quiet time runs program is to ensure that the facility will fulfill its design basis function prior to the introduction of radioactive feed. Risks associated with the program are identified and include personnel and environmental hazards associated with handling the chemical simulants; the presence of flammable materials; and the potential for damage to the permanent ITP and Tank Farm facilities. The risks, potential accident scenarios, and safeguards either in place or planned are discussed at length.

  18. A preventive maintenance model with a two-level inspection policy based on a three-stage failure process

    International Nuclear Information System (INIS)

    Wang, Wenbin; Zhao, Fei; Peng, Rui

    2014-01-01

    Inspection is always an important preventive maintenance (PM) activity and can have different depths and cover all or part of plant systems. This paper introduces a two-level inspection policy model for a single-component plant system based on a three-stage failure process. Such a failure process divides the system's life into three stages: the good, minor defective and severe defective stages. The first level of inspection, the minor inspection, can only identify the minor defective stage with a certain probability, but can always reveal the severe defective stage. The major inspection can however identify both defective stages perfectly. Once the system is found to be in the minor defective stage, a shortened inspection interval is adopted. If however the system is found to be in the severe defective stage, we may delay the maintenance action if the time to the next planned PM window is less than a threshold level, but otherwise replace immediately. This corresponds to a well-adopted maintenance policy in practice, such as periodic inspections with planned PMs. A numerical example is presented to demonstrate the proposed model by comparing it with other models. - Highlights: • The system's deterioration goes through a three-stage process, namely, normal, minor defective and severe defective. • Two levels of inspections are proposed, e.g., minor and major inspections. • Once the minor defective stage is found, instead of taking a maintenance action, a shortened inspection interval is recommended. • When the severe defective stage is found, we delay the maintenance according to the threshold to the next PM. • The decision variables are the inspection intervals and the threshold to PM.
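
    The two-level inspection policy described above can be sketched in a few lines (a minimal Python illustration; the state names, the detection probability `p_minor` and the action labels are assumptions for this sketch, not the paper's notation):

    ```python
    import random

    def inspect(true_state, level, p_minor=0.7, rng=random):
        """Simulated outcome of an inspection in the three-stage failure model.
        A minor inspection detects the minor defective stage only with
        probability p_minor, but always reveals the severe stage; a major
        inspection identifies every stage perfectly."""
        if level == "major" or true_state != "minor":
            return true_state
        return "minor" if rng.random() < p_minor else "good"

    def maintenance_decision(observed, time_to_pm, delay_threshold):
        """Action after an inspection, following the policy in the abstract:
        shorten the inspection interval on a minor defect; on a severe defect,
        delay replacement to the planned PM window if it is closer than the
        threshold, otherwise replace immediately."""
        if observed == "severe":
            return "delay_to_pm" if time_to_pm < delay_threshold else "replace_now"
        if observed == "minor":
            return "shorten_interval"
        return "keep_interval"
    ```

    The decision variables of the model (inspection intervals and the PM delay threshold) would then be optimized over simulated or analytic life cycles; only the per-inspection logic is shown here.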

  19. Production of long chain alkyl esters from carbon dioxide and electricity by a two-stage bacterial process.

    Science.gov (United States)

    Lehtinen, Tapio; Efimova, Elena; Tremblay, Pier-Luc; Santala, Suvi; Zhang, Tian; Santala, Ville

    2017-11-01

    Microbial electrosynthesis (MES) is a promising technology for the reduction of carbon dioxide into value-added multicarbon molecules. In order to broaden the product profile of MES processes, we developed a two-stage process for microbial conversion of carbon dioxide and electricity into long chain alkyl esters. In the first stage, the carbon dioxide is reduced to organic compounds, mainly acetate, in a MES process by Sporomusa ovata. In the second stage, the liquid end-products of the MES process are converted to the final product by a second microorganism, Acinetobacter baylyi in an aerobic bioprocess. In this proof-of-principle study, we demonstrate for the first time the bacterial production of long alkyl esters (wax esters) from carbon dioxide and electricity as the sole sources of carbon and energy. The process holds potential for the efficient production of carbon-neutral chemicals or biofuels. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Enhancing the hydrolysis process of a two-stage biogas technology for the organic fraction of municipal solid waste

    DEFF Research Database (Denmark)

    Nasir, Zeeshan; Uellendahl, Hinrich

    2015-01-01

    The Danish company Solum A/S has developed a two-stage dry anaerobic digestion process labelled AIKAN® for the biological conversion of the organic fraction of municipal solid waste (OFMSW) into biogas and compost. In the AIKAN® process design the methanogenic (2nd) stage is separated from the hydrolytic (1st) stage, which enables pump-free feeding of the waste into the 1st stage (processing module), and eliminates the risk of blocking pumps and pipes by pumping only the percolate from the 1st stage into the 2nd stage (biogas reactor tank). The biogas yield of the AIKAN® two-stage process, however, has been shown to be only about 60% of the theoretical maximum. Previous monitoring of the hydrolytic and methanogenic activity in the two stages of the process revealed that the bottleneck of the whole degradation process is rather found in the hydrolytic first stage, while the methanogenic second…

  1. Similarity ratio analysis for early stage fault detection with optical emission spectrometer in plasma etching process.

    Directory of Open Access Journals (Sweden)

    Jie Yang

    A Similarity Ratio Analysis (SRA) method is proposed for early-stage Fault Detection (FD) in plasma etching processes using real-time Optical Emission Spectrometer (OES) data as input. The SRA method can help to realise a highly precise control system by detecting abnormal etch-rate faults in real time during an etching process. The method processes spectrum scans at successive time points and uses a windowing mechanism over the time series to alleviate problems with timing uncertainties due to process shift from one process run to another. An SRA library is first built to capture features of a healthy etching process. By comparing with the SRA library, a Similarity Ratio (SR) statistic is then calculated for each spectrum scan as the monitored process progresses. A fault detection mechanism, named 3-Warning-1-Alarm (3W1A), takes the SR values as inputs and triggers a system alarm when certain conditions are satisfied. This design reduces the chance of false alarms and provides a reliable fault-reporting service. The SRA method is demonstrated on a real semiconductor manufacturing dataset. The effectiveness of SRA-based fault detection is evaluated using a time-series SR test and also using a post-process SR test. The time-series SR provides an early-stage fault detection service, so less energy and material is wasted on faulty processing. The post-process SR provides a fault detection service with higher reliability than the time-series SR, but with fault testing conducted only after each process run completes.
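
    The windowed SR monitoring described above can be sketched as follows (a minimal Python illustration; the cosine-similarity choice of SR statistic, the threshold value, and the "alarm on the fourth consecutive warning" reading of 3W1A are assumptions of this sketch, not details taken from the paper):

    ```python
    import math

    def similarity_ratio(scan, reference):
        """Cosine similarity between a spectrum scan and a library reference
        (one plausible SR statistic; the paper's exact formula may differ)."""
        dot = sum(a * b for a, b in zip(scan, reference))
        norm = math.sqrt(sum(a * a for a in scan)) * \
               math.sqrt(sum(b * b for b in reference))
        return dot / norm

    class SRAMonitor:
        """Windowed SRA fault detector with a 3-Warning-1-Alarm (3W1A) rule.

        `library` maps a window index to a reference spectrum learned from
        healthy runs; scans are assigned to windows so that small run-to-run
        timing shifts still compare against the right reference.
        """

        def __init__(self, library, threshold=0.95, window=5):
            self.library = library
            self.threshold = threshold
            self.window = window
            self.warnings = 0

        def update(self, t, scan):
            """Process the scan at time point t; return (SR, alarm)."""
            sr = similarity_ratio(scan, self.library[t // self.window])
            # Count consecutive low-similarity scans as warnings ...
            self.warnings = self.warnings + 1 if sr < self.threshold else 0
            # ... and raise the alarm only on the fourth in a row (3W1A),
            # which suppresses false alarms from isolated noisy scans.
            return sr, self.warnings >= 4
    ```

    A healthy scan resets the warning counter, so a single noisy spectrum never triggers the alarm on its own.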

  2. The Performance and Development of the Inner Detector Trigger Algorithms at ATLAS for LHC Run 2

    CERN Document Server

    Sowden, Benjamin Charles; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly reimplemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is provided. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2 for the HLT. This new strategy will use a Fast Track Finder (FTF) algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution and significantly faster execution times than achieved during Run 1 but with no significant reduction in efficiency. The performance and timing of the algorithms for numerous physics signatures in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performan...

  3. Two-stage pervaporation process for effective in situ removal acetone-butanol-ethanol from fermentation broth.

    Science.gov (United States)

    Cai, Di; Hu, Song; Miao, Qi; Chen, Changjing; Chen, Huidong; Zhang, Changwei; Li, Ping; Qin, Peiyong; Tan, Tianwei

    2017-01-01

    Two-stage pervaporation for ABE recovery from fermentation broth was studied to reduce the energy cost. The permeate from the first-stage in situ pervaporation system was further used as the feedstock of the second-stage pervaporation unit using the same PDMS/PVDF membrane. A total of 782.5 g/L of ABE (304.56 g/L of acetone, 451.98 g/L of butanol and 25.97 g/L of ethanol) was achieved in the second-stage permeate, while the overall acetone, butanol and ethanol separation factors were 70.7-89.73, 70.48-84.74 and 9.05-13.58, respectively. Furthermore, the theoretical evaporation energy requirement for ABE separation in the consolidated fermentation, containing two-stage pervaporation and the following distillation process, was estimated at less than ∼13.2 MJ/kg butanol. The required evaporation energy was only 36.7% of the energy content of butanol. The novel two-stage pervaporation process was effective in increasing ABE production and reducing the energy consumption of the solvent separation system. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Comparison of two-stage thermophilic (68 °C/55 °C) anaerobic digestion with one-stage thermophilic (55 °C) digestion of cattle manure

    DEFF Research Database (Denmark)

    Nielsen, H.B.; Mladenovska, Zuzana; Westermann, Peter

    2004-01-01

    A two-stage 68 °C/55 °C anaerobic degradation process for treatment of cattle manure was studied. In batch experiments, an increase of the specific methane yield, ranging from 24% to 56%, was obtained when cattle manure and its fractions (fibers and liquid) were pretreated at 68 °C…, was compared with a conventional single-stage reactor running at 55 °C with a 15-day HRT. When an organic loading of 3 g volatile solids (VS) per liter per day was applied, the two-stage setup had a 6% to 8% higher specific methane yield and a 9% more effective VS removal than the conventional single-stage reactor. The 68 °C reactor generated 7% to 9% of the total amount of methane of the two-stage system and maintained a volatile fatty acids (VFA) concentration of 4.0 to 4.4 g acetate per liter. Population size and activity of aceticlastic methanogens, syntrophic bacteria, and hydrolytic…

  5. LHCb detector and trigger performance in Run II

    Science.gov (United States)

    Dordei, Francesca

    2017-12-01

    The LHCb detector is a forward spectrometer at the LHC, designed to perform high precision studies of b- and c- hadrons. In Run II of the LHC, a new scheme for the software trigger at LHCb allows splitting the triggering of events into two stages, giving room to perform the alignment and calibration in real time. In the novel detector alignment and calibration strategy for Run II, data collected at the start of the fill are processed in a few minutes and used to update the alignment, while the calibration constants are evaluated for each run. This allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The larger timing budget, available in the trigger, allows to perform the same track reconstruction online and offline. This enables LHCb to achieve the best reconstruction performance already in the trigger, and allows physics analyses to be performed directly on the data produced by the trigger reconstruction. The novel real-time processing strategy at LHCb is discussed from both the technical and operational point of view. The overall performance of the LHCb detector on the data of Run II is presented as well.

  6. Combining monitoring with run-time assertion checking

    NARCIS (Netherlands)

    Gouw, Stijn de

    2013-01-01

    We develop a new technique for Run-time Checking for two object-oriented languages: Java and the Abstract Behavioral Specification language ABS. In object-oriented languages, objects communicate by sending each other messages. Assuming encapsulation, the behavior of objects is completely

  7. Two-stage Lagrangian modeling of ignition processes in ignition quality tester and constant volume combustion chambers

    KAUST Repository

    Alfazazi, Adamu

    2016-08-10

    The ignition characteristics of isooctane and n-heptane in an ignition quality tester (IQT) were simulated using a two-stage Lagrangian (TSL) model, which is a zero-dimensional (0-D) reactor network method. The TSL model was also used to simulate the ignition delay of n-dodecane and n-heptane in a constant volume combustion chamber (CVCC), which is archived in the engine combustion network (ECN) library (http://www.ca.sandia.gov/ecn). A detailed chemical kinetic model for gasoline surrogates from the Lawrence Livermore National Laboratory (LLNL) was utilized for the simulation of n-heptane and isooctane. Additional simulations were performed using an optimized gasoline surrogate mechanism from RWTH Aachen University. Validations of the simulated data were also performed with experimental results from an IQT at KAUST. For simulation of n-dodecane in the CVCC, two n-dodecane kinetic models from the literature were utilized. The primary aim of this study is to test the ability of TSL to replicate ignition timings in the IQT and the CVCC. The agreement between the model and the experiment is acceptable except for isooctane in the IQT and n-heptane and n-dodecane in the CVCC. The ability of the simulations to replicate observable trends in ignition delay times with regard to changes in ambient temperature and pressure allows the model to provide insights into the reactions contributing towards ignition. Thus, the TSL model was further employed to investigate the physical and chemical processes responsible for controlling the overall ignition under various conditions. The effects of exothermicity, ambient pressure, and ambient oxygen concentration on first stage ignition were also studied. Increasing ambient pressure and oxygen concentration was found to shorten the overall ignition delay time, but does not affect the timing of the first stage ignition. Additionally, the temperature at the end of the first stage ignition was found to increase at higher ambient pressure.

  8. LHCb's Real-Time Alignment in Run II

    CERN Multimedia

    Batozskaya, Varvara

    2015-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run 2. Data collected at the start of the fill will be processed in a few minutes and used to update the alignment, while the calibration constants will be evaluated for each run. This procedure will improve the quality of the online alignment. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configur...

  9. Run charts revisited: a simulation study of run chart rules for detection of non-random variation in health care processes.

    Science.gov (United States)

    Anhøj, Jacob; Olesen, Anne Vingaard

    2014-01-01

    A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
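
    The shift and crossings rules lend themselves to a compact implementation. The sketch below (Python) uses a log2-based longest-run limit and a binomial 5th-percentile crossings limit, which are common formulations of these run chart rules, assumed here rather than taken verbatim from the paper:

    ```python
    import math
    from statistics import median

    def run_chart_signals(values):
        """Test a run chart for non-random variation (a sketch, not the
        authors' exact implementation).

        Shift rule: signal if any run of consecutive points on the same side
        of the median is longer than round(log2(n)) + 3.
        Crossings rule: signal if the sequence crosses the median fewer times
        than the 5th percentile of Binomial(n - 1, 0.5).
        Points falling exactly on the median are ignored, as is conventional.
        """
        med = median(values)
        sides = [1 if v > med else -1 for v in values if v != med]
        n = len(sides)

        # Longest run of consecutive points on one side of the median.
        longest, current = 1, 1
        for a, b in zip(sides, sides[1:]):
            current = current + 1 if a == b else 1
            longest = max(longest, current)

        # Number of times the sequence crosses the median.
        crossings = sum(1 for a, b in zip(sides, sides[1:]) if a != b)

        run_limit = round(math.log2(n)) + 3
        # Smallest crossings count not considered unusually low:
        # 5th percentile of Binomial(n - 1, 0.5), via the exact CDF.
        cdf, cross_limit = 0.0, 0
        for k in range(n):
            cdf += math.comb(n - 1, k) * 0.5 ** (n - 1)
            if cdf >= 0.05:
                cross_limit = k
                break

        return {
            "shift_signal": longest > run_limit,
            "crossings_signal": crossings < cross_limit,
        }
    ```

    A sustained shift in the process centre produces both a long run and few crossings, so the two rules typically fire together, while a random sequence triggers neither; this mirrors the paper's finding that these two rules carry the detection power.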

  10. Treatment of corn ethanol distillery wastewater using two-stage anaerobic digestion.

    Science.gov (United States)

    Ráduly, B; Gyenge, L; Szilveszter, Sz; Kedves, A; Crognale, S

    In this study the mesophilic two-stage anaerobic digestion (AD) of corn bioethanol distillery wastewater is investigated in laboratory-scale reactors. Two-stage AD technology separates the different sub-processes of the AD into two distinct reactors, enabling the use of optimal conditions for the different microbial consortia involved in the different process phases, and thus allowing for higher applicable organic loading rates (OLRs), shorter hydraulic retention times (HRTs) and better conversion rates of the organic matter, as well as higher methane content of the produced biogas. In our experiments the reactors were operated in semi-continuous phase-separated mode. A specific methane production of 1,092 mL/(L·d) was reached at an OLR of 6.5 g TCOD/(L·d) (TCOD: total chemical oxygen demand) and a total HRT of 21 days (5.7 days in the first-stage and 15.3 days in the second-stage reactor). Although the methane concentration in the second-stage reactor was very high (78.9%), the two-stage AD outperformed the reference single-stage AD (conducted at the same reactor loading rate and retention time) by only a small margin in terms of volumetric methane production rate. This makes it questionable whether the higher methane content of the biogas counterbalances the added complexity of two-stage digestion.

  11. Tracking the first two seconds: three stages of visual information processing?

    Science.gov (United States)

    Jacob, Jane; Breitmeyer, Bruno G; Treviño, Melissa

    2013-12-01

    We compared visual priming and comparison tasks to assess information processing of a stimulus during the first 2 s after its onset. In both tasks, a 13-ms prime was followed at varying SOAs by a 40-ms probe. In the priming task, observers identified the probe as rapidly and accurately as possible; in the comparison task, observers determined as rapidly and accurately as possible whether or not the probe and prime were identical. Priming effects attained a maximum at an SOA of 133 ms and then declined monotonically to zero by 700 ms, indicating reliance on relatively brief visuosensory (iconic) memory. In contrast, the comparison effects yielded a multiphasic function, showing a maximum at 0 ms followed by a minimum at 133 ms, followed in turn by a maximum at 240 ms and another minimum at 720 ms, and finally a third maximum at 1,200 ms before declining thereafter. The results indicate three stages of prime processing that we take to correspond to iconic visible persistence, iconic informational persistence, and visual working memory, with the first two used in the priming task and all three in the comparison task. These stages are related to stages presumed to underlie stimulus processing in other tasks, such as those giving rise to the attentional blink.

  12. Hydrogen production from cellulose in a two-stage process combining fermentation and electrohydrogenesis

    KAUST Repository

    Lalaurette, Elodie; Thammannagowda, Shivegowda; Mohagheghi, Ali; Maness, Pin-Ching; Logan, Bruce E.

    2009-01-01

    A two-stage dark-fermentation and electrohydrogenesis process was used to convert the recalcitrant lignocellulosic materials into hydrogen gas at high yields and rates. Fermentation using Clostridium thermocellum produced 1.67 mol H2/mol

  13. Enhanced nitrogen removal from electroplating tail wastewater through two-staged anoxic-oxic (A/O) process.

    Science.gov (United States)

    Yan, Xinmei; Zhu, Chunyan; Huang, Bin; Yan, Qun; Zhang, Guangsheng

    2018-01-01

    Consisting of anaerobic (ANA), anoxic-1 (AN1), aerobic-1 (AE1), anoxic-2 (AN2) and aerobic-2 (AE2) reactors and a sediment tank, the two-staged A/O process was applied for the advanced treatment of electroplating tail wastewater with high electrical conductivity and large amounts of ammonia nitrogen. It was found that the NH4+-N and COD removal efficiencies reached 97.11% and 83.00%, respectively. Besides, the short-term salinity shock of the control, AE1 and AE2 indicated that AE1 and AE2 had better resistance to high salinity when the concentration of NaCl ranged from 1 to 10 g/L. Meanwhile, high-throughput sequencing showed that the bacterial genera Nitrosomonas, Nitrospira and Thauera, which are capable of nitrogen removal, were enriched in the two-staged A/O process. Moreover, both salt-tolerant and halophilic bacteria were found in the combined process. Therefore, the microbial community within the two-staged A/O process could be acclimated to high electrical conductivity and adapted for electroplating tail wastewater treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Modelling of Two-Stage Methane Digestion With Pretreatment of Biomass

    Science.gov (United States)

    Dychko, A.; Remez, N.; Opolinskyi, I.; Kraychuk, S.; Ostapchuk, N.; Yevtieieva, L.

    2018-04-01

    Anaerobic digestion systems should be used for the processing of organic waste. Managing the anaerobic recycling of organic waste requires reliable prediction of biogas production. The developed mathematical model of the organic waste digestion process allows the rate of biogas output to be determined for the two-stage anaerobic digestion process, taking the first stage into account. Verification of Konto's model, based on the studied anaerobic processing of organic waste, is implemented. The dependencies of biogas output and of its rate on time are established and may be used to predict the anaerobic processing of organic waste.
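    As a minimal illustration of the kind of two-stage rate prediction described above, stage-one hydrolysis can feed a stage-two methanogenesis compartment. This is a sketch with hypothetical first-order rate constants, not the paper's calibrated Konto model:

```python
# Toy two-stage biogas model (hypothetical parameters, arbitrary units).
# Stage 1: first-order hydrolysis of substrate s1 into intermediates s2.
# Stage 2: first-order conversion of s2 into biogas. Explicit Euler scheme.

def simulate_biogas(s1_0=100.0, k1=0.15, k2=0.35, days=60, dt=0.01):
    """Return (times, cumulative_biogas, biogas_rate) lists."""
    s1, s2, gas, t = s1_0, 0.0, 0.0, 0.0
    times, gas_out, rates = [], [], []
    for _ in range(int(days / dt)):
        r_hyd = k1 * s1          # stage-1 hydrolysis rate
        r_met = k2 * s2          # stage-2 methanogenesis (biogas) rate
        s1 -= r_hyd * dt
        s2 += (r_hyd - r_met) * dt
        gas += r_met * dt        # biogas assumed proportional to r_met
        t += dt
        times.append(t); gas_out.append(gas); rates.append(r_met)
    return times, gas_out, rates

times, gas, rates = simulate_biogas()
```

    The biogas output rate rises while intermediates accumulate and then decays as the substrate is exhausted, which is the qualitative shape such models are fitted to.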

  15. Implementing Run-Time Evaluation of Distributed Timing Constraints in a Real-Time Environment

    DEFF Research Database (Denmark)

    Kristensen, C. H.; Drejer, N.

    1994-01-01

    In this paper we describe a solution to the problem of implementing run-time evaluation of timing constraints in distributed real-time environments.

  16. Preventing Run-Time Bugs at Compile-Time Using Advanced C++

    Energy Technology Data Exchange (ETDEWEB)

    Neswold, Richard [Fermilab

    2018-01-01

    When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library developed at Fermilab is used to illustrate these techniques.

  17. Comparison of two-stage thermophilic (68 degrees C/55 degrees C) anaerobic digestion with one-stage thermophilic (55 degrees C) digestion of cattle manure.

    Science.gov (United States)

    Nielsen, H B; Mladenovska, Z; Westermann, P; Ahring, B K

    2004-05-05

    A two-stage 68 degrees C/55 degrees C anaerobic degradation process for treatment of cattle manure was studied. In batch experiments, an increase of the specific methane yield, ranging from 24% to 56%, was obtained when cattle manure and its fractions (fibers and liquid) were pretreated at 68 degrees C for periods of 36, 108, and 168 h, and subsequently digested at 55 degrees C. In a lab-scale experiment, the performance of a two-stage reactor system, consisting of a digester operating at 68 degrees C with a hydraulic retention time (HRT) of 3 days, connected to a 55 degrees C reactor with a 12-day HRT, was compared with a conventional single-stage reactor running at 55 degrees C with a 15-day HRT. When an organic loading of 3 g volatile solids (VS) per liter per day was applied, the two-stage setup had a 6% to 8% higher specific methane yield and a 9% more effective VS removal than the conventional single-stage reactor. The 68 degrees C reactor generated 7% to 9% of the total amount of methane of the two-stage system and maintained a volatile fatty acids (VFA) concentration of 4.0 to 4.4 g acetate per liter. Population size and activity of aceticlastic methanogens, syntrophic bacteria, and hydrolytic/fermentative bacteria were significantly lower in the 68 degrees C reactor than in the 55 degrees C reactors. The density levels of methanogens utilizing H2/CO2 or formate were, however, in the same range for all reactors, although the degradation of these substrates was significantly lower in the 68 degrees C reactor than in the 55 degrees C reactors. Temporal temperature gradient electrophoresis (TTGE) profiles of the 68 degrees C reactor demonstrated a stable bacterial community along with a less divergent community of archaeal species. Copyright 2004 Wiley Periodicals, Inc.

  18. Two-stage anaerobic digestion of cheese whey

    Energy Technology Data Exchange (ETDEWEB)

    Lo, K V; Liao, P H

    1986-01-01

    A two-stage digestion of cheese whey was studied using two anaerobic rotating biological contact reactors. The second-stage reactor receiving partially treated effluent from the first-stage reactor could be operated at a hydraulic retention time of one day. The results indicated that two-stage digestion is a feasible alternative for treating whey. 6 references.

  19. Combining Compile-Time and Run-Time Parallelization

    Directory of Open Access Journals (Sweden)

    Sungdo Moon

    1999-01-01

    Full Text Available This paper demonstrates that significant improvements to automatic parallelization technology require that existing systems be extended in two ways: (1) they must combine high‐quality compile‐time analysis with low‐cost run‐time testing; and (2) they must take control flow into account during analysis. We support this claim with the results of an experiment that measures the safety of parallelization at run time for loops left unparallelized by the Stanford SUIF compiler’s automatic parallelization system. We present results of measurements on programs from two benchmark suites – SPECFP95 and NAS sample benchmarks – which identify inherently parallel loops in these programs that are missed by the compiler. We characterize the remaining parallelization opportunities, and find that most of the loops require run‐time testing, analysis of control flow, or some combination of the two. We present a new compile‐time analysis technique that can be used to parallelize most of these remaining loops. This technique is designed not only to improve the results of compile‐time parallelization, but also to produce low‐cost, directed run‐time tests that allow the system to defer binding of parallelization until run‐time when safety cannot be proven statically. We call this approach predicated array data‐flow analysis. We augment array data‐flow analysis, which the compiler uses to identify independent and privatizable arrays, by associating predicates with array data‐flow values. Predicated array data‐flow analysis allows the compiler to derive “optimistic” data‐flow values guarded by predicates; these predicates can be used to derive a run‐time test guaranteeing the safety of parallelization.
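    The deferred-binding idea can be sketched outside a compiler as well: a cheap predicate evaluated at run time decides between a parallel and a serial version of a loop whose safety cannot be proven statically. A toy sketch (not SUIF's predicated array data-flow analysis):

```python
# A loop like a[idx[i]] += b[i] carries no dependence iff all write targets
# idx[i] are distinct, which only the run-time values of idx reveal. The
# predicate below guards the parallel version; otherwise we fall back to
# the serial loop.
from concurrent.futures import ThreadPoolExecutor

def scatter_add(a, b, idx):
    """Run a[idx[i]] += b[i] for all i, in parallel only when provably safe."""
    def body(i):
        a[idx[i]] += b[i]
    safe = len(set(idx)) == len(idx)   # run-time safety predicate
    if safe:
        with ThreadPoolExecutor(max_workers=4) as pool:
            list(pool.map(body, range(len(idx))))
    else:
        for i in range(len(idx)):      # serial fallback
            body(i)
    return safe
```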

  20. Two stages of economic development

    OpenAIRE

    Gong, Gang

    2016-01-01

    This study suggests that the development process of a less-developed country can be divided into two stages, which demonstrate significantly different properties in areas such as structural endowments, production modes, income distribution, and the forces that drive economic growth. The two stages of economic development have been indicated in the growth theory of macroeconomics and in the various "turning point" theories in development economics, including Lewis's dual economy theory, Kuznet...

  1. Computational Modelling of Large Scale Phage Production Using a Two-Stage Batch Process

    Directory of Open Access Journals (Sweden)

    Konrad Krysiak-Baltyn

    2018-04-01

    Full Text Available Cost-effective and scalable methods for phage production are required to meet an increasing demand for phages as an alternative to antibiotics. Computational models can assist the optimization of such production processes. A model is developed here that can simulate the dynamics of phage population growth and production in a two-stage, self-cycling process. The model incorporates variable infection parameters as a function of bacterial growth rate and employs ordinary differential equations, allowing application to a setup with multiple reactors. The model provides simple cost estimates as a function of key operational parameters including substrate concentration, feed volume and cycling times. For the phage and bacteria pairing examined, costs and productivity varied by three orders of magnitude, with the lowest cost found to be most sensitive to the influent substrate concentration and the low-level setting in the first vessel. An example case study of phage production is also presented, showing how parameter values affect the production costs and estimating production times. The approach presented is flexible and can be used to optimize phage production at laboratory or factory scale by minimizing costs or maximizing productivity.
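    A stripped-down sketch of the kind of dynamics such a model integrates (hypothetical parameters, one vessel, and no latency period or substrate limitation, unlike the full model):

```python
# Toy phage-amplification batch: X = host cells, P = free phage.
# Hosts grow at rate mu, are infected at rate k*X*P, and each infection
# eventually releases `burst` new phage. Explicit Euler integration.
def run_batch(x0=1e6, p0=1e4, mu=0.8, k=1e-9, burst=100, hours=10, dt=0.001):
    x, p = x0, p0
    for _ in range(int(hours / dt)):
        infections = k * x * p                 # adsorption/infection rate
        dx = mu * x - infections               # growth minus infection losses
        dp = burst * infections - infections   # burst release minus adsorbed
        x += dx * dt
        p += dp * dt
        x = max(x, 0.0)                        # cells cannot go negative
    return x, p

x_end, p_end = run_batch()
```

    With these parameters the host population grows until the phage titer overtakes it, after which the infection wave collapses the culture and the phage count plateaus; varying the initial conditions is the batch-level analogue of the operational parameters discussed above.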

  2. EFFECT OF HARDENING TIME ON DEFORMATION-STRENGTH INDICATORS OF CONCRETE FOR INJECTION WITH A TWO-STAGE EXPANSION DURING HARDENING IN WATER

    Directory of Open Access Journals (Sweden)

    Tatjana N. Zhilnikova

    2017-01-01

    Full Text Available Abstract. Objectives: Concretes for injection with a two-stage expansion are a kind of self-stressing concrete obtained with the use of self-stressing cement. The aim of the work is to study the influence of the duration of aging on the porosity, strength and self-stress of concrete hardening in water, depending on the expansion value at the first stage. At the first stage, the compacted concrete mixture is expanded to ensure complete filling of the formwork space. At the second stage, the hardening concrete expands due to the formation of an increased amount of ettringite. This process is prolonged in time, with the amount of self-stress and strength dependent on the conditions of hardening. Methods: Experimental evaluation of self-stress, strength and porosity of concretes hardened continuously in water, under air-moist and under air-dry conditions after different expansion at the first stage. The self-stress of cement stone is the result of the superposition of two processes: the hardening of the structure due to hydration of silicates, and its expansion as a result of hydration of calcium aluminates with the subsequent formation of ettringite. The magnitude of self-stress is determined by the ratio of these two processes. The self-stress of the cement stone changes in a manner similar to the change in its expansion. The stabilisation of expansion is accompanied by stabilisation of self-stress of the cement stone. Results: The relationship of self-stress, strength and porosity of concrete for injection with a two-stage expansion to the duration and humidity conditions of hardening, taking into account the conditions of deformation limitation at the first stage, is revealed. Conclusion: During prolonged hardening in an aqueous medium, self-stresses are reduced by up to 25% in the absence of expansion at the first stage and by up to 20% with an increase in volume of up to 5% at the first stage. The increase in compressive strength is up to 28% relative to

  3. Development of an innovative two-stage process, a combination of acidogenic hydrogenesis and methanogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Han, S.K.; Shin, H.S. [Korea Advanced Inst. of Science and Technology, Daejeon (Korea, Republic of). Dept. of Civil and Enviromental Engineering

    2004-07-01

    Hydrogen produced from waste by means of fermentative bacteria is an attractive way to produce this fuel as an alternative to fossil fuels. It also helps treat the associated waste. The authors have undertaken to optimize acidogenic hydrogenesis and methanogenesis. Building on this, they then developed a two-stage process that produces both hydrogen and methane. Acidogenic hydrogenesis of food waste was investigated using a leaching bed reactor. The dilution rate was varied in order to maximize efficiency, which reached 70.8 per cent. Further to this, an upflow anaerobic sludge blanket reactor converted the wastewater from acidogenic hydrogenesis into methane. Chemical oxygen demand (COD) removal rates exceeded 96 per cent up to a loading of 12.9 g COD/l/d. After this, the authors devised a new two-stage process based on a combination of acidogenic hydrogenesis and methanogenesis. The authors report on results for this process using food waste as feedstock. 5 refs., 5 figs.

  4. Influence of dispatching rules on average production lead time for multi-stage production systems.

    Science.gov (United States)

    Hübl, Alexander; Jodlbauer, Herbert; Altendorfer, Klaus

    2013-08-01

    In this paper the influence of different dispatching rules on the average production lead time is investigated. Two theorems based on the covariance between processing time and production lead time are formulated and proved theoretically. Theorem 1 analytically links the average production lead time to the "processing time weighted production lead time" for multi-stage production systems. The influence of different dispatching rules on average lead time, which is well known from simulation and empirical studies, is proved theoretically in Theorem 2 for a single-stage production system. A simulation study is conducted to gain more insight into the influence of dispatching rules on average production lead time in a multi-stage production system. We find that the "processing time weighted average production lead time" for a multi-stage production system is not invariant of the applied dispatching rule, whereas it can be used as a dispatching-rule-independent indicator for single-stage production systems.
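    The single-stage invariance is easy to check numerically: on one machine, the processing-time-weighted average lead time is unchanged by the dispatching rule even though the plain average shifts. A toy sketch with three jobs available at time zero:

```python
# Single machine, all jobs released at time 0; a job's lead time equals its
# completion time. Compare FIFO (arrival order) with SPT (shortest
# processing time first).
def lead_times(proc_times, rule):
    order = sorted(range(len(proc_times)),
                   key=(lambda i: proc_times[i]) if rule == "SPT"
                       else (lambda i: i))
    t, lead = 0.0, [0.0] * len(proc_times)
    for i in order:
        t += proc_times[i]   # machine finishes job i at time t
        lead[i] = t
    return lead

def weighted_avg_lead(proc, lead):
    # processing-time weighted average production lead time
    return sum(p * l for p, l in zip(proc, lead)) / sum(proc)

proc = [5.0, 1.0, 3.0]
fifo = lead_times(proc, "FIFO")
spt = lead_times(proc, "SPT")
```

    FIFO and SPT give different average lead times (20/3 versus 14/3 here), yet the weighted average is 58/9 under both rules, matching the Theorem 2 invariance for single-stage systems.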

  5. SASD and the CERN/SPS run-time coordinator

    International Nuclear Information System (INIS)

    Morpurgo, G.

    1990-01-01

    Structured Analysis and Structured Design (SASD) provides us with a handy way of specifying the flow of data between the different modules (functional units) of a system. But the formalism loses its immediacy when the control flow has to be taken into account as well. Moreover, due to the lack of appropriate software infrastructure, very often the actual implementation of the system does not reflect the module decoupling and independence so much emphasized at the design stage. In this paper the run-time coordinator, a complete software infrastructure to support a real decoupling of the functional units, is described. Special attention is given to the complementarity of our approach and the SASD methodology. (orig.)

  6. Minimizing makespan in a two-stage flow shop with parallel batch-processing machines and re-entrant jobs

    Science.gov (United States)

    Huang, J. D.; Liu, J. J.; Chen, Q. X.; Mao, N.

    2017-06-01

    Against a background of heat-treatment operations in mould manufacturing, a two-stage flow-shop scheduling problem is described for minimizing makespan with parallel batch-processing machines and re-entrant jobs. The weights and release dates of jobs are non-identical, but job processing times are equal. A mixed-integer linear programming model is developed and tested on small-scale scenarios. Given that the problem is NP-hard, three heuristic construction methods with polynomial complexity are proposed. The worst case of the new constructive heuristic is analysed in detail. A method for computing lower bounds is proposed to test heuristic performance. Heuristic efficiency is tested on sets of scenarios. Compared with the two improved heuristics, the performance of the new constructive heuristic is superior.
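    The paper's heuristics are not reproduced here, but a useful point of reference is Johnson's rule, which solves the plain two-machine flow shop (no batching, no re-entrant jobs) to optimality and is a common baseline for such problems:

```python
# Johnson's rule for the two-machine flow shop: jobs with p1 <= p2 go first
# in increasing order of p1; the rest go last in decreasing order of p2.
def johnson_sequence(jobs):
    """jobs: list of (p1, p2) processing times; returns a job order."""
    front = sorted((j for j in range(len(jobs)) if jobs[j][0] <= jobs[j][1]),
                   key=lambda j: jobs[j][0])
    back = sorted((j for j in range(len(jobs)) if jobs[j][0] > jobs[j][1]),
                  key=lambda j: jobs[j][1], reverse=True)
    return front + back

def makespan(jobs, order):
    t1 = t2 = 0.0
    for j in order:
        t1 += jobs[j][0]                 # machine 1 finishes job j
        t2 = max(t2, t1) + jobs[j][1]    # machine 2 waits for both to be ready
    return t2

jobs = [(3, 2), (1, 4), (2, 5)]
order = johnson_sequence(jobs)
```

    For the three sample jobs the rule sequences the short first-stage jobs first, giving a makespan of 12 against 14 for arrival order; 12 also meets the simple lower bound of the earliest machine-2 start plus all machine-2 work.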

  7. Two-step two-stage fission gas release model

    International Nuclear Information System (INIS)

    Kim, Yong-soo; Lee, Chan-bock

    2006-01-01

    Based on a recent theoretical model, a two-step two-stage model is developed that incorporates two-stage diffusion processes, grain-lattice and grain-boundary diffusion, coupled with a two-step burn-up factor for the low and high burn-up regimes. The FRAPCON-3 code and its in-pile data sets have been used for the benchmarking and validation of this model. Results reveal that its predictions are in better agreement with the experimental measurements than those of any model contained in the FRAPCON-3 code, such as ANS 5.4, modified ANS 5.4, and the Forsberg-Massih model, over the whole burn-up range up to 70,000 MWd/MTU. (author)

  8. Effect of two-stage sintering process on microstructure and mechanical properties of ODS tungsten heavy alloy

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyong H. [Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology, 373-1 Kusong-dong, Yusong-gu, Taejon 305-701 (Korea, Republic of); Cha, Seung I. [International Center for Young Scientists, National Institute for Materials Science 1-1, Namiki, Tsukuba 305-0044 (Japan); Ryu, Ho J. [DUPIC, Korea Atomic Energy Research Institute, 150 Deokjin-dong, Yusong-gu, Taejon 305-353 (Korea, Republic of); Hong, Soon H. [Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology, 373-1 Kusong-dong, Yusong-gu, Taejon 305-701 (Korea, Republic of)], E-mail: shhong@kaist.ac.kr

    2007-06-15

    Oxide dispersion strengthened (ODS) tungsten heavy alloys have been considered promising candidates for advanced kinetic energy penetrators due to their characteristic fracture mode compared to conventional tungsten heavy alloys. In order to obtain high relative density, an ODS tungsten heavy alloy needs to be sintered at a higher temperature for a longer time, which, however, induces growth of the tungsten grains. It is therefore very difficult to obtain a controlled microstructure of ODS tungsten heavy alloy having fine tungsten grains with full densification. In this study, a two-stage sintering process, consisting of primary solid-state sintering followed by secondary liquid-phase sintering, was introduced for ODS tungsten heavy alloys. The mechanically alloyed 94W-4.56Ni-1.14Fe-0.3Y{sub 2}O{sub 3} powders were solid-state sintered at 1300-1450 deg. C for 1 h in a hydrogen atmosphere, followed by liquid-phase sintering at 1465-1485 deg. C for 0-60 min. The microstructure of the ODS tungsten heavy alloys showed a high relative density, above 97%, with contiguous tungsten grains after primary solid-state sintering. The microstructure of the solid-state sintered ODS tungsten heavy alloy changed into spherical tungsten grains embedded in a W-Ni-Fe matrix during secondary liquid-phase sintering. The two-stage sintered ODS tungsten heavy alloy from mechanically alloyed powders showed a finer microstructure and higher mechanical properties than the conventional liquid-phase sintered alloy. The mechanical properties of ODS tungsten heavy alloys depend on microstructural parameters such as tungsten grain size, matrix volume fraction and tungsten/tungsten contiguity, which can be controlled through the two-stage sintering process.

  9. Process-independent strong running coupling

    International Nuclear Information System (INIS)

    Binosi, Daniele; Mezrag, Cedric; Papavassiliou, Joannis; Roberts, Craig D.; Rodriguez-Quintero, Jose

    2017-01-01

    Here, we unify two widely different approaches to understanding the infrared behavior of quantum chromodynamics (QCD), one essentially phenomenological, based on data, and the other computational, realized via quantum field equations in the continuum theory. Using the latter, we explain and calculate a process-independent running-coupling for QCD, a new type of effective charge that is an analogue of the Gell-Mann–Low effective coupling in quantum electrodynamics. The result is almost identical to the process-dependent effective charge defined via the Bjorken sum rule, which provides one of the most basic constraints on our knowledge of nucleon spin structure. As a result, this reveals the Bjorken sum to be a near direct means by which to gain empirical insight into QCD's Gell-Mann–Low effective charge.

  10. Neural mechanisms of human perceptual learning: electrophysiological evidence for a two-stage process.

    Science.gov (United States)

    Hamamé, Carlos M; Cosmelli, Diego; Henriquez, Rodrigo; Aboitiz, Francisco

    2011-04-26

    Humans and other animals change the way they perceive the world due to experience. This process has been labeled as perceptual learning, and implies that adult nervous systems can adaptively modify the way in which they process sensory stimulation. However, the mechanisms by which the brain modifies this capacity have not been sufficiently analyzed. We studied the neural mechanisms of human perceptual learning by combining electroencephalographic (EEG) recordings of brain activity and the assessment of psychophysical performance during training in a visual search task. All participants improved their perceptual performance as reflected by an increase in sensitivity (d') and a decrease in reaction time. The EEG signal was acquired throughout the entire experiment revealing amplitude increments, specific and unspecific to the trained stimulus, in event-related potential (ERP) components N2pc and P3 respectively. P3 unspecific modification can be related to context or task-based learning, while N2pc may be reflecting a more specific attentional-related boosting of target detection. Moreover, bell and U-shaped profiles of oscillatory brain activity in gamma (30-60 Hz) and alpha (8-14 Hz) frequency bands may suggest the existence of two phases for learning acquisition, which can be understood as distinctive optimization mechanisms in stimulus processing. We conclude that there are reorganizations in several neural processes that contribute differently to perceptual learning in a visual search task. We propose an integrative model of neural activity reorganization, whereby perceptual learning takes place as a two-stage phenomenon including perceptual, attentional and contextual processes.

  11. 16 CFR 803.10 - Running of time.

    Science.gov (United States)

    2010-01-01

    16 CFR 803.10 (2010) - Running of time. Commercial Practices; FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENTS AND INTERPRETATIONS UNDER THE HART-SCOTT-RODINO ANTITRUST IMPROVEMENTS ACT OF 1976; TRANSMITTAL RULES § 803.10 Running of time. (a...

  12. Production of acids and alcohols from syngas in a two-stage continuous fermentation process.

    Science.gov (United States)

    Abubackar, Haris Nalakath; Veiga, María C; Kennes, Christian

    2018-04-01

    A two-stage continuous system with two stirred tank reactors in series was utilized to perform syngas fermentation using Clostridium carboxidivorans. The first bioreactor (bioreactor 1) was maintained at pH 6 to promote acidogenesis and the second one (bioreactor 2) at pH 5 to stimulate solventogenesis. Both reactors were operated in continuous mode by feeding syngas (CO:CO2:H2:N2; 30:10:20:40; vol%) at a constant flow rate while supplying a nutrient medium at different flow rates of 8.1, 15, 22 and 30 ml/h. A cell recycling unit was added to bioreactor 2 in order to recycle the cells back to the reactor, maintaining the OD600 around 1 in bioreactor 2 throughout the experimental run. When comparing the flow rates, the best results in terms of solvent production were obtained with a flow rate of 22 ml/h, reaching the highest average outlet concentration for alcohols (1.51 g/L) and the most favorable alcohol/acid ratio of 0.32. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Comparative assessment of single-stage and two-stage anaerobic digestion for the treatment of thin stillage.

    Science.gov (United States)

    Nasr, Noha; Elbeshbishy, Elsayed; Hafez, Hisham; Nakhla, George; El Naggar, M Hesham

    2012-05-01

    A comparative evaluation of single-stage and two-stage anaerobic digestion processes for biomethane and biohydrogen production using thin stillage was performed to assess the impact of separating the acidogenic and methanogenic stages on anaerobic digestion. Thin stillage, the main by-product from ethanol production, was characterized by a high total chemical oxygen demand (TCOD) of 122 g/L and total volatile fatty acids (TVFAs) of 12 g/L. A maximum methane yield of 0.33 L CH4/g COD added (STP) was achieved in the two-stage process, while the single-stage process achieved a maximum yield of only 0.26 L CH4/g COD added (STP). Separating the acidification stage increased the TVFAs-to-TCOD ratio from 10% in the raw thin stillage to 54%, due to the conversion of carbohydrates into hydrogen and VFAs. Comparison of the two processes based on energy outcome revealed that an increase of 18.5% in the total energy yield was achieved using two-stage anaerobic digestion. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. SUPRA: open-source software-defined ultrasound processing for real-time applications : A 2D and 3D pipeline from beamforming to B-mode.

    Science.gov (United States)

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult; second, most pipelines are implemented in special hardware, resulting in limited flexibility of the implemented processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and with regard to its run time. The pipeline shows image quality comparable to a clinical system and, as backed by point spread function measurements, comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even executed without dedicated ultrasound hardware.
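    The back end of such a software pipeline is easy to sketch: after beamforming, a B-mode line is formed by envelope detection and log compression. A minimal numpy-only illustration on a synthetic RF line (SUPRA's actual processing chain is considerably richer):

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (equivalent to a Hilbert transform);
    assumes an even-length input."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def rf_to_bmode_line(rf, dynamic_range_db=60.0):
    env = np.abs(analytic_signal(rf))             # envelope detection
    env /= env.max()                              # 0 dB at the brightest sample
    db = 20.0 * np.log10(np.maximum(env, 1e-12))  # log compression
    return np.clip((db + dynamic_range_db) / dynamic_range_db, 0.0, 1.0)

# Synthetic RF line: a 5 MHz echo centered at 10 us, sampled at 40 MHz.
fs, n = 40e6, 2048
t = np.arange(n) / fs
rf = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 10e-6) ** 2) / (0.5e-6) ** 2)
line = rf_to_bmode_line(rf)
```

    The brightest output sample falls at the echo center (sample 400 for a 10 us delay at 40 MHz), which is the behavior a B-mode stage is expected to preserve.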

  15. The CDF Run II disk inventory manager

    International Nuclear Information System (INIS)

    Hubbard, Paul; Lammel, Stephan

    2001-01-01

    The Collider Detector at Fermilab (CDF) experiment records and analyses proton-antiproton interactions at a center-of-mass energy of 2 TeV. Run II of the Fermilab Tevatron started in April of this year. The duration of the run is expected to be over two years. One of the main data handling strategies of CDF for Run II is to hide all tape access from the user and to facilitate sharing of data and thus disk space. A disk inventory manager was designed and developed over the past years to keep track of the data on disk, to coordinate user access to the data, and to stage data back from tape to disk as needed. The CDF Run II disk inventory manager consists of a server process, user and administrator command-line interfaces, and a library with the routines of the client API. Data are managed in filesets, which are groups of one or more files. The system keeps track of user access to the filesets and attempts to keep frequently accessed data on disk. Data that are not on disk are automatically staged back from tape as needed. For CDF the main staging method is based on the mt-tools package, as tapes are written according to the ANSI standard.
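    The core bookkeeping such a manager performs can be sketched as a least-recently-used cache of filesets (a toy illustration, not CDF's actual implementation or eviction policy):

```python
# Toy disk-inventory sketch: track filesets on disk, evict the
# least-recently-used ones to make room, and "stage back" from tape on a miss.
from collections import OrderedDict

class DiskInventory:
    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.on_disk = OrderedDict()   # fileset -> size, kept in LRU order
        self.staged = []               # filesets fetched back from tape

    def access(self, fileset, size_gb):
        if fileset in self.on_disk:
            self.on_disk.move_to_end(fileset)   # hit: mark recently used
            return
        self.staged.append(fileset)             # miss: stage from tape
        while self.on_disk and \
                sum(self.on_disk.values()) + size_gb > self.capacity:
            self.on_disk.popitem(last=False)    # evict LRU fileset
        self.on_disk[fileset] = size_gb
```

    Accessing a fileset refreshes its position, so frequently used data stays on disk while cold filesets are the first to be evicted.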

  16. Experimental Results of the First Two Stages of an Advanced Transonic Core Compressor Under Isolated and Multi-Stage Conditions

    Science.gov (United States)

    Prahst, Patricia S.; Kulkarni, Sameer; Sohn, Ki H.

    2015-01-01

    NASA's Environmentally Responsible Aviation (ERA) Program calls for investigation of the technology barriers associated with improved fuel efficiency of large gas turbine engines. Under ERA, the task for a High Pressure Ratio Core Technology program calls for a higher overall pressure ratio of 60 to 70. This means that the HPC would have to almost double in pressure ratio while keeping its high level of efficiency. The challenge is how to match the corrected mass flow rate of the front two supersonic, high-reaction and high-corrected-tip-speed stages with a total pressure ratio of 3.5. NASA and GE teamed to address this challenge by using the initial geometry of an advanced GE compressor design to meet the requirements of the first two stages of the very high pressure ratio core compressor. The rig was configured to run as a two-stage machine; the strut and IGV, Rotor 1 and Stator 1 were first run as independent tests, which were then followed by adding the second stage. The goal is to fully understand the stage performances under isolated and multi-stage conditions, explain any differences, and provide a detailed aerodynamic data set for CFD validation. Full use was made of steady and unsteady measurement methods to isolate fluid-dynamic loss source mechanisms due to interaction and endwalls. The paper will present the description of the compressor test article, its predicted performance and operability, and the experimental results for both the single-stage and two-stage configurations. We focus the detailed measurements on 97% and 100% of design speed at three vane setting angles.

  17. Feasibility of a two-stage biological aerated filter for depth processing of electroplating-wastewater.

    Science.gov (United States)

    Liu, Bo; Yan, Dongdong; Wang, Qi; Li, Song; Yang, Shaogui; Wu, Wenfei

    2009-09-01

    A "two-stage biological aerated filter" (T-SBAF) consisting of two columns in series was developed to treat electroplating wastewater. Due to the low BOD/CODcr values of electroplating wastewater, a "twice start-up" was employed to reduce the time for adaptation of the microorganisms, a process that takes about 20 days. Under steady-state conditions, the removal of CODcr and NH4+-N increased first and then decreased as the hydraulic loading increased from 0.75 to 1.5 m³ m⁻² h⁻¹. The air/water ratio had the same influence on the removal of CODcr and NH4+-N when increasing from 3:1 to 6:1. When the hydraulic loading and air/water ratio were 1.20 m³ m⁻² h⁻¹ and 4:1, the optimal removals of CODcr, NH4+-N and total nitrogen (T-N) were 90.13%, 92.51% and 55.46%, respectively. The effluent steadily reached the wastewater reuse standard. Compared to a traditional BAF, the period before backwashing of the T-SBAF could be extended to 10 days, and the recovery time was considerably shortened.

  18. Comparisons of single-stage and two-stage approaches to genomic selection.

    Science.gov (United States)

    Schulz-Streeck, Torben; Ogutu, Joseph O; Piepho, Hans-Peter

    2013-01-01

    Genomic selection (GS) is a method for predicting breeding values of plants or animals using many molecular markers that is commonly implemented in two stages. In plant breeding the first stage usually involves computation of adjusted means for genotypes which are then used to predict genomic breeding values in the second stage. We compared two classical stage-wise approaches, which either ignore or approximate correlations among the means by a diagonal matrix, and a new method, to a single-stage analysis for GS using ridge regression best linear unbiased prediction (RR-BLUP). The new stage-wise method rotates (orthogonalizes) the adjusted means from the first stage before submitting them to the second stage. This makes the errors approximately independently and identically normally distributed, which is a prerequisite for many procedures that are potentially useful for GS such as machine learning methods (e.g. boosting) and regularized regression methods (e.g. lasso). This is illustrated in this paper using componentwise boosting. The componentwise boosting method minimizes squared error loss using least squares and iteratively and automatically selects markers that are most predictive of genomic breeding values. Results are compared with those of RR-BLUP using fivefold cross-validation. The new stage-wise approach with rotated means was slightly more similar to the single-stage analysis than the classical two-stage approaches based on non-rotated means for two unbalanced datasets. This suggests that rotation is a worthwhile pre-processing step in GS for the two-stage approaches for unbalanced datasets. Moreover, the predictive accuracy of stage-wise RR-BLUP was higher (5.0-6.1%) than that of componentwise boosting.
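    The rotation step can be sketched as whitening the stage-one adjusted means with the inverse Cholesky factor of their error covariance before the second-stage ridge fit (illustrative simulation with a diagonal covariance; this is not the paper's code):

```python
# Rotate (whiten) stage-one adjusted means so their errors are approximately
# i.i.d., then fit an RR-BLUP-style ridge regression on the rotated data.
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 8
Z = rng.standard_normal((n, m))           # marker matrix (hypothetical)
beta = rng.standard_normal(m)             # true marker effects
V = np.diag(rng.uniform(0.5, 2.0, n))     # stage-one error covariance
y = Z @ beta + rng.multivariate_normal(np.zeros(n), V)  # adjusted means

L = np.linalg.cholesky(V)
Li = np.linalg.inv(L)
y_rot, Z_rot = Li @ y, Li @ Z             # rotated data: errors ~ i.i.d.

lam = 1.0                                 # ridge (shrinkage) parameter
beta_hat = np.linalg.solve(Z_rot.T @ Z_rot + lam * np.eye(m), Z_rot.T @ y_rot)
```

    Because the rotated errors are approximately independent and identically distributed, the second stage could equally well be componentwise boosting, the lasso, or any other method that assumes i.i.d. errors.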

  19. Hourly cooling load forecasting using time-indexed ARX models with two-stage weighted least squares regression

    International Nuclear Information System (INIS)

    Guo, Yin; Nazarian, Ehsan; Ko, Jeonghan; Rajurkar, Kamlakar

    2014-01-01

    Highlights: • Developed hourly-indexed ARX models for robust cooling-load forecasting. • Proposed a two-stage weighted least-squares regression approach. • Considered the effect of outliers as well as trend of cooling load and weather patterns. • Included higher order terms and day type patterns in the forecasting models. • Demonstrated better accuracy compared with some ARX and ANN models. - Abstract: This paper presents a robust hourly cooling-load forecasting method based on time-indexed autoregressive with exogenous inputs (ARX) models, in which the coefficients are estimated through a two-stage weighted least squares regression. The prediction method includes a combination of two separate time-indexed ARX models to improve prediction accuracy of the cooling load over different forecasting periods. The two-stage weighted least-squares regression approach in this study is robust to outliers and suitable for fast and adaptive coefficient estimation. The proposed method is tested on a large-scale central cooling system in an academic institution. The numerical case studies show the proposed prediction method performs better than some ANN and ARX forecasting models for the given test data set

  20. Modeling and Implementing Two-Stage AdaBoost for Real-Time Vehicle License Plate Detection

    Directory of Open Access Journals (Sweden)

    Moon Kyou Song

    2014-01-01

    Full Text Available License plate (LP) detection is the most critical part of an automatic LP recognition system. Over the years, different methods, techniques, and algorithms have been developed for LP detection (LPD) systems. This paper proposes the automatic detection of car LPs via image processing techniques based on classifiers and machine learning algorithms. We propose a real-time and robust method for LPD systems using the two-stage adaptive boosting (AdaBoost) algorithm combined with different image preprocessing techniques. Haar-like features are used to compute and select features from LP images. The AdaBoost algorithm is used to classify parts of an image within a search window, by means of a trained strong classifier, as either LP or non-LP. Adaptive thresholding is used as the image preprocessing step for images of insufficient quality for LPD. The method is faster and more accurate than most existing LPD methods. Experimental results demonstrate an average LPD rate of 98.38% and a computational time of approximately 49 ms.
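The core AdaBoost loop, reweighting training examples so each new weak learner focuses on the previous mistakes, can be sketched with decision stumps on toy feature vectors. This stands in for the paper's Haar-feature classifier; the data and all names below are illustrative only.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=20):
    """Discrete AdaBoost with decision stumps (illustrative sketch,
    not the paper's trained Haar-feature cascade). y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                 # example weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                  # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # stump vote weight
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)          # upweight mistakes
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def predict(stumps, X):
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1)
                for a, j, t, s in stumps)
    return np.where(score >= 0, 1, -1)

# toy "LP vs non-LP" feature vectors (two separated Gaussian clusters)
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(1.0, 0.5, (40, 3)),
               rng.normal(-1.0, 0.5, (40, 3))])
y = np.array([1] * 40 + [-1] * 40)
clf = train_adaboost(X, y, n_rounds=10)
acc = (predict(clf, X) == y).mean()
```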

  1. Time complexity and linear-time approximation of the ancient two-machine flow shop

    NARCIS (Netherlands)

    Rote, G.; Woeginger, G.J.

    1998-01-01

    We consider the scheduling problems F2||Cmax and F2|no-wait|Cmax, i.e. makespan minimization in a two-machine flow shop, with and without no-wait in process. For both problems, solution algorithms based on sorting with O(n log n) running time are known, where n denotes the number of jobs [1, 2]. We
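The classical O(n log n) sorting-based algorithm for F2||Cmax is Johnson's rule: jobs with first-machine time no larger than second-machine time go first in ascending order of the first-machine time, and the remaining jobs follow in descending order of the second-machine time. A minimal sketch:

```python
def johnson_two_machine(jobs):
    """Johnson's rule for F2||Cmax. `jobs` is a list of (p1, p2)
    processing times; returns an optimal job sequence in O(n log n)."""
    front = sorted((i for i, (a, b) in enumerate(jobs) if a <= b),
                   key=lambda i: jobs[i][0])                # ascending p1
    back = sorted((i for i, (a, b) in enumerate(jobs) if a > b),
                  key=lambda i: jobs[i][1], reverse=True)   # descending p2
    return front + back

def makespan(jobs, seq):
    """Cmax of a permutation schedule on the two machines."""
    t1 = t2 = 0
    for i in seq:
        t1 += jobs[i][0]                 # machine 1 finishes job i
        t2 = max(t2, t1) + jobs[i][1]    # machine 2 starts after both
    return t2

jobs = [(3, 6), (5, 2), (1, 2), (6, 6), (7, 5)]
seq = johnson_two_machine(jobs)          # -> [2, 0, 3, 4, 1]
```

For this instance the rule yields a makespan of 24, which brute-force enumeration of all 120 permutations confirms is optimal.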

  2. A two-stage biological gas to liquid transfer process to convert carbon dioxide into bioplastic

    KAUST Repository

    Al Rowaihi, Israa

    2018-03-06

    The fermentation of carbon dioxide (CO2) with hydrogen (H2) uses available low-cost gases to synthesize acetic acid. Here, we present a two-stage biological process that allows the gas-to-liquid transfer (Bio-GTL) of CO2 into the biopolymer polyhydroxybutyrate (PHB). Using the same medium in both stages, first, acetic acid is produced (3.2 g L−1) by Acetobacterium woodii from a 5.2 L gas mixture of CO2:H2 (15:85 v/v) under elevated pressure (≥2.0 bar) to increase H2 solubility in water. Second, acetic acid is converted to PHB (3 g L−1 acetate into 0.5 g L−1 PHB) by Ralstonia eutropha H16. The efficiencies and space-time yields were evaluated, and our data show the conversion of CO2 into PHB with a 33.3% microbial cell content (percentage ratio of PHB concentration to cell concentration) after 217 h. Collectively, our results provide a resourceful platform for future optimization and commercialization of a Bio-GTL process for PHB production.

  3. Real-time simulation of the retina allowing visualization of each processing stage

    Science.gov (United States)

    Teeters, Jeffrey L.; Werblin, Frank S.

    1991-08-01

    The retina computes to let us see, but can we see the retina compute? Until now, the answer has been no, because the unconscious nature of the processing hides it from our view. Here the authors describe a method of seeing computations performed throughout the retina. This is achieved by using neurophysiological data to construct a model of the retina, and using a special-purpose image processing computer (PIPE) to implement the model in real time. Processing in the model is organized into stages corresponding to computations performed by each retinal cell type. The final stage is the transient (change detecting) ganglion cell. A CCD camera forms the input image, and the activity of a selected retinal cell type is the output which is displayed on a TV monitor. By changing the retina cell driving the monitor, the progressive transformations of the image by the retina can be observed. These simulations demonstrate the ubiquitous presence of temporal and spatial variations in the patterns of activity generated by the retina which are fed into the brain. The dynamical aspects make these patterns very different from those generated by the common DOG (Difference of Gaussian) model of receptive field. Because the retina is so successful in biological vision systems, the processing described here may be useful in machine vision.
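For contrast, the static DOG receptive-field model mentioned above is easy to sketch: a narrow excitatory center Gaussian minus a broad inhibitory surround Gaussian, which responds to spatial contrast but, unlike the simulated retina, has no temporal dynamics. The kernel size and sigmas below are illustrative values, not taken from the paper.

```python
import numpy as np

def dog_kernel(size, sigma_c, sigma_s):
    """Difference-of-Gaussians receptive field: normalized excitatory
    center minus normalized inhibitory surround (the static model the
    retina simulation goes beyond)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

k = dog_kernel(21, sigma_c=1.5, sigma_s=3.0)
# a uniform image produces near-zero response: center and surround cancel,
# so only spatial contrast drives the cell
flat_response = (k * np.ones((21, 21))).sum()
```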

  4. Investigations of timing during the schedule and reinforcement intervals with wheel-running reinforcement.

    Science.gov (United States)

    Belke, Terry W; Christie-Fougere, Melissa M

    2006-11-01

    Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement, and the duration of the opportunity to run was varied across values of 15, 30, and 60 s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery, and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset, of the wheel-running reinforcement period. Further research is required to assess whether timing occurs during a wheel-running reinforcement period.

  5. A two-stage biological gas to liquid transfer process to convert carbon dioxide into bioplastic

    KAUST Repository

    Al Rowaihi, Israa; Kick, Benjamin; Grö tzinger, Stefan W.; Burger, Christian; Karan, Ram; Weuster-Botz, Dirk; Eppinger, Jö rg; Arold, Stefan T.

    2018-01-01

    The fermentation of carbon dioxide (CO2) with hydrogen (H2) uses available low-cost gases to synthesize acetic acid. Here, we present a two-stage biological process that allows the gas-to-liquid transfer (Bio-GTL) of CO2 into the biopolymer

  6. The CMS Level-1 Calorimeter Trigger for LHC Run II

    CERN Document Server

    Zabi, Alexandre; Cadamuro, Luca; Davignon, Olivier; Romanteau, Thierry; Strebler, Thomas; Cepeda, Maria Luisa; Sauvan, Jean-baptiste; Wardle, Nicholas; Aggleton, Robin Cameron; Ball, Fionn Amhairghen; Brooke, James John; Newbold, David; Paramesvaran, Sudarshan; Smith, D; Taylor, Joseph Ross; Fountas, Konstantinos; Baber, Mark David John; Bundock, Aaron; Breeze, Shane Davy; Citron, Matthew; Elwood, Adam Christopher; Hall, Geoffrey; Iles, Gregory Michiel; Laner Ogilvy, Christian; Penning, Bjorn; Rose, A; Shtipliyski, Antoni; Tapper, Alexander; Durkin, Timothy John; Harder, Kristian; Harper, Sam; Shepherd-Themistocleous, Claire; Thea, Alessandro; Williams, Thomas Stephen; Dasu, Sridhara Rao; Dodd, Laura Margaret; Klabbers, Pamela Renee; Levine, Aaron; Ojalvo, Isabel Rose; Ruggles, Tyler Henry; Smith, Nicholas Charles; Smith, Wesley; Svetek, Ales; Forbes, R; Tikalsky, Jesra Lilah; Vicente, Marcelo

    2017-01-01

    Results from the completed Phase 1 Upgrade of the Compact Muon Solenoid (CMS) Level-1 Calorimeter Trigger are presented. The upgrade was completed in two stages, with the first running in 2015 for proton and Heavy Ion collisions and the final stage for 2016 data taking. The Level-1 trigger has been fully commissioned and has been used by CMS to collect over 43 fb-1 of data since the start of the Large Hadron Collider (LHC) Run II. The new trigger has been designed to improve the performance at high luminosity and large number of simultaneous inelastic collisions per crossing (pile-up). For this purpose it uses a novel design, the Time Multiplexed Trigger (TMT), which enables the data from an event to be processed by a single trigger processor at full granularity over several bunch crossings. The TMT design is a modular design based on the uTCA standard. The trigger processors are instrumented with Xilinx Virtex-7 690 FPGAs and 10 Gbps optical links. The TMT architecture is flexible and the number of trigger p...

  7. Validity of the 20-metre multi-stage shuttle run test for estimation of ...

    African Journals Online (AJOL)

    Validity of the 20-metre multi-stage shuttle run test for estimation of maximum oxygen uptake in Indian male university students. P Chatterjee, AK Banerjee, P Debnath, P Bas, B Chatterjee. Abstract. No Abstract. South African Journal for Physical, Health Education, Recreation and Dance Vol. 12(4) 2006: pp. 461-467. Full Text:.

  8. Asymptotic description of two metastable processes of solidification for the case of large relaxation time

    International Nuclear Information System (INIS)

    Omel'yanov, G.A.

    1995-07-01

    The non-isothermal Cahn-Hilliard equations in the n-dimensional case (n = 2,3) are considered. The interaction length is proportional to a small parameter, and the relaxation time is proportional to a constant. Asymptotic solutions describing two metastable processes are constructed and justified. The soliton-type solution describes the first stage of separation in an alloy, when a set of ''superheated liquid'' appears inside the ''solid'' part. The Van der Waals-type solution describes the free-interface dynamics for large time. The smoothness of the temperature is established for large time and the Mullins-Sekerka problem describing the free interface is derived. (author). 46 refs

  9. Differences in ground contact time explain the less efficient running economy in north african runners.

    Science.gov (United States)

    Santos-Concejero, J; Granados, C; Irazusta, J; Bidaurrazaga-Letona, I; Zabala-Lili, J; Tam, N; Gil, S M

    2013-09-01

    The purpose of this study was to investigate the relationship between biomechanical variables and running economy in North African and European runners. Eight North African and 13 European male runners of the same athletic level ran 4-minute stages on a treadmill at varying set velocities. During the test, biomechanical variables such as ground contact time, swing time, stride length, stride frequency, stride angle and the different sub-phases of ground contact were recorded using an optical measurement system. Additionally, oxygen uptake was measured to calculate running economy. The European runners were more economical than the North African runners at 19.5 km · h(-1), presented lower ground contact times at 18 km · h(-1) and 19.5 km · h(-1), and experienced a later propulsion sub-phase at 10.5 km · h(-1), 12 km · h(-1), 15 km · h(-1), 16.5 km · h(-1) and 19.5 km · h(-1) than the North African runners. Running economy at 19.5 km · h(-1) was negatively correlated with swing time (r = -0.53) and stride angle (r = -0.52), whereas it was positively correlated with ground contact time (r = 0.53). Within the constraints of extrapolating these findings, the less efficient running economy in North African runners may imply that their outstanding performance at international athletic events is not linked to running efficiency. Further, the differences in metabolic demand seem to be associated with differing biomechanical characteristics during ground contact, including longer contact times.

  10. Development of a two-stage inspection process for the assessment of deteriorating infrastructure

    International Nuclear Information System (INIS)

    Sheils, Emma; O'Connor, Alan; Breysse, Denys; Schoefs, Franck; Yotte, Sylvie

    2010-01-01

    Inspection-based maintenance strategies can provide an efficient tool for the management of ageing infrastructure subjected to deterioration. Many of these methods rely on quantitative data from inspections, rather than qualitative and subjective data. The focus of this paper is on the development of an inspection-based decision scheme, incorporating analysis on the effect of the cost and quality of NDT tools to assess the condition of infrastructure elements/networks during their lifetime. For the first time the two aspects of an inspection are considered, i.e. detection and sizing. Since each stage of an inspection is carried out for a distinct purpose, different parameters are used to represent each procedure and both have been incorporated into a maintenance management model. The separation of these procedures allows the interaction between the two inspection techniques to be studied. The inspection for detection process acts as a screening exercise to determine which defects require further inspection for sizing. A decision tool is developed that allows the owner/manager of the infrastructural element/network to choose the most cost-efficient maintenance management plan based on his/her specific requirements.
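The kind of decision tool described, trading off inspection cost and quality against repair and failure costs across the separate detection and sizing stages, can be sketched as a small Monte-Carlo cost model. All probabilities, costs and thresholds below are invented for illustration and are not taken from the paper.

```python
import random

def expected_cost(pod, sizing_sd, inspect_cost, repair_cost, fail_cost,
                  n_sim=20000, fail_size=2.0, repair_threshold=1.5, seed=0):
    """Monte-Carlo sketch of the two-stage inspection decision:
    stage 1 detects a defect with probability `pod` (probability of
    detection, the screening stage); stage 2 sizes detected defects with
    Gaussian measurement error and repairs those sized above a threshold.
    Returns the expected cost per component (illustrative numbers only)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sim):
        size = rng.uniform(0.0, 3.0)                 # true defect size
        cost = inspect_cost
        repaired = False
        if rng.random() < pod:                       # stage 1: detection
            measured = size + rng.gauss(0.0, sizing_sd)  # stage 2: sizing
            if measured > repair_threshold:
                cost += repair_cost
                repaired = True
        if not repaired and size > fail_size:
            cost += fail_cost                        # missed critical defect
        total += cost
    return total / n_sim

# compare a cheap low-quality NDT tool against an expensive accurate one
cheap = expected_cost(pod=0.6, sizing_sd=0.5, inspect_cost=1.0,
                      repair_cost=10.0, fail_cost=100.0)
good = expected_cost(pod=0.95, sizing_sd=0.1, inspect_cost=3.0,
                     repair_cost=10.0, fail_cost=100.0)
```

With these made-up numbers, the dearer but more accurate tool wins because avoided failures outweigh its higher inspection cost; the owner/manager would pick whichever plan minimizes this expected cost.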

  11. Hydrogen and methane production from condensed molasses fermentation soluble by a two-stage anaerobic process

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Chiu-Yue; Liang, You-Chyuan; Lay, Chyi-How [Feng Chia Univ., Taichung, Taiwan (China). Dept. of Environmental Engineering and Science; Chen, Chin-Chao [Chungchou Institute of Technology, Taiwan (China). Environmental Resources Lab.; Chang, Feng-Yuan [Feng Chia Univ., Taichung, Taiwan (China). Research Center for Energy and Resources

    2010-07-01

    The treatment of condensed molasses fermentation soluble (CMS) is a troublesome problem for glutamate manufacturing factories. However, CMS has high carbohydrate and nutrient contents and is an attractive, commercially promising feedstock for bioenergy production. The aim of this paper is to produce hydrogen and methane by a two-stage anaerobic fermentation process. Fermentative hydrogen production from CMS was conducted in a continuously-stirred tank bioreactor (working volume 4 L) operated at a hydraulic retention time (HRT) of 8 h, an organic loading rate (OLR) of 120 kg COD/m{sup 3}-d, a temperature of 35 C, pH 5.5, and with sewage sludge as seed. Anaerobic methane production was conducted in an up-flow bioreactor (working volume 11 L) operated at an HRT of 24-60 h, an OLR of 4.0-10 kg COD/m{sup 3}-d, a temperature of 35 C and pH 7.0, using anaerobic granular sludge from a fructose manufacturing factory as the seed and the effluent from the hydrogen production process as the substrate. These two reactors have been operated successfully for more than 400 days. The steady-state hydrogen content, hydrogen production rate and hydrogen production yield in the hydrogen fermentation system were 37%, 169 mmol-H{sub 2}/L-d and 93 mmol-H{sub 2}/g carbohydrate{sub removed}, respectively. In the methane fermentation system, the peak methane content and methane production rate were 66.5% and 86.8 mmol-CH{sub 4}/L-d, with a methane production yield of 189.3 mmol-CH{sub 4}/g COD{sub removed}, at an OLR of 10 kg/m{sup 3}-d. The energy production rate was used to elucidate the energy efficiency of this two-stage process. A total energy production rate of 133.3 kJ/L/d was obtained, with 5.5 kJ/L/d from hydrogen fermentation and 127.8 kJ/L/d from methane fermentation. (orig.)

  12. Two-stage free electron laser research

    Science.gov (United States)

    Segall, S. B.

    1984-10-01

    KMS Fusion, Inc. began studying the feasibility of two-stage free electron lasers for the Office of Naval Research in June 1980. At that time, the two-stage FEL was only a concept that had been proposed by Luis Elias. The range of parameters over which such a laser could be successfully operated, the attainable power output, and the constraints on laser operation were not known. The primary reason for supporting this research at that time was its potential for producing short-wavelength radiation using a relatively low-voltage electron beam. One advantage of a low-voltage two-stage FEL would be greatly reduced shielding requirements compared with single-stage short-wavelength FELs. If the electron energy were kept below about 10 MeV, X-rays generated by electrons striking the beam-line wall would not excite neutron resonances in atomic nuclei. These resonances cause the emission of neutrons with subsequent induced radioactivity. Therefore, above about 10 MeV, a meter or more of concrete shielding is required for the system, whereas below 10 MeV, a few millimeters of lead would be adequate.

  13. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    Directory of Open Access Journals (Sweden)

    Qi Liu

    2016-08-01

    Full Text Available Distributed computing has developed tremendously since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates the allocation, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurately estimating the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data on each task were collected and analysed in depth. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
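A generic two-phase (piecewise-linear) regression can be sketched as a breakpoint search: fit a separate line on each side of every candidate split and keep the split with the lowest total squared error. This is a simplified stand-in for the paper's TPR method; the data and all names below are illustrative.

```python
import numpy as np

def two_phase_regression(x, y):
    """Two-phase regression sketch: exhaustive breakpoint search with a
    separate least-squares line per phase. Returns the breakpoint and the
    (slope, intercept) pair for each phase."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = None
    for k in range(2, len(x) - 2):            # at least 2 points per phase
        sse = 0.0
        fits = []
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            A = np.column_stack([xs, np.ones_like(xs)])
            coef, res, *_ = np.linalg.lstsq(A, ys, rcond=None)
            sse += float(res[0]) if res.size else 0.0
            fits.append(coef)
        if best is None or sse < best[0]:
            best = (sse, x[k], fits)
    return best[1], best[2]

# synthetic task-progress data: a slow phase followed by a fast phase
rng = np.random.default_rng(3)
x = np.linspace(0, 10, 60)
y = np.where(x < 5, 1.0 * x, 5 + 3.0 * (x - 5)) + rng.normal(0, 0.1, 60)
bp, (f1, f2) = two_phase_regression(x, y)
```

Extrapolating the second-phase line then gives the estimated finishing time of a task from its observed progress so far.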

  14. Fate of dissolved organic nitrogen in a two-stage trickling filter process.

    Science.gov (United States)

    Simsek, Halis; Kasi, Murthy; Wadhawan, Tanush; Bye, Christopher; Blonigen, Mark; Khan, Eakalak

    2012-10-15

    Dissolved organic nitrogen (DON) represents a significant portion of nitrogen in the final effluent of wastewater treatment plants (WWTPs). The biodegradable portion of DON (BDON) can support algal growth and/or consume dissolved oxygen in the receiving waters. The fate of DON and BDON has not been studied for trickling filter WWTPs. DON and BDON data were collected along the treatment train of a WWTP with a two-stage trickling filter process. DON concentrations in the influent and effluent were 27% and 14% of total dissolved nitrogen (TDN), respectively. The plant removed about 62% of the influent DON and 72% of the influent BDON, mainly in the trickling filters. The final effluent BDON values averaged 1.8 mg/L. BDON was found to be between 51% and 69% of the DON in raw wastewater and after various treatment units. The fate of DON and BDON through the two-stage trickling filter treatment plant was modeled. The BioWin v3.1 model was successfully applied to simulate ammonia, nitrite, nitrate, TDN, DON and BDON concentrations along the treatment train. The maximum growth rates of ammonia-oxidizing bacteria (AOB) and nitrite-oxidizing bacteria, and the AOB half-saturation constant, influenced the ammonia and nitrate output results. Hydrolysis and ammonification rates influenced all of the nitrogen species in the model output, including BDON. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Temporary stages and motivational variables: Two complementary perspectives in the help-seeking process for mental disorders.

    Science.gov (United States)

    Del Valle Del Valle, Gema; Carrió, Carmen; Belloch, Amparo

    2017-10-09

    Help-seeking for mental disorders is a complex process that includes different temporary stages and in which motivational variables play an especially relevant role. However, there is a lack of instruments to evaluate in depth both the temporal and motivational variables involved in the help-seeking process. This study aims to analyse these two sets of variables in detail, using a specific instrument designed for the purpose, to gain a better understanding of the process of treatment seeking. A total of 152 patients seeking treatment in mental health outpatient clinics of the NHS were individually interviewed: 71 had Obsessive-Compulsive Disorder, 21 had Agoraphobia, 18 had Major Depressive Disorder, 20 had Anorexia Nervosa, and 22 had Cocaine Dependence. The patients completed a structured interview assessing the help-seeking process. Disorder severity and quality of life were also assessed. The patients with agoraphobia and with major depression took significantly less time to recognise their mental health symptoms. Similarly, patients with major depression were faster in seeking professional help. Motivational variables were grouped into 3 sets: motivators for seeking treatment, related to the negative impact of symptoms on mood and to loss of control over symptoms; motivators for delaying treatment, related to minimisation of the disorder; and stigma-associated variables. The results support the importance of considering the different motivational variables involved in the several stages of the help-seeking process. The interview designed to that end has shown its usefulness in this endeavour. Copyright © 2017 SEP y SEPB. Published by Elsevier España, S.L.U. All rights reserved.

  16. Leisure-time running reduces all-cause and cardiovascular mortality risk.

    Science.gov (United States)

    Lee, Duck-Chul; Pate, Russell R; Lavie, Carl J; Sui, Xuemei; Church, Timothy S; Blair, Steven N

    2014-08-05

    Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time, and mortality remain uncertain. We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, 18 to 100 years of age (mean age 44 years). Running was assessed by a leisure-time activity item on a medical history questionnaire. During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately 24% of adults in this population participated in running. Compared with nonrunners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life-expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with nonrunners. Even weekly running conferred benefits, with 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Running, even for 5 to 10 min/day and at slow speeds, conferred benefits. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  17. Combined two-stage xanthate processes for the treatment of copper-containing wastewater

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Y.K. [Department of Safety Health and Environmental Engineering, Central Taiwan University of Sciences and Technology, Taichung (Taiwan); Leu, M.H. [Department of Environmental Engineering, Kun Shan University of Technology, Yung-Kang City (Taiwan); Chang, J.E.; Lin, T.F.; Chen, T.C. [Department of Environmental Engineering, National Cheng Kung University, Tainan City (Taiwan); Chiang, L.C.; Shih, P.H. [Department of Environmental Engineering and Science, Fooyin University, Kaohsiung County (Taiwan)

    2007-02-15

    Heavy metal removal is mainly conducted by adjusting the wastewater pH to form metal hydroxide precipitates. In recent years, however, the xanthate process, with its high metal removal efficiency, has attracted attention due to its use of sorption/desorption of heavy metals from aqueous solutions. In this study, two kinds of agricultural xanthates, insoluble peanut-shell xanthate (IPX) and insoluble starch xanthate (ISX), were used as sorbents to treat copper-containing wastewater (Cu concentrations from 50 to 1,000 mg/L). The experimental results showed that the maximum Cu removal efficiency by IPX was 93.5% in the case of high Cu concentrations, whereby 81.1% of the copper could be removed rapidly, within one minute. Moreover, copper-containing wastewater could also be treated by ISX over a wide range (50 to 1,000 mg/L) to a level that meets the Taiwan EPA's effluent regulations (3 mg/L) within 20 minutes. Whereas IPX had a maximum binding capacity for copper of 185 mg/g IPX, the capacity of ISX was 120 mg/g ISX. IPX is cheaper than ISX and has the benefits of a rapid reaction and a high copper binding capacity; however, it exhibits a lower copper removal efficiency. A sequential IPX and ISX treatment (i.e., a two-stage xanthate process) could therefore be an excellent alternative. The results obtained using the two-stage xanthate process revealed effective copper treatment. The effluent (C{sub e}) was below 0.6 mg/L, compared with an influent (C{sub 0}) of 1,001 mg/L, at pH = 4 and a dilution rate of 0.6 h{sup -1}. Furthermore, the Cu-ISX complex formed could meet the Taiwan TCLP regulations and be classified as non-hazardous waste. The xanthatilization of agricultural wastes offers a comprehensive strategy for solving both agricultural waste disposal and metal-containing wastewater treatment problems. (Abstract Copyright [2007], Wiley Periodicals, Inc.)

  18. Characteristics of process oils from HTI coal/plastics co-liquefaction runs

    Energy Technology Data Exchange (ETDEWEB)

    Robbins, G.A.; Brandes, S.D.; Winschel, R.A. [and others]

    1995-12-31

    The objective of this project is to provide timely analytical support to DOE's liquefaction development effort. Specific objectives of the work reported here are presented. During a few operating periods of Run POC-2, HTI co-liquefied mixed plastics with coal, and tire rubber with coal. Although steady-state operation was not achieved during these brief test periods, the results indicated that a liquefaction plant could operate with these waste materials as feedstocks. CONSOL analyzed 65 process stream samples from the coal-only and coal/waste portions of the run. Some results obtained from the characterization of samples from the Run POC-2 coal/plastics operation are presented.

  19. Loading forces in shallow water running in two levels of immersion.

    Science.gov (United States)

    Haupenthal, Alessandro; Ruschel, Caroline; Hubert, Marcel; de Brito Fontana, Heiliane; Roesler, Helio

    2010-07-01

    To analyse the vertical and anteroposterior components of the ground reaction force during shallow water running at 2 levels of immersion. Twenty-two healthy adults with no gait disorders, who were familiar with aquatic exercises. Subjects performed 6 trials of water running at a self-selected speed at chest and hip immersion. Force data were collected through an underwater force plate, and running speed was measured with a photocell timing light system. Analysis of covariance was used for data analysis. Vertical forces corresponded to 0.80 and 0.98 times the subject's body weight at the chest and hip level, respectively. Anteroposterior forces corresponded to 0.26 and 0.31 times the subject's body weight at the chest and hip level, respectively. As the water level decreased, the subjects ran faster. No significant differences were found in the force values between the immersion levels, probably due to variability in speed, which was self-selected. When considering load values in water running, professionals should take into account not only the immersion level but also the speed, as it can affect the force components, mainly the anteroposterior one. Quantitative data on this subject could help professionals to conduct safer aquatic rehabilitation and physical conditioning protocols.

  20. Run-Time HW/SW Scheduling of Data Flow Applications on Reconfigurable Architectures

    Directory of Open Access Journals (Sweden)

    Ghaffari Fakhreddine

    2009-01-01

    Full Text Available This paper presents an efficient dynamic run-time Hardware/Software scheduling approach. The scheduling heuristic consists of mapping the different tasks of a highly dynamic application online, in such a way that the total execution time is minimized. We consider soft real-time, data-flow-graph-oriented applications for which the execution time is a function of the nature of the input data. The target architecture is composed of two processors connected to a dynamically reconfigurable hardware accelerator. Our approach takes advantage of the reconfiguration property of the considered architecture to adapt the processing to the system dynamics. We compare our heuristic with another similar approach. We present the results of our scheduling method on several image processing applications. Our experiments include simulation and synthesis results on a Virtex V-based platform. These results show better performance than existing methods.
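The flavour of such a run-time heuristic can be sketched as a greedy earliest-finish-time mapper: each incoming task goes to whichever resource, one of the software processors or the reconfigurable hardware accelerator (which pays a reconfiguration penalty before each new task), completes it soonest. This is a simplified illustration of the general idea, not the authors' algorithm; all task times are invented.

```python
def schedule(tasks, n_cpu=2, reconfig_time=2):
    """Greedy run-time HW/SW mapping sketch for independent tasks.
    `tasks` is a list of (sw_time, hw_time) execution-time pairs;
    returns the resulting makespan."""
    cpu_free = [0.0] * n_cpu    # when each software processor is free
    hw_free = 0.0               # when the reconfigurable accelerator is free
    for sw, hw in tasks:
        c = min(range(n_cpu), key=lambda i: cpu_free[i])  # least-loaded CPU
        sw_finish = cpu_free[c] + sw
        hw_finish = hw_free + reconfig_time + hw          # pay reconfiguration
        if hw_finish <= sw_finish:
            hw_free = hw_finish                           # map task to HW
        else:
            cpu_free[c] = sw_finish                       # map task to SW
    return max(max(cpu_free), hw_free)

# four tasks with (software, hardware) execution times
makespan = schedule([(10, 3), (4, 1), (8, 2), (6, 5)])
```

In this toy run the first task is accelerated in hardware despite the reconfiguration cost, while the later tasks stay in software because the accelerator is busy, yielding a makespan of 10 time units.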

  1. Runway Operations Planning: A Two-Stage Solution Methodology

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. ROP is thus a critical component of airport operations planning in general and surface operations planning in particular. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures, given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, may be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. Generating optimal runway operations plans has previously been approached with a 'one-stage' optimization routine that considered all the desired objectives and constraints, and the characteristics of each aircraft (weight class, destination, ATC constraints), at the same time. Since, however, at any given point in time there is less uncertainty in the predicted demand for departure resources in terms of weight class than in terms of specific aircraft, the ROP problem can be parsed into two stages. In the context of the Departure Planner (DP) research project, this paper introduces ROP as part of the wider Surface Operations Optimization (SOO) and describes a proposed 'two-stage' heuristic algorithm for solving the ROP problem. Focus is specifically given to including runway crossings in the planning process of runway operations. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the

  2. Comparison of One-Stage Free Gracilis Muscle Flap With Two-Stage Method in Chronic Facial Palsy

    Directory of Open Access Journals (Sweden)

    J Ghaffari

    2007-08-01

    Background: Rehabilitation of facial paralysis is one of the greatest challenges faced by reconstructive surgeons today. The traditional method for treatment of patients with facial palsy is the two-stage free gracilis flap, which has a long latency period between the two stages of surgery. Methods: In this paper, we prospectively compared the results of the one-stage gracilis flap method with the two-stage technique. Results: Of 41 patients with facial palsy referred to Hazrat-e-Fatemeh Hospital, 31 were selected, of whom 22 underwent the two-stage and 9 the one-stage method. The two groups were identical according to age, sex, intensity of illness, duration, and chronicity of illness. Mean duration of follow-up was 37 months. There was no significant relation between the two groups regarding the symmetry of the face in repose, smiling, whistling and nasolabial folds. Frequency of complications was equal in both groups. Postoperative surgeon and patient satisfaction were equal in both groups. There was no significant difference between the mean excursion of the muscle flap in the one-stage (9.8 mm) and two-stage (8.9 mm) groups. The ratio of contraction of the affected side compared to the normal side was similar in both groups. The mean time to initial contraction of the muscle flap in the one-stage group (5.5 months) showed a significant difference (P=0.001) from the two-stage group (6.5 months). The study revealed a highly significant difference (P=0.0001) between the mean waiting period from the first operation to the beginning of muscle contraction in the one-stage (5.5 months) and two-stage (17.1 months) groups. Conclusion: It seems that the results and complications of the two methods are the same, but the one-stage method requires less time for facial reanimation and is cost-effective because it saves time and decreases hospitalization costs.

  3. A 45 ps time digitizer with a two-phase clock and dual-edge two-stage interpolation in a field programmable gate array device

    Science.gov (United States)

    Szplet, R.; Kalisz, J.; Jachna, Z.

    2009-02-01

    We present a time digitizer with 45 ps resolution, integrated in a field programmable gate array (FPGA) device. The time interval measurement is based on the two-stage interpolation method. A dual-edge two-phase interpolator is driven by an on-chip synthesized 250 MHz clock with precise phase adjustment. An improved dual-edge double synchronizer was developed to control the main counter. The nonlinearity of the digitizer's transfer characteristic is identified and utilized by a dedicated hardware code processor for on-the-fly correction of the output data. Application of the presented ideas has resulted in a digitizer measurement uncertainty below 70 ps RMS over time intervals ranging from 0 to 1 s. The use of two-stage interpolation and a fast FIFO memory has allowed us to obtain a maximum measurement rate of five million measurements per second.
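
The on-the-fly nonlinearity correction mentioned above is, in general, a lookup-table technique: measured (non-uniform) bin widths of the interpolator are turned into corrected timestamps for each raw code. A minimal sketch, with invented bin widths rather than the hardware's measured data:

```python
# Sketch: integral-nonlinearity correction of a time interpolator via a
# code-density lookup table. The bin widths below are hypothetical.
def build_correction_lut(bin_widths_ps):
    """Map each raw interpolator code to the centre of its true time bin."""
    lut, edge = [], 0.0
    for w in bin_widths_ps:
        lut.append(edge + w / 2.0)  # bin centre = corrected timestamp
        edge += w
    return lut

widths = [40, 50, 45, 45]          # non-uniform bin widths in ps
lut = build_correction_lut(widths)
print(lut)  # [20.0, 65.0, 112.5, 157.5]
```

In hardware this table would live in the dedicated code processor; here it simply replaces the naive assumption that every code corresponds to an equal time slice.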

  4. A 45 ps time digitizer with a two-phase clock and dual-edge two-stage interpolation in a field programmable gate array device

    International Nuclear Information System (INIS)

    Szplet, R; Kalisz, J; Jachna, Z

    2009-01-01

    We present a time digitizer with 45 ps resolution, integrated in a field programmable gate array (FPGA) device. The time interval measurement is based on the two-stage interpolation method. A dual-edge two-phase interpolator is driven by an on-chip synthesized 250 MHz clock with precise phase adjustment. An improved dual-edge double synchronizer was developed to control the main counter. The nonlinearity of the digitizer's transfer characteristic is identified and utilized by a dedicated hardware code processor for on-the-fly correction of the output data. Application of the presented ideas has resulted in a digitizer measurement uncertainty below 70 ps RMS over time intervals ranging from 0 to 1 s. The use of two-stage interpolation and a fast FIFO memory has allowed us to obtain a maximum measurement rate of five million measurements per second.

  5. A two-phase inspection model for a single component system with three-stage degradation

    International Nuclear Information System (INIS)

    Wang, Huiying; Wang, Wenbin; Peng, Rui

    2017-01-01

    This paper presents a two-phase inspection schedule and an age-based replacement policy for a single plant item contingent on a three-stage degradation process. The two-phase inspection schedule can be observed in practice. The three stages are defined as the normal working stage, low-grade defective stage and critical defective stage. When an inspection detects that an item is in the low-grade defective stage, we may delay the preventive replacement action if the time to the age-based replacement is less than or equal to a threshold level. However, if it is above this threshold level, the item will be replaced immediately. If the item is found in the critical defective stage, it is replaced immediately. A hybrid bee colony algorithm is developed to find the optimal solution for the proposed model, which has multiple decision variables. A numerical example is conducted to show the efficiency of this algorithm, and simulations are conducted to verify the correctness of the model. - Highlights: • A two-phase inspection model is studied. • The failure process has three stages. • Delayed replacement is considered.
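
The replacement rule in the abstract can be written down directly. The sketch below encodes only that decision logic (the parameter names `T_replace` and `delta` are our labels for the age-based replacement time and the deferral threshold, not the paper's notation):

```python
def replacement_action(stage, age, T_replace, delta):
    """Decision rule at an inspection under the three-stage policy.

    stage     -- 'normal', 'low' (low-grade defective) or 'critical'
    age       -- current age of the item
    T_replace -- scheduled age-based replacement time
    delta     -- threshold on the remaining time to age-based replacement
    """
    if stage == "critical":
        return "replace now"          # critical defect: immediate replacement
    if stage == "low":
        remaining = T_replace - age
        # Defer only if the age-based replacement is close enough.
        return ("defer to age-based replacement" if remaining <= delta
                else "replace now")
    return "continue"                 # normal working stage
```

For example, a low-grade defect found 1 time unit before the scheduled replacement (with threshold 2) is deferred, while one found 5 units before is replaced immediately.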

  6. Thermally-aware composite run-time CPU power models

    OpenAIRE

    Walker, Matthew J.; Diestelhorst, Stephan; Hansson, Andreas; Balsamo, Domenico; Merrett, Geoff V.; Al-Hashimi, Bashir M.

    2016-01-01

    Accurate and stable CPU power modelling is fundamental in modern system-on-chips (SoCs) for two main reasons: 1) they enable significant online energy savings by providing a run-time manager with reliable power consumption data for controlling CPU energy-saving techniques; 2) they can be used as accurate and trusted reference models for system design and exploration. We begin by showing the limitations in typical performance monitoring counter (PMC) based power modelling approaches and illust...

  7. Two-Agent Single-Machine Scheduling of Jobs with Time-Dependent Processing Times and Ready Times

    Directory of Open Access Journals (Sweden)

    Jan-Yee Kung

    2013-01-01

    Scheduling involving jobs with time-dependent processing times has recently attracted much research attention. However, multiagent scheduling that simultaneously considers jobs with time-dependent processing times and ready times is relatively unexplored. Inspired by this observation, we study a two-agent single-machine scheduling problem in which the jobs have both time-dependent processing times and ready times. We consider the model in which the actual processing time of a job of the first agent is a decreasing function of its scheduled position, while the actual processing time of a job of the second agent is an increasing function of its scheduled position. In addition, each job has a different ready time. The objective is to minimize the total completion time of the jobs of the first agent with the restriction that no tardy job is allowed for the second agent. We propose a branch-and-bound algorithm and several genetic algorithms to obtain optimal and near-optimal solutions for the problem, respectively. We also report extensive computational results to test the proposed algorithms and examine the impacts of different problem parameters on their performance.
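
The objective evaluated inside any branch-and-bound or genetic algorithm for this problem can be sketched as follows. The position-dependent factors (0.9 for the learning-style decrease of agent 1, 1.1 for the deterioration-style increase of agent 2) are illustrative assumptions, not the paper's model:

```python
def evaluate_sequence(seq):
    """Evaluate one job sequence for the two-agent problem.

    Each job is (agent, base_time, ready_time, due_date); due_date is only
    meaningful for agent-2 jobs. Positions are counted over the whole
    schedule (an illustrative assumption). Returns the total completion
    time of agent-1 jobs and whether any agent-2 job is tardy.
    """
    t, total_c1, tardy2 = 0.0, 0.0, False
    for r, (agent, p, ready, due) in enumerate(seq, start=1):
        # Agent 1 jobs shorten with position; agent 2 jobs lengthen.
        actual = p * (0.9 if agent == 1 else 1.1) ** (r - 1)
        t = max(t, ready) + actual          # respect the job's ready time
        if agent == 1:
            total_c1 += t
        elif t > due:
            tardy2 = True                   # sequence is infeasible
    return total_c1, tardy2

print(evaluate_sequence([(1, 10, 0, 0), (2, 5, 0, 20)]))  # (10.0, False)
```

A search procedure would then minimize `total_c1` over sequences for which `tardy2` is `False`.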

  8. The CMS trigger in Run 2

    CERN Document Server

    Tosi, Mia

    2018-01-01

    During its second period of operation (Run 2), which started in 2015, the LHC will reach a peak instantaneous luminosity of approximately 2×10^34 cm^-2 s^-1 with an average pile-up of about 55, far larger than the design value. Under these conditions, the online event selection is a very challenging task. In CMS, it is realised by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. In order to face this challenge, the L1 trigger has undergone a major upgrade compared to Run 1, whereby all electronic boards of the system have been replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, like invariant masses. Likewise, the algorithms that run in the HLT went through big improvements; in particular, new ap...

  9. Comparative effectiveness of one-stage versus two-stage basilic vein transposition arteriovenous fistulas.

    Science.gov (United States)

    Ghaffarian, Amir A; Griffin, Claire L; Kraiss, Larry W; Sarfati, Mark R; Brooke, Benjamin S

    2018-02-01

    Basilic vein transposition (BVT) fistulas may be performed as either a one-stage or a two-stage operation, although there is debate as to which technique is superior. This study was designed to evaluate the comparative clinical efficacy and cost-effectiveness of one-stage vs two-stage BVT. We identified all patients at a single large academic hospital who had undergone creation of either a one-stage or two-stage BVT between January 2007 and January 2015. Data evaluated included patient demographics, comorbidities, medication use, reasons for abandonment, and interventions performed to maintain patency. Costs were derived from the literature, and effectiveness was expressed in quality-adjusted life-years (QALYs). We analyzed primary and secondary functional patency outcomes as well as survival during follow-up between one-stage and two-stage BVT procedures using multivariate Cox proportional hazards models and Kaplan-Meier analysis with log-rank tests. The incremental cost-effectiveness ratio was used to determine cost savings. We identified 131 patients in whom 57 (44%) one-stage BVT and 74 (56%) two-stage BVT fistulas were created during the study period by 8 different vascular surgeons, each of whom performed both procedures. There was no significant difference in mean age, male gender, white race, diabetes, coronary disease, or medication profile among patients undergoing one- vs two-stage BVT. After fistula transposition, the median follow-up time was 8.3 months (interquartile range, 3-21 months). Primary patency rates at 12-month follow-up were 56% for one-stage BVT and 72% for two-stage BVT. Patients undergoing two-stage BVT also had significantly higher rates of secondary functional patency at 12 months (57% for one-stage BVT vs 80% for two-stage BVT) and 24 months (44% for one-stage BVT vs 73% for two-stage BVT) of follow-up (P < .001 using log-rank test). However, there was no significant difference

  10. Effects of Two-stage Controlled pH and Temperature vs. One-step Process for Hemicellulase Biosynthesis and Feruloyl Oligosaccharide Fermentation using Aureobasidium pullulans

    Directory of Open Access Journals (Sweden)

    Xiaohong Yu

    2016-04-01

    A two-stage, pH- and temperature-controlled wheat bran fermentation method using Aureobasidium pullulans was investigated for feruloyl oligosaccharide (FOs) production and the activities of xylanase, xylosidase, and ferulic acid esterase (FAE). A. pullulans secreted xylanase, xylosidase, and FAE at high levels at an initial pH of 4.0 to 5.0 and a fermentation liquid temperature of 31 °C to 33 °C. FOs production via two-stage fermentation (FOs 2) reached 1123 nmol/L after fermentation for 96 h, by controlling the initial pH at 4.0 and the initial temperature at 33 °C, and then changing the pH to 6.0 and the temperature to 29 °C simultaneously at 36 h. This process was 12 h shorter and yielded 219 nmol/L more than the one-stage fermentation producing FOs 1. Xylanase, xylosidase, and FAE activities were highly correlated with the controlled pH and temperature and with the FOs biosynthesis rate. Thus, the combination of two-stage controlled pH and temperature could support mass production of FOs.

  11. Two-stage liquefaction of a Spanish subbituminous coal

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, M.T.; Fernandez, I.; Benito, A.M.; Cebolla, V.; Miranda, J.L.; Oelert, H.H. (Instituto de Carboquimica, Zaragoza (Spain))

    1993-05-01

    A Spanish subbituminous coal has been processed by two-stage liquefaction in a non-integrated process. The first-stage coal liquefaction was carried out in a continuous pilot plant in Germany at Clausthal Technical University at 400 °C, 20 MPa hydrogen pressure and with anthracene oil as solvent. The second-stage coal liquefaction was performed in continuous operation in a hydroprocessing unit at the Instituto de Carboquimica at 450 °C and 10 MPa hydrogen pressure, with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al2O3) and HT-500E (Ni-Mo/Al2O3). The total conversion for the first-stage coal liquefaction was 75.41 wt% (coal d.a.f.), comprising 3.79 wt% gases, 2.58 wt% primary condensate and 69.04 wt% heavy liquids. Heteroatom removal in the second-stage liquefaction was 97-99 wt% of S, 85-87 wt% of N and 93-100 wt% of O. The hydroprocessed liquids contain about 70% of compounds with boiling points below 350 °C, and meet the sulphur and nitrogen specifications for refinery feedstocks. Liquids from the two-stage coal liquefaction have been distilled, and the naphtha, kerosene and diesel fractions obtained have been characterized. 39 refs., 3 figs., 8 tabs.

  12. Comparison of single-stage and temperature-phased two-stage anaerobic digestion of oily food waste

    International Nuclear Information System (INIS)

    Wu, Li-Jie; Kobayashi, Takuro; Li, Yu-You; Xu, Kai-Qin

    2015-01-01

    Highlights: • A single-stage and two two-stage anaerobic systems were operated synchronously. • Similar methane production of 0.44 L/g VS_added from oily food waste was achieved. • The first stage of the two-stage process became inefficient due to a serious pH drop. • Recycling favored hythane production in the two-stage digestion. • The conversion of unsaturated fatty acids was enhanced by the introduction of recycling. - Abstract: Anaerobic digestion is an effective technology to recover energy from oily food waste. A single-stage system and temperature-phased two-stage systems with and without recycle for anaerobic digestion of oily food waste were constructed to compare operational performance. Synchronous operation indicated a similar ability to produce methane in the three systems, with a methane yield of 0.44 L/g VS_added. The pH drop to less than 4.0 in the first stage of the two-stage system without recycle resulted in poor hydrolysis, and neither methane nor hydrogen was produced in this stage. Alkalinity supplied from the second stage of the two-stage system with recycle improved the pH in the first stage to 5.4. Consequently, 35.3% of the particulate COD in the influent was reduced in the first stage of the two-stage system with recycle according to a COD mass balance, and hydrogen was produced at a percentage of 31.7%. Similar amounts of solids and organic matter were removed in the single-stage system and the two-stage system without recycle. More lipid degradation and conversion of long-chain fatty acids were achieved in the single-stage system. Recycling proved effective in promoting the conversion of unsaturated long-chain fatty acids into saturated fatty acids in the two-stage system.
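
The two headline figures of this record, the 35.3% particulate COD reduction and the 0.44 L/g VS methane yield, come from simple mass-balance ratios. A minimal sketch (influent/effluent COD and the gas/substrate quantities below are assumed round numbers, not the paper's data):

```python
# Illustrative back-calculation of the reported performance figures.
def removal_fraction(c_in, c_out):
    """Fraction of particulate COD reduced across a stage (COD balance)."""
    return (c_in - c_out) / c_in

def methane_yield(ch4_volume_L, vs_added_g):
    """Specific methane yield in L CH4 per g of volatile solids added."""
    return ch4_volume_L / vs_added_g

print(round(removal_fraction(100.0, 64.7), 3))  # 0.353, i.e. 35.3%
print(round(methane_yield(44.0, 100.0), 2))     # 0.44 L/g VS_added
```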

  13. Statistical process control support during Defense Waste Processing Facility chemical runs

    International Nuclear Information System (INIS)

    Brown, K.G.

    1994-01-01

    The Product Composition Control System (PCCS) has been developed to ensure that the wasteforms produced by the Defense Waste Processing Facility (DWPF) at the Savannah River Site (SRS) will satisfy the regulatory and processing criteria that will be imposed. The PCCS provides rigorous, statistically defensible management of a noisy, multivariate system subject to multiple constraints. The system has been successfully tested and has been used to control the production of the first two melter feed batches during DWPF Chemical Runs. These operations will demonstrate the viability of the DWPF process. This paper provides a brief discussion of the technical foundation for the statistical process control algorithms incorporated into PCCS, and describes the results obtained and lessons learned from DWPF Cold Chemical Run operations. The DWPF will immobilize approximately 130 million liters of high-level nuclear waste currently stored at the Site in 51 carbon steel tanks. Waste handling operations separate this waste into highly radioactive sludge and precipitate streams and less radioactive water-soluble salts. (In a separate facility, soluble salts are disposed of as low-level waste in a mixture of cement, slag, and fly ash.) In DWPF, the precipitate stream (Precipitate Hydrolysis Aqueous, or PHA) is blended with the insoluble sludge and ground glass frit to produce melter feed slurry which is continuously fed to the DWPF melter. The melter produces a molten borosilicate glass which is poured into stainless steel canisters for cooling and, ultimately, shipment to and storage in a geologic repository.

  14. Experimental and numerical studies on two-stage combustion of biomass

    Energy Technology Data Exchange (ETDEWEB)

    Houshfar, Eshan

    2012-07-01

    In this thesis, two-stage combustion of biomass was experimentally and numerically investigated in a multifuel reactor. The following emission issues have been the main focus of the work: (1) NOx and N2O; (2) unburnt species (CO and CxHy); (3) corrosion-related emissions. The study focused on two-stage combustion in order to reduce pollutant emissions (primarily NOx emissions). It is well known that pollutant emissions are very dependent on process conditions such as temperature, reactant concentrations and residence times. On the other hand, emissions are also dependent on fuel properties (moisture content, volatiles, alkali content, etc.). A detailed study of the important parameters with suitable biomass fuels was performed in order to optimize the various process conditions. Different experimental studies were carried out on biomass fuels in order to study the effect of fuel properties and combustion parameters on pollutant emissions. Process conditions typical for biomass combustion processes were studied, using advanced experimental equipment. The experiments clearly showed the effects of staged air combustion, compared to non-staged combustion, on emission levels. A NOx reduction of up to 85% was reached with staged air combustion using demolition wood as fuel. An optimum primary excess air ratio of 0.8-0.95 was found to minimize NOx emissions for staged air combustion. Air staging had, however, a negative effect on N2O emissions. Even though the trends showed a very small reduction in the NOx level as temperature increased for non-staged combustion, the effect of temperature was not significant for NOx and CxHy in either staged air or non-staged combustion, while it had a great influence on the N2O and CO emissions, with levels decreasing as temperature increased. Furthermore, flue gas recirculation (FGR) was used in combination with staged combustion to obtain an enhanced NOx reduction.

  15. Generated forces and heat during the critical stages of friction stir welding and processing

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, Sadiq Aziz; Tahir, Abd Salam Md; Izamshah, R. [University Teknikal Malaysia Melaka, Malacca (Malaysia)

    2015-10-15

    The solid-state behavior of friction stir welding process results in violent mechanical forces that should be mitigated, if not eliminated. Plunging and dwell time are the two critical stages of this welding process in terms of the generated forces and the related heat. In this study, several combinations of pre-decided penetration speeds, rotational speeds, tool designs, and dwell time periods were used to investigate these two critical stages. Moreover, a coupled-field thermal-structural finite element model was developed to validate the experimental results and the induced stresses. The experimental results revealed the relatively large changes in force and temperature during the first two stages compared with those during the translational tool movement stage. An important procedure to mitigate the undesired forces was then suggested. The model prediction of temperature values and their distribution were in good agreement with the experimental prediction. Therefore, the thermal history of this non-uniform heat distribution was used to estimate the induced thermal stresses. Despite the 37% increase in these stresses when 40 s dwell time was used instead of 5 s, these stresses showed no effect on the axial force values because of the soft material incidence and stir effects.

  16. Generated forces and heat during the critical stages of friction stir welding and processing

    International Nuclear Information System (INIS)

    Hussein, Sadiq Aziz; Tahir, Abd Salam Md; Izamshah, R.

    2015-01-01

    The solid-state behavior of friction stir welding process results in violent mechanical forces that should be mitigated, if not eliminated. Plunging and dwell time are the two critical stages of this welding process in terms of the generated forces and the related heat. In this study, several combinations of pre-decided penetration speeds, rotational speeds, tool designs, and dwell time periods were used to investigate these two critical stages. Moreover, a coupled-field thermal-structural finite element model was developed to validate the experimental results and the induced stresses. The experimental results revealed the relatively large changes in force and temperature during the first two stages compared with those during the translational tool movement stage. An important procedure to mitigate the undesired forces was then suggested. The model prediction of temperature values and their distribution were in good agreement with the experimental prediction. Therefore, the thermal history of this non-uniform heat distribution was used to estimate the induced thermal stresses. Despite the 37% increase in these stresses when 40 s dwell time was used instead of 5 s, these stresses showed no effect on the axial force values because of the soft material incidence and stir effects

  17. Influence of oxygen on the chemical stage of radiobiological mechanism

    International Nuclear Information System (INIS)

    Barilla, Jiří; Lokajíček, Miloš V.; Pisaková, Hana; Simr, Pavel

    2016-01-01

    Simulation of the chemical stage of the radiobiological mechanism may be very helpful in studying the radiobiological effect of ionizing radiation, where the water radical clusters formed by the densely ionizing ends of primary or secondary charged particles may form DSBs damaging DNA molecules in living cells. It is possible to study not only the efficiency of individual radicals but also the influence of other species or radiomodifiers (mainly oxygen) present in the water medium during irradiation. The mathematical model, based on Continuous Petri nets and proposed by us recently, will be described. It makes it possible to analyze two main processes running at the same time: chemical radical reactions and the diffusion of radical clusters formed during energy transfer. One may study the time change of radical concentrations due to the chemical reactions running during the diffusion process. Some preliminary results concerning the efficiency of individual radicals in DSB formation (in the case of Co-60 radiation) will be presented; the influence of oxygen present in the water medium during irradiation will be shown, too. - Highlights: • Creation of the mathematical model. • Realization of the model with the help of Continuous Petri nets. • Obtaining the time dependence of changes in the concentration of radicals. • Influence of oxygen on the chemical stage of the radiobiological mechanism.

  18. Run-time verification of behavioural conformance for conversational web services

    OpenAIRE

    Dranidis, Dimitris; Ramollari, Ervin; Kourtesis, Dimitrios

    2009-01-01

    Web services exposing run-time behaviour that deviates from their behavioural specifications represent a major threat to the sustainability of a service-oriented ecosystem. It is therefore critical to verify the behavioural conformance of services during run-time. This paper discusses a novel approach for run-time verification of Web services. It proposes the utilisation of Stream X-machines for constructing formal behavioural specifications of Web services which can be exploited for verifyin...

  19. Stage-dependent hierarchy of criteria in multiobjective multistage decision processes

    Directory of Open Access Journals (Sweden)

    Tadeusz Trzaskalik

    2017-01-01

    This paper considers a multiobjective, multistage discrete dynamic process with a changeable, state-dependent hierarchy of stage criteria determined by the decision maker. The goal of this paper is to answer the question of how to control a multistage process while taking into account both the tendency to achieve multiobjective optimization of the entire process and the time-varying hierarchy of stage criteria. We consider in detail possible situations where the hierarchy of stage criteria changes over time in individual stages and is stage dependent. We present an interactive approach to solving the problem, in which the decision maker actively participates in finding the final realization of the process. The proposed algorithm is illustrated with a numerical example.

  20. Assessing efficiency and effectiveness of Malaysian Islamic banks: A two stage DEA analysis

    Science.gov (United States)

    Kamarudin, Norbaizura; Ismail, Wan Rosmanira; Mohd, Muhammad Azri

    2014-06-01

    Islamic banks in Malaysia are indispensable players in the financial industry, given the growing need for syariah-compliant systems. In the banking industry, most recent studies have concerned only operational efficiency, and rarely operational effectiveness. Since the production process of the banking industry can be described as a two-stage process, two-stage Data Envelopment Analysis (DEA) can be applied to measure bank performance. This study was designed to measure the overall performance, in terms of efficiency and effectiveness, of Islamic banks in Malaysia using a two-stage DEA approach. This paper presents the analysis of a DEA model which splits efficiency and effectiveness in order to evaluate the performance of ten selected Islamic banks in Malaysia for the financial year ended 2011. The analysis shows that the average efficiency score is higher than the average effectiveness score; thus we can say that Malaysian Islamic banks were more efficient than effective. Furthermore, none of the banks exhibited best practice in both stages, so a bank with better efficiency does not always have better effectiveness at the same time.

  1. Two-stage precipitation of plutonium trifluoride

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1984-04-01

    Plutonium trifluoride was precipitated using a two-stage precipitation system. A series of precipitation experiments identified the significant process variables affecting precipitate characteristics. A mathematical precipitation model was developed which was based on the formation of plutonium fluoride complexes. The precipitation model relates all process variables, in a single equation, to a single parameter that can be used to control particle characteristics

  2. Safety evaluation of the ITP filter/stripper test runs and quiet time runs using simulant solution. Revision 3

    International Nuclear Information System (INIS)

    Gupta, M.K.

    1994-06-01

    The purpose is to provide the technical bases for the evaluation of the Unreviewed Safety Question for the In-Tank Precipitation (ITP) Filter/Stripper Test Runs (Ref. 7) and Quiet Time Runs program (described in Section 3.6). The Filter/Stripper Test Runs and Quiet Time Runs program involves a 12,000-gallon feed tank containing an agitator, a 4,000-gallon flush tank, a variable-speed pump, associated piping and controls, and equipment within both the Filter and the Stripper Building.

  3. Two stage heterotrophy/photoinduction culture of Scenedesmus incrassatulus: potential for lutein production.

    Science.gov (United States)

    Flórez-Miranda, Liliana; Cañizares-Villanueva, Rosa Olivia; Melchy-Antonio, Orlando; Martínez-Jerónimo, Fernando; Flores-Ortíz, Cesar Mateo

    2017-11-20

    A biomass production process including two stages, heterotrophy/photoinduction (TSHP), was developed to improve biomass and lutein production by the green microalga Scenedesmus incrassatulus. To determine the effects of different nitrogen sources (yeast extract and urea) and temperature in the heterotrophic stage, experiments using shake-flask cultures with glucose as the carbon source were carried out. The highest biomass productivity and specific pigment concentrations were reached using urea+vitamins (U+V) at 30 °C. The first stage of the TSHP process was done in a 6 L bioreactor, and the inductions in a 3 L airlift photobioreactor. At the end of the heterotrophic stage, S. incrassatulus achieved the maximal biomass concentration, increasing from 7.22 g L-1 to 17.98 g L-1 with an increase in initial glucose concentration from 10.6 g L-1 to 30.3 g L-1. However, the higher initial glucose concentration resulted in a lower specific growth rate (μ) and lower cell yield (Y_x/s), possibly due to substrate inhibition. After 24 h of photoinduction, the lutein content of the S. incrassatulus biomass was 7 times higher than that obtained at the end of heterotrophic cultivation, and lutein productivity was 1.6 times higher compared with autotrophic culture of this microalga. Hence, the two-stage heterotrophy/photoinduction culture is an effective strategy for high cell density and lutein production in S. incrassatulus. Copyright © 2017. Published by Elsevier B.V.
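
The quantities μ and Y_x/s discussed above have standard definitions, which can be computed as follows. The initial biomass and the assumption of complete glucose consumption in the example call are illustrative, not values reported in the record:

```python
import math

def specific_growth_rate(x0, x1, dt_h):
    """Specific growth rate mu = ln(x1/x0)/dt (h^-1), exponential phase."""
    return math.log(x1 / x0) / dt_h

def biomass_yield(x0, x1, s_consumed):
    """Cell yield Y_x/s = biomass formed per substrate consumed (g/g)."""
    return (x1 - x0) / s_consumed

# Illustrative: final biomass 17.98 g/L from the record, assumed inoculum
# of 0.5 g/L and full consumption of the 30.3 g/L initial glucose.
print(round(biomass_yield(0.5, 17.98, 30.3), 2))  # 0.58
```

Substrate inhibition, as suggested in the abstract, would show up as a lower μ (and often a lower Y_x/s) at the higher initial glucose concentration despite the larger final biomass.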

  4. Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?

    Science.gov (United States)

    Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R

    2018-04-30

    Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
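
The two-stage procedure discussed above (combining per-study estimates and standard errors) is, for a common-effect Gaussian model, the inverse-variance weighted average; the record's point is that a one-stage fit of the same model gives approximately the same precision. A minimal sketch with made-up study results:

```python
# Two-stage fixed-effect (common-effect) meta-analysis:
# inverse-variance pooling of per-study estimates. Numbers are invented.
estimates = [1.2, 0.8, 1.0]   # per-study treatment effect estimates
ses       = [0.2, 0.3, 0.25]  # per-study standard errors

weights = [1 / se**2 for se in ses]              # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5            # SE of the pooled estimate
print(round(pooled, 3), round(pooled_se, 3))     # 1.053 0.139
```

A one-stage analysis would instead fit a single weighted model to all individual patient data, but under the same common-effect assumptions it recovers essentially this estimate and standard error.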

  5. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk-run-rest mixtures.

    Science.gov (United States)

    Long, Leroy L; Srinivasan, Manoj

    2013-04-06

    On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk-run mixture at intermediate speeds and a walk-rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients-a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk-run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill.
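The non-convexity argument can be checked numerically. With hypothetical walking and running power curves whose lower envelope is non-convex near the gait transition, a walk-run mixture matching the same average speed costs less energy than steady locomotion (all curves and numbers below are illustrative, not the paper's measured energetics):

```python
# Hypothetical metabolic power curves (arbitrary units); the lower
# envelope min(walk, run) is non-convex near the transition speed.
def walk_power(v):
    return 2.0 + v ** 2

def run_power(v):
    return 4.0 + (v - 3.0) ** 2

def power(v):
    return min(walk_power(v), run_power(v))

def steady_cost(v_avg, total_time=1.0):
    # energy for covering the distance at one constant speed
    return total_time * power(v_avg)

def mixture_cost(v_avg, v_slow, v_fast, total_time=1.0):
    # spend a fraction f of the time at v_slow and the rest at v_fast,
    # with f chosen so the average speed still equals v_avg
    f = (v_fast - v_avg) / (v_fast - v_slow)
    return total_time * (f * power(v_slow) + (1.0 - f) * power(v_fast))
```

For an average speed of 2.0 in these units, walking half the time at 1.0 and running half the time at 3.0 is cheaper than moving steadily at 2.0, mirroring the paper's conclusion that steady locomotion is not always energy optimal.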

  6. Sensory shelf life estimation of minimally processed lettuce considering two stages of consumers' decision-making process.

    Science.gov (United States)

    Ares, Gastón; Giménez, Ana; Gámbaro, Adriana

    2008-01-01

    The aim of the present work was to study the influence of context, particularly the stage of the decision-making process (purchase vs consumption stage), on sensory shelf life of minimally processed lettuce. Leaves of butterhead lettuce were placed in common polypropylene bags and stored at 5, 10 and 15 degrees C. Periodically, a panel of six assessors evaluated the appearance of the samples, and a panel of 40 consumers evaluated their appearance and answered "yes" or "no" to the questions: "Imagine you are in a supermarket, you want to buy a minimally processed lettuce, and you find a package of lettuce with leaves like this, would you normally buy it?" and "Imagine you have this leaf of lettuce stored in your refrigerator, would you normally consume it?". Survival analysis was used to calculate the shelf lives of minimally processed lettuce, considering both decision-making stages. Shelf lives estimated considering rejection to purchase were significantly lower than those estimated considering rejection to consume. Therefore, in order to be conservative and assure the products' quality, shelf life should be estimated considering consumers' rejection to purchase instead of rejection to consume, as traditionally has been done. On the other hand, results from logistic regressions of consumers' rejection percentage as a function of the evaluated appearance attributes suggested that consumers considered them differently while deciding whether to purchase or to consume minimally processed lettuce.

  7. Two-time scale subordination in physical processes with long-term memory

    International Nuclear Information System (INIS)

    Stanislavsky, Aleksander; Weron, Karina

    2008-01-01

    We describe dynamical processes in continuous media with a long-term memory. Our consideration is based on a stochastic subordination idea and concerns two physical examples in detail. First we study a temporal evolution of the species concentration in a trapping reaction in which a diffusing reactant is surrounded by a sea of randomly moving traps. The analysis uses the random-variable formalism of anomalous diffusive processes. We find that the empirical trapping-reaction law, according to which the reactant concentration decreases in time as a product of an exponential and a stretched exponential function, can be explained by a two-time scale subordination of random processes. Another example is connected with a state equation for continuous media with memory. If the pressure and the density of a medium are subordinated in two different random processes, then the ordinary state equation becomes fractional with two-time scales. This allows one to arrive at the Bagley-Torvik type of state equation

  8. Novel Real-time Calibration and Alignment Procedure for LHCb Run II

    CERN Multimedia

    Prouve, Claire

    2016-01-01

    In order to achieve optimal detector performance the LHCb experiment has introduced a novel real-time detector alignment and calibration strategy for Run II of the LHC. For the alignment tasks, data is collected and processed at the beginning of each fill while the calibrations are performed for each run. This real-time alignment and calibration allows the same constants to be used in both the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. Additionally, the newly computed alignment and calibration constants can be instantly used in the trigger, making it more efficient. The online alignment and calibration of the RICH detectors also enable the use of hadronic particle identification in the trigger. The computing time constraints are met through the use of a new dedicated framework using the multi-core farm infrastructure for the LHCb trigger. An overview of all alignment and calibration tasks is presented and their performance is shown.

  9. Near Real-Time Photometric Data Processing for the Solar Mass Ejection Imager (SMEI)

    Science.gov (United States)

    Hick, P. P.; Buffington, A.; Jackson, B. V.

    2004-12-01

    The Solar Mass Ejection Imager (SMEI) records a photometric white-light response of the interplanetary medium from Earth over most of the sky in near real time. In the first two years of operation the instrument has recorded the inner heliospheric response to several hundred CMEs, including the May 28, 2003 and the October 28, 2003 halo CMEs. In this preliminary work we present the techniques required to process the SMEI data from the time the raw CCD images become available to their final assembly in photometrically accurate maps of the sky brightness relative to a long-term time base. Processing of the SMEI data includes integration of new data into the SMEI database; a conditioning program that removes from the raw CCD images an electronic offset ("pedestal") and a temperature-dependent dark current pattern; an "indexing" program that places these CCD images onto a high-resolution sidereal grid using known spacecraft pointing information. At this "indexing" stage further conditioning removes the bulk of the effects of high-energy-particle hits ("cosmic rays"), space debris inside the field of view, and pixels with a sudden state change ("flipper pixels"). Once the high-resolution grid is produced, it is reformatted to a lower-resolution set of sidereal maps of sky brightness. From these sidereal maps we remove bright stars, background stars, and a zodiacal cloud model (their brightnesses are retained as additional data products). The final maps can be represented in any convenient sky coordinate system. Common formats are Sun-centered Hammer-Aitoff or "fisheye" maps. Time series at selected locations on these maps are extracted and processed further to remove aurorae, variable stars and other unwanted signals. These time series (with a long-term base removed) are used in 3D tomographic reconstructions. The data processing is distributed over multiple PCs running Linux, and runs largely automatically using recurring batch jobs ('cronjobs').
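The frame-conditioning steps described above (pedestal and dark-current removal, cosmic-ray suppression) can be sketched with NumPy. This is an illustrative outline only, not the actual SMEI pipeline code, and `temp_scale` is a hypothetical temperature-scaling factor:

```python
import numpy as np

def condition_frame(raw, pedestal, dark, temp_scale):
    # remove the electronic offset ("pedestal") and a temperature-scaled
    # dark-current pattern from a raw CCD frame
    return raw - pedestal - temp_scale * dark

def suppress_particle_hits(frames):
    # pixelwise median over a short stack of co-registered frames rejects
    # transient outliers such as cosmic-ray hits and debris tracks
    return np.median(np.stack(frames), axis=0)
```

The median stack is one common way to reject single-frame transients; the real pipeline works on the high-resolution sidereal grid rather than raw frames.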

  10. Running shoes and running injuries: mythbusting and a proposal for two new paradigms: 'preferred movement path' and 'comfort filter'.

    Science.gov (United States)

    Nigg, B M; Baltich, J; Hoerzer, S; Enders, H

    2015-10-01

    In the past 100 years, running shoes experienced dramatic changes. The question then arises whether or not running shoes (or sport shoes in general) influence the frequency of running injuries at all. This paper addresses five aspects related to running injuries and shoe selection, including (1) the changes in running injuries over the past 40 years, (2) the relationship between sport shoes, sport inserts and running injuries, (3) previously researched mechanisms of injury related to footwear and two new paradigms for injury prevention including (4) the 'preferred movement path' and (5) the 'comfort filter'. Specifically, the data regarding the relationship between impact characteristics and ankle pronation to the risk of developing a running-related injury is reviewed. Based on the lack of conclusive evidence for these two variables, which were once thought to be the prime predictors of running injuries, two new paradigms are suggested to elucidate the association between footwear and injury. These two paradigms, 'the preferred movement path' and 'the comfort filter', suggest that a runner intuitively selects a comfortable product using their own comfort filter that allows them to remain in the preferred movement path. This may automatically reduce the injury risk and may explain why there does not seem to be a secular trend in running injury rates. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  11. Audiovisual integration in speech perception: a multi-stage process

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    investigate whether the integration of auditory and visual speech observed in these two audiovisual integration effects are specific traits of speech perception. We further ask whether audiovisual integration is undertaken in a single processing stage or multiple processing stages....

  12. Discrete time population dynamics of a two-stage species with recruitment and capture

    International Nuclear Information System (INIS)

    Ladino, Lilia M.; Mammana, Cristiana; Michetti, Elisabetta; Valverde, Jose C.

    2016-01-01

    This work models and analyzes the dynamics of a two-stage species with recruitment and capture factors. It arises from the discretization of a previous model developed by Ladino and Valverde (2013), which represents progress in the knowledge of the dynamics of exploited populations. Although the methods used here are related to the study of discrete-time systems and are different from those related to the continuous version, the results are similar in both the discrete and the continuous case, which confirms the soundness of the factors selected to design the model. Unlike in the continuous-time case, in the discrete-time one some (non-negative) parametric constraints are derived from the biological significance of the model and become fundamental for the proofs of such results. Finally, numerical simulations show different scenarios of dynamics related to the analytical results, which confirm the validity of the model.
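A generic two-stage (juvenile/adult) discrete-time map with recruitment and capture can be simulated in a few lines. The structure and all parameter values below are hypothetical, chosen only to illustrate a population declining under harvesting; they are not the constraints derived in the paper:

```python
# Illustrative two-stage discrete-time model: juveniles J recruited from
# adults A; adults gained by maturation and lost to natural mortality m
# and capture (harvest) h.  All parameters are hypothetical.
def step(J, A, b=0.5, s=0.5, m=0.2, h=0.1):
    J_next = b * A                       # recruitment from adults
    A_next = s * J + (1.0 - m - h) * A   # maturation + surviving adults
    return J_next, A_next

def simulate(J0, A0, steps=200):
    J, A = float(J0), float(A0)
    for _ in range(steps):
        J, A = step(J, A)
    return J, A
```

With these rates the dominant eigenvalue of the update matrix is below one, so the harvested population decays toward extinction; raising recruitment b or lowering capture h pushes it above one and the population persists.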

  13. An enhanced Ada run-time system for real-time embedded processors

    Science.gov (United States)

    Sims, J. T.

    1991-01-01

    An enhanced Ada run-time system has been developed to support real-time embedded processor applications. The primary focus of this development effort has been on the tasking system and the memory management facilities of the run-time system. The tasking system has been extended to support efficient and precise periodic task execution as required for control applications. Event-driven task execution providing a means of task-asynchronous control and communication among Ada tasks is supported in this system. Inter-task control is even provided among tasks distributed on separate physical processors. The memory management system has been enhanced to provide object allocation and protected access support for memory shared between disjoint processors, each of which is executing a distinct Ada program.

  14. Optimization of a novel enzyme treatment process for early-stage processing of sheepskins.

    Science.gov (United States)

    Lim, Y F; Bronlund, J E; Allsop, T F; Shilton, A N; Edmonds, R L

    2010-01-01

    An enzyme treatment process for early-stage processing of sheepskins has been previously reported by the Leather and Shoe Research Association of New Zealand (LASRA) as an alternative to current industry operations. The newly developed process had marked benefits over conventional processing in terms of a lowered energy usage (73%), processing time (47%) as well as water use (49%), but had been developed as a "proof of principle". The objective of this work was to develop the process further to a stage ready for adoption by industry. Mass balancing was used to investigate potential modifications for the process based on the understanding developed from a detailed analysis of preliminary design trials. Results showed that a configuration utilising a 2-stage counter-current system for the washing stages and segregation and recycling of enzyme float prior to dilution in the neutralization stage was a significant improvement. Benefits over conventional processing include a reduction of residual TDS by 50% at the washing stages and 70% savings on water use overall. Benefits over the un-optimized LASRA process are reduction of solids in product after enzyme treatment and neutralization stages by 30%, additional water savings of 21%, as well as 10% savings of enzyme usage.

  15. The Reliability and Validity of a Four-Minute Running Time-Trial in Assessing V˙O2max and Performance

    Directory of Open Access Journals (Sweden)

    Kerry McGawley

    2017-05-01

    Introduction: Traditional graded-exercise tests to volitional exhaustion (GXTs) are limited by the need to establish starting workloads, stage durations, and step increments. Short-duration time-trials (TTs) may be easier to implement and more ecologically valid in terms of real-world athletic events. The purpose of the current study was to assess the reliability and validity of maximal oxygen uptake (V˙O2max) and performance measured during a traditional GXT (STEP) and a four-minute running time-trial (RunTT). Methods: Ten recreational runners (age: 32 ± 7 years; body mass: 69 ± 10 kg) completed five STEP tests with a verification phase (VER) and five self-paced RunTTs on a treadmill. The order of the STEP/VER and RunTT trials was alternated and counter-balanced. Performance was measured as time to exhaustion (TTE) for STEP and VER and distance covered for RunTT. Results: The coefficient of variation (CV) for V˙O2max was similar between STEP, VER, and RunTT (1.9 ± 1.0, 2.2 ± 1.1, and 1.8 ± 0.8%, respectively), but varied for performance between the three types of test (4.5 ± 1.9, 9.7 ± 3.5, and 1.8 ± 0.7% for STEP, VER, and RunTT, respectively). Bland-Altman limits of agreement (bias ± 95% limits) showed V˙O2max to be 1.6 ± 3.6 mL·kg−1·min−1 higher for STEP vs. RunTT. Peak HR was also significantly higher during STEP compared with RunTT (P = 0.019). Conclusion: A four-minute running time-trial appears to provide more reliable performance data in comparison to an incremental test to exhaustion, but may underestimate V˙O2max.

  16. Mathematical modeling of a continuous alcoholic fermentation process in a two-stage tower reactor cascade with flocculating yeast recycle.

    Science.gov (United States)

    de Oliveira, Samuel Conceição; de Castro, Heizir Ferreira; Visconti, Alexandre Eliseu Stourdze; Giudici, Reinaldo

    2015-03-01

    Experiments of continuous alcoholic fermentation of sugarcane juice with flocculating yeast recycle were conducted in a system of two 0.22-L tower bioreactors in series, operated at a range of dilution rates (D1 = D2 = 0.27-0.95 h^-1), constant recycle ratio (α = FR/F = 4.0) and a sugar concentration in the feed stream (S0) around 150 g/L. The data obtained in these experimental conditions were used to adjust the parameters of a mathematical model previously developed for the single-stage process. This model considers each of the tower bioreactors as a perfectly mixed continuous reactor and the kinetics of cell growth and product formation takes into account the limitation by substrate and the inhibition by ethanol and biomass, as well as the substrate consumption for cellular maintenance. The model predictions agreed satisfactorily with the measurements taken in both stages of the cascade. The major differences with respect to the kinetic parameters previously estimated for a single-stage system were observed for the maximum specific growth rate, for the inhibition constants of cell growth and for the specific rate of substrate consumption for cell maintenance. Mathematical models were validated and used to simulate alternative operating conditions as well as to analyze the performance of the two-stage process against that of the single-stage process.
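A minimal numerical sketch of such a two-stage cascade can be built from Monod growth with linear ethanol inhibition, integrated by Euler steps, with stage 2 fed by the stage-1 effluent. The recycle stream, maintenance term and the authors' fitted parameter values are omitted; every number below is illustrative only:

```python
# Simplified two-stage chemostat cascade: Monod kinetics with linear
# product (ethanol) inhibition.  All parameter values are hypothetical.
MU_MAX, KS, P_MAX = 0.5, 5.0, 90.0   # 1/h, g/L, g/L
Y_XS, Y_PS, D = 0.05, 0.45, 0.3      # yields (g/g), dilution rate (1/h)

def stage(S_in, P_in, X0=10.0, t_end=400.0, dt=0.005):
    X, S, P = X0, S_in, P_in
    for _ in range(int(t_end / dt)):
        mu = max(0.0, MU_MAX * S / (KS + S) * (1.0 - P / P_MAX))
        q_s = mu * X / Y_XS                      # substrate uptake (g/L/h)
        X += (mu - D) * X * dt                   # growth vs. washout
        S += (D * (S_in - S) - q_s) * dt         # feed vs. consumption
        P += (D * (P_in - P) + Y_PS * q_s) * dt  # dilution vs. production
        S = max(S, 0.0)
    return X, S, P

X1, S1, P1 = stage(150.0, 0.0)   # stage 1 fed with fresh medium
X2, S2, P2 = stage(S1, P1)       # stage 2 fed with stage-1 effluent
```

The second stage consumes residual substrate from the first, so substrate falls and ethanol rises along the cascade, which is the qualitative behaviour the two-stage arrangement is meant to exploit.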

  17. Optimization of Removal Efficiency and Minimum Contact Time for Cadmium and Zinc Removal onto Iron-modified Zeolite in a Two-stage Batch Sorption Reactor

    Directory of Open Access Journals (Sweden)

    M. Ugrina

    2018-01-01

    In highly congested industrial sites where significant volumes of effluents have to be treated in the minimum contact time, the application of a multi-stage batch reactor is suggested. To achieve better balance between capacity utilization and cost efficiency in design optimization, a two-stage batch reactor is usually the optimal solution. Thus, in this paper, a two-stage batch sorption design approach was applied to the experimental data of cadmium and zinc uptake onto iron-modified zeolite. The optimization approach involves the application of the Vermeulen's approximation model and mass balance equation to kinetic data. A design analysis method was developed to optimize the removal efficiency and minimum total contact time by combining the time required in the two stages, in order to achieve the maximum percentage of cadmium and zinc removal using a fixed mass of zeolite. The benefits and limitations of the two-stage design approach have been investigated and discussed.

  18. Pyrolysis of Pinus pinaster in a two-stage gasifier: Influence of processing parameters and thermal cracking of tar

    Energy Technology Data Exchange (ETDEWEB)

    Fassinou, Wanignon Ferdinand; Toure, Siaka [Laboratoire d' Energie Solaire-UFR-S.S.M.T. Universite de Cocody, 22BP582 Abidjan 22 (Ivory Coast); Van de Steene, Laurent; Volle, Ghislaine; Girard, Philippe [CIRAD-Foret, TA 10/16, 73, avenue J.-F. Breton, 34398 Montpellier, Cedex 5 (France)

    2009-01-15

    A new two-stage fixed-bed gasifier has recently been installed at the CIRAD facilities in Montpellier. The pyrolysis and the gasifier units are removable. In order to characterise the pyrolysis products before their gasification, experiments were carried out for the first time with only the pyrolysis unit, and this paper deals with the results obtained. The biomass used is Pinus pinaster. The parameters investigated are: temperature, residence time and biomass flow rate. It has been found that increasing temperature and residence time improves the cracking of tars, gas production and char quality (fixed carbon rate more than 90%, volatile matter rate less than 4%). Increasing the biomass flow rate leads to poor char quality. The efficiency of tar cracking and the quality and heating value of the charcoal and the gases indicate that a temperature between 650 °C and 750 °C, a residence time of 30 min and a biomass flow rate between 10 and 15 kg/h should be the most suitable experimental conditions to get better results from the experimental device and from the biomass pyrolysis process. The kinetic study of charcoal generation shows that the pyrolysis process, in these experimental conditions, is a first-order reaction. The kinetic parameters calculated are comparable with those found by other researchers. (author)

  19. Modified SPC for short run test and measurement process in multi-stations

    Science.gov (United States)

    Koh, C. K.; Chin, J. F.; Kamaruddin, S.

    2018-03-01

    Due to short production runs and measurement error inherent in electronic test and measurement (T&M) processes, continuous quality monitoring through real-time statistical process control (SPC) is challenging. Industry practice allows the installation of a guard band using measurement uncertainty to reduce the width of the acceptance limit, as an indirect way to compensate for measurement errors. This paper presents a new SPC model combining a modified guard band and control charts (Z̄ chart and W chart) for short runs in T&M processes in multi-stations. The proposed model standardizes the observed value with the measurement target (T) and rationed measurement uncertainty (U). An S-factor (Sf) is introduced to the control limits to improve the sensitivity in detecting small shifts. The model was embedded in an automated quality control system and verified with a case study in real industry.
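The standardization idea can be sketched as follows: each observation is centred on its measurement target T and scaled by the measurement uncertainty U, so short runs from different stations or products can share one chart. The tightened limits via an S-factor below are our reading of the abstract, not the paper's exact formula:

```python
import numpy as np

def standardize(x, T, U):
    # map raw readings onto a common scale: z = (x - T) / U
    return (np.asarray(x, dtype=float) - T) / U

def out_of_control(z, s_factor=0.8):
    # S-factor < 1 tightens the usual +/-3 limits, increasing
    # sensitivity to small shifts (hypothetical choice of 0.8)
    limit = 3.0 * s_factor
    return np.abs(z) > limit
```

Because z is dimensionless, readings with different targets and uncertainties from multiple stations can be plotted on the same short-run chart.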

  20. Two-Stage Variable Sample-Rate Conversion System

    Science.gov (United States)

    Tkacenko, Andre

    2009-01-01

    A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
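A toy version of the two-stage idea (with stand-in filters, not the proposed flight design): stage 1 decimates by a fixed integer factor, and stage 2 handles the remaining, possibly irrational, rate ratio:

```python
import numpy as np

def stage1_decimate(x, M):
    # stage 1: fixed integer-factor decimation; a length-M moving
    # average stands in for a proper anti-aliasing FIR filter
    h = np.ones(M) / M
    return np.convolve(x, h, mode="same")[::M]

def stage2_resample(x, ratio):
    # stage 2: arbitrary output/input rate ratio (not necessarily a
    # rational fraction), realized here by linear interpolation
    n_out = int(len(x) * ratio)
    t = np.arange(n_out) / ratio   # output sample times in input units
    return np.interp(t, np.arange(len(x)), x)
```

Splitting the conversion this way keeps the expensive fixed-rate filtering in stage 1 cheap and confines the variable-ratio work to the already-decimated, lower-rate signal in stage 2.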

  1. Novel real-time alignment and calibration of LHCb detector for Run II and tracking for the upgrade.

    CERN Document Server

    AUTHOR|(CDS)2091576

    2016-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run II. Data collected at the start of the fill is processed in a few minutes and used to update the alignment, while the calibration constants are evaluated for each run. The procedure aims to improve the quality of the online selection and performance stability. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. A similar scheme is planned to be used for Run III foreseen to start in 2020. At that time LHCb will run at an instantaneous luminosity of $2 \times 10^{33}$ cm$^{-2}$ s$^{-1}$ and a fully software based trigger strategy will be used. The new running conditions and the tighter timing constraints in the software trigger (only 13 ms per event are available) represent a big challenge for track reconstruction. The new software based trigger strategy implies a full detector read-out at the collision rate of 40 MHz. High performance ...

  2. One-stage and two-stage penile buccal mucosa urethroplasty

    Directory of Open Access Journals (Sweden)

    G. Barbagli

    2016-03-01

    The paper provides the reader with a detailed description of current techniques of one-stage and two-stage penile buccal mucosa urethroplasty. It also covers preoperative patient evaluation, paying attention to the use of diagnostic tools. The one-stage penile urethroplasty using a buccal mucosa graft with the application of glue is first presented and discussed. Two-stage penile urethroplasty is then reported. A detailed description of first-stage urethroplasty according to the Johanson technique is given. A second-stage urethroplasty using a buccal mucosa graft and glue is presented. Finally, the postoperative course and follow-up are addressed.

  3. Running speed during training and percent body fat predict race time in recreational male marathoners.

    Science.gov (United States)

    Barandun, Ursula; Knechtle, Beat; Knechtle, Patrizia; Klipstein, Andreas; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-01-01

    Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners. Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. After multivariate regression, running speed of the training units (β = -0.52, P marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r^2 = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) - 12.06 × (speed in training, km/hour). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance of marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. The present results suggest that low body fat and running speed during training close to race pace (about 11 km/hour) are two key factors for a fast marathon race time in recreational male marathon runners.
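The reported regression can be written directly as a function, with the coefficients taken verbatim from the abstract. It is purely descriptive (r^2 = 0.44) and applies only to recreational male runners like the studied sample:

```python
def predicted_race_time_min(body_fat_pct, training_speed_kmh):
    # race time (minutes) = 326.3 + 2.394 * (percent body fat)
    #                             - 12.06 * (training speed, km/h)
    return 326.3 + 2.394 * body_fat_pct - 12.06 * training_speed_kmh
```

For example, a runner with 15% body fat who trains at 11 km/h would be predicted to finish in roughly 230 minutes, with wide uncertainty given that the model explains only 44% of the variance.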

  4. Two-stage nonrecursive filter/decimator

    International Nuclear Information System (INIS)

    Yoder, J.R.; Richard, B.D.

    1980-08-01

    A two-stage digital filter/decimator has been designed and implemented to reduce the sampling rate associated with the long-term computer storage of certain digital waveforms. This report describes the design selection and implementation process and serves as documentation for the system actually installed. A filter design with finite-impulse response (nonrecursive) was chosen for implementation via direct convolution. A newly-developed system-test statistic validates the system under different computer-operating environments

  5. Time and activity sequence prediction of business process instances

    DEFF Research Database (Denmark)

    Polato, Mirko; Sperduti, Alessandro; Burattin, Andrea

    2018-01-01

    The ability to know in advance the trend of running process instances, with respect to different features, such as the expected completion time, would allow business managers to counteract undesired situations in a timely manner, in order to prevent losses. Therefore, the ability to accurately predict...... future features of running business process instances would be a very helpful aid when managing processes, especially under service level agreement constraints. However, making such accurate forecasts is not easy: many factors may influence the predicted features. Many approaches have been proposed...

  6. Two-Stage Dynamic Pricing and Advertising Strategies for Online Video Services

    Directory of Open Access Journals (Sweden)

    Zhi Li

    2017-01-01

    As the demands for online video services increase intensively, the selection of business models has drawn great attention from online providers. Among them, the pay-per-view mode and the advertising mode are two important revenue modes, where the reasonable fee charge and suitable volume of ads need to be determined. This paper establishes an analytical framework studying the optimal dynamic pricing and advertising strategies for online providers; it shows how the strategies are influenced by the videos' available time and the viewers' emotional factor. We create the two-stage strategy of revenue models involving a single fee mode and a mixed fee-free mode and find the optimal fee charge and advertising level of online video services. According to the results, the optimal video price and ads volume vary dynamically over time. The viewers' aversion level to advertising has direct effects on both the volume of ads and the number of viewers who select low-quality content. The optimal volume of ads decreases with the increase of the ads-aversion coefficient, while increasing as the quality of videos increases. The results also indicate that, in the long run, a pure fee mode or free mode is the optimal strategy for online providers.

  7. THE MATHEMATICAL MODEL DEVELOPMENT OF THE ETHYLBENZENE DEHYDROGENATION PROCESS KINETICS IN A TWO-STAGE ADIABATIC CONTINUOUS REACTOR

    Directory of Open Access Journals (Sweden)

    V. K. Bityukov

    2015-01-01

    The article is devoted to the mathematical modeling of the kinetics of ethylbenzene dehydrogenation in a two-stage adiabatic reactor with a catalytic bed functioning on continuous technology. An analysis of the chemical reactions taking place in parallel with the main styrene-forming reaction has been carried out, on the basis of which a number of assumptions were made, and from these a kinetic scheme describing the mechanism of the chemical reactions during the dehydrogenation process was developed. A mathematical model of the dehydrogenation process, describing the dynamics of the chemical reactions taking place in each of the two stages of the reactor block at a constant temperature, is developed. The rate constants of the direct and reverse reactions of formation and consumption of each component of the reaction mixture were estimated. The dynamics of the starting material (ethylbenzene) concentration variations was obtained, as well as the formation dynamics of styrene and all byproducts of dehydrogenation (benzene, toluene, ethylene, carbon, hydrogen, etc.). The calculated variations of the component composition of the reaction mixture during its passage through the first and second stages of the reactor showed that the proposed mathematical description adequately reproduces the kinetics of the process under investigation. This demonstrates the advantage of the developed model, as well as the validity of the values found for the rate constants of the reactions, which enables the use of the model for calculating the kinetics of ethylbenzene dehydrogenation under nonisothermal conditions in order to determine the optimal temperature trajectory of the reactor operation. In the future, this will reduce energy and resource consumption, increase the volume of produced styrene and improve the economic indexes of the process.

  8. CDF run II run control and online monitor

    International Nuclear Information System (INIS)

    Arisawa, T.; Ikado, K.; Badgett, W.; Chlebana, F.; Maeshima, K.; McCrory, E.; Meyer, A.; Patrick, J.; Wenzel, H.; Stadie, H.; Wagner, W.; Veramendi, G.

    2001-01-01

    The authors discuss the CDF Run II Run Control and online event monitoring system. Run Control is the top level application that controls the data acquisition activities across 150 front end VME crates and related service processes. Run Control is a real-time multi-threaded application implemented in Java with flexible state machines, using JDBC database connections to configure clients, and including a user friendly and powerful graphical user interface. The CDF online event monitoring system consists of several parts: the event monitoring programs, the display to browse their results, the server program which communicates with the display via socket connections, the error receiver which displays error messages and communicates with Run Control, and the state manager which monitors the state of the monitor programs

  9. Real-time synchronization of batch trajectories for on-line multivariate statistical process control using Dynamic Time Warping

    OpenAIRE

    González Martínez, Jose María; Ferrer Riquelme, Alberto José; Westerhuis, Johan A.

    2011-01-01

    This paper addresses the real-time monitoring of batch processes with multiple different local time trajectories of variables measured during the process run. For Unfold Principal Component Analysis (U-PCA)—or Unfold Partial Least Squares (U-PLS)-based on-line monitoring of batch processes, batch runs need to be synchronized, not only to have the same time length, but also such that key events happen at the same time. An adaptation from Kassidas et al.'s approach [1] will be introduced to ach...
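
    The synchronization method referenced above rests on Dynamic Time Warping. As a minimal sketch, the classic offline DTW recursion for two univariate trajectories is shown below; the paper's real-time, multivariate adaptation builds on this same dynamic program but is more involved.

```python
# Classic (offline) DTW distance between two univariate trajectories,
# using squared differences as the local cost.

def dtw_distance(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j],      # expansion
                                 D[i][j - 1],      # compression
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

    Two batches with the same key events at different speeds align with zero cost, which is exactly the property exploited to synchronize trajectories before U-PCA/U-PLS monitoring.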

  10. Application of two-stage biofilter system for the removal of odorous compounds.

    Science.gov (United States)

    Jeong, Gwi-Taek; Park, Don-Hee; Lee, Gwang-Yeon; Cha, Jin-Myeong

    2006-01-01

    Biofiltration is a biological process considered to be one of the more successful examples of biotechnological applications to environmental engineering, and is most commonly used in the removal of odoriferous compounds. In this study, we attempted to assess the efficiency with which both single and complex odoriferous compounds could be removed, using one- or two-stage biofiltration systems. The tested single odor gases, limonene, alpha-pinene, and iso-butyl alcohol, were separately evaluated in the biofilters. Both limonene and alpha-pinene were removed at 90% or more, with elimination capacities (EC) of 364 g/m³/h and 321 g/m³/h, respectively, at an input concentration of 50 ppm and a retention time of 30 s. Iso-butyl alcohol was maintained at an effective removal yield of more than 90% (EC 375 g/m³/h) at an input concentration of 100 ppm. The complex gas removal scheme was applied with a 200 ppm inlet concentration of ethanol, 70 ppm of acetaldehyde, and 70 ppm of toluene, with a residence time of 45 s, in one- and two-stage biofiltration systems. The removal yield of toluene was determined to be lower than that of the other gases in the one-stage biofilter. In contrast, the complex gases were sufficiently eliminated by the two-stage biofiltration system.
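
    The elimination capacity (EC) and removal yield quoted above are standard biofilter figures of merit; assuming the usual definitions (mass removed per unit bed volume per hour, and percent concentration reduction), they can be computed as:

```python
def elimination_capacity(c_in, c_out, flow, volume):
    """EC in g/m^3/h: mass removed per unit bed volume per hour.

    c_in, c_out : inlet/outlet concentrations (g/m^3)
    flow        : gas flow rate (m^3/h)
    volume      : filter bed volume (m^3)
    """
    return (c_in - c_out) * flow / volume

def removal_efficiency(c_in, c_out):
    """Removal efficiency as a percentage of the inlet concentration."""
    return 100.0 * (c_in - c_out) / c_in
```

    Note that converting the ppm values quoted in the abstract into g/m³ requires the molar mass of each compound and the gas temperature, which the sketch leaves to the caller.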

  11. Design-time application mapping and platform exploration for MP-SoC customised run-time management

    NARCIS (Netherlands)

    Ykman-Couvreur, Ch.; Nollet, V.; Marescaux, T.M.; Brockmeyer, E.; Catthoor, F.; Corporaal, H.

    2007-01-01

    Abstract: In a Multi-Processor System-on-Chip (MP-SoC) environment, a customized run-time management layer should be incorporated on top of the basic Operating System services to alleviate the run-time decision-making and to globally optimise costs (e.g. energy consumption) across all active

  12. Run-Time and Compiler Support for Programming in Adaptive Parallel Environments

    Directory of Open Access Journals (Sweden)

    Guy Edjlali

    1997-01-01

    Full Text Available For better utilization of computing resources, it is important to consider parallel programming environments in which the number of available processors varies at run-time. In this article, we discuss run-time support for data-parallel programming in such an adaptive environment. Executing programs in an adaptive environment requires redistributing data when the number of processors changes, and also requires determining new loop bounds and communication patterns for the new set of processors. We have developed a run-time library to provide this support. We discuss how the run-time library can be used by compilers of High Performance Fortran (HPF)-like languages to generate code for an adaptive environment. We present performance results for a Navier-Stokes solver and a multigrid template run on a network of workstations and an IBM SP-2. Our experiments show that if the number of processors is not varied frequently, the cost of data redistribution is not significant compared to the time required for the actual computation. Overall, our work establishes the feasibility of compiling HPF for a network of nondedicated workstations, which are likely to be an important resource for parallel programming in the future.
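
    The redistribution step described above, remapping block-distributed data when the processor count changes, can be sketched for a 1-D block distribution. This is an illustrative simplification, not the actual run-time library: it computes which index ranges must move between which ranks when the process set is resized.

```python
def block_range(n, p, rank):
    """Index range [lo, hi) owned by `rank` in a block distribution
    of n items over p processors (remainder spread over the low ranks)."""
    base, extra = divmod(n, p)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

def redistribution_plan(n, p_old, p_new):
    """Transfers (old_rank, new_rank, lo, hi) needed after resizing
    from p_old to p_new processors: intersect old and new ownership."""
    plan = []
    for old in range(p_old):
        olo, ohi = block_range(n, p_old, old)
        for new in range(p_new):
            nlo, nhi = block_range(n, p_new, new)
            lo, hi = max(olo, nlo), min(ohi, nhi)
            if lo < hi and old != new:
                plan.append((old, new, lo, hi))
    return plan
```

    New loop bounds for each processor fall directly out of `block_range`, which mirrors the paper's observation that resizing requires recomputing both the data layout and the iteration/communication pattern.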

  13. Time limit and time at VO2max during a continuous and an intermittent run.

    Science.gov (United States)

    Demarie, S; Koralsztein, J P; Billat, V

    2000-06-01

    The purpose of this study was to verify, by track field tests, whether sub-elite runners (n=15) could (i) reach their VO2max while running at v50%delta, i.e. midway between the speed associated with the lactate threshold (vLAT) and that associated with maximal aerobic power (vVO2max), and (ii) whether an intermittent exercise provokes a maximal and/or supramaximal oxygen consumption for longer than a continuous one. Within three days, subjects underwent a multistage incremental test during which their vVO2max and vLAT were determined; they then performed two additional testing sessions, in which continuous and intermittent running exercises at v50%delta were performed up to exhaustion. The subjects' gas exchange and heart rate were continuously recorded by means of a telemetric apparatus. Blood samples were taken from the fingertip and analysed for blood lactate concentration. In both the continuous and the intermittent tests, peak VO2 exceeded the VO2max values determined during the incremental test. However, in the intermittent exercise, peak VO2, time to exhaustion and time at VO2max reached significantly higher values, while blood lactate accumulation showed significantly lower values, than in the continuous one. Running at v50%delta is sufficient to stimulate VO2max in both intermittent and continuous running. The intermittent exercise proved better than the continuous one at increasing maximal aerobic power, allowing a longer time at VO2max and producing a higher peak VO2 with lower lactate accumulation.
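
    The test velocity v50%delta defined above is simply the midpoint between vLAT and vVO2max; as a one-line sketch:

```python
def v50_delta(v_lat, v_vo2max):
    """Speed midway between the lactate-threshold speed and vVO2max
    (both in the same units, e.g. km/h)."""
    return v_lat + 0.5 * (v_vo2max - v_lat)
```

    For example, a runner with vLAT = 14 km/h and vVO2max = 18 km/h would be tested at 16 km/h.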

  14. Learning Sprint Running Using a Modified Shuttle Run

    Directory of Open Access Journals (Sweden)

    Suharjo

    2015-09-01

    Full Text Available The purpose of this study was to determine whether a modified shuttle run can improve the sprint-running learning outcomes of fifth-grade students at Cenggini 02 Elementary School, Balapulang Subdistrict, Tegal, in 2014. The method was classroom action research using two cycles, each consisting of four stages: planning, action, observation and reflection. The second cycle built on the results achieved in the first cycle and acted as an improvement on it. The subjects of this study were the fifth-grade students of Cenggini 02 State Elementary School. The research covered three domains, affective, cognitive and psychomotor, in addition to observations made while the learning process took place. The results in the affective, cognitive and psychomotor domains were categorized as good, showing that learning sprint running with a modified shuttle run had a positive impact, as seen in the mastery of learning outcomes by students who exceeded the predetermined minimum passing criterion (KKM) of 75. In the first cycle the students' average score was 75.71; in the second cycle it was 78.60. Learning mastery reached 64.29% in the first cycle and 92.86% in the second cycle, an increase of 28.57%. It is concluded that learning sprint running with a modified shuttle run has a positive effect and can increase students' interest and motivation to learn.

  15.  Running speed during training and percent body fat predict race time in recreational male marathoners

    Directory of Open Access Journals (Sweden)

    Barandun U

    2012-07-01

    Full Text Available Background: Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same anthropometric and training characteristics as used for ultramarathoners. Methods: Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. Results: After multivariate regression, running speed of the training units (β = -0.52, P < 0.0001) and percent body fat (β = 0.27, P < 0.0001) were the two variables most strongly correlated with marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r² = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) − 12.06 × (speed in training, km/h). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance of marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. Conclusion: The present results suggest that low body fat and a running speed during training close to race pace (about 11 km/hour) are two key factors for a fast marathon race time in recreational male marathon runners. Keywords: body fat, skinfold thickness, anthropometry, endurance, athlete
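
    The regression equation above can be applied directly; a sketch (meaningful only within the study's sample of recreational male runners, since the model explains just 44% of the variance):

```python
def predicted_marathon_time(body_fat_pct, training_speed_kmh):
    """Predicted marathon race time in minutes from the study's regression:
    time = 326.3 + 2.394 * body fat (%) - 12.06 * training speed (km/h)."""
    return 326.3 + 2.394 * body_fat_pct - 12.06 * training_speed_kmh
```

    For example, 15% body fat and an 11 km/h training speed predict roughly 230 minutes, i.e. a marathon of about 3 h 50 min.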

  16. Optics of two-stage photovoltaic concentrators with dielectric second stages

    Science.gov (United States)

    Ning, Xiaohui; O'Gallagher, Joseph; Winston, Roland

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.

  17. Optics of two-stage photovoltaic concentrators with dielectric second stages.

    Science.gov (United States)

    Ning, X; O'Gallagher, J; Winston, R

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.

  18. Rapid Two-stage Versus One-stage Surgical Repair of Interrupted Aortic Arch with Ventricular Septal Defect in Neonates

    Directory of Open Access Journals (Sweden)

    Meng-Lin Lee

    2008-11-01

    Conclusion: The outcome of rapid two-stage repair is comparable to that of one-stage repair. Rapid two-stage repair has the advantages of significantly shorter cardiopulmonary bypass duration and AXC time, and avoids deep hypothermic circulatory arrest. LVOTO remains an unresolved issue, and postoperative aortic arch restenosis can be dilated effectively by percutaneous balloon angioplasty.

  19. Effect of ammoniacal nitrogen on one-stage and two-stage anaerobic digestion of food waste

    International Nuclear Information System (INIS)

    Ariunbaatar, Javkhlan; Scotto Di Perta, Ester; Panico, Antonio; Frunzo, Luigi; Esposito, Giovanni; Lens, Piet N.L.; Pirozzi, Francesco

    2015-01-01

    Highlights: • Almost 100% of the biomethane potential of food waste was recovered during AD in a two-stage CSTR. • Recirculation of the liquid fraction of the digestate provided the necessary buffer in the AD reactors. • A higher OLR (0.9 gVS/L·d) led to a higher accumulation of TAN, which caused more toxicity. • A two-stage reactor is more sensitive to elevated concentrations of ammonia. • The IC50 of TAN for the AD of food waste amounts to 3.8 g/L. - Abstract: This research compares the operation of one-stage and two-stage anaerobic continuously stirred tank reactor (CSTR) systems fed semi-continuously with food waste. The main purpose was to investigate the effects of ammoniacal nitrogen on the anaerobic digestion process. The two-stage system gave more reliable operation compared to the one-stage system due to: (i) a better pH self-adjusting capacity; (ii) a higher resistance to organic loading shocks; and (iii) a higher conversion rate of organic substrate to biomethane. Also, a small amount of biohydrogen was detected from the first stage of the two-stage reactor, making this system attractive for biohythane production. As the digestate contains ammoniacal nitrogen, re-circulating it provided the necessary alkalinity in the systems, thus preventing an eventual failure by volatile fatty acid (VFA) accumulation. However, re-circulation also resulted in an ammonium accumulation, yielding a lower biomethane production. Based on the batch experimental results, the 50% inhibitory concentration (IC50) of total ammoniacal nitrogen on the methanogenic activities was calculated as 3.8 g/L, corresponding to 146 mg/L free ammonia for the inoculum used in this research. The two-stage system was affected by the inhibition more than the one-stage system, as it requires less alkalinity and the physically separated methanogens are more sensitive to inhibitory factors, such as ammonium and propionic acid.

  20. Metadata aided run selection at ATLAS

    International Nuclear Information System (INIS)

    Buckingham, R M; Gallas, E J; Tseng, J C-L; Viegas, F; Vinek, E

    2011-01-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called 'runBrowser' makes these Conditions Metadata available as a Run based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attributes, but also gives the user information at each stage about the relationship between the conditions chosen and the remaining conditions criteria available. When a set of COMA selections are complete, runBrowser produces a human readable report as well as an XML file in a standardized ATLAS format. This XML can be saved for later use or refinement in a future runBrowser session, shared with physics/detector groups, or used as input to ELSSI (event level Metadata browser) or other ATLAS run or event processing services.

  1. Single-stage-to-orbit versus two-stage-to-orbit: A cost perspective

    Science.gov (United States)

    Hamaker, Joseph W.

    1996-03-01

    This paper considers the possible life-cycle costs of single-stage-to-orbit (SSTO) and two-stage-to-orbit (TSTO) reusable launch vehicles (RLV's). The analysis parametrically addresses the issue such that the preferred economic choice comes down to the relative complexity of the TSTO compared to the SSTO. The analysis defines the boundary complexity conditions at which the two configurations have equal life-cycle costs, and finally, makes a case for the economic preference of SSTO over TSTO.

  2. Experimental evaluation of desuperheating and oil cooling process through liquid injection in two-staged ammonia refrigeration systems with screw compressors

    International Nuclear Information System (INIS)

    Zlatanović, Ivan; Rudonja, Nedžad

    2012-01-01

    This paper examines the problem of achieving desuperheating through liquid injection in two-staged refrigeration systems based on screw compressors. The oil cooling process by refrigerant injection is also included. The basic thermodynamic principles of desuperheating and compressor cooling as well as short comparison with traditional method with a thermosyphon system have also been presented. Finally, the collected data referring to a big refrigeration plant are analyzed in the paper. Specific ammonia system concept applied in this refrigeration plant has demonstrated its advantages and disadvantages. - Highlights: ► An experiment was setup during a frozen food factory refrigeration system reconstruction and adaptation. ► Desuperheating and low-stage compressors oil cooling process were investigated. ► Efficiency of compression process and high-stage compressors functioning were examined. ► Evaporation temperature reduction has great influence on the need for injected liquid refrigerant. ► Several cases in which desuperheating and oil cooling process application are justified were determined.

  3. Studies on quantitative physiology of Trichoderma reesei with two-stage continuous culture for cellulase production

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, D; Andreotti, R; Mandels, M; Gallo, B; Reese, E T

    1979-11-01

    By employing a two-stage continuous-culture system, some of the more important physiological parameters involved in cellulase biosynthesis have been evaluated, with the ultimate objective of designing an optimally controlled cellulase process. The two-stage continuous-culture system was run for a period of 1350 hr with Trichoderma reesei strain MCG-77. The temperature and pH were controlled at 32°C and pH 4.5 for the first stage (growth) and 28°C and pH 3.5 for the second stage (enzyme production). Lactose was the only carbon source for both stages. The ratio of the specific uptake rate of carbon to that of nitrogen, Q(C)/Q(N), that supported good cell growth ranged from 11 to 15, and the ratio for maximum specific enzyme productivity ranged from 5 to 13. The maintenance coefficients determined for oxygen, M(O), and for the carbon source, M(C), are 0.85 mmol O2/g biomass/hr and 0.14 mmol hexose/g biomass/hr, respectively. The yield constants determined are: Y(X/O) = 32.3 g biomass/mol O2, Y(X/C) = 1.1 g biomass/g C (equivalently 0.44 g biomass/g hexose), and Y(X/N) = 12.5 g biomass/g nitrogen for the cell growth stage and 16.6 g biomass/g nitrogen for the enzyme production stage. Enzyme was produced only in the second stage. Volumetric and specific enzyme productivities obtained were 90 IU/liter/hr and 8 IU/g biomass/hr, respectively. The maximum specific enzyme productivity observed was 14.8 IU/g biomass/hr. The optimal dilution rate in the second stage, corresponding to the maximum enzyme productivity, was 0.026–0.028 hr⁻¹, and the specific growth rate in the second stage that supported maximum specific enzyme productivity was equal to or slightly less than zero.
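
    The volumetric and specific productivities reported above are linked through the steady-state biomass concentration (specific = volumetric / biomass), so the quoted 90 IU/liter/hr and 8 IU/g biomass/hr together imply a second-stage biomass of about 11 g/L; as a small bookkeeping sketch:

```python
def specific_productivity(volumetric_iu_per_l_hr, biomass_g_per_l):
    """Specific productivity (IU/g biomass/hr) from volumetric productivity
    (IU/L/hr) and biomass concentration (g/L)."""
    return volumetric_iu_per_l_hr / biomass_g_per_l

def implied_biomass(volumetric_iu_per_l_hr, specific_iu_per_g_hr):
    """Biomass concentration (g/L) consistent with both productivities."""
    return volumetric_iu_per_l_hr / specific_iu_per_g_hr
```
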

  4. A two-stage predictive model to simultaneous control of trihalomethanes in water treatment plants and distribution systems: adaptability to treatment processes.

    Science.gov (United States)

    Domínguez-Tello, Antonio; Arias-Borrego, Ana; García-Barrera, Tamara; Gómez-Ariza, José Luis

    2017-10-01

    The trihalomethanes (TTHMs) and other disinfection by-products (DBPs) are formed in drinking water by the reaction of chlorine with organic precursors contained in the source water, in two consecutive and linked stages: the first at the treatment plant, and the second along the distribution system (DS), by reaction of residual chlorine with organic precursors not removed. Following this approach, this study aimed at developing a two-stage empirical model for predicting the formation of TTHMs in the water treatment plant and subsequently their evolution along the water distribution system (WDS). The aim of the two-stage model was to improve the predictive capability for a wide range of water treatment and distribution scenarios. The two-stage model was developed using multiple regression analysis from a database (January 2007 to July 2012) covering three different treatment processes (conventional and advanced) in the water supply system of the Aljaraque area (southwest of Spain). The new model was then validated using a more recent database from the same water supply system (January 2011 to May 2015). The validation results indicated no significant difference between the predicted and observed values of TTHM (R² 0.874) for the distribution systems studied, proving the adaptability of the new model to the boundary conditions. Finally, the predictive capability of the new model was compared with 17 other models selected from the literature, showing satisfactory prediction results and excellent adaptability to treatment processes.
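
    The two-stage structure described above can be sketched as two chained empirical equations: stage 1 estimates TTHM in the finished water at the plant, and stage 2 propagates it along the distribution system. All coefficients and predictor names below are illustrative placeholders, not the paper's fitted model.

```python
# Sketch of a chained two-stage empirical TTHM model; every coefficient and
# predictor here is a hypothetical placeholder, not the paper's fitted values.

def tthm_plant(toc, chlorine_dose, temperature,
               b0=-5.0, b1=12.0, b2=6.0, b3=0.8):
    """Stage 1: TTHM (ug/L) in finished water leaving the treatment plant,
    as a linear function of TOC (mg/L), chlorine dose (mg/L) and temp (C)."""
    return b0 + b1 * toc + b2 * chlorine_dose + b3 * temperature

def tthm_network(tthm0, residual_cl, residence_hr,
                 c0=0.0, c1=1.0, c2=3.0, c3=1.5):
    """Stage 2: TTHM (ug/L) at a network point, driven by the plant-level
    TTHM plus residual chlorine (mg/L) and residence time (h) in the DS."""
    return c0 + c1 * tthm0 + c2 * residual_cl + c3 * residence_hr

def predict(toc, chlorine_dose, temperature, residual_cl, residence_hr):
    """Chain the two stages: stage-1 output feeds stage 2."""
    return tthm_network(tthm_plant(toc, chlorine_dose, temperature),
                        residual_cl, residence_hr)
```

    The point of the chaining is that stage 2 only needs in-network variables plus the stage-1 estimate, so a single model family covers both the plant outlet and any point in the distribution system.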

  5. A two-stage heating scheme for heat assisted magnetic recording

    Science.gov (United States)

    Xiong, Shaomin; Kim, Jeongmin; Wang, Yuan; Zhang, Xiang; Bogy, David

    2014-05-01

    Heat Assisted Magnetic Recording (HAMR) has been proposed to extend the storage areal density beyond 1 Tb/in² for the next generation of magnetic storage. A near field transducer (NFT) is widely used in HAMR systems to locally heat the magnetic disk during the writing process. However, much of the laser power is absorbed around the NFT, which causes overheating of the NFT and reduces its reliability. In this work, a two-stage heating scheme is proposed to reduce the thermal load by separating the NFT heating process into two individual heating stages, from an optical waveguide and an NFT, respectively. In the first stage, the optical waveguide is placed in front of the NFT and delivers part of the laser energy directly onto the disk surface to heat it up to a peak temperature somewhat lower than the Curie temperature of the magnetic material. The NFT then works as the second heating stage, further heating a smaller area inside the waveguide-heated region to reach the Curie point. The energy applied to the NFT in the second heating stage is reduced compared with a typical single-stage NFT heating system. With the thermal load on the NFT reduced by the two-stage heating scheme, the lifetime of the NFT can be extended by orders of magnitude under cyclic load conditions.

  6. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  7. The Relationship between Running Velocity and the Energy Cost of Turning during Running

    Science.gov (United States)

    Hatamoto, Yoichi; Yamada, Yosuke; Sagayama, Hiroyuki; Higaki, Yasuki; Kiyonaga, Akira; Tanaka, Hiroaki

    2014-01-01

    Ball game players frequently perform changes of direction (CODs) while running; however, there has been little research on the physiological impact of CODs. In particular, the effect of running velocity on the physiological and energy demands of CODs while running has not been clearly determined. The purpose of this study was to examine the relationship between running velocity and the energy cost of a 180° COD and to quantify that energy cost. Nine male university students (aged 18–22 years) participated in the study. Five shuttle trials were performed in which the subjects were required to run at different velocities (3, 4, 5, 6, 7, and 8 km/h). Each trial consisted of four stages with different turn frequencies (13, 18, 24 and 30 per minute), and each stage lasted 3 minutes. Oxygen consumption was measured during the trials. The energy cost of a COD significantly increased with running velocity (except between 7 and 8 km/h, p = 0.110). The relationship between running velocity and the energy cost of a 180° COD is best represented by a quadratic function (y = −0.012 + 0.066x + 0.008x², [r = 0.994, p = 0.001]), but is also well represented by a linear function (y = −0.228 + 0.152x, [r = 0.991, p < 0.001]). These results suggest that running velocities have relatively high physiological demands if the COD frequency increases, and that running velocity affects the physiological demands of CODs. These results also showed that the energy expenditure of a COD can be evaluated using only two data points. These results may be useful for estimating the energy expenditure of players during a match and designing shuttle exercise training programs. PMID:24497913
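
    The fitted quadratic above can be evaluated directly to estimate the energy cost of one 180° COD at a given running velocity (x in km/h; y in the units reported by the study):

```python
def cod_energy_cost(v_kmh):
    """Energy cost of one 180-degree change of direction at velocity v (km/h),
    from the study's quadratic fit y = -0.012 + 0.066*x + 0.008*x**2
    (y in the units used by the study)."""
    return -0.012 + 0.066 * v_kmh + 0.008 * v_kmh ** 2
```

    Multiplying by the number of turns per minute and the stage duration then gives the COD contribution to a shuttle stage's total energy expenditure, which is how such a fit would feed into training-program design.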

  8. Removal of cesium from simulated liquid waste with countercurrent two-stage adsorption followed by microfiltration

    Energy Technology Data Exchange (ETDEWEB)

    Han, Fei; Zhang, Guang-Hui [School of Environmental Science and Engineering, Tianjin University, Tianjin, 300072 (China); Gu, Ping, E-mail: guping@tju.edu.cn [School of Environmental Science and Engineering, Tianjin University, Tianjin, 300072 (China)

    2012-07-30

    Highlights: • The adsorption isotherm of cesium by copper ferrocyanide followed a Freundlich model. • The decontamination factor of cesium was higher in the lab-scale test than in the jar test. • A countercurrent two-stage adsorption-microfiltration process was achieved. • The cesium concentration in the effluent could be calculated. • It is a new cesium removal process with a higher decontamination factor. - Abstract: Copper ferrocyanide (CuFC) was used as an adsorbent to remove cesium. Jar test results showed that the adsorption capacity of CuFC was better than that of potassium zinc hexacyanoferrate. Lab-scale tests were performed by an adsorption-microfiltration process, and the mean decontamination factor (DF) was 463 when the initial cesium concentration was 101.3 µg/L, the dosage of CuFC was 40 mg/L and the adsorption time was 20 min. The cesium concentration in the effluent continuously decreased with the operation time, which indicated that the used adsorbent retained its adsorption capacity. To use this capacity, experiments on a countercurrent two-stage adsorption (CTA)-microfiltration (MF) process were carried out with CuFC adsorption combined with membrane separation. A calculation method for determining the cesium concentration in the effluent was given, and batch tests in a pressure cup were performed to verify the calculation method. The results showed that the experimental values fitted well with the calculated values in the CTA-MF process. The mean DF was 1123 when the dilution factor was 0.4, the initial cesium concentration was 98.75 µg/L and the dosage of CuFC and adsorption time were the same as those used in the lab-scale test. The DF obtained by the CTA-MF process was more than three times higher than that of the single-stage adsorption in the jar test.
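
    Two quantities used above can be made concrete: the Freundlich isotherm that described cesium adsorption on CuFC, and the decontamination factor (DF), the ratio of influent to effluent concentration. The isotherm parameters in the sketch below are hypothetical, not the study's fitted values.

```python
def freundlich_q(c_eq, k_f, n):
    """Freundlich isotherm: adsorbed amount q = K_f * C^(1/n).

    c_eq : equilibrium concentration in solution
    k_f, n : fitted Freundlich constants (hypothetical in the examples here)
    """
    return k_f * c_eq ** (1.0 / n)

def decontamination_factor(c_in, c_out):
    """DF: influent over effluent concentration (same units for both)."""
    return c_in / c_out
```

    With these definitions, the reported jar-test DF of 463 at 101.3 µg/L influent corresponds to an effluent of roughly 0.22 µg/L.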

  9. Two-stage continuous process of methyl ester from high free fatty acid mixed crude palm oil using static mixer coupled with high-intensity of ultrasound

    International Nuclear Information System (INIS)

    Somnuk, Krit; Smithmaitrie, Pruittikorn; Prateepchaikul, Gumpon

    2013-01-01

    Highlights: • Mixed crude palm oil was used in the two-step continuous process. • The two-step continuous process was performed using a static mixer coupled with ultrasound. • The maximum obtained yield was 92.5 vol.% after the purification process. • A residence time of less than 20 s was achieved in the ultrasonic reactors. - Abstract: The two-stage continuous process of methyl ester from high free fatty acid (FFA) mixed crude palm oil (MCPO) was performed by using a static mixer coupled with high-intensity ultrasound. The 2 × 1000 W ultrasonic homogenizers were operated at 18 kHz frequency in the 2 × 100 mL continuous reactors. For the first step, acid-catalyzed esterification was employed with 18 vol.% of methanol, 2.7 vol.% of sulfuric acid, a temperature of 60 °C, and an MCPO flow rate of 20 L h⁻¹, to reduce the acid value from 28 mg KOH g⁻¹ to less than 2 mg KOH g⁻¹. For the second step, base-catalyzed transesterification was carried out with 18 vol.% of methanol, 8 g KOH L⁻¹ of oil, and an esterified oil flow rate of 20 L h⁻¹ at 30 °C. High yields of esterified oil and crude biodiesel were attained within a residence time of less than 20 s in the ultrasonic reactors. The yields of each stage were: 103.3 vol.% of esterified oil, 105.4 vol.% of crude biodiesel, and 92.5 vol.% of biodiesel, when compared with 100 vol.% MCPO. The quality of the biodiesel meets the specification of the biodiesel standard in Thailand.

  10. Two-Stage Power Factor Corrected Power Supplies: The Low Component-Stress Approach

    DEFF Research Database (Denmark)

    Petersen, Lars; Andersen, Michael Andreas E.

    2002-01-01

    The discussion concerning the use of single-stage versus two-stage PFC solutions has been going on for the last decade and it continues. The purpose of this paper is to direct the focus back on how the power is processed and not so much on the number of stages or the amount of power processed...

  11. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two-stage heuristic algorithm for solving the Runway Operations Planning (ROP) problem. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.
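
    The two-stage structure can be illustrated end to end on toy data: stage 1 ranks candidate sequences of departure-class slots by the runway time they consume, and stage 2 fills the best sequence with specific flights. The greedy assignment below stands in for the paper's Branch & Bound integer program, and all separation times and flights are illustrative.

```python
# Toy two-stage runway-planning sketch; separations and flights are invented.
from itertools import permutations

# Required separation (s) after a slot of class X before a slot of class Y
# (H = heavy, S = small); illustrative values only.
SEPARATION = {("H", "H"): 90, ("H", "S"): 120, ("S", "H"): 60, ("S", "S"): 60}

def sequence_makespan(seq):
    """Total runway time consumed by a sequence of weight-class slots."""
    return sum(SEPARATION[pair] for pair in zip(seq, seq[1:]))

def plan(slots, flights):
    """slots: list of weight classes; flights: list of (id, ready_time, cls).

    Stage 1: rank all distinct orderings of the class slots by makespan.
    Stage 2: greedily assign the earliest-ready flight of each class
    (a stand-in for the paper's Branch & Bound integer program)."""
    best_seq = min(set(permutations(slots)), key=sequence_makespan)
    remaining = sorted(flights, key=lambda f: f[1])
    schedule = []
    for cls in best_seq:
        pick = next(f for f in remaining if f[2] == cls)
        remaining.remove(pick)
        schedule.append((pick[0], cls))
    return best_seq, schedule
```

    Separating the two stages keeps the combinatorics manageable: the class-sequence ranking is independent of which specific aircraft fill the slots, so the expensive assignment problem is only solved for the top-ranked sequences.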

  12. Click trains and the rate of information processing: does "speeding up" subjective time make other psychological processes run faster?

    Science.gov (United States)

    Jones, Luke A; Allely, Clare S; Wearden, John H

    2011-02-01

A series of experiments demonstrated that a 5-s train of clicks that has been shown in previous studies to increase the subjective duration of tones it precedes (in a manner consistent with "speeding up" timing processes) could also have an effect on information-processing rate. The experiments used simple and choice reaction time tasks (Experiment 1) and mental arithmetic (Experiment 2). In general, preceding trials by clicks made response times significantly shorter than those for trials without clicks, whereas white noise had no effect on response times. Experiments 3 and 4 investigated the effects of clicks on performance on memory tasks, using variants of two classic experiments of cognitive psychology: Sperling's (1960) iconic memory task and Loftus, Johnson, and Shimamura's (1985) iconic masking task. In both experiments participants were able to recall or recognize significantly more information from stimuli preceded by clicks than from those preceded by silence.

  13. Anaerobic digestion of citrus waste using two-stage membrane bioreactor

    Science.gov (United States)

    Millati, Ria; Lukitawesa; Dwi Permanasari, Ervina; Wulan Sari, Kartika; Nur Cahyanto, Muhammad; Niklasson, Claes; Taherzadeh, Mohammad J.

    2018-03-01

Anaerobic digestion is a promising method to treat citrus waste. However, the presence of limonene in citrus waste inhibits the anaerobic digestion process. Limonene is an antimicrobial compound and can inhibit methane-forming bacteria, which take a longer time to recover than the injured acid-forming bacteria. Hence, volatile fatty acids accumulate and methane production decreases. One way to solve this problem is to conduct the anaerobic digestion process in two stages. The first stage is aimed at the hydrolysis, acidogenesis, and acetogenesis reactions, and the second stage at the methanogenesis reaction. Separating the stages further allows each to run at its optimum conditions, making the process more stable. In this research, anaerobic digestion was carried out in batch mode using 120-ml glass bottle bioreactors in two stages. The first stage was performed in a free-cell bioreactor, whereas the second stage was performed both in a free-cell bioreactor and in a membrane bioreactor. In the first stage, the reactor was set to ‘anaerobic’ and ‘semi-aerobic’ conditions to examine the effect of oxygen on facultative anaerobic bacteria in acid production. In the second stage, the protection the membrane affords the cells against limonene was tested. For the first stage, the basal medium was prepared with 1.5 g VS of inoculum and 4.5 g VS of citrus waste. The digestion process was carried out at 55°C for four days. For the second stage, the membrane bioreactor was prepared with 3 g of cells encased and sealed in a 3×6 cm2 polyvinylidene fluoride membrane. The medium contained 40 ml of basal medium and 10 ml of liquid from the first stage. The bioreactors were incubated at 55°C for 2 days under anaerobic conditions. The results from the first stage showed that the maximum total sugar under ‘anaerobic’ and ‘semi-aerobic’ conditions was 294.3 g/l and 244.7 g/l, respectively. The corresponding values for total volatile

  14. Dry-run of site investigation planning using the manual for preliminary investigation in Japan

    International Nuclear Information System (INIS)

    Akamura, Shigeki; Miwa, Tadashi; Tanaka, Tatsuya; Shiratsuchi, Hiroshi; Horio, Atsushi

    2011-01-01

A stepwise site selection process has been adopted for geological disposal of HLW in Japan. Literature surveys, followed by preliminary investigations (PI) and, finally, detailed investigations in underground facilities will be carried out in the successive selection stages. In the PI stage, surface-based investigations such as borehole surveys and geophysical prospecting will be implemented. In order to conduct the PI appropriately and efficiently within a restricted timeframe and budget, planning and management of PI are very important. NUMO therefore compiled existing knowledge and experience in the planning and managing of investigations in the form of manuals to be used to improve and maintain internal expertise. The first editions of the two manuals were prepared on the basis of experience overseas, and then they were revised by taking the technological environment, laws and regulations in Japan into consideration. This paper introduces the procedure of PI planning using the manual, as well as the results of the dry-run, with the Yokosuka area as a hypothetical PI area, where a demonstration study is under way. Based on the dry-run, the applicability of the manual is checked and, at the same time, further revisions are made to improve the content. (author)

  15. Typical Periods for Two-Stage Synthesis by Time-Series Aggregation with Bounded Error in Objective Function

    Energy Technology Data Exchange (ETDEWEB)

    Bahl, Björn; Söhler, Theo; Hennen, Maike; Bardow, André, E-mail: andre.bardow@ltt.rwth-aachen.de [Institute of Technical Thermodynamics, RWTH Aachen University, Aachen (Germany)

    2018-01-08

Two-stage synthesis problems simultaneously consider here-and-now decisions (e.g., optimal investment) and wait-and-see decisions (e.g., optimal operation). The optimal synthesis of energy systems reveals such a two-stage character. The synthesis of energy systems involves multiple large time series such as energy demands and energy prices. Since problem size increases with the size of the time series, synthesis of energy systems leads to complex optimization problems. To reduce the problem size without losing solution quality, we propose a method for time-series aggregation to identify typical periods. Typical periods retain the chronology of time steps, which enables modeling of energy systems, e.g., with storage units or start-up cost. The aim of the proposed method is to obtain few typical periods with few time steps per period, while accurately representing the objective function of the full time series, e.g., cost. Thus, we determine the error of time-series aggregation as the cost difference between operating the optimal design for the aggregated time series and for the full time series. Thereby, we rigorously bound the maximum performance loss of the optimal energy system design. In an initial step, the proposed method identifies the best length of typical periods by autocorrelation analysis. Subsequently, an adaptive procedure determines aggregated typical periods employing the clustering algorithm k-medoids, which groups similar periods into clusters and selects one representative period per cluster. Moreover, the number of time steps per period is aggregated by a novel clustering algorithm maintaining the chronology of the time steps in the periods. The method is iteratively repeated until the error falls below a threshold value. A case study based on a real-world synthesis problem of an energy system shows that time-series aggregation from 8,760 time steps to 2 typical periods, each with 2 time steps, results in an error smaller than the optimality gap of
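
    The k-medoids clustering step named above can be sketched as follows. This is a minimal PAM-style implementation with an invented squared-Euclidean distance and a deterministic start, not the authors' adaptive procedure or their error bound.

```python
def kmedoids(periods, k, iters=20):
    """Cluster fixed-length periods and return the k medoid indices
    (each representative period is an actual member of the data)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    medoids = list(range(k))  # deterministic start: first k periods
    for _ in range(iters):
        # Assign each period to its nearest medoid.
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(periods):
            nearest = min(medoids, key=lambda m: dist(p, periods[m]))
            clusters[nearest].append(i)
        # Update: new medoid = member minimizing total in-cluster distance.
        new = []
        for m, members in clusters.items():
            best = min(members,
                       key=lambda i: sum(dist(periods[i], periods[j])
                                         for j in members))
            new.append(best)
        if sorted(new) == sorted(medoids):
            break  # converged
        medoids = new
    return sorted(medoids)
```

    On two well-separated groups of periods, the returned medoids are one representative per group, drawn from the data itself.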

  16. Strong normalization by type-directed partial evaluation and run-time code generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1998-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability...... to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We...... conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language....
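
    The type-directed mapping to long βη-normal forms can be illustrated with a toy normalization-by-evaluation sketch. This is written in Python for self-containment, not Caml, and omits the byte-code generation and loading entirely; the term encoding (nested tuples) is invented for illustration.

```python
# Types are 'o' (base) or ('->', a, b); residual terms are
# ('var', x), ('lam', x, body) and ('app', f, a).

import itertools

_fresh = itertools.count()  # fresh-variable supply

def reify(typ, val):
    """Map a meta-level value back to a term in long beta-eta-normal form."""
    if typ == 'o':
        return val  # at base type, values are residual terms
    _, a, b = typ
    x = f"x{next(_fresh)}"
    return ('lam', x, reify(b, val(reflect(a, ('var', x)))))

def reflect(typ, term):
    """Map a residual term to a meta-level value."""
    if typ == 'o':
        return term
    _, a, b = typ
    return lambda v: reflect(b, ('app', term, reify(a, v)))
```

    Reifying the meta-level identity at type (o→o)→(o→o) yields the eta-long normal form λf.λx.f x rather than λf.f, illustrating the "long" in long βη-normal form.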

  17. Strong Normalization by Type-Directed Partial Evaluation and Run-Time Code Generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1997-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability...... to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We...... conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language....

  18. Novel real-time alignment and calibration of the LHCb detector in Run2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00144085

    2017-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run2. Data collected at the start of the fill are processed in a few minutes and used to update the alignment parameters, while the calibration constants are evaluated for each run. This procedure improves the quality of the online reconstruction. For example, the vertex locator is retracted and reinserted for stable beam conditions in each fill to be centred on the primary vertex position in the transverse plane. Consequently its position changes on a fill-by-fill basis. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline-selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructur...

  19. Hybrid alkali-hydrodynamic disintegration of waste-activated sludge before two-stage anaerobic digestion process.

    Science.gov (United States)

    Grübel, Klaudiusz; Suschka, Jan

    2015-05-01

The first step of anaerobic digestion, the hydrolysis, is regarded as the rate-limiting step in the degradation of complex organic compounds, such as waste-activated sludge (WAS). The aim of the lab-scale experiments was to pre-hydrolyze the sludge by means of low-intensity alkaline sludge conditioning before applying hydrodynamic disintegration as the pre-treatment procedure. Application of both processes as a hybrid sludge disintegration technology resulted in a higher organic matter release (soluble chemical oxygen demand (SCOD)) to the liquid sludge phase compared with the effects of the processes conducted separately. The total SCOD after alkalization at pH 9 (pH in the range of 8.96-9.10, SCOD = 600 mg O2/L) and after hydrodynamic disintegration (SCOD = 1450 mg O2/L) equaled 2050 mg/L. However, due to the synergistic effect, the obtained SCOD value amounted to 2800 mg/L, which constitutes an additional chemical oxygen demand (COD) dissolution of about 35%. Similarly, a synergistic effect after alkalization at pH 10 was also obtained. The applied hybrid pre-hydrolysis technology resulted in a disintegration degree of 28-35%. The experiments aimed at selecting the most appropriate procedures in terms of optimal sludge digestion results, including high organic matter degradation (removal) and high biogas production. The analyzed soft hybrid technology positively influenced the effectiveness of mesophilic/thermophilic anaerobic digestion and ensured sludge minimization. The adopted pre-treatment technology (alkalization + hydrodynamic cavitation) resulted in 22-27% higher biogas production and 13-28% higher biogas yield. After two stages of anaerobic digestion (mesophilic anaerobic digestion (MAD) + thermophilic anaerobic digestion (TAD)), the highest total solids (TS) reduction amounted to 45.6% and was obtained for the sample digested for 7 days MAD + 17 days TAD. About 7% higher TS reduction was noticed compared with the sample after 9
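
    The synergy claim above follows from simple arithmetic on the reported SCOD values; the computed gain comes out near 36.6%, slightly above the "about 35%" quoted in the abstract.

```python
# Figures from the abstract (values in mg O2/L).
scod_alkaline = 600       # after alkalization at pH ~9 alone
scod_hydrodynamic = 1450  # after hydrodynamic disintegration alone
scod_hybrid = 2800        # observed for the combined (hybrid) treatment

additive = scod_alkaline + scod_hydrodynamic         # sum of separate effects
extra = (scod_hybrid - additive) / additive * 100.0  # synergistic gain, %
```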

  20. Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics.

    Science.gov (United States)

    Chriskos, Panteleimon; Frantzidis, Christos A; Gkivogkli, Polyxeni T; Bamidis, Panagiotis D; Kourtidou-Papadeli, Chrysoula

    2018-01-01

Sleep staging, the process of assigning labels to epochs of sleep depending on the stage of sleep to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments on a randomized, controlled bed-rest study, organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates (over 90%) against ground truth obtained from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging.
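
    A minimal sketch of the Relative Wavelet Entropy (RWE) feature mentioned above, assuming a plain Haar decomposition and a Kullback-Leibler form of the relative entropy; the signals, level count and smoothing are illustrative, not the paper's EEG pipeline.

```python
import math

def haar_energies(signal, levels=3):
    """Relative energy per Haar decomposition level (detail bands plus the
    final approximation), normalized to sum to 1. Signal length must be
    divisible by 2**levels."""
    s = list(signal)
    energies = []
    for _ in range(levels):
        approx = [(s[2*i] + s[2*i+1]) / math.sqrt(2) for i in range(len(s)//2)]
        detail = [(s[2*i] - s[2*i+1]) / math.sqrt(2) for i in range(len(s)//2)]
        energies.append(sum(d * d for d in detail))
        s = approx
    energies.append(sum(a * a for a in s))
    total = sum(energies) or 1.0
    return [e / total for e in energies]

def relative_wavelet_entropy(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two wavelet energy profiles."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
```

    Identical energy profiles give an RWE of zero, while clearly different spectral content (e.g. a sinusoid versus a constant signal) gives a positive value.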

  1. Optimal Infinite Runs in One-Clock Priced Timed Automata

    DEFF Research Database (Denmark)

    David, Alexandre; Ejsing-Duun, Daniel; Fontani, Lisa

We address the problem of finding an infinite run with the optimal cost-time ratio in a one-clock priced timed automaton and provide an algorithmic solution. Through refinements of the quotient graph obtained by strong time-abstracting bisimulation partitioning, we construct a graph with time...
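
    Once a finite weighted graph is in hand, the optimal cost-time ratio over cycles can be found, for example, by binary search over candidate ratios with negative-cycle detection. This is a standard technique, not necessarily the refinement algorithm of the paper, and the example graph is invented.

```python
def has_negative_cycle(n, edges):
    """Bellman-Ford negative-cycle detection on (u, v, w) edges,
    with a virtual source connected to all n nodes."""
    dist = [0.0] * n
    for _ in range(n):
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v] - 1e-15:
                dist[v] = dist[u] + w
                updated = True
        if not updated:
            return False
    return updated  # still updating after n rounds => negative cycle

def min_ratio_cycle(n, edges, lo=0.0, hi=100.0, tol=1e-9):
    """Binary search for the minimum cost/time ratio over all cycles.
    edges: (u, v, cost, time) with time > 0; the optimum is assumed
    to lie in [lo, hi]."""
    while hi - lo > tol:
        lam = (lo + hi) / 2
        reweighted = [(u, v, c - lam * t) for u, v, c, t in edges]
        if has_negative_cycle(n, reweighted):
            hi = lam  # a cycle with ratio < lam exists
        else:
            lo = lam
    return (lo + hi) / 2
```

    In the test graph, the two-edge cycle has ratio 4/2 = 2 while the self-loop has ratio 1, so the search converges to 1.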

  2. The optimal production-run time for a stock-dependent imperfect production process

    Directory of Open Access Journals (Sweden)

    Jain Divya

    2013-01-01

Full Text Available This paper develops an inventory model for a hypothesized volume-flexible manufacturing system in which the production rate is stock-dependent and the system produces both perfect and imperfect quality items. The demand rate for perfect quality items is known and constant, whereas the demand rate for imperfect (non-conforming to specifications) quality items is a function of the discount offered in the selling price. In this paper, we determine the optimal production-run time and the optimal discount that should be offered in the selling price to influence the sale of imperfect quality items produced by the manufacturing system. The considered model aims to maximize the net profit obtained through the sales of both perfect and imperfect quality items subject to certain constraints of the system. The solution procedure suggests the use of the ‘Interior Penalty Function Method’ to solve the associated constrained maximization problem. Finally, a numerical example demonstrating the applicability of the proposed model has been included.
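
    The ‘Interior Penalty Function Method’ named above can be sketched with a log-barrier on a toy one-dimensional profit function; the objective, constraint and all parameters below are invented, not the paper's inventory model.

```python
import math

def maximize_with_barrier(f, gs, t0, mu=1.0, shrink=0.2, rounds=8):
    """Interior penalty: maximize f(t) + mu * sum(log(-g(t))) for a
    decreasing barrier weight mu, starting from a strictly feasible t0."""
    t = t0
    for _ in range(rounds):
        def penalized(x):
            if any(g(x) >= 0 for g in gs):
                return -math.inf  # outside the feasible interior
            return f(x) + mu * sum(math.log(-g(x)) for g in gs)
        # Crude 1-D hill climb with step halving; a real solver would do better.
        step = 0.1
        while step > 1e-7:
            if penalized(t + step) > penalized(t):
                t += step
            elif penalized(t - step) > penalized(t):
                t -= step
            else:
                step /= 2
        mu *= shrink
    return t
```

    With the toy profit f(t) = -(t-3)^2 + 10 and constraint t <= 2, the unconstrained maximum at t = 3 is infeasible and the barrier iterates approach the boundary optimum t = 2 from the interior as mu shrinks.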

  3. Effect of ammoniacal nitrogen on one-stage and two-stage anaerobic digestion of food waste

    Energy Technology Data Exchange (ETDEWEB)

    Ariunbaatar, Javkhlan, E-mail: jaka@unicas.it [Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Via Di Biasio 43, 03043 Cassino, FR (Italy); UNESCO-IHE Institute for Water Education, Westvest 7, 2611 AX Delft (Netherlands); Scotto Di Perta, Ester [Department of Civil, Architectural and Environmental Engineering, University of Naples Federico II, Via Claudio 21, 80125 Naples (Italy); Panico, Antonio [Telematic University PEGASO, Piazza Trieste e Trento, 48, 80132 Naples (Italy); Frunzo, Luigi [Department of Mathematics and Applications Renato Caccioppoli, University of Naples Federico II, Via Claudio, 21, 80125 Naples (Italy); Esposito, Giovanni [Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Via Di Biasio 43, 03043 Cassino, FR (Italy); Lens, Piet N.L. [UNESCO-IHE Institute for Water Education, Westvest 7, 2611 AX Delft (Netherlands); Pirozzi, Francesco [Department of Civil, Architectural and Environmental Engineering, University of Naples Federico II, Via Claudio 21, 80125 Naples (Italy)

    2015-04-15

Highlights: • Almost 100% of the biomethane potential of food waste was recovered during AD in a two-stage CSTR. • Recirculation of the liquid fraction of the digestate provided the necessary buffer in the AD reactors. • A higher OLR (0.9 gVS/L·d) led to higher accumulation of TAN, which caused more toxicity. • A two-stage reactor is more sensitive to elevated concentrations of ammonia. • The IC{sub 50} of TAN for the AD of food waste amounts to 3.8 g/L. - Abstract: This research compares the operation of one-stage and two-stage anaerobic continuously stirred tank reactor (CSTR) systems fed semi-continuously with food waste. The main purpose was to investigate the effects of ammoniacal nitrogen on the anaerobic digestion process. The two-stage system gave more reliable operation than the one-stage system due to: (i) a better pH self-adjusting capacity; (ii) a higher resistance to organic loading shocks; and (iii) a higher conversion rate of organic substrate to biomethane. Also, a small amount of biohydrogen was detected from the first stage of the two-stage reactor, making this system attractive for biohythane production. As the digestate contains ammoniacal nitrogen, re-circulating it provided the necessary alkalinity in the systems, thus preventing eventual failure by volatile fatty acid (VFA) accumulation. However, re-circulation also resulted in ammonium accumulation, yielding a lower biomethane production. Based on the batch experimental results, the 50% inhibitory concentration of total ammoniacal nitrogen on the methanogenic activities was calculated as 3.8 g/L, corresponding to 146 mg/L free ammonia for the inoculum used in this research. The two-stage system was affected by the inhibition more than the one-stage system, as it requires less alkalinity and the physically separated methanogens are more sensitive to inhibitory factors, such as ammonium and propionic acid.
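
    The pairing of 3.8 g/L total ammoniacal nitrogen (TAN) with 146 mg/L free ammonia above depends on pH and temperature through the ammonia dissociation equilibrium. A sketch using an Anthonisen-type temperature-dependent constant; the pH and temperature inputs are assumptions for illustration, since the abstract does not state the operating values.

```python
def free_ammonia_fraction(pH, temp_c):
    """NH3 fraction of TAN from the dissociation equilibrium:
    FA/TAN = 1 / (1 + 10**(pKa - pH)), with the temperature dependence
    pKa = 0.09018 + 2729.92 / T_kelvin (Anthonisen-type relation)."""
    T = temp_c + 273.15
    pKa = 0.09018 + 2729.92 / T
    return 1.0 / (1.0 + 10 ** (pKa - pH))
```

    The fraction rises steeply with pH, which is why re-circulation-driven pH increases amplify free-ammonia toxicity at a fixed TAN level.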

  4. Articulating spacers used in two-stage revision of infected hip and knee prostheses abrade with time.

    Science.gov (United States)

    Fink, Bernd; Rechtenbach, Annett; Büchner, Hubert; Vogt, Sebastian; Hahn, Michael

    2011-04-01

Articulating spacers used in two-stage revision surgery of infected prostheses have the potential to abrade and subsequently induce third-body wear of the new prosthesis. We asked whether particulate material abraded from spacers could be detected in the synovial membrane 6 weeks after implantation, when the spacers were removed for the second stage of the revision. Sixteen hip spacers (cemented prosthesis stem articulating with a cement cup) and four knee spacers (customized mobile cement spacers) were explanted 6 weeks after implantation, and the synovial membranes were removed at the same time. The membranes were examined by X-ray fluorescence spectroscopy and X-ray diffraction for the presence of abraded particles originating from the spacer material, and analyzed in a semiquantitative manner by inductively coupled plasma mass spectrometry. Histologic analyses also were performed. We found zirconium dioxide in substantial amounts in all samples, and in the specimens of the hip synovial lining, we detected particles that originated from the metal heads of the spacers. Histologically, zirconium oxide particles were seen in the synovial membrane of every spacer, and bone cement particles in one knee and two hip spacers. The observations suggest cement spacers do abrade within 6 weeks. Given the presence of abrasion debris, we recommend total synovectomy and extensive lavage during the second-stage reimplantation surgery to minimize the number of abraded particles and any retained bacteria.

  5. Change in skeletal muscle stiffness after running competition is dependent on both running distance and recovery time: a pilot study.

    Science.gov (United States)

    Sadeghi, Seyedali; Newman, Cassidy; Cortes, Daniel H

    2018-01-01

Long-distance running competitions impose a large amount of mechanical loading and strain, leading to muscle edema and delayed onset muscle soreness (DOMS). Damage to various muscle fibers, metabolic impairments and fatigue have been linked to explain how DOMS impairs muscle function. Disruptions of muscle fibers during DOMS, exacerbated by exercise, have been shown to change muscle mechanical properties. The objective of this study is to quantify changes in mechanical properties of different muscles in the thigh and lower leg as a function of running distance and time after competition. A custom implementation of the Focused Comb-Push Ultrasound Shear Elastography (F-CUSE) method was used to evaluate shear modulus in runners before and after a race. Twenty-two healthy individuals (age: 23 ± 5 years) were recruited using convenience sampling and split into three race categories: short distance (nine subjects, 3-5 miles), middle distance (10 subjects, 10-13 miles), and long distance (three subjects, 26+ miles). Shear Wave Elastography (SWE) measurements were taken on both legs of each subject on the rectus femoris (RF), vastus lateralis (VL), vastus medialis (VM), soleus, lateral gastrocnemius (LG), medial gastrocnemius (MG), biceps femoris (BF) and semitendinosus (ST) muscles. For statistical analyses, a linear mixed model was used, with recovery time and running distance as fixed variables and shear modulus as the dependent variable. Recovery time had a significant effect on the soleus (p = 0.05), while running distance had a considerable effect on the biceps femoris (p = 0.02) and vastus lateralis, which showed a trend from before competition to immediately after competition. The preliminary results suggest that SWE could potentially be used to quantify changes in muscle mechanical properties as a way of measuring recovery procedures for runners.

  6. A preventive maintenance policy based on dependent two-stage deterioration and external shocks

    International Nuclear Information System (INIS)

    Yang, Li; Ma, Xiaobing; Peng, Rui; Zhai, Qingqing; Zhao, Yu

    2017-01-01

This paper proposes a preventive maintenance policy for a single-unit system whose failure has two competing and dependent causes, i.e., internal deterioration and sudden shocks. The internal failure process is divided into two stages, i.e., normal and defective. Shocks arrive according to a non-homogeneous Poisson process (NHPP), leading to immediate failure of the system. The occurrence rate of a shock is affected by the state of the system. Both an age-based replacement and a finite number of periodic inspections are scheduled simultaneously to deal with the competing failures. The objective of this study is to determine the optimal preventive replacement interval, inspection interval and number of inspections such that the expected cost per unit time is minimized. A case study on oil pipeline maintenance is presented to illustrate the maintenance policy. - Highlights: • A maintenance model based on two-stage deterioration and sudden shocks is developed. • The impact of internal system state on the external shock process is studied. • A new preventive maintenance strategy combining age-based replacements and periodic inspections is proposed. • Postponed replacement of a defective system is provided by restricting the number of inspections.
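
    The "expected cost per unit time" criterion above can be illustrated with a renewal-reward Monte Carlo sketch of a simplified policy: two-stage deterioration (normal, then defective, then failed), periodic inspections and an age-based replacement. The sojourn distributions and cost figures are invented, the NHPP shocks are omitted, and inspections are assumed to fall before the replacement age.

```python
import random

def cost_rate(tau, n_insp, T_max, runs=2000, seed=1):
    """tau: inspection interval; n_insp: number of inspections;
    T_max: age-based replacement time. Returns estimated cost/time."""
    rng = random.Random(seed)
    C_INSP, C_PREV, C_FAIL = 1.0, 10.0, 50.0  # invented cost figures
    total_cost = total_time = 0.0
    for _ in range(runs):
        t_defect = rng.expovariate(1 / 8.0)           # normal -> defective
        t_fail = t_defect + rng.expovariate(1 / 3.0)  # defective -> failed
        cost, t = 0.0, None
        for k in range(1, n_insp + 1):
            if k * tau >= t_fail:
                break                       # failed before this inspection
            cost += C_INSP
            if k * tau >= t_defect:         # defect found: preventive replace
                cost, t = cost + C_PREV, k * tau
                break
        if t is None:
            if t_fail <= T_max:             # failure before planned replacement
                cost, t = cost + C_FAIL, t_fail
            else:                           # age-based preventive replacement
                cost, t = cost + C_PREV, T_max
        total_cost += cost
        total_time += t
    return total_cost / total_time  # renewal-reward estimate
```

    Optimizing the policy then amounts to searching over (tau, n_insp, T_max) for the lowest estimated rate.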

  7. Two-Stage Conversion of High Free Fatty Acid Jatropha curcas Oil to Biodiesel Using Brønsted Acidic Ionic Liquid and KOH as Catalysts

    Directory of Open Access Journals (Sweden)

    Subrata Das

    2014-01-01

Full Text Available Biodiesel was produced from high free fatty acid (FFA) Jatropha curcas oil (JCO) by a two-stage process in which esterification was performed with the Brønsted acidic ionic liquid 1-(1-butylsulfonic)-3-methylimidazolium chloride ([BSMIM]Cl), followed by KOH-catalyzed transesterification. A maximum FFA conversion of 93.9% was achieved, the FFA content being reduced from 8.15 wt% to 0.49 wt% under the optimum reaction conditions of a methanol-to-oil molar ratio of 12:1 and 10 wt% of ionic liquid catalyst at 70°C in 6 h. The ionic liquid catalyst was reusable for up to four consecutive runs under the optimum reaction conditions. In the second stage, the esterified JCO was transesterified using 1.3 wt% KOH and a methanol-to-oil molar ratio of 6:1 in 20 min at 64°C. The yield of the final biodiesel was found to be 98.6% as analyzed by NMR spectroscopy. The chemical composition of the final biodiesel was also determined by GC-MS analysis.
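
    The quoted conversion is consistent with the reported FFA contents: the arithmetic gives roughly 94.0%, which the abstract quotes (truncated) as 93.9%.

```python
# FFA drops from 8.15 wt% to 0.49 wt% during esterification.
ffa_initial, ffa_final = 8.15, 0.49
conversion = (ffa_initial - ffa_final) / ffa_initial * 100.0  # percent
```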

  8. First-Passage-Time Distribution for Variable-Diffusion Processes

    Science.gov (United States)

    Barney, Liberty; Gunaratne, Gemunu H.

    2017-05-01

First-passage-time distribution, which presents the likelihood of a stock reaching a pre-specified price at a given time, is useful in establishing the value of financial instruments and in designing trading strategies. The first-passage-time distribution for Wiener processes has a single peak, while that for stocks exhibits a notable second peak within a trading day. This feature has only been discussed sporadically—often dismissed as due to insufficient/incorrect data or circumvented by conversion to tick time—and to the best of our knowledge has not been explained in terms of the underlying stochastic process. It was shown previously that intra-day variations in the market can be modeled by a stochastic process containing two variable-diffusion processes (Hua et al., Physica A 419:221-233, 2015). We show here that the first-passage-time distribution of this two-stage variable-diffusion model does exhibit a behavior similar to the empirical observation. In addition, we find that an extended model incorporating overnight price fluctuations exhibits intra- and inter-day behavior similar to those of empirical first-passage-time distributions.
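
    As a baseline for the single-peak case mentioned above, the first-passage-time distribution of a drifted Wiener process can be sampled by Euler-Maruyama Monte Carlo; the barrier, drift and volatility below are invented, not fitted to market data, and this is not the authors' two-stage variable-diffusion model.

```python
import random

def first_passage_times(barrier, drift, sigma, dt=0.01, t_max=50.0,
                        runs=2000, seed=7):
    """Sample first-passage times of X(t) = drift*t + sigma*W(t) to a
    fixed upper barrier; paths that never hit by t_max are censored."""
    rng = random.Random(seed)
    times = []
    for _ in range(runs):
        x, t = 0.0, 0.0
        while t < t_max:
            x += drift * dt + sigma * rng.gauss(0.0, 1.0) * dt ** 0.5
            t += dt
            if x >= barrier:
                times.append(t)
                break
    return times
```

    With positive drift the sample mean sits near barrier/drift, and a histogram of the samples shows the familiar single early peak with a heavy tail.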

  9. Two stage fluid bed-plasma gasification process for solid waste valorisation: Technical review and preliminary thermodynamic modelling of sulphur emissions

    International Nuclear Information System (INIS)

    Morrin, Shane; Lettieri, Paola; Chapman, Chris; Mazzei, Luca

    2012-01-01

Highlights: ► We investigate sulphur during MSW gasification within a fluid bed-plasma process. ► We review the literature on the feed, sulphur and process principles therein. ► The need for research in this area was identified. ► We perform thermodynamic modelling of the fluid bed stage. ► Initial findings indicate the prominence of solid phase sulphur. - Abstract: Gasification of solid waste for energy has significant potential given an abundant feed supply and strong policy drivers. Nonetheless, significant ambiguities in the knowledge base are apparent. Consequently this study investigates sulphur mechanisms within a novel two stage fluid bed-plasma gasification process. This paper includes a detailed review of gasification and plasma fundamentals in relation to the specific process, along with insight on MSW based feedstock properties and sulphur pollutant therein. As a first step to understanding sulphur partitioning and speciation within the process, thermodynamic modelling of the fluid bed stage has been performed. Preliminary findings, supported by plant experience, indicate the prominence of solid phase sulphur species (as opposed to H2S) – Na and K based species in particular. Work is underway to further investigate and validate this.

  10. Real-time SHVC software decoding with multi-threaded parallel processing

    Science.gov (United States)

    Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu

    2014-09-01

This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of the SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two-layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high-level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline-designed with multiple threads based on groups of coding tree units (CTU). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. The motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel i7-2600 processor running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for bitstreams generated with the SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads is compared in terms of decoding speed and resource usage, including processor and memory.
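
    The CTU-group pipeline described above can be sketched with three threaded stages (entropy decode, reconstruct, in-loop filter) connected by FIFO queues. The "decoding" work here is simulated tagging, purely to illustrate the hand-off structure, not the SHM-2.0 implementation.

```python
import queue
import threading

def run_pipeline(ctu_groups):
    """Push CTU groups through three pipeline stages; each stage runs in
    its own thread and forwards work over a thread-safe FIFO queue."""
    q1, q2, out = queue.Queue(), queue.Queue(), []
    DONE = object()  # end-of-stream sentinel

    def entropy_decode():
        for g in ctu_groups:
            q1.put(('parsed', g))
        q1.put(DONE)

    def reconstruct():
        while (item := q1.get()) is not DONE:
            q2.put(('recon', item[1]))
        q2.put(DONE)

    def inloop_filter():
        while (item := q2.get()) is not DONE:
            out.append(('filtered', item[1]))

    threads = [threading.Thread(target=f)
               for f in (entropy_decode, reconstruct, inloop_filter)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out
```

    Because each hand-off is a FIFO, CTU-group order is preserved end to end while the three stages overlap in time, which is the trade-off between parallelism and synchronization the abstract refers to.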

  11. Design considerations for single-stage and two-stage pneumatic pellet injectors

    International Nuclear Information System (INIS)

    Gouge, M.J.; Combs, S.K.; Fisher, P.W.; Milora, S.L.

    1988-09-01

    Performance of single-stage pneumatic pellet injectors is compared with several models for one-dimensional, compressible fluid flow. Agreement is quite good for models that reflect actual breech chamber geometry and incorporate nonideal effects such as gas friction. Several methods of improving the performance of single-stage pneumatic pellet injectors in the near term are outlined. The design and performance of two-stage pneumatic pellet injectors are discussed, and initial data from the two-stage pneumatic pellet injector test facility at Oak Ridge National Laboratory are presented. Finally, a concept for a repeating two-stage pneumatic pellet injector is described. 27 refs., 8 figs., 3 tabs

  12. Hybrid biogas upgrading in a two-stage thermophilic reactor

    DEFF Research Database (Denmark)

    Corbellini, Viola; Kougias, Panagiotis; Treu, Laura

    2018-01-01

    The aim of this study is to propose a hybrid biogas upgrading configuration composed of two-stage thermophilic reactors. Hydrogen is directly injected in the first stage reactor. The output gas from the first reactor (in-situ biogas upgrade) is subsequently transferred to a second upflow reactor...... (ex-situ upgrade), in which enriched hydrogenotrophic culture is responsible for the hydrogenation of carbon dioxide to methane. The overall objective of the work was to perform an initial methane enrichment in the in-situ reactor, avoiding deterioration of the process due to elevated pH levels......, and subsequently, to complete the biogas upgrading process in the ex-situ chamber. The methane content in the first stage reactor reached on average 87% and the corresponding value in the second stage was 91%, with a maximum of 95%. A remarkable accumulation of volatile fatty acids was observed in the first...

  13. Experimental study on an innovative multifunction heat pipe type heat recovery two-stage sorption refrigeration system

    International Nuclear Information System (INIS)

    Li, T.X.; Wang, R.Z.; Wang, L.W.; Lu, Z.S.

    2008-01-01

    An innovative multifunction heat pipe type sorption refrigeration system is designed, in which a two-stage sorption thermodynamic cycle based on two heat recovery processes is employed to reduce the driving heat source temperature, and a composite sorbent of CaCl 2 and activated carbon is used to improve the mass and heat transfer performance. For this test unit, the heating, cooling and heat recovery processes between the two reactive beds are performed by multifunction heat pipes. The aim of this paper is to investigate the cycle characteristics of the two-stage sorption refrigeration system with heat recovery processes. The two sub-cycles of a two-stage cycle have different sorption platforms even though the adsorption and desorption temperatures are equal. The experimental results showed that the pressure evolutions of the two beds are nearly equal during the first stage, and that the desorption pressure during the second stage is considerably higher than in the first stage even though the desorption temperatures are the same during the two operation stages. In comparison with the conventional two-stage cycle, the two-stage cycle with heat recovery processes can reduce the heating load for the desorber and the cooling load for the adsorber; the coefficient of performance (COP) is improved by more than 23% when both cycles have the same regeneration temperature of 103 deg. C and a cooling water temperature of 30 deg. C. The advanced two-stage cycle provides an effective method for applying sorption refrigeration technology with a low-grade temperature heat source or renewable energy

  14. Transport fuels from two-stage coal liquefaction

    Energy Technology Data Exchange (ETDEWEB)

    Benito, A.; Cebolla, V.; Fernandez, I.; Martinez, M.T.; Miranda, J.L.; Oelert, H.; Prado, J.G. (Instituto de Carboquimica CSIC, Zaragoza (Spain))

    1994-03-01

    Four Spanish lignites and their vitrinite concentrates were evaluated for coal liquefaction. Correlations between vitrinite content and conversion in direct liquefaction were observed for the lignites but not for the vitrinite concentrates. The most reactive of the four coals was processed by two-stage liquefaction at a larger scale. First-stage coal liquefaction was carried out in a continuous unit at Clausthal University at a temperature of 400 deg C and 20 MPa hydrogen pressure, with anthracene oil as a solvent. The coal conversion obtained was 75.41%, comprising 3.79% gases, 2.58% primary condensate and 69.04% heavy liquids. A hydroprocessing unit was built at the Instituto de Carboquimica for the second-stage coal liquefaction. Whole and deasphalted liquids from the first-stage liquefaction were processed at 450 deg C and 10 MPa hydrogen pressure with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al{sub 2}O{sub 3}) and HT-500E (Ni-Mo/Al{sub 2}O{sub 3}). The effects of liquid hourly space velocity (LHSV), temperature, gas/liquid ratio and catalyst on the heteroatom content of the liquids were studied; levels of 5 ppm of nitrogen and 52 ppm of sulphur were reached at 450 deg C, 10 MPa hydrogen pressure, 0.08 kg H{sub 2}/kg feedstock and with the Harshaw HT-500E catalyst. The liquids obtained were hydroprocessed again at 420 deg C, 10 MPa hydrogen pressure and 0.06 kg H{sub 2}/kg feedstock to hydrogenate the aromatic structures. Under these conditions, the aromaticity was reduced considerably, and 39% of naphthas and 35% of kerosene fractions were obtained. 18 refs., 4 figs., 4 tabs.

  15. LHCb's Real-Time Alignment in Run2

    CERN Multimedia

    Batozskaya, Varvara

    2015-01-01

    Stable, precise spatial alignment and PID calibration are necessary to achieve optimal detector performance. During Run2, LHCb will have a new real-time detector alignment and calibration to reach equivalent performance in the online and offline reconstruction. This offers the opportunity to optimise the event selection by applying stronger constraints, as well as hadronic particle identification, at the trigger level. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger.

  16. Categorization for Faces and Tools-Two Classes of Objects Shaped by Different Experience-Differs in Processing Timing, Brain Areas Involved, and Repetition Effects.

    Science.gov (United States)

    Kozunov, Vladimir; Nikolaeva, Anastasia; Stroganova, Tatiana A

    2017-01-01

    The brain mechanisms that integrate the separate features of sensory input into a meaningful percept depend upon the prior experience of interaction with the object and differ between categories of objects. Recent studies using representational similarity analysis (RSA) have characterized either the spatial patterns of brain activity for different categories of objects or described how category structure in neuronal representations emerges in time, but never simultaneously. Here we applied a novel, region-based, multivariate pattern classification approach in combination with RSA to magnetoencephalography data to extract activity associated with qualitatively distinct processing stages of visual perception. We asked participants to name what they see whilst viewing bitonal visual stimuli of two categories predominantly shaped by either value-dependent or sensorimotor experience, namely faces and tools, and meaningless images. We aimed to disambiguate the spatiotemporal patterns of brain activity between the meaningful categories and determine which differences in their processing were attributable to either perceptual categorization per se , or later-stage mentalizing-related processes. We have extracted three stages of cortical activity corresponding to low-level processing, category-specific feature binding, and supra-categorical processing. All face-specific spatiotemporal patterns were associated with bilateral activation of ventral occipito-temporal areas during the feature binding stage at 140-170 ms. The tool-specific activity was found both within the categorization stage and in a later period not thought to be associated with binding processes. The tool-specific binding-related activity was detected within a 210-220 ms window and was located to the intraparietal sulcus of the left hemisphere. Brain activity common for both meaningful categories started at 250 ms and included widely distributed assemblies within parietal, temporal, and prefrontal regions

  18. Experimental and kinetics studies of aromatic hydrogenation in a two-stage hydrotreating process using NiMo/Al{sub 2}O{sub 3} and NiW/Al{sub 2}O{sub 3} catalysts

    Energy Technology Data Exchange (ETDEWEB)

    Owusu-Boakye, A.; Dalai, A.K.; Ferdous, D. [Saskatchewan Univ., Saskatoon, SK (Canada). Dept. of Chemical Engineering, Catalysis and Chemical Reaction Engineering Laboratories; Adjaye, J. [Syncrude Canada Ltd., Edmonton, AB (Canada)

    2006-10-15

    The degree of hydrogenation of aromatics in light gas oil (LGO) feed from Athabasca bitumen was examined using a two-stage process. Experiments were conducted in a trickle-bed reactor using two catalysts, namely nickel molybdenum alumina (NiMo/Al{sub 2}O{sub 3}) in the first stage and nickel tungsten alumina (NiW/Al{sub 2}O{sub 3}) in the second stage. NiMo/Al{sub 2}O{sub 3} was used in the first stage to remove nitrogen- and sulphur-containing heteroatoms. NiW/Al{sub 2}O{sub 3} was used in the second stage for saturation of the aromatic rings in the hydrocarbon species. The catalysts were used under a range of temperature and pressure conditions. Temperature and liquid hourly space velocity ranged from 350 to 390 degrees C and 1.0 to 1.5 per hour, respectively. Pressure was kept constant at 11.0 MPa for all experiments. Results from the two-stage process were compared with those from a single-stage process in which hydrotreating was performed over NiMo/Al{sub 2}O{sub 3}. Product samples from different feedstocks were analyzed with respect to sulfur, nitrogen and aromatic content. Gasoline selectivity and kinetic parameters for hydrodesulphurization (HDS) and hydrodenitrogenation (HDN) reactions for the feed materials were also compared. The effect of hydrogen sulphide (H{sub 2}S) inhibition on aromatics hydrogenation (HDA) was also modelled kinetically using the Langmuir-Hinshelwood approach. Kinetic analysis of the single-stage hydrotreating process showed that HDA and HDS activities were slowed by the presence of hydrogen sulphide, which is produced as a by-product of the HDS process. However, with inter-stage removal of hydrogen sulphide in the two-stage process, significant improvement of the HDA and HDS activities was noted. It was concluded that the experimental data were successfully predicted by the Langmuir-Hinshelwood kinetic models. 27 refs., 4 tabs., 8 figs.

  19. Adaptive Embedded Systems – Challenges of Run-Time Resource Management

    DEFF Research Database (Denmark)

    Understanding and efficiently controlling the dynamic behavior of adaptive embedded systems is a challenging endeavor. The challenges come from the often very complicated interplay between the application, the application mapping, and the underlying hardware architecture. With MPSoC, we have...... the technology to design and fabricate dynamically reconfigurable hardware platforms. However, such platforms will pose new challenges to the tools and methods needed to efficiently explore these platforms at run-time. This talk will address some of the challenges of run-time resource management in adaptive embedded...... systems....

  20. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi; Jin, Bangti; Zou, Jun

    2013-01-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer

  1. Effluent composition prediction of a two-stage anaerobic digestion process: machine learning and stoichiometry techniques.

    Science.gov (United States)

    Alejo, Luz; Atkinson, John; Guzmán-Fierro, Víctor; Roeckel, Marlene

    2018-05-16

    Computational self-adapting methods (Support Vector Machines, SVM) are compared with an analytical method for effluent composition prediction in a two-stage anaerobic digestion (AD) process. Experimental data for the AD of poultry manure were used. The analytical method considers protein as the only source of ammonia production in AD after degradation. Total ammonia nitrogen (TAN), total solids (TS), chemical oxygen demand (COD), and total volatile solids (TVS) were measured in the influent and effluent of the process. The TAN concentration in the effluent was predicted, this being the most inhibiting and polluting compound in AD. Despite the limited data available, the SVM-based model outperformed the analytical method for TAN prediction, achieving a relative average error of 15.2% against 43% for the analytical method. Moreover, SVM showed higher prediction accuracy in comparison with Artificial Neural Networks. This result reveals the promise of SVM for prediction in non-linear and dynamic AD processes.
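
As a rough illustration of the abstract's approach (not the authors' code), the sketch below fits an SVM regressor to synthetic influent measurements and reports the relative average error of the effluent-TAN prediction. The feature ranges, target function, and scikit-learn hyperparameters are all illustrative assumptions.

```python
# Sketch (synthetic data): predicting effluent TAN from influent measurements
# with an SVM regressor, then scoring by relative average error as in the abstract.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 60
# Influent features: TAN, TS, COD, TVS (arbitrary units, synthetic ranges)
X = rng.uniform([200, 10, 1000, 5], [800, 40, 8000, 30], size=(n, 4))
# Synthetic effluent TAN: a nonlinear function of the influent plus noise
y = 0.5 * X[:, 0] + 0.01 * X[:, 2] + 5 * np.sin(X[:, 1]) + rng.normal(0, 10, n)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X[:50], y[:50])          # train on the first 50 samples
pred = model.predict(X[50:])       # predict the held-out 10

# Relative average error, the metric reported in the abstract
rel_err = np.mean(np.abs(pred - y[50:]) / np.abs(y[50:]))
print(f"relative average error: {rel_err:.1%}")
```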

  2. Short- and long-run time-of-use price elasticities in Swiss residential electricity demand

    International Nuclear Information System (INIS)

    Filippini, Massimo

    2011-01-01

    This paper presents an empirical analysis of the residential demand for electricity by time-of-day. The analysis was performed using aggregate data at the city level for 22 Swiss cities for the period 2000-2006. For this purpose, we estimated two log-log demand equations for peak and off-peak electricity consumption using static and dynamic partial adjustment approaches. These demand functions were estimated using several econometric approaches for panel data, for example LSDV and RE for static models, and LSDV and corrected LSDV estimators for dynamic models. The aim of this empirical analysis is to highlight some of the characteristics of Swiss residential electricity demand. The estimated short-run own-price elasticities are lower than 1, whereas the long-run values are higher than 1. The estimated short-run and long-run cross-price elasticities are positive. This result shows that peak and off-peak electricity are substitutes. In this context, time-differentiated prices should provide an economic incentive for customers to modify consumption patterns by reducing peak demand and shifting electricity consumption from peak to off-peak periods. - Highlights: → Empirical analysis of the residential demand for electricity by time-of-day. → Estimators for dynamic panel data. → Peak and off-peak residential electricity are substitutes.
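
The elasticity estimates above come from log-log demand equations, in which the coefficient on log price is itself the price elasticity. A minimal sketch of this idea on synthetic data, using plain OLS rather than the paper's panel estimators (all numbers are illustrative):

```python
# Sketch: in a log-log demand equation log(q) = a + e_p*log(p) + b*log(x),
# the coefficient on log price is directly the own-price elasticity.
import numpy as np

rng = np.random.default_rng(1)
n = 200
log_p = rng.normal(0.0, 0.3, n)        # log peak price (synthetic)
log_x = rng.normal(3.0, 0.2, n)        # log income (illustrative control)
true_elasticity = -0.8                 # short-run elasticity below 1 in magnitude
log_q = 2.0 + true_elasticity * log_p + 0.5 * log_x + rng.normal(0, 0.05, n)

# OLS on the log-log specification recovers the elasticity as the slope
A = np.column_stack([np.ones(n), log_p, log_x])
coef, *_ = np.linalg.lstsq(A, log_q, rcond=None)
print(f"estimated own-price elasticity: {coef[1]:.3f}")
```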

  3. Two-Stage Series-Resonant Inverter

    Science.gov (United States)

    Stuart, Thomas A.

    1994-01-01

    Two-stage inverter includes variable-frequency, voltage-regulating first stage and fixed-frequency second stage. Lightweight circuit provides regulated power and is invulnerable to output short circuits. Does not require large capacitor across ac bus, like parallel resonant designs. Particularly suitable for use in ac-power-distribution system of aircraft.

  4. Design and implement of infrared small target real-time detection system based on pipeline technology

    Science.gov (United States)

    Sun, Lihui; Wang, Yongzhong; He, Yongqiang

    2007-01-01

    The detection of small moving targets in infrared image sequences has become a hot topic. A background suppression algorithm based on minimum-gradient median filtering and a temporal-recursion target detection algorithm are introduced. On this basis, a four-stage pipelined infrared small-target detection system is designed and implemented, aimed at the algorithmic complexity, large data volumes, high frame rates and stringent real-time requirements of this kind of application. The logical structure of the system is introduced and its functions and signal flows are described. The system is composed of two FPGA chips and two TI DSP chips. According to the function of each part, the system is divided into an image preprocessing stage, a target detection stage, a track association stage and an image output stage. Experiments running the algorithms on the system presented in this paper proved that the system can acquire and process 240x320 digital images at 50 Hz and can reliably detect small targets with a signal-to-noise ratio greater than 3 in real time. The system offers a large amount of memory, high real-time processing capability, good extensibility and a favorable interactive interface.

  5. Treatment of process water containing heavy metals with a two-stage electrolysis procedure in a membrane electrolysis cell

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, R.; Krebs, P. [Technische Universitaet Dresden, Institut fuer Siedlungs- und Industriewasserwirtschaft, Mommsenstrasse 13, 01062 Dresden (Germany); Seidel, H. [UFZ-Umweltforschungszentrum Leipzig-Halle GmbH, Department Bioremediation, Permoserstrasse 15, D-04318 Leipzig (Germany); Morgenstern, P. [UFZ-Umweltforschungszentrum Leipzig-Halle GmbH, Department Analytik, Permoserstrasse 15, D-04318 Leipzig (Germany); Foerster, H.J.; Thiele, W. [Eilenburger Elektrolyse- und Umwelttechnik GmbH, Ziegelstrasse 2, D-04838 Eilenburg (Germany)

    2005-04-01

    The capability of a two-stage electrochemical treatment for the regeneration of acidic heavy-metal-containing process water was examined. The process water came from sediment bioleaching and was characterized by a wide spectrum of dissolved metals, a high sulfate content, and a pH of about 3. In the modular laboratory model cell used, the anode chamber and the cathode chamber were separated by a central chamber fitted with an ion exchanger membrane on either side. The experiments were carried out applying a platinum anode and a graphite cathode at a current density of 0.1 A/cm{sup 2}. The circulation flow of the process water in the batch process amounted to 35 L/h, the electrolysis duration was at most 5.5 h, and the total electrolysis current was about 1 A. In the first stage, the acidic metal-containing process water passed through the cathode chamber. In the second stage, the cathodically pretreated process water was electrolyzed anodically. In the cathode chamber the main load of dissolved Cu, Zn, Cr and Pb was eliminated. The sulfuric acid surplus of 3-4 g/L decreased to about 1 g/L, and the pH rose from the initial 3.0 to 4-5, but the desired pH of 9-10 was not achieved. Precipitation in the proximity of the cathode evidently takes place at a higher pH than farther away. The dominant process in the anode chamber was the precipitation of amorphous MnO{sub 2} owing to the oxidation of dissolved Mn(II). The further depletion of the remaining heavy metals in the cathodically pretreated process water by subsequent anodic treatment was nearly exhaustive: more than 99% of Cd, Cr, Cu, Mn, Ni, Pb, and Zn were removed from the leachate. The high depletion of heavy metals might be due both to sorption on MnO{sub 2} precipitates and/or anodically formed basic ferrous sulfate, and to the migration of metal ions through the cation exchanger membrane via the middle chamber into the cathode chamber. 
In the anode chamber, the sulfuric acid content increased to 6-7 g/L and the

  6. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator.
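
One of the illustrating cases, the two-stage least squares estimator, can be sketched in a few lines: stage one regresses the endogenous regressor on the instrument, stage two regresses the outcome on the fitted values. The data-generating process below (one endogenous regressor, one instrument) is an assumption for illustration, not taken from the note.

```python
# Sketch of the two-stage least squares (2SLS) estimator on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
z = rng.normal(size=n)                 # instrument
u = rng.normal(size=n)                 # unobserved confounder
x = 0.9 * z + u + rng.normal(size=n)   # endogenous regressor
y = 2.0 * x + u + rng.normal(size=n)   # true structural coefficient is 2.0

# Stage 1: project x on (1, z) to get exogenous variation only
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress y on (1, x_hat)
X2 = np.column_stack([np.ones(n), x_hat])
beta = np.linalg.lstsq(X2, y, rcond=None)[0]

# Naive OLS on the endogenous x is biased upward by the confounder
ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0]
print(f"2SLS estimate: {beta[1]:.2f}   naive OLS: {ols[1]:.2f}")
```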

  7. Hydrogen production from cellulose in a two-stage process combining fermentation and electrohydrogenesis

    KAUST Repository

    Lalaurette, Elodie

    2009-08-01

    A two-stage dark-fermentation and electrohydrogenesis process was used to convert recalcitrant lignocellulosic materials into hydrogen gas at high yields and rates. Fermentation using Clostridium thermocellum produced 1.67 mol H2/mol-glucose at a rate of 0.25 L H2/L-d with a corn stover lignocellulose feed, and 1.64 mol H2/mol-glucose and 1.65 L H2/L-d with a cellobiose feed. The lignocellulose and cellobiose fermentation effluent consisted primarily of acetic, lactic, succinic, and formic acids and ethanol. An additional 800 ± 290 mL H2/g-COD was produced from a synthetic effluent with a wastewater inoculum (fermentation effluent inoculum; FEI) by electrohydrogenesis using microbial electrolysis cells (MECs). Hydrogen yields were increased to 980 ± 110 mL H2/g-COD with the synthetic effluent by combining in the inoculum samples from multiple microbial fuel cells (MFCs) each pre-acclimated to a single substrate (single substrate inocula; SSI). Hydrogen yields and production rates with SSI and the actual fermentation effluents were 980 ± 110 mL/g-COD and 1.11 ± 0.13 L/L-d (synthetic); 900 ± 140 mL/g-COD and 0.96 ± 0.16 L/L-d (cellobiose); and 750 ± 180 mL/g-COD and 1.00 ± 0.19 L/L-d (lignocellulose). A maximum hydrogen production rate of 1.11 ± 0.13 L H2/L reactor/d was produced with the synthetic effluent. Energy efficiencies based on electricity needed for the MEC using SSI were 270 ± 20% for the synthetic effluent, 230 ± 50% for the lignocellulose effluent and 220 ± 30% for the cellobiose effluent. COD removals were ∼90% for the synthetic effluents, and 70-85% based on VFA removal (65% COD removal) with the cellobiose and lignocellulose effluents. The overall hydrogen yield was 9.95 mol-H2/mol-glucose for the cellobiose. These results show that pre-acclimation of MFCs to single substrates improves performance with a complex mixture of substrates, and that high hydrogen yields and gas production rates can be achieved using a two-stage fermentation and MEC

  8. A two-stage method of quantitative flood risk analysis for reservoir real-time operation using ensemble-based hydrologic forecasts

    Science.gov (United States)

    Liu, P.

    2013-12-01

    Quantitative analysis of the risk for reservoir real-time operation is a hard task owing to the difficulty of accurately describing inflow uncertainties. Ensemble-based hydrologic forecasts depict the inflows not only through their marginal distributions but also through their persistence via scenarios. This motivates us to analyze the reservoir real-time operating risk with ensemble-based hydrologic forecasts as inputs. A method is developed by using the forecast horizon point to divide the future time into two stages, the forecast lead-time and the unpredicted time. The risk within the forecast lead-time is computed by counting the number of failing forecast scenarios, and the risk in the unpredicted time is estimated using reservoir routing with the design floods and the reservoir water levels at the forecast horizon point. As a result, a two-stage risk analysis method is set up to quantify the entire flood risk, defined as the ratio of the number of scenarios that exceed the critical value to the total number of scenarios. China's Three Gorges Reservoir (TGR) is selected as a case study, where parameter and precipitation uncertainties are used to produce ensemble-based hydrologic forecasts. Bayesian inference via Markov Chain Monte Carlo is used to account for the parameter uncertainty. Two reservoir operation schemes, the actually operated one and a scenario optimization, are evaluated for flood risk and hydropower profit analysis. With the 2010 flood, it is found that improving the hydrologic forecast accuracy does not necessarily decrease the reservoir real-time operation risk, and that most of the risk comes from the forecast lead-time. It is therefore valuable to reduce the variance of ensemble-based hydrologic forecasts, with less bias, for reservoir operational purposes.
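
The lead-time part of the risk computation described above amounts to counting failing scenarios in the ensemble. A minimal sketch with a synthetic ensemble (the trajectory model, critical level, and horizon are all illustrative assumptions):

```python
# Sketch: lead-time flood risk as the fraction of ensemble scenarios whose
# simulated water level exceeds a critical value at any step of the lead-time.
import numpy as np

rng = np.random.default_rng(3)
n_scenarios, horizon = 500, 72          # e.g. 72 hourly steps of forecast lead-time
critical_level = 175.0                  # critical reservoir level (illustrative, m)

# Synthetic ensemble of water-level trajectories: random-walk inflow effect
levels = 160.0 + np.cumsum(rng.normal(0.1, 0.5, size=(n_scenarios, horizon)), axis=1)

# A scenario "fails" if it exceeds the critical level at any time step
failures = levels.max(axis=1) > critical_level
risk = failures.mean()
print(f"lead-time flood risk: {risk:.3f} ({failures.sum()}/{n_scenarios} scenarios)")
```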

  9. An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.

    Science.gov (United States)

    Gonzales, Michael G.

    1984-01-01

    Suggests a moving pictorial tool to help teach principles of the bubble sort algorithm. Develops such a tool applied to an unsorted list of numbers and describes a method to derive the run time of the algorithm. The method can be modified to derive the run times of various other algorithms. (JN)
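
The empirical derivation the article describes can be reproduced by instrumenting bubble sort with a comparison counter; on a reversed (worst-case) list the count matches the closed form n(n-1)/2. A sketch (not the article's tool):

```python
# Sketch: bubble sort instrumented with a comparison counter to show the
# quadratic growth of its run time empirically.
def bubble_sort(items):
    a = list(items)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):              # pass i bubbles the i-th largest into place
        for j in range(n - 1 - i):      # n-1-i comparisons on pass i
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

for n in (10, 20, 40):
    _, c = bubble_sort(range(n, 0, -1))   # worst case: reversed list
    print(f"n={n:3d}  comparisons={c:4d}  n(n-1)/2={n * (n - 1) // 2}")
```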

  10. Treatment of natural rubber processing wastewater using a combination system of a two-stage up-flow anaerobic sludge blanket and down-flow hanging sponge system.

    Science.gov (United States)

    Tanikawa, D; Syutsubo, K; Hatamoto, M; Fukuda, M; Takahashi, M; Choeisai, P K; Yamaguchi, T

    2016-01-01

    A pilot-scale experiment of natural rubber processing wastewater treatment was conducted using a combination system consisting of a two-stage up-flow anaerobic sludge blanket (UASB) and a down-flow hanging sponge (DHS) reactor for more than 10 months. The system achieved a chemical oxygen demand (COD) removal efficiency of 95.7% ± 1.3% at an organic loading rate of 0.8 kg COD/(m(3).d). Bacterial activity measurement of retained sludge from the UASB showed that sulfate-reducing bacteria (SRB), especially hydrogen-utilizing SRB, possessed high activity compared with methane-producing bacteria (MPB). Conversely, the acetate-utilizing activity of MPB was superior to SRB in the second stage of the reactor. The two-stage UASB-DHS system can reduce power consumption by 95% and excess sludge by 98%. In addition, it is possible to prevent emissions of greenhouse gases (GHG), such as methane, using this system. Furthermore, recovered methane from the two-stage UASB can completely cover the electricity needs for the operation of the two-stage UASB-DHS system, accounting for approximately 15% of the electricity used in the natural rubber manufacturing process.

  11. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs that are used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  13. Time ordering of two-step processes in energetic ion-atom collisions: Basic formalism

    International Nuclear Information System (INIS)

    Stolterfoht, N.

    1993-01-01

    The semiclassical approximation is applied in second order to describe the time ordering of two-step processes in energetic ion-atom collisions. Emphasis is given to the conditions for interference between first- and second-order terms. In systems with two active electrons, time ordering gives rise to a pair of associated paths involving a second-order process and its time-inverted process. Combining these paths within the independent-particle frozen-orbital model, time ordering is lost. It is shown that the loss of time ordering modifies the second-order amplitude so that its ability to interfere with the first-order amplitude is essentially reduced. Time ordering and the capability for interference are regained when one path is blocked by means of the Pauli exclusion principle. The time-ordering formalism is prepared for papers dealing with collision experiments on single excitation [Stolterfoht et al., following paper, Phys. Rev. A 48, 2986 (1993)] and double excitation [Stolterfoht et al. (unpublished)]

  14. The control, at the design stage, of risks related to buildings management over time

    Directory of Open Access Journals (Sweden)

    Claudio Martani

    2013-10-01

    Full Text Available In the present paper a set of tools and methods is presented to evaluate, at the design stage, the risks to a set of objectives over a building's lifetime. To this purpose, a tool is first presented that relates the technological requirements of each technical element to the pertinent maintenance interventions. A process is then proposed to estimate the risks to user requirements by running Monte Carlo simulations. The risk management process proposed in the present work aims to support designers and promoters in making predictions about the outcomes of long, non-standardized, multivariable-dependent processes – as the building process is – in order to indicate the ability of a designed building to meet a framework of important objectives throughout its lifetime.
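
The Monte Carlo step described above can be sketched as repeated simulation of uncertain service lives and the resulting maintenance costs over the building's lifetime. The element lifetimes, costs, and budget below are illustrative assumptions, not values from the paper:

```python
# Sketch: Monte Carlo estimate of the probability that cumulative maintenance
# cost over a building's lifetime exceeds a budget, given uncertain service
# lives of technical elements.
import numpy as np

rng = np.random.default_rng(4)
lifetime_years = 50
n_sim = 10_000

# Hypothetical technical elements: (mean service life, sd, replacement cost)
elements = [(15.0, 3.0, 40.0), (25.0, 5.0, 120.0)]
budget = 300.0

total_cost = np.zeros(n_sim)
for mean_life, sd_life, cost in elements:
    for s in range(n_sim):
        t, c = 0.0, 0.0
        while True:
            t += max(1.0, rng.normal(mean_life, sd_life))  # time of next replacement
            if t > lifetime_years:
                break
            c += cost
        total_cost[s] += c

risk = (total_cost > budget).mean()
print(f"P(maintenance cost > budget) = {risk:.3f}")
```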

  15. Two stage-type railgun accelerator

    International Nuclear Information System (INIS)

    Ogino, Mutsuo; Azuma, Kingo.

    1995-01-01

    The present invention provides a two stage-type railgun accelerator capable of injecting a flying body (an ice pellet formed by solidifying a gaseous hydrogen isotope as a fuel for a thermonuclear reactor) at higher speed into the central portion of the plasma. Namely, the two stage-type railgun accelerator accelerates the flying body, injected from an initial-stage accelerator into the region between the rails, by the Lorentz force generated when electric current is supplied to the two rails by way of a plasma armature. In this case, two sets of solenoids are disposed to compress the plasma armature in the longitudinal direction of the rails. The first and second sets of solenoid coils are previously supplied with electric current. After passage of the flying body, the armature, formed into a plasma by a gas laser disposed behind the flying body, is compressed in the longitudinal direction of the rails by the magnetic force of the first and second sets of solenoid coils to increase the plasma density. The current density is also increased simultaneously. Then, the first solenoid coil current is turned OFF to accelerate the flying body in two stages by the compressed plasma armature. (I.S.)

  16. On the Use of Running Trends as Summary Statistics for Univariate Time Series and Time Series Association

    OpenAIRE

    Trottini, Mario; Vigo, Isabel; Belda, Santiago

    2015-01-01

    Given a time series, running trends analysis (RTA) involves evaluating least squares trends over overlapping time windows of L consecutive time points, with successive windows overlapping in all but one observation. This produces a new series called the “running trends series,” which serves as a summary statistic of the original series for further analysis. In recent years, RTA has been widely used in applied climate research as a summary statistic for time series and time series association. There is no doubt that ...
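
    The windowed least-squares computation described above can be sketched in a few lines. The function below is an illustrative reconstruction, not the authors' code; the example series and names are ours:

    ```python
    # Sketch of running trends analysis (RTA): the OLS slope over each window of
    # L consecutive points, with successive windows overlapping in L-1 points.
    def running_trends(series, L):
        """Return the running trends series: least-squares slope per window."""
        n = len(series)
        if L < 2 or L > n:
            raise ValueError("window length L must satisfy 2 <= L <= len(series)")
        t = list(range(L))                      # time index within a window
        t_mean = sum(t) / L
        denom = sum((ti - t_mean) ** 2 for ti in t)
        trends = []
        for start in range(n - L + 1):
            window = series[start:start + L]
            y_mean = sum(window) / L
            slope = sum((ti - t_mean) * (yi - y_mean)
                        for ti, yi in zip(t, window)) / denom
            trends.append(slope)
        return trends

    # A perfectly linear series with slope 0.5 yields a constant trends series.
    print(running_trends([0.5 * i for i in range(6)], L=3))  # [0.5, 0.5, 0.5, 0.5]
    ```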

  17. Quantifying short run cost-effectiveness during a gradual implementation process.

    Science.gov (United States)

    van de Wetering, Gijs; Woertman, Willem H; Verbeek, Andre L; Broeders, Mireille J; Adang, Eddy M M

    2013-12-01

    This paper examines the short run inefficiencies that arise during gradual implementation of a new cost-effective technology in healthcare. These inefficiencies arise when health gains associated with the new technology cannot be obtained immediately because the new technology does not yet serve all patients, and when there is overcapacity for the old technology in the short run because the supply of care is divided between two mutually exclusive technologies. Such efficiency losses are not taken into account in standard textbook cost-effectiveness analysis, in which a steady state is presented where costs and effects are assumed to be unchanging over time. A model is constructed to quantify such short run inefficiencies and to inform the decision maker about the optimal implementation pattern for the new technology. The model operates by integrating the incremental net benefit equations over both the period of co-existence of the mutually exclusive technologies and the period after complete substitution of the old technology. It takes into account the rate of implementation of the new technology, the depreciation of capital of the old technology, and the demand curves for both technologies. The model is applied to the real-world case of converting from screen-film to digital mammography in the Netherlands.

  18. The Run-2 ATLAS Trigger System

    International Nuclear Information System (INIS)

    Martínez, A Ruiz

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009 and 2013 at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 trigger and a software-based high-level trigger (HLT) that reduce the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and at higher luminosity, resulting in up to five times higher rates of processes of interest. A brief review is given of the ATLAS trigger system upgrades implemented between Run-1 and Run-2, which allow the system to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest. These include changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module, and the merging of the previously two-level HLT system into a single event-processing farm. A few examples are shown, such as the impressive performance improvements in the HLT trigger algorithms used to identify leptons, hadrons and global event quantities like missing transverse energy. Finally, the status of the commissioning of the trigger system and its performance during the 2015 run are presented. (paper)

  19. Device for two-stage cementing of casing

    Energy Technology Data Exchange (ETDEWEB)

    Kudimov, D A; Goncharevskiy, Ye N; Luneva, L G; Shchelochkov, S N; Shil' nikova, L N; Tereshchenko, V G; Vasiliev, V A; Volkova, V V; Zhdokov, K I

    1981-01-01

    A device is claimed for two-stage cementing of casing. It consists of a body with lateral plugging vents, upper and lower movable sleeves, a check valve with axial channels situated in the lower sleeve, and a displacement limiter for the lower sleeve. To improve the cementing of the casing by preventing overflow of cementing fluids from the annular space into the first-stage casing, the limiter is equipped with a spring rod capable of covering the axial channels of the check valve while in operating mode. In addition, the upper part of the rod is equipped with a reinforced area under the axial channels of the check valve.

  20. A novel flow sensor based on resonant sensing with two-stage microleverage mechanism

    Science.gov (United States)

    Yang, B.; Guo, X.; Wang, Q. H.; Lu, C. F.; Hu, D.

    2018-04-01

    The design, simulation, fabrication, and experimental testing of a novel flow sensor based on resonant sensing with a two-stage microleverage mechanism are presented in this paper. Unlike conventional detection methods for flow sensors, two differential resonators are adopted to implement air flow rate transformation through two-stage leverage magnification. The proposed flow sensor has a high sensitivity since the adopted two-stage microleverage mechanism possesses a higher amplification factor than a single-stage microleverage mechanism. The modal distribution and geometric dimensions of the two-stage leverage mechanism and hair are analyzed and optimized by Ansys simulation. A digital closed-loop driving technique with a phase-frequency-detector-based coordinate rotation digital computer algorithm is implemented for the detection and locking of the resonance frequency. The sensor, fabricated by the standard deep dry silicon-on-glass process, has device dimensions of 5100 μm (length) × 5100 μm (width) × 100 μm (height) with a hair diameter of 1000 μm. The preliminary experimental results demonstrate that the maximal mechanical sensitivity of the flow sensor is approximately 7.41 Hz/(m/s)² at a resonant frequency of 22 kHz for a hair height of 9 mm, and increases by a factor of 2.42 as the hair height extends from 3 mm to 9 mm. Simultaneously, a detection limit of 3.23 mm/s air flow amplitude at 60 Hz is confirmed. The proposed flow sensor has great application prospects in micro-autonomous systems and technology, self-stabilizing micro air vehicles, and environmental monitoring.

  1. Change in skeletal muscle stiffness after running competition is dependent on both running distance and recovery time: a pilot study

    Directory of Open Access Journals (Sweden)

    Seyedali Sadeghi

    2018-03-01

    Full Text Available Long-distance running competitions impose a large amount of mechanical loading and strain, leading to muscle edema and delayed onset muscle soreness (DOMS). Damage to various muscle fibers, metabolic impairments and fatigue have been linked to explain how DOMS impairs muscle function. Disruptions of muscle fibers during DOMS, exacerbated by exercise, have been shown to change muscle mechanical properties. The objective of this study is to quantify changes in the mechanical properties of different muscles in the thigh and lower leg as a function of running distance and time after competition. A custom implementation of the Focused Comb-Push Ultrasound Shear Elastography (F-CUSE) method was used to evaluate shear modulus in runners before and after a race. Twenty-two healthy individuals (age: 23 ± 5 years) were recruited using convenience sampling and split into three race categories: short distance (nine subjects, 3–5 miles), middle distance (10 subjects, 10–13 miles), and long distance (three subjects, 26+ miles). Shear Wave Elastography (SWE) measurements were taken on both legs of each subject on the rectus femoris (RF), vastus lateralis (VL), vastus medialis (VM), soleus, lateral gastrocnemius (LG), medial gastrocnemius (MG), biceps femoris (BF) and semitendinosus (ST) muscles. For statistical analyses, a linear mixed model was used, with recovery time and running distance as fixed variables and shear modulus as the dependent variable. Recovery time had a significant effect on the soleus (p = 0.05), while running distance had a considerable effect on the biceps femoris (p = 0.02), vastus lateralis (p < 0.01) and semitendinosus (p = 0.02) muscles. Sixty-seven percent of muscles exhibited a decreasing stiffness trend from before competition to immediately after competition. The preliminary results suggest that SWE could potentially be used to quantify changes in muscle mechanical properties as a way of measuring recovery procedures for runners.

  2. Two-stage implant systems.

    Science.gov (United States)

    Fritz, M E

    1999-06-01

    Since the advent of osseointegration approximately 20 years ago, there has been a great deal of scientific data developed on two-stage integrated implant systems. Although these implants were originally designed primarily for fixed prostheses in the mandibular arch, they have been used in partially dentate patients, in patients needing overdentures, and in single-tooth restorations. In addition, this implant system has been placed in extraction sites, in bone-grafted areas, and in maxillary sinus elevations. Often, the documentation of these procedures has lagged. In addition, most of the reports use survival criteria to describe results, often providing overly optimistic data. It can be said that the literature describes a true adhesion of the epithelium to the implant similar to adhesion to teeth, that two-stage implants appear to have direct contact somewhere between 50% and 70% of the implant surface, that the microbial flora of the two-stage implant system closely resembles that of the natural tooth, and that the microbiology of periodontitis appears to be closely related to peri-implantitis. In evaluations of the data from implant placement in all of the above-noted situations by means of meta-analysis, it appears that there is a strong case that two-stage dental implants are successful, usually showing a confidence interval of over 90%. It also appears that the mandibular implants are more successful than maxillary implants. Studies also show that overdenture therapy is valid, and that single-tooth implants and implants placed in partially dentate mouths have a success rate that is quite good, although not quite as high as in the fully edentulous dentition. It would also appear that the potential causes of failure in the two-stage dental implant systems are peri-implantitis, placement of implants in poor-quality bone, and improper loading of implants. There are now data addressing modifications of the implant surface to alter the percentage of

  3. Exact run length distribution of the double sampling x-bar chart with estimated process parameters

    Directory of Open Access Journals (Sweden)

    Teoh, W. L.

    2016-05-01

    Full Text Available Since the run length distribution is generally highly skewed, a significant concern about focusing too much on the average run length (ARL) criterion is that we may miss some crucial information about a control chart’s performance. It is therefore important to investigate the entire run length distribution of a control chart for an in-depth understanding before implementing the chart in process monitoring. In this paper, the percentiles of the run length distribution for the double sampling (DS) X chart with estimated process parameters are computed. Knowledge of the percentiles of the run length distribution provides a more comprehensive understanding of the expected behaviour of the run length. This additional information includes early false alarms, the skewness of the run length distribution, and the median run length (MRL). A comparison of the run length distribution between the optimal ARL-based and MRL-based DS X chart with estimated process parameters is presented in this paper. Examples of applications are given to aid practitioners in selecting the best design scheme of the DS X chart with estimated process parameters, based on their specific purpose.
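
    The paper computes these percentiles for the DS X chart with estimated parameters; as a much simpler hedged illustration of why percentiles add information beyond the ARL, consider a chart whose samples signal independently with a known probability p, so that the run length is geometric (this is not the DS chart computation, just the underlying idea):

    ```python
    # For RL ~ Geometric(p), P(RL <= n) = 1 - (1 - p)**n, so the q-th percentile
    # is the smallest n with 1 - (1 - p)**n >= q.
    import math

    def run_length_percentile(p, q):
        """Smallest n with P(RL <= n) >= q for RL ~ Geometric(p)."""
        return math.ceil(math.log(1 - q) / math.log(1 - p))

    p = 1 / 370.4  # in-control signal probability of a classic 3-sigma chart
    arl = 1 / p                            # average run length, about 370
    mrl = run_length_percentile(p, 0.5)    # median run length, about 257
    p10 = run_length_percentile(p, 0.1)    # 10th percentile: early false alarms
    print(arl, mrl, p10)
    ```

    The median (about 257) lying well below the ARL (about 370) is exactly the skewness the abstract warns about.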

  4. Investigation of Power Losses of Two-Stage Two-Phase Converter with Two-Phase Motor

    Directory of Open Access Journals (Sweden)

    Michal Prazenica

    2011-01-01

    Full Text Available The paper deals with the determination of losses of a two-stage power electronic system with a two-phase variable orthogonal output. The simulation is focused on the investigation of losses in the converter during one period in steady-state operation. Modeling and simulation of two matrix converters with an R-L load are shown in the paper. The simulation results confirm very good time waveforms of the phase current, and the system seems suitable for low-cost applications in the automotive/aerospace industries and in applications with high-frequency voltage sources.

  5. Two-stage revision of septic knee prosthesis with articulating knee spacers yields better infection eradication rate than one-stage or two-stage revision with static spacers.

    Science.gov (United States)

    Romanò, C L; Gala, L; Logoluso, N; Romanò, D; Drago, L

    2012-12-01

    The best method for treating chronic periprosthetic knee infection remains controversial. Randomized, comparative studies on treatment modalities are lacking. This systematic review of the literature compares the infection eradication rate after two-stage versus one-stage revision, and after static versus articulating spacers in two-stage procedures. We reviewed full-text papers and those with an abstract in English published from 1966 through 2011 that reported the success rate of infection eradication after one-stage or two-stage revision with the two different types of spacers. In all, 6 original articles reporting the results after one-stage knee exchange arthroplasty (n = 204) and 38 papers reporting on two-stage revision (n = 1,421) were reviewed. The average success rate in the eradication of infection was 89.8% after a two-stage revision and 81.9% after a one-stage procedure, at a mean follow-up of 44.7 and 40.7 months, respectively. The average infection eradication rate after a two-stage procedure was slightly, although significantly, higher when an articulating spacer rather than a static spacer was used (91.2% versus 87%). The methodological limitations of this study and the heterogeneous material in the studies reviewed notwithstanding, this systematic review shows that, on average, a two-stage procedure is associated with a higher rate of eradication of infection than one-stage revision for septic knee prosthesis, and that articulating spacers are associated with a lower recurrence of infection than static spacers at a comparable mean duration of follow-up. Level of evidence: IV.

  6. New improved counter - current multi-stage centrifugal extractor for solvent extraction process

    International Nuclear Information System (INIS)

    Gheorghe, Ionita; Mirica, Dumitru; Croitoru, Cornelia; Stefanescu, Ioan; Retegan, Teodora; Kitamoto, Asashi

    2003-01-01

    Total actinide recovery, lanthanide/actinide separation and selective partitioning of actinides from high level waste (HLW) are nowadays of major interest. Actinide partitioning, with a view to safe disposal of HLW or to the use of the recovered elements in many other applications, involves an extraction process usually carried out by means of a mixer-settler, pulse column or centrifugal contactor. The latter presents some doubtless advantages and responds to the above-mentioned goals. A new type of counter-current multi-stage centrifugal extractor has been designed and constructed. No similar apparatus has been found in other published papers as yet. The counter-current multi-stage centrifugal extractor was a stainless steel cylinder with an effective length of 346 mm, an effective diameter of 100 mm and a volume of 1.5 liters, working in a horizontal position. The new internal structure and geometry of the advanced centrifugal extractor, consisting of nine cells (units): five rotation units, two mixing units, two propelling units and two final plates, ensures the counter-current running of the two phases. The central shaft, having the rotation cells fixed on it, is coupled by an intermediary connection to an electric motor of high rotation speed. The conceptual layout of the advanced counter-current multi-stage centrifugal extractor is presented. The newly designed extractor has been tested at 500-2800 rot/min for an aqueous/organic phase ratio of 1 to examine the mechanical behavior and the hydrodynamics of the two phases in counter-current. The results showed that the performance was generally good and the design requirements were fulfilled. The newly designed counter-current multi-stage centrifugal extractor appears to be a promising way to increase the extraction rate of radionuclides and metals from liquid effluents. (authors)

  7. Systematic development of a two-stage fed-batch process for lipid accumulation in Rhodotorula glutinis.

    Science.gov (United States)

    Lorenz, Eric; Runge, Dennis; Marbà-Ardébol, Anna-Maria; Schmacht, Maximilian; Stahl, Ulf; Senz, Martin

    2017-03-20

    The application of oleaginous yeast cells as a feed supplement, for instance in aquaculture, can be a meaningful alternative to fish meal and oil additives. Therefore, a two-stage fed-batch process split into growth and lipogenesis phases was systematically developed to enrich the oleaginous yeast Rhodotorula glutinis Rh-00301 with high amounts of lipids at industrially relevant biomasses. The carbon sources glucose, sucrose and glycerol were investigated concerning their ability to serve as suitable raw materials for growth and/or lipid accumulation. Against the background of economic efficiency, C/N ratios of 40, 50 and 70 were investigated as well. It became apparent that glycerol is an unsuitable carbon source, most likely because it enters the cell only by passive diffusion owing to the absence of active transporters. The opposite was observed for sucrose, which is the main carbon source in molasses. Finally, an industrially applicable process was successfully established that ensures biomasses of 106 ± 2 g L-1 combined with an attractive lipid content of 63 ± 6% and a high lipid-substrate yield (Y L/S) of 0.18 ± 0.02 g g-1 in a short period of time (84 h). Furthermore, during these studies a non-negligible formation of the by-product glycerol was detected. This characteristic of R. glutinis is discussed in relation to other oleaginous yeasts, in which glycerol formation is absent. Through modifications in the feeding procedure, the formation of glycerol could be reduced but not avoided. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. An Enhanced Preventive Maintenance Optimization Model Based on a Three-Stage Failure Process

    Directory of Open Access Journals (Sweden)

    Ruifeng Yang

    2015-01-01

    Full Text Available Nuclear power plants are highly complex systems and the issues related to their safety are of primary importance. Probabilistic safety assessment is regarded as the most widespread methodology for studying the safety of nuclear power plants. As maintenance is one of the most important factors affecting reliability and safety, an enhanced preventive maintenance optimization model based on a three-stage failure process is proposed. Preventive maintenance is still a dominant maintenance policy due to its easy implementation. In order to correspond to the three-color scheme commonly used in practice, the lifetime of the system before failure is divided into three stages, namely the normal, minor defective and severe defective stages. When the minor defective stage is identified, two measures are considered for comparison: one is to halve the inspection interval only the first time the minor defective stage is identified; the other is to halve every subsequent inspection interval whenever the minor defective stage is identified. Maintenance is implemented immediately once the severe defective stage is identified. The inspection interval is optimized by minimizing the expected cost per unit time. Finally, a numerical example is presented to illustrate the effectiveness of the proposed models.
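
    A rough Monte Carlo sketch of the first policy above (halving the interval only when the minor defective stage is first identified) can be written as follows. The stage distributions, cost figures and parameter values are illustrative assumptions, not the paper's model:

    ```python
    # Delay-time sketch: life passes through normal -> minor defective -> severe
    # defective stages (exponential durations here, for illustration only).
    # Inspections every T time units detect a defective stage; the interval is
    # halved once a minor defect is first identified; finding a severe defect
    # triggers immediate repair; reaching failure incurs a failure cost.
    import random

    def cost_rate(T, runs=20000, seed=1,
                  mean_normal=10.0, mean_minor=4.0, mean_severe=2.0,
                  c_inspect=1.0, c_repair=20.0, c_fail=200.0):
        """Estimated expected cost per unit time for inspection interval T."""
        random.seed(seed)
        total_cost = total_time = 0.0
        for _ in range(runs):
            a = random.expovariate(1 / mean_normal)         # end of normal stage
            b = a + random.expovariate(1 / mean_minor)      # end of minor stage
            fail = b + random.expovariate(1 / mean_severe)  # failure time
            t, dt, cost, halved = 0.0, T, 0.0, False
            while True:
                t += dt
                if t >= fail:                 # failed before this inspection
                    cost += c_fail
                    total_time += fail
                    break
                cost += c_inspect
                if t >= b:                    # severe defect found: repair now
                    cost += c_repair
                    total_time += t
                    break
                if t >= a and not halved:     # minor defect found first time
                    dt, halved = dt / 2, True
            total_cost += cost
        return total_cost / total_time        # renewal-reward cost rate

    # Compare a few intervals; the minimising T balances inspection cost
    # against failure risk.
    for T in (1.0, 2.0, 4.0, 8.0):
        print(T, round(cost_rate(T), 3))
    ```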

  9. LHCb Run II tracking performance and prospects for the Upgrade

    CERN Multimedia

    2016-01-01

    The LHCb tracking system consists of a Vertex Locator around the interaction point, a tracking station with four layers of silicon strip detectors in front of the magnet, and three tracking stations, using either straw tubes or silicon strip detectors, behind the magnet. This system allows charged particles to be reconstructed with high efficiency (typically > 95% for particles with momentum > 5 GeV) and excellent momentum resolution (0.5% for particles with momentum < 20 GeV). The high momentum resolution results in very narrow mass peaks, leading to a very good signal-to-background ratio in such key channels as $B_s\to\mu^+\mu^-$. Furthermore, an optimal decay time resolution is an essential element in studies of time-dependent CP violation. For Run II a novel reconstruction strategy was adopted, allowing the same track reconstruction to run in the software trigger as offline. This convergence was made possible by a staged approach in the track reconstruction and a large reduction in the processing tim...

  10. Straight-run vs. sex separate rearing for two broiler genetic lines Part 2: Economic analysis and processing advantages.

    Science.gov (United States)

    Da Costa, M J; Colson, G; Frost, T J; Halley, J; Pesti, G M

    2017-07-01

    The objective of this analysis was to evaluate the effects of raising broilers under sex separate and straight-run conditions for 2 broiler genetic lines. One-day-old Ross 308 and Ross 708 chicks (n = 1,344) were sex separated and placed in 48 pens according to rearing type: sex separate (28 males or 28 females) or straight-run (14 males + 14 females). There were 3 dietary phases: starter (zero to 17 d), grower (17 to 32 d), and finisher (32 to 48 d). Bird individual BW and group feed intakes were measured at 12, 17, 25, 32, 42, and 48 d to evaluate performance. At 33, 43, and 49 d 4 birds per pen (straight-run pens 2 males + 2 females) were sampled for carcass yield evaluation. Data were analyzed using linear and non-linear regression in order to estimate feed intake and cut-up weights at 3 separate market weights (1,700, 2,700, and 3,700 g). Returns over feed cost were estimated for a 1.8 million broiler complex for each rearing system and under 9 feed/meat price scenarios. Overall, rearing birds that were sex separated resulted in extra income that ranged from $48,824 to $330,300 per week, depending on the market targeted and feed and meat price scenarios. Sex separation was shown to be especially important in disadvantageous scenarios in which feed prices were high. Gains from sex separation were markedly higher for the Ross 708 than for the Ross 308 broilers. Bird variability also was evaluated at the 3 separate market ages under narrow ranges of BW that were targeted. Straight-run birds decreased the number of birds present in the desired range. Depending on market weight, straight-run rearing resulted in 9.1 to 16.6% fewer birds than sex separate rearing to meet marketing goals. It was concluded that sex separation can result in increased company profitability and have possible beneficial effects at the processing plant due to increased bird uniformity. © 2017 Poultry Science Association Inc.

  11. Two-stage agglomeration of fine-grained herbal nettle waste

    Science.gov (United States)

    Obidziński, Sławomir; Joka, Magdalena; Fijoł, Olga

    2017-10-01

    This paper compares the densification work necessary for the pressure agglomeration of fine-grained dusty nettle waste with the densification work involved in two-stage agglomeration of the same material. In the first stage, the material was pre-densified by coating with a binder in the form of a 5% potato starch solution, and then subjected to pressure agglomeration. A number of tests were conducted to determine the effect of the moisture content of the nettle waste (15, 18 and 21%) and of the process temperature (50, 70, 90°C) on the densification work and the density of the obtained pellets. For pre-densified pellets from a mixture of nettle waste and starch solution, the tests determined the effect of pellet particle size (1, 2, and 3 mm) and process temperature (50, 70, 90°C) on the same values. On the basis of the tests, we concluded that the introduction of a binder and the use of two-stage agglomeration in nettle waste densification resulted in increased densification work (as compared to the densification of nettle waste alone) and increased pellet density.

  12. Run Clever - No difference in risk of injury when comparing progression in running volume and running intensity in recreational runners

    DEFF Research Database (Denmark)

    Ramskov, Daniel; Rasmussen, Sten; Sørensen, Henrik

    2018-01-01

    Background/aim: The Run Clever trial investigated if there was a difference in injury occurrence across two running schedules, focusing on progression in volume of running intensity (Sch-I) or in total running volume (Sch-V). It was hypothesised that 15% more runners with a focus on progression in volume of running intensity would sustain an injury compared with runners with a focus on progression in total running volume. Methods: Healthy recreational runners were included and randomly allocated to Sch-I or Sch-V. In the first eight weeks of the 24-week follow-up, all participants (n=839) followed … participants received real-time, individualised feedback on running intensity and running volume. The primary outcome was running-related injury (RRI). Results: After preconditioning a total of 80 runners sustained an RRI (Sch-I n=36/Sch-V n=44). The cumulative incidence proportion (CIP) in Sch-V (reference

  13. Modal intersection types, two-level languages, and staged synthesis

    DEFF Research Database (Denmark)

    Henglein, Fritz; Rehof, Jakob

    2016-01-01

    A typed λ-calculus, λ∩⎕, is introduced, combining intersection types and modal types. We develop the metatheory of λ∩⎕, with particular emphasis on the theory of subtyping and distributivity of the modal and intersection type operators. We describe how a stratification of λ∩⎕ leads to a multi-linguistic framework for staged program synthesis, where metaprograms are automatically synthesized which, when executed, generate code in a target language. We survey the basic theory of staged synthesis and illustrate by example how a two-level language theory specialized from λ∩⎕ can be used to understand the process of staged synthesis.

  14. LHCb: LHCb High Level Trigger design issues for post Long Stop 1 running

    CERN Multimedia

    Albrecht, J; Raven, G; Sokoloff, M D; Williams, M

    2013-01-01

    The LHCb High Level Trigger uses two stages of software running on an Event Filter Farm (EFF) to select events for offline reconstruction and analysis. The first stage (Hlt1) processes approximately 1 MHz of events accepted by a hardware trigger. In 2012, the second stage (Hlt2) wrote 5 kHz to permanent storage for later processing. Following the LHC's Long Stop 1 (anticipated for 2015), the machine energy will increase from 8 TeV in the center-of-mass to 13 TeV and the cross sections for beauty and charm are expected to grow proportionately. We plan to increase the Hlt2 output to 12 kHz, some for immediate offline processing, some for later offline processing, and some ready for immediate analysis. By increasing the absolute computing power of the EFF, and buffering data for processing between machine fills, we should be able to significantly increase the efficiency for signal while improving signal-to-background ratios. In this poster we will present several strategies under consideration and some of th...

  15. Biotransformation of Domestic Wastewater Treatment Plant Sludge by Two-Stage Integrated Processes -Lsb & Ssb

    Directory of Open Access Journals (Sweden)

    Md. Zahangir Alam, A. H. Molla and A. Fakhru’l-Razi

    2012-10-01

    Full Text Available The study of the biotransformation of domestic wastewater treatment plant (DWTP) sludge was conducted at laboratory scale by a two-stage integrated process, i.e. liquid state bioconversion (LSB) and solid state bioconversion (SSB). The liquid wastewater sludge [4% w/w of total suspended solids (TSS)] was treated by the mixed filamentous fungi Penicillium corylophilum and Aspergillus niger, isolated, screened and mixed-cultured on the basis of their high biodegradation potential towards wastewater sludge. The biosolids content was increased to about 10% w/w. Conversely, the soluble [i.e. total dissolved solids (TDS)] and insoluble substances (TSS) in the treated supernatant were decreased effectively in the LSB process. In the developed LSB process, 93.8 g kg-1 of biosolids were enriched with fungal biomass protein and nutrients (NPK), and 98.8% of TSS, 98.2% of TDS, 97.3% of turbidity, 80.2% of soluble protein, 98.8% of reducing sugar and 92.7% of chemical oxygen demand (COD) in the treated sludge supernatant were removed after 8 days of treatment. Specific resistance to filtration (1.39×10^12 m/kg) was decreased tremendously by the microbial treatment of DWTP sludge after 6 days of fermentation. The treated biosolids in the DWTP sludge were considered a pretreated resource material for composting and were converted into compost by the SSB process. The SSB process was evaluated for composting by monitoring microbial growth and its subsequent role in biodegradation in a composting bin (CB). The process was conducted using two mixed fungal cultures, Trichoderma harzianum with Phanerochaete chrysosporium 2094 (T/P) and T. harzianum with Mucor hiemalis (T/M), and two bulking materials, sawdust (SD) and rice straw (RS). The most encouraging results of microbial growth and subsequent solid state bioconversion were exhibited with the RS rather than the SD. A significant decrease of the C/N ratio and germination index (GI) was attained, as well as a higher value of glucosamine exhibited in the compost; which

  16. Crispy banana obtained by the combination of a high temperature and short time drying stage and a drying process

    Directory of Open Access Journals (Sweden)

    K. Hofsetz

    2005-06-01

    Full Text Available The effect of a high temperature and short time (HTST) drying stage combined with an air drying process to produce crispness in bananas was studied. The fruit was dehydrated in an air drier for five minutes at 70°C, then immediately subjected to an HTST stage (130, 140 or 150°C for 9, 12 or 15 minutes), and then dried at 70°C until the water activity (aw) was around 0.300. Crispness was evaluated as a function of water activity, using sensory and texture analyses. Drying kinetics were evaluated using the empirical Lewis model. Crispy banana was obtained at 140°C-12 min and 150°C-15 min in the HTST stage, with aw = 0.345 and aw = 0.363, respectively. Analysis of the k parameter (Lewis model) suggests that the initial moisture content of the samples affects this parameter, overcoming the HTST effect. Results showed a relationship between sensory crispness, instrumental texture and the HTST stage.
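
    The empirical Lewis model referred to above is the single-parameter exponential drying curve, MR(t) = exp(-k t). A hedged sketch of estimating k from drying-curve data follows; the data here are synthetic, not the paper's measurements:

    ```python
    # Lewis model: moisture ratio MR(t) = exp(-k t). Taking logs gives
    # ln MR = -k t, so k can be fitted by least squares through the origin.
    import math

    def fit_lewis_k(times, moisture_ratios):
        """Estimate k in MR = exp(-k t) by regression through the origin on ln MR."""
        num = sum(t * (-math.log(mr)) for t, mr in zip(times, moisture_ratios))
        den = sum(t * t for t in times)
        return num / den

    times = [0.5, 1.0, 2.0, 4.0]               # hours (synthetic)
    mrs = [math.exp(-0.8 * t) for t in times]  # exact Lewis curve with k = 0.8
    k = fit_lewis_k(times, mrs)
    print(round(k, 3))  # -> 0.8, recovered from the synthetic data
    ```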

  17. Rapid Large Earthquake and Run-up Characterization in Quasi Real Time

    Science.gov (United States)

    Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.

    2017-12-01

    Several tests in quasi real time have been conducted by the rapid response group at the CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating finite fault models (FFMs). The W-phase FFM inversion, the wavelet-domain FFM and the body-wave FFM have been implemented in real time at the CSN; all these algorithms run automatically, triggered by the W-phase point source inversion. Dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule Earthquake, the 2014 Mw 8.2 Iquique Earthquake, the 2015 Mw 8.3 Illapel Earthquake and the Mw 7.6 Melinka Earthquake. We obtain many solutions as time elapses; for each one we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community, as well as with run-up observations in the field.

  18. CMS Software and Computing Ready for Run 2

    CERN Document Server

    Bloom, Kenneth

    2015-01-01

    In Run 1 of the Large Hadron Collider, software and computing was a strategic strength of the Compact Muon Solenoid experiment. The timely processing of data and simulation samples and the excellent performance of the reconstruction algorithms played an important role in the preparation of the full suite of searches used for the observation of the Higgs boson in 2012. In Run 2, the LHC will run at higher intensities and CMS will record data at a higher trigger rate. These new running conditions will provide new challenges for the software and computing systems. Over the two years of Long Shutdown 1, CMS has built upon the successes of Run 1 to improve the software and computing to meet these challenges. In this presentation we will describe the new features in software and computing that will once again put CMS in a position of physics leadership.

  19. Lower bounds on the run time of the univariate marginal distribution algorithm on OneMax

    DEFF Research Database (Denmark)

    Krejca, Martin S.; Witt, Carsten

    2017-01-01

    The Univariate Marginal Distribution Algorithm (UMDA), a popular estimation of distribution algorithm, is studied from a run time perspective. On the classical OneMax benchmark function, a lower bound of Ω(μ√n + n log n), where μ is the population size, on its expected run time is proved. This is the first direct lower bound on the run time of the UMDA. It is stronger than the bounds that follow from general black-box complexity theory and is matched by the run time of many evolutionary algorithms. The results are obtained through advanced analyses of the stochastic change of the frequencies of bit values maintained by the algorithm, including carefully designed potential functions. These techniques may prove useful in advancing the field of run time analysis for estimation of distribution algorithms in general.
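The UMDA analysed above can be sketched in a few lines: it keeps one frequency per bit, samples a population of λ individuals from the product distribution, and sets each frequency to the empirical bit mean among the μ best, clamped to the usual margins [1/n, 1 − 1/n]. A minimal, illustrative implementation on OneMax (the parameter values are arbitrary, not those of the paper):

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits."""
    return sum(x)

def umda(n, mu, lam, generations, rng):
    """Univariate Marginal Distribution Algorithm on OneMax.

    Maintains a frequency p[i] per bit, samples lam individuals, selects
    the mu best, and updates p[i] to the empirical bit mean, clamped to
    [1/n, 1 - 1/n] to avoid premature fixation of a frequency.
    """
    p = [0.5] * n
    best = 0
    for _ in range(generations):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=onemax, reverse=True)
        best = max(best, onemax(pop[0]))
        selected = pop[:mu]
        for i in range(n):
            freq = sum(x[i] for x in selected) / mu
            p[i] = min(max(freq, 1.0 / n), 1.0 - 1.0 / n)
    return best, p

best, p = umda(n=20, mu=10, lam=50, generations=100, rng=random.Random(42))
print(best)
```

The lower bound of the paper concerns exactly the stochastic drift of the frequencies `p[i]` that this loop updates each generation.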

  20. A two-stage preventive maintenance optimization model incorporating two-dimensional extended warranty

    International Nuclear Information System (INIS)

    Su, Chun; Wang, Xiaolin

    2016-01-01

    In practice, customers can decide whether to buy an extended warranty or not, at the time of item sale or at the end of the basic warranty. In this paper, by taking into account the moments of customers purchasing two-dimensional extended warranty, the optimization of imperfect preventive maintenance for repairable items is investigated from the manufacturer's perspective. A two-dimensional preventive maintenance strategy is proposed, under which the item is preventively maintained according to a specified age interval or usage interval, whichever occurs first. It is highlighted that when the extended warranty is purchased upon the expiration of the basic warranty, the manufacturer faces a two-stage preventive maintenance optimization problem. Moreover, in the second stage, the possibility of reducing the servicing cost over the extended warranty period is explored by classifying customers on the basis of their usage rates and then providing them with customized preventive maintenance programs. Numerical examples show that offering customized preventive maintenance programs can reduce the manufacturer's warranty cost, while a larger saving in warranty cost comes from encouraging customers to buy the extended warranty at the time of item sale. - Highlights: • A two-dimensional PM strategy is investigated. • Imperfect PM strategy is optimized by considering both two-dimensional BW and EW. • Customers are categorized based on their usage rates throughout the BW period. • Servicing cost of the EW is reduced by offering customized PM programs. • Customers buying the EW at the time of sale is preferred for the manufacturer.

  1. Timing and Stages of Puberty

    Science.gov (United States)

    Adolescence and puberty can be so confusing! Here’s some info on what to expect and when: Puberty in girls usually starts between the ages of ...

  2. Off-Policy Reinforcement Learning: Optimal Operational Control for Two-Time-Scale Industrial Processes.

    Science.gov (United States)

    Li, Jinna; Kiumarsi, Bahare; Chai, Tianyou; Lewis, Frank L; Fan, Jialu

    2017-12-01

    Industrial flow lines are composed of unit processes operating on a fast time scale and performance measurements known as operational indices measured at a slower time scale. This paper presents a model-free optimal solution to a class of two time-scale industrial processes using off-policy reinforcement learning (RL). First, the lower-layer unit process control loop with a fast sampling period and the upper-layer operational index dynamics at a slow time scale are modeled. Second, a general optimal operational control problem is formulated to optimally prescribe the set-points for the unit industrial process. Then, a zero-sum game off-policy RL algorithm is developed to find the optimal set-points by using data measured in real-time. Finally, a simulation experiment is employed for an industrial flotation process to show the effectiveness of the proposed method.

  3. New Process Controls for the Hera Cryogenic Plant

    Science.gov (United States)

    Böckmann, T.; Clausen, M.; Gerke, Chr.; Prüß, K.; Schoeneburg, B.; Urbschat, P.

    2010-04-01

    The cryogenic plant built for the HERA accelerator at DESY in Hamburg (Germany) has now been in operation for more than two decades, as has the commercial process control system for the plant. Since commissioning, the operator stations, the control network and the CPU boards in the process controllers have gone through several upgrade stages; only the centralized input/output system was kept unchanged, and many components have been running beyond their expected lifetime. The control system for one of the three parts of the cryogenic plant has recently been replaced by a distributed I/O system. The I/O nodes are connected to several Profibus-DP field buses. Profibus provides the infrastructure to attach intelligent sensors and actuators directly to the process controllers, which run the open source process control software EPICS. This paper describes the modification process on all levels, from cabling through I/O configuration and the process control software up to the operator displays.

  4. Novel Real-time Alignment and Calibration of the LHCb detector in Run2

    Science.gov (United States)

    Martinelli, Maurizio; LHCb Collaboration

    2017-10-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run2. Data collected at the start of the fill are processed in a few minutes and used to update the alignment parameters, while the calibration constants are evaluated for each run. This procedure improves the quality of the online reconstruction. For example, the vertex locator is retracted and reinserted for stable beam conditions in each fill to be centred on the primary vertex position in the transverse plane. Consequently its position changes on a fill-by-fill basis. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline-selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configuration are discussed, as well as the working procedures of the framework and its performance.

  5. Two-stage SQUID systems and transducers development for MiniGRAIL

    International Nuclear Information System (INIS)

    Gottardi, L; Podt, M; Bassan, M; Flokstra, J; Karbalai-Sadegh, A; Minenkov, Y; Reinke, W; Shumack, A; Srinivas, S; Waard, A de; Frossati, G

    2004-01-01

    We present measurements on a two-stage SQUID system based on a dc-SQUID as a sensor and a DROS as an amplifier. We measured the intrinsic noise of the dc-SQUID at 4.2 K. A new dc-SQUID has been fabricated. It was specially designed to be used with MiniGRAIL transducers. Cooling fins have been added in order to improve the cooling of the SQUID and the design is optimized to achieve the quantum limit of the sensor SQUID at temperatures above 100 mK. In this paper we also report the effect of the deposition of a Nb film on the quality factor of a small mass Al5056 resonator. Finally, the results of Q-factor measurements on a capacitive transducer for the current MiniGRAIL run are presented

  6. ATLAS simulation of boson plus jets processes in Run 2

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    This note describes the ATLAS simulation setup used to model the production of single electroweak bosons ($W$, $Z/\gamma^\ast$ and prompt $\gamma$) in association with jets in proton--proton collisions at centre-of-mass energies of 8 and 13 TeV. Several Monte Carlo generator predictions are compared in regions of phase space relevant for data analyses during the LHC Run-2, or compared to unfolded data distributions measured in previous Run-1 or early Run-2 ATLAS analyses. Comparisons are made for regions of phase space with or without additional requirements on the heavy-flavour content of the accompanying jets, as well as for electroweak $Vjj$ production processes. Higher-order corrections and systematic uncertainties are also discussed.

  7. A high-level dynamic analysis approach for studying global process plant availability and production time in the early stages of mining projects

    Directory of Open Access Journals (Sweden)

    Dennis Travagini Cremonese

    Full Text Available Abstract In the early stage of front-end studies of a Mining Project, the global availability (i.e. number of hours a plant is available for production and production (number of hours a plant is actually operated with material time of the process plant are normally assumed based on the experience of the study team. Understanding and defining the availability hours at the early stages of the project are important for the future stages of the project, as drastic changes in work hours will impact the economics of the project at that stage. An innovative high-level dynamic modeling approach has been developed to assist in the rapid evaluation of assumptions made by the study team. This model incorporates systems or equipment that are commonly used in mining projects from mine to product stockyard discharge after the processing plant. It includes subsystems that will simulate all the component handling, and major process plant systems required for a mining project. The output data provided by this high-level dynamic simulation approach will enhance the confidence level of engineering carried out during the early stage of the project. This study discusses the capabilities of the approach, and a test case compared with standard techniques used in mining project front-end studies.

  8. Influence of two-stage annealing treatment on critical current of bronze-processed multifilamentary Nb/sub 3/Sn superconducting materials

    International Nuclear Information System (INIS)

    Ochiai, S.; Osamura, K.; Ryoji, M.

    1987-01-01

    The influence of the changes in the volume fraction of Nb/sub 3/Sn, grain size and upper critical magnetic field caused by a two-stage annealing treatment (low temperature annealing to form fine grains + high temperature annealing to raise the upper critical magnetic field) on the overall critical current and critical current density was studied at magnetic fields of 3-15 T. When the annealing temperature was low (773-923 K) and the volume fraction of Nb/sub 3/Sn formed in the first stage annealing was low, second stage annealing could raise the overall critical current over the whole range of applied magnetic field, due to increases in the upper critical magnetic field H/sub c2/ and in the volume fraction of Nb/sub 3/Sn. These were accompanied by a reduction in the Sn concentration in the bronze matrix, which served to reduce the residual strain in the Nb/sub 3/Sn and thus led to a high H/sub c2/, although a loss in pinning force arose from the coarsening of the grains. When the annealing temperature was high (973 K) and Nb/sub 3/Sn formed in the first stage until the Sn was consumed, second stage annealing could not raise the critical current, because the grain size increased and there was no effective increase in H/sub c2/. The critical current density at low magnetic fields, below several teslas, was reduced by the second stage annealing due to the increase in grain size, but that at high fields was raised due to the increase in H/sub c2/. The reverse two-stage annealing treatment (high temperature annealing in the first stage + low temperature annealing in the second stage) reduced H/sub c2/ slightly with increasing second stage annealing temperature and time. The critical current density at low magnetic fields was determined mainly by the grain size, while that at high fields was determined by the combination of the upper critical field and the grain size.

  9. Dense electro-optic frequency comb generated by two-stage modulation for dual-comb spectroscopy.

    Science.gov (United States)

    Wang, Shuai; Fan, Xinyu; Xu, Bingxin; He, Zuyuan

    2017-10-01

    An electro-optic frequency comb enables frequency-agile comb-based spectroscopy without using sophisticated phase-locking electronics. Nevertheless, dense electro-optic frequency combs with broad spans have yet to be developed. In this Letter, we propose a straightforward and efficient method for generating an electro-optic frequency comb with a small line spacing and a large span. The method is based on two-stage modulation: generating an 18 GHz line-spacing comb at the first stage and a 250 MHz line-spacing comb at the second stage. After generating an electro-optic frequency comb covering 1500 lines, we set up an easily established, mutually coherent hybrid dual-comb interferometer, which combines the generated electro-optic frequency comb and a free-running mode-locked laser. As a proof of concept, this hybrid dual-comb interferometer is used to measure the absorption and dispersion profiles of the molecular transition of H13CN with a spectral resolution of 250 MHz.
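The two-stage spacing arithmetic is simple to check: each 18 GHz gap of the first-stage comb is filled by 72 fine lines at 250 MHz spacing, so a modest number of coarse lines already yields well over 1500 lines on a uniform 250 MHz grid. A small sketch; the number of first-stage lines is an assumed, illustrative value, not taken from the Letter:

```python
# Two-stage electro-optic comb: coarse comb at 18 GHz spacing (first stage),
# each coarse line seeding a fine sub-comb at 250 MHz spacing (second stage).
COARSE = 18e9   # Hz, first-stage line spacing
FINE = 250e6    # Hz, second-stage line spacing
LINES_PER_GAP = int(COARSE / FINE)  # 72 fine lines fill one coarse gap

def comb_offsets(n_coarse):
    """Frequency offsets (relative to the seed laser) of all lines of the combined comb."""
    return sorted({i * COARSE + j * FINE
                   for i in range(n_coarse) for j in range(LINES_PER_GAP)})

offsets = comb_offsets(21)        # 21 coarse lines is an assumed value
print(len(offsets))               # total number of comb lines
print(offsets[1] - offsets[0])    # resulting uniform line spacing in Hz
```

Because every coarse spacing is an exact integer multiple of the fine spacing, the two stages interleave into a single gap-free grid, which is what makes the 250 MHz spectral sampling of the dual-comb measurement possible.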

  10. Evaluation of hydrogen and methane production from sugarcane bagasse hemicellulose hydrolysates by two-stage anaerobic digestion process.

    Science.gov (United States)

    Baêta, Bruno Eduardo Lobo; Lima, Diego Roberto Sousa; Filho, José Gabriel Balena; Adarme, Oscar Fernando Herrera; Gurgel, Leandro Vinícius Alves; Aquino, Sérgio Francisco de

    2016-10-01

    This study aimed at optimizing the net energy recovery from hydrogen and methane production through anaerobic digestion of the hemicellulose hydrolysate (HH) obtained by desirable conditions (DC) of autohydrolysis pretreatment (AH) of sugarcane bagasse (SB). Anaerobic digestion was carried out in a two-stage (acidogenic-methanogenic) batch system where the acidogenic phase worked as a hydrolysis and biodetoxification step. This allowed the utilization of more severe AH pretreatment conditions, i.e. T=178.6°C and t=55min (DC3) and T=182.9°C and t=40.71min (DC4). Such severe conditions resulted in higher extraction of hemicelluloses from SB (DC1=68.07%, DC2=48.99%, DC3=77.40% and DC4=73.90%), which consequently improved the net energy balance of the proposed process. The estimated energy from the combustion of both biogases (H2 and CH4) accumulated during the two-stage anaerobic digestion of HH generated by DC4 condition was capable of producing a net energy of 3.15MJ·kgSB(-1)dry weight. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    Science.gov (United States)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes.
Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that

  12. The global stability of a delayed predator-prey system with two stage-structure

    International Nuclear Information System (INIS)

    Wang Fengyan; Pang Guoping

    2009-01-01

    Based on the classical delayed stage-structured model and the Lotka-Volterra predator-prey model, we introduce and study a delayed predator-prey system in which both prey and predator have two stages, an immature stage and a mature stage. The time delays are the lengths of time from birth to maturity of the prey and predator species. Results on the global asymptotic stability of nonnegative equilibria of the delay system are given, which generalize known results and suggest that good continuity exists between the predator-prey system and its corresponding stage-structured system.

  13. Two-stage energy storage equalization system for lithium-ion battery pack

    Science.gov (United States)

    Chen, W.; Yang, Z. X.; Dong, G. Q.; Li, Y. B.; He, Q. Y.

    2017-11-01

    How to raise the efficiency of energy storage and maximize storage capacity is a core problem in current energy storage management. To address it, a two-stage energy storage equalization system, containing a two-stage equalization topology and a control strategy based on a symmetric multi-winding transformer and a DC-DC (direct current-direct current) converter, is proposed using bidirectional active equalization theory, in order to achieve consistent voltages among lithium-ion battery packs and among the cells inside each pack by using the method of the Range. Modeling analysis demonstrates that the voltage dispersion of the lithium-ion battery packs and of the cells inside the packs can be kept within 2 percent during charging and discharging. The equalization time was 0.5 ms, 33.3 percent shorter than with a DC-DC converter alone. Therefore, the proposed two-stage lithium-ion battery equalization system can achieve maximum storage capacity across battery packs and the cells inside them, while the efficiency of energy storage is significantly improved.
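The range-based criterion behind the "method of the Range" can be illustrated with a small sketch: compute the relative spread (Vmax − Vmin)/Vmean of the pack or cell voltages and trigger bidirectional transfers from the highest to the lowest unit until the spread falls within the 2 percent bound reported above. The dispersion definition and the transfer-efficiency figure are assumptions for illustration, not taken from the paper:

```python
def needs_equalization(voltages, max_dispersion=0.02):
    """Range criterion: dispersion = (Vmax - Vmin) / Vmean; equalize if above threshold.
    The 2% threshold matches the dispersion bound reported in the abstract;
    this particular dispersion definition is an assumption."""
    vmax, vmin = max(voltages), min(voltages)
    vmean = sum(voltages) / len(voltages)
    return (vmax - vmin) / vmean > max_dispersion

def equalization_pairs(voltages, max_dispersion=0.02):
    """Bidirectional active equalization sketch: repeatedly move charge from the
    highest-voltage unit to the lowest until the range criterion is satisfied.
    Returns the (source, sink) index pairs used and the final voltages."""
    pairs = []
    v = list(voltages)
    while needs_equalization(v, max_dispersion):
        hi, lo = v.index(max(v)), v.index(min(v))
        delta = (v[hi] - v[lo]) / 2 * 0.9  # 90% transfer step, assumed for illustration
        v[hi] -= delta
        v[lo] += delta
        pairs.append((hi, lo))
    return pairs, v

pairs, v = equalization_pairs([3.30, 3.35, 3.20, 3.32])
print(pairs)
```

In the paper's topology the transfer itself is done through the multi-winding transformer and DC-DC converter; the sketch only captures the decision logic of the range-based control strategy.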

  14. Staging Collaborative Innovation Processes

    DEFF Research Database (Denmark)

    Pedersen, Signe; Clausen, Christian

    Organisations are currently challenged by demands for increased collaborative innovation, internally as well as with external and new entities - e.g. across the value chain. The authors seek to develop new approaches to managing collaborative innovation processes in the context of open innovation and public-private innovation partnerships. Based on a case study of a collaborative design process in a large electronics company, the paper points to the key importance of staging and navigating the collaborative innovation process. Staging and navigation are presented as a combined activity: 1) to translate the diverse matters of concern into a coherent product or service concept, and 2) in the same process to move the diverse holders of these matters of concern into a translated actor network which carries or supports the concept.

  15. Preliminary technical data summary defense waste processing facility stage 2

    International Nuclear Information System (INIS)

    1980-12-01

    This Preliminary Technical Data Summary presents the technical basis for design of Stage 2 of the Staged Defense Waste Processing Facility (DWPF). Process changes incorporated in the staged DWPF relative to the Alternative DWPF described in PTDS No. 3 (DPSTD-77-13-3) are the result of ongoing research and development and are aimed at reducing initial capital investment and developing a process to efficiently immobilize the radionuclides in Savannah River Plant (SRP) high-level liquid waste. The radionuclides in SRP waste are present in sludge that has settled to the bottom of waste storage tanks and in crystallized salt and salt solution (supernate). Stage 1 of the DWPF receives washed, aluminum dissolved sludge from the waste tank farms and immobilizes it in a borosilicate glass matrix. The supernate is retained in the waste tank farms until completion of Stage 2 of the DWPF at which time it is filtered and decontaminated by ion exchange in the Stage 2 facility. The decontaminated supernate is concentrated by evaporation and mixed with cement for burial. The radioactivity removed from the supernate is fixed in borosilicate glass along with the sludge. This document gives flowsheets, material and curie balances, material and curie balance bases, and other technical data for design of Stage 2 of the DWPF. Stage 1 technical data are presented in DPSTD-80-38

  16. Science advancements key to increasing management value of life stage monitoring networks for endangered Sacramento River winter-run Chinook salmon in California

    Science.gov (United States)

    Johnson, Rachel C.; Windell, Sean; Brandes, Patricia L.; Conrad, J. Louise; Ferguson, John; Goertler, Pascale A. L.; Harvey, Brett N.; Heublein, Joseph; Isreal, Joshua A.; Kratville, Daniel W.; Kirsch, Joseph E.; Perry, Russell W.; Pisciotto, Joseph; Poytress, William R.; Reece, Kevin; Swart, Brycen G.

    2017-01-01

    A robust monitoring network that provides quantitative information about the status of imperiled species at key life stages and geographic locations over time is fundamental for sustainable management of fisheries resources. For anadromous species, management actions in one geographic domain can substantially affect abundance of subsequent life stages that span broad geographic regions. Quantitative metrics (e.g., abundance, movement, survival, life history diversity, and condition) at multiple life stages are needed to inform how management actions (e.g., hatcheries, harvest, hydrology, and habitat restoration) influence salmon population dynamics. The existing monitoring network for endangered Sacramento River winter-run Chinook Salmon (SRWRC, Oncorhynchus tshawytscha) in California’s Central Valley was compared to conceptual models developed for each life stage and geographic region of the life cycle to identify relevant SRWRC metrics. We concluded that the current monitoring network was insufficient to diagnose when (life stage) and where (geographic domain) chronic or episodic reductions in SRWRC cohorts occur, precluding within- and among-year comparisons. The strongest quantitative data exist in the Upper Sacramento River, where abundance estimates are generated for adult spawners and emigrating juveniles. However, once SRWRC leave the upper river, our knowledge of their identity, abundance, and condition diminishes, despite the juvenile monitoring enterprise. We identified six system-wide recommended actions to strengthen the value of data generated from the existing monitoring network to assess resource management actions: (1) incorporate genetic run identification; (2) develop juvenile abundance estimates; (3) collect data for life history diversity metrics at multiple life stages; (4) expand and enhance real-time fish survival and movement monitoring; (5) collect fish condition data; and (6) provide timely public access to monitoring data in open data

  17. Two-Stage Centrifugal Fan

    Science.gov (United States)

    Converse, David

    2011-01-01

    Fan designs are often constrained by envelope, rotational speed, weight, and power. Aerodynamic performance and motor electrical performance are heavily influenced by rotational speed. The fan used in this work is at a practical limit of rotational speed due to motor performance characteristics, and there is no more space available in the packaging for a larger fan. The pressure rise requirements keep growing. The ordinary way to accommodate a higher DP is to spin faster or to grow the fan rotor diameter. The invention is to put two radially oriented stages on a single disk. Flow enters the first stage from the center; energy is imparted to the flow in the first-stage blades, the flow is redirected somewhat opposite to the direction of rotation in the fixed stators, and more energy is imparted to the flow in the second-stage blades. Without increasing either the rotational speed or the disk diameter, it is believed that as much as 50 percent more DP can be achieved with this design than with an ordinary, single-stage centrifugal design. This invention is useful primarily for fans having relatively low flow rates with relatively high pressure rise requirements.

  18. Modelling of Two-Stage Anaerobic Treating Wastewater from a Molasses-Based Ethanol Distillery with the IWA Anaerobic Digestion Model No.1

    Directory of Open Access Journals (Sweden)

    Kittikhun Taruyanon

    2010-03-01

    Full Text Available This paper presents the application of the ADM1 model to simulate the dynamic behaviour of a two-stage anaerobic treatment process treating the wastewater generated from an ethanol distillery. The laboratory-scale process, comprising an anaerobic continuous stirred tank reactor (CSTR) and an upflow anaerobic sludge blanket (UASB) connected in series, was used to treat wastewater from the ethanol distillery process. The CSTR and UASB hydraulic retention times (HRT) were 12 and 70 hours, respectively. The model was developed based on the basic ADM1 structure and implemented with the simulation software AQUASIM. The simulated results were compared with measured data obtained from the laboratory-scale two-stage anaerobic treatment process. The sensitivity analysis identified the maximum specific uptake rate (km) and the half-saturation constant (Ks) of the acetate degraders and the sulfate-reducing bacteria as the kinetic parameters which most affected the process behaviour; these were further estimated. The study concluded that the model could predict, with reasonable accuracy, the dynamic behaviour of a two-stage anaerobic treatment process treating ethanol distillery wastewater with influents of varying strength.

  19. Performance and genome-centric metagenomics of thermophilic single and two-stage anaerobic digesters treating cheese wastes

    DEFF Research Database (Denmark)

    Fontana, Alessandra; Campanaro, Stefano; Treu, Laura

    2018-01-01

This study performed an in-depth characterization of the microbial community structure using genome-centric metagenomics. Both reactor configurations showed acidification problems under the tested organic loading rates (OLRs) of 3.6 and 2.4 g COD/L-reactor day and the hydraulic retention time (HRT) of 15 days. However, the two-stage design ... of the main population genomes highlighted specific metabolic pathways responsible for the AD process and the mechanisms of production of the main intermediates. In particular, the acetate accumulation experienced by the single-stage configuration was mainly correlated with low-abundance syntrophic acetate oxidizers ...

  20. The Run-2 ATLAS Trigger System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00222798; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009 and 2013 at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 trigger and a software-based high level trigger (HLT) that reduce the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and at higher luminosity, resulting in roughly five times higher trigger rates. A brief review will be given of the ATLAS trigger system upgrades implemented between Run-1 and Run-2, which allow the system to cope with the increased trigger rates while maintaining or even improving the efficiency to select the physics processes of interest. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event filter farm. A ...

  1. Managing the CMS Data and Monte Carlo Processing during LHC Run 2

    Science.gov (United States)

    Wissing, C.; CMS Collaboration

    2017-10-01

    In order to cope with the challenges expected during LHC Run 2, CMS put a number of enhancements into its main software packages and into the tools used for centrally managed processing. In this presentation we will highlight the improvements that allow CMS to deal with the increased trigger output rate, the increased pileup and the evolution in computing technology. The overall system aims at high flexibility, improved operations and largely automated procedures. The tight coupling of workflow classes to types of sites has been drastically relaxed. Reliable, high-performing networking between most of the computing sites and the successful deployment of a data federation allow the execution of workflows using remote data access. That required the development of a largely automated system to assign workflows and to handle the necessary pre-staging of data. Another step towards flexibility has been the introduction of one large global HTCondor pool for all types of processing workflows and analysis jobs. Besides classical Grid resources, some opportunistic resources as well as Cloud resources have been integrated into that pool, which provides access to more than 200k CPU cores.

  2. Advanced counter-current multi-stage centrifugal extractor for solvent extraction process

    International Nuclear Information System (INIS)

    Ionita, Gheorghe; Mirica, Dumitru; Croitoru, Cornelia; Stefanescu, Ioan; Steflea, Dumitru; Mihaila, V.; Peteu, Gh.

    2002-01-01

    Total actinide recovery, lanthanide/actinide separation and the selective partitioning of actinides from high-level waste (HLW) are nowadays of major interest. Actinide partitioning, whether for the safe disposal of HLW or for the use of the recovered elements in many other applications, involves an extraction process, usually carried out by means of a mixer-settler, pulse column or centrifugal contactor. The latter offers some clear advantages and responds to the above-mentioned goals. A new type of counter-current multi-stage centrifugal extractor has been designed and built. It is a stainless steel cylinder with an effective length of 346 mm, an effective diameter of 100 mm and a volume of 1.5 liters, operated in a horizontal position. The internal structure and geometry of the new advanced centrifugal extractor are shown. It consists of nine cells (units): five rotation units, two mixing units, two propelling units and two end plates, which ensure the counter-current flow of the two phases. The central shaft, on which the rotation cells are fixed, is connected to a high-speed electric motor. The extractor has been tested at 1000-3000 rpm at an aqueous-to-organic phase ratio of 1. The mechanical and hydrodynamic behavior of the two phases in counter-current flow is described. The results showed that the performance was generally good. The new facility appears to be a promising way to increase the extraction rate of radionuclides and metals from liquid effluents. (authors)

  3. A farm-scale pilot plant for biohydrogen and biomethane production by two-stage fermentation

    Directory of Open Access Journals (Sweden)

    R. Oberti

    2013-09-01

    Full Text Available Hydrogen is considered one of the possible main energy carriers for the future, thanks to its unique environmental properties. Indeed, its energy content (120 MJ/kg) can be exploited with virtually no exhaust emitted into the atmosphere except water. Hydrogen can be produced renewably through common biological processes related to anaerobic digestion, a well-established technology in use at farm scale for treating different biomasses and residues. Although two-stage hydrogen- and methane-producing fermentation is a simple variant of traditional anaerobic digestion, it is a relatively new approach studied mainly at laboratory scale. It is based on biomass fermentation in two separate, sequential stages, each maintaining conditions optimized to promote specific bacterial consortia: in the first, acidophilic reactor hydrogen is produced, while the volatile fatty acid-rich effluent is sent to the second reactor, where traditional methane-rich biogas production is accomplished. A two-stage pilot-scale plant was designed, manufactured and installed at the experimental farm of the University of Milano and operated using a mixture of livestock effluents and sugar/starch-rich residues (rotten fruits and potatoes and expired fruit juices), a feedstock based on waste biomasses directly available in the rural area where the plant is installed. The hydrogenic and methanogenic reactors, both of CSTR type, had total volumes of 0.7 m3 and 3.8 m3 respectively, were operated in thermophilic conditions (55 ± 2 °C) without any external pH control, and were fully automated. After a brief description of the requirements of the system, this contribution gives a detailed description of its components and of the engineering solutions to the problems encountered during plant realization and start-up.
The paper also discusses the results obtained in a first experimental run, which led to production in the range of previous

  4. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  5. Teaching basic life support with an automated external defibrillator using the two-stage or the four-stage teaching technique.

    Science.gov (United States)

    Bjørnshave, Katrine; Krogh, Lise Q; Hansen, Svend B; Nebsbjerg, Mette A; Thim, Troels; Løfgren, Bo

    2018-02-01

    Laypersons often hesitate to perform basic life support (BLS) and use an automated external defibrillator (AED) because of a self-perceived lack of knowledge and skills. Training may reduce the barrier to intervene, and reduced training time and costs may allow more laypersons to be trained. The aim of this study was to compare BLS/AED skill acquisition and self-evaluated BLS/AED skills after instructor-led training with a two-stage versus a four-stage teaching technique. Laypersons were randomized to either two-stage or four-stage teaching technique courses. Immediately after training, the participants were tested in a simulated cardiac arrest scenario to assess their BLS/AED skills. Skills were assessed using the European Resuscitation Council BLS/AED assessment form. The primary endpoint was passing the test (17 of 17 skills adequately performed). A prespecified noninferiority margin of 20% was used. The two-stage teaching technique (n=72, pass rate 57%) was noninferior to the four-stage technique (n=70, pass rate 59%), with a difference in pass rates of -2% (95% confidence interval: -18 to 15%). There were also no significant differences between the two-stage and four-stage groups in chest compression rate (114±12 vs. 115±14/min), chest compression depth (47±9 vs. 48±9 mm) or number of sufficient rescue breaths between compression cycles (1.7±0.5 vs. 1.6±0.7). In both groups, all participants believed that their training had improved their skills. Teaching laypersons BLS/AED using the two-stage teaching technique was thus noninferior to the four-stage teaching technique, with a pass-rate difference of -2% (95% confidence interval: -18 to 15%).
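The noninferiority claim above rests on a confidence interval for the difference of two proportions. A minimal sketch in Python, assuming a simple Wald interval and pass counts (41/72 and 41/70) reconstructed approximately from the reported percentages, so the numbers are illustrative rather than the study's exact data:

```python
import math

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% confidence interval for the difference of two proportions p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

def noninferior(ci_lower, margin=0.20):
    """Noninferiority holds when the CI lower bound stays above -margin."""
    return ci_lower > -margin

# Pass counts reconstructed from the reported rates: 57% of 72 and 59% of 70.
diff, lower, upper = prop_diff_ci(41, 72, 41, 70)
```

With these counts the interval comes out close to the reported -2% (-18 to 15%), and the lower bound clears the prespecified -20% margin.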

  6. A web-based Tamsui River flood early-warning system with correction of real-time water stage using monitoring data

    Science.gov (United States)

    Liao, H. Y.; Lin, Y. J.; Chang, H. K.; Shang, R. K.; Kuo, H. C.; Lai, J. S.; Tan, Y. C.

    2017-12-01

    Taiwan encounters heavy rainfall frequently, with three to four typhoons striking the island every year. To provide lead time for reducing flood damage, this study attempts to build a flood early-warning system (FEWS) for the Tamsui River using time-series correction techniques. Predicted rainfall is used as input to a rainfall-runoff model, and the discharges calculated by the rainfall-runoff model are passed to a 1-D river routing model, which outputs simulated water stages at 487 cross sections for the next 48 h. The downstream water stage at the estuary in the 1-D river routing model is provided by a storm surge simulation. Next, the water stages at the 487 cross sections are corrected by a time-series model, such as an autoregressive (AR) model, using real-time water stage measurements to improve prediction accuracy. The simulated water stages are displayed on a web-based platform. In addition, the models can be run remotely by any user with a web browser through a user interface. On-line video surveillance images, real-time water stage measurements and rainfalls are also shown on this platform. If a simulated water stage exceeds the embankments of the Tamsui River, the alerting lights of the FEWS flash on the screen. The platform runs periodically and automatically to generate graphical flood water stage simulations for flood disaster prevention and decision making.
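The correction step described above (a time-series model on the gap between observed and simulated stages) can be sketched as follows. The AR(1) form, the variable names and the numbers are illustrative assumptions, not the paper's implementation:

```python
def ar1_coefficient(errors):
    """Least-squares AR(1) coefficient phi in e[t] ~ phi * e[t-1],
    fitted on past (observed - simulated) water-stage errors."""
    num = sum(errors[t] * errors[t - 1] for t in range(1, len(errors)))
    den = sum(e * e for e in errors[:-1])
    return num / den if den else 0.0

def corrected_stages(simulated, last_error, phi):
    """Add an AR(1)-propagated error forecast to the raw model forecast."""
    out, err = [], last_error
    for stage in simulated:
        err = phi * err          # the error forecast decays by phi each step
        out.append(stage + err)
    return out
```

With a geometric error history the fit recovers phi exactly, and the correction shrinks toward the raw forecast as the lead time grows, which is the qualitative behaviour one wants from a real-time updater.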

  7. Forward conditioning with wheel running causes place aversion in rats.

    Science.gov (United States)

    Masaki, Takahisa; Nakajima, Sadahiko

    2008-09-01

    Backward pairings of a distinctive chamber as a conditioned stimulus and wheel running as an unconditioned stimulus (i.e., running-then-chamber) can produce a conditioned place preference in rats. The present study explored whether a forward conditioning procedure with these stimuli (i.e., chamber-then-running) would yield place preference or aversion. Confinement of a rat in one of two distinctive chambers was followed by a 20- or 60-min running opportunity, but confinement in the other was not. After four repetitions of this treatment (i.e., differential conditioning), a choice preference test was given in which the rat had free access to both chambers. This choice test showed that the rats given 60-min running opportunities spent less time in the running-paired chamber than in the unpaired chamber. Namely, a 60-min running opportunity after confinement in a distinctive chamber caused conditioned aversion to that chamber after four paired trials. This result was discussed with regard to the opponent-process theory of motivation.

  8. Study for process and equipment design of wet gelation stages in vibropacking process

    International Nuclear Information System (INIS)

    Tanimoto, Ryoji; Kikuchi, Toshiaki; Tanaka, Hirokazu; Amino, Masaki; Yanai, Minoru

    2004-02-01

    Process and layout design of the external wet gelation stages in the vibropacking process was examined for the feasibility study of a commercialized FBR cycle system. This study covered the following process stages of the oxide core fuel production line: solidification, washing, drying, calcination, reduction and sintering, including interim storage of sintered particles and the reagent recovery stage. The main results obtained by this study are as follows: (1) Based on the process examination results obtained previously, the process flow, mass balance and number of production lines/equipment were clarified. The process covers everything from the receiving tank for feed solution to the interim storage equipment. The reagent recovery process flow and mass balance were also clarified, and the preliminary design of the main equipment was reexamined. (2) The normal operation procedure and the procedure after process failure were summarized, along with a remote automated operation procedure. The operation sequence of each production line was mapped out using a time chart. (3) The design outline of the reagent recovery equipment, installed to recover waste liquid from the wet gelation stages in view of environmental impact, was examined. Effective techniques, such as solvent collection and residue waste treatment methods, were examined for applicability and selected. A schematic block diagram was presented. (4) Analytical items and analyzing apparatus were identified, taking into account quality control and process management. Sampling positions and sampling frequency were also examined. (5) A schematic layout drawing of the main manufacturing process and the reagent recovery process was presented, taking into account material handling. (6) The operating rate at each process stage was examined by analyzing the failure-rate reliability of each component. Applying the reliability-centred maintenance (RCM) method, the operating rate was evaluated and the annual maintenance period was estimated using

  9. Production of endo-pectate lyase by two stage cultivation of Erwinia carotovora

    Energy Technology Data Exchange (ETDEWEB)

    Fukuoka, Satoshi; Kobayashi, Yoshiaki

    1987-02-26

    The productivity of endo-pectate lyase from Erwinia carotovora GIR 1044 was found to be greatly improved by two stage cultivation: in the first stage the bacterium was grown with an inducing carbon source, e.g., pectin, and in the second stage it was cultivated with glycerol, xylose, or fructose with the addition of monosodium L-glutamate as nitrogen source. In the two stage cultivation using pectin or glycerol as the carbon source the enzyme activity reached 400 units/ml, almost 3 times as much as that of one stage cultivation in a 10 liter fermentor. Using two stage cultivation in a 200 liter fermentor improved enzyme productivity over that in the 10 liter fermentor, with 500 units/ml of activity. Compared with cultivation in Erlenmeyer flasks, fermentor cultivation improved enzyme productivity. The optimum cultivating conditions were agitation at 480 rpm with aeration of 0.5 vvm at 28 °C. (4 figs, 4 tabs, 14 refs)

  10. Two-stage heterotrophic and phototrophic culture strategy for algal biomass and lipid production.

    Science.gov (United States)

    Zheng, Yubin; Chi, Zhanyou; Lucker, Ben; Chen, Shulin

    2012-01-01

    A two-stage heterotrophic and phototrophic culture strategy for algal biomass and lipid production was studied, wherein high-density heterotrophic cultures of Chlorella sorokiniana serve as seed for subsequent phototrophic growth. The data showed that the growth rate, cell density and productivity of heterotrophic C. sorokiniana were 3.0, 3.3 and 7.4 times higher than those of the phototrophic counterpart, respectively. Heterotrophic and phototrophic algal seeds had similar biomass/lipid production and fatty acid profiles when inoculated into the phototrophic culture system. To expand the application, food waste and wastewater were tested as feedstocks for heterotrophic growth and successfully supported cell growth. These results demonstrate the advantages of using heterotrophic algal cells as seeds for open algae culture systems. Additionally, a high inoculation rate of heterotrophic algal seed can be utilized as an effective method for contamination control. This two-stage heterotrophic-phototrophic process promises a more efficient way to produce algal biomass and biofuels at large scale. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Relationship between running kinematic changes and time limit at vVO2max

    Directory of Open Access Journals (Sweden)

    Leonardo De Lucca

    2012-06-01

    Exhaustive running at maximal oxygen uptake velocity (vVO2max) can alter running kinematic parameters and increase energy cost over time. The aims of the present study were to compare characteristics of ankle and knee kinematics during running at vVO2max and to verify the relationship between changes in kinematic variables and time limit (Tlim). Eleven male volunteers, recreational players of team sports, performed an incremental running test until volitional exhaustion to determine vVO2max and a constant velocity test at vVO2max. Subjects were filmed continuously from the left sagittal plane at 210 Hz for further kinematic analysis. The maximal plantar flexion during swing (p<0.01) was the only variable that increased significantly from beginning to end of the run. The increase in ankle angle at contact was the only variable related to Tlim (r=0.64; p=0.035) and explained 34% of the performance in the test. These findings suggest that the individuals under study maintained a stable running style at vVO2max and that the increase in plantar flexion explained the performance in this test when it was applied to non-runners.

  12. Aggregated channels network for real-time pedestrian detection

    Science.gov (United States)

    Ghorban, Farzin; Marín, Javier; Su, Yu; Colombo, Alessandro; Kummert, Anton

    2018-04-01

    Convolutional neural networks (CNNs) have demonstrated their superiority in numerous computer vision tasks, yet their computational cost is prohibitive for many real-time applications such as pedestrian detection, which is usually performed on low-consumption hardware. To alleviate this drawback, most strategies use a two-stage cascade approach: in the first stage a fast method generates a significant but reduced number of high-quality proposals that are then, in the second stage, evaluated by the CNN. In this work, we propose a novel detection pipeline that further benefits from the two-stage cascade strategy. More concretely, the enriched and subsequently compressed features used in the first stage are reused as the CNN input. As a consequence, a simpler network architecture, adapted to such small input sizes, makes it possible to achieve real-time performance and obtain results close to the state-of-the-art while running significantly faster without the use of a GPU. In particular, considering that the proposed pipeline runs at frame rate, the achieved performance is highly competitive. We furthermore demonstrate that the proposed pipeline can on its own serve as an effective proposal generator.
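The two-stage cascade idea (a cheap scorer prunes proposals so that only survivors pay for the expensive CNN evaluation) reduces to a few lines of control flow. The scorers and thresholds below are toy stand-ins, not the paper's detector:

```python
def two_stage_cascade(proposals, cheap_score, expensive_score,
                      cheap_threshold, final_threshold):
    """Run the fast first-stage scorer on every proposal, then apply the
    expensive second-stage scorer only to the survivors."""
    survivors = [p for p in proposals if cheap_score(p) >= cheap_threshold]
    return [p for p in survivors if expensive_score(p) >= final_threshold]
```

The speed-up comes entirely from the first filter: if it rejects most candidates, the expensive function runs on only a small fraction of the input, which is why the cascade is attractive on hardware without a GPU.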

  13. Results from the two-tower run of the Cryogenic Dark Matter Search

    Energy Technology Data Exchange (ETDEWEB)

    Reisetter, Angela Jean [Minnesota U.

    2005-01-01

    The Cryogenic Dark Matter Search has completed two runs at the Soudan Underground Laboratory. In the second, two towers of detectors were operated from March to August 2004. CDMS used Ge and Si ZIP (Z-sensitive, Ionization, and Phonon) detectors, operated at 50 mK, to look for Weakly Interacting Massive Particles (WIMPs), which may make up most of the dark matter in our universe. These detectors are surrounded by lead and polyethylene shielding as well as an active muon veto. These shields, together with the overburden of Soudan rock, provide a low-background environment for the detectors. The ZIP detectors record the ratio of ionization signal to phonon signal to discriminate between nuclear recoils, characteristic of WIMPs and neutrons, and electron recoils, characteristic of gamma and beta backgrounds. They also provide timing information from the four phonon channels that is used to reject surface events, for which ionization collection is poor. A blind analysis, defined using calibration data taken in situ throughout the run, provides a definition of the WIMP signal region by rejecting backgrounds. This analysis, applied to the WIMP search data, gives a limit on the spin-independent WIMP-nucleon cross-section that is an order of magnitude lower than any other experiment has published.
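The recoil discrimination described above compares the ionization signal to the phonon signal. A schematic band cut is sketched below; the band limits are purely illustrative (the real analysis uses calibrated, energy-dependent bands and additional timing cuts):

```python
def ionization_yield(ionization_kev, phonon_kev):
    """Ratio of ionization to phonon (recoil) energy."""
    return ionization_kev / phonon_kev

def looks_like_nuclear_recoil(ionization_kev, phonon_kev, band=(0.1, 0.5)):
    """Nuclear recoils (WIMP/neutron candidates) produce less ionization
    per unit phonon energy than electron recoils (gamma/beta backgrounds),
    so events falling in a low-yield band are candidate signal events."""
    low, high = band
    return low <= ionization_yield(ionization_kev, phonon_kev) <= high
```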

  14. A stage-wise approach to exploring performance effects of cycle time reduction

    NARCIS (Netherlands)

    Eling, K.; Langerak, F.; Griffin, A.

    2013-01-01

    Research on reducing new product development (NPD) cycle time has shown that firms tend to adopt different cycle time reduction mechanisms for different process stages. However, the vast majority of previous studies investigating the relationship between new product performance and NPD cycle time

  15. Removal of cesium from simulated liquid waste with countercurrent two-stage adsorption followed by microfiltration.

    Science.gov (United States)

    Han, Fei; Zhang, Guang-Hui; Gu, Ping

    2012-07-30

    Copper ferrocyanide (CuFC) was used as an adsorbent to remove cesium. Jar test results showed that the adsorption capacity of CuFC was better than that of potassium zinc hexacyanoferrate. Lab-scale tests were performed with an adsorption-microfiltration process, and the mean decontamination factor (DF) was 463 when the initial cesium concentration was 101.3 μg/L, the dosage of CuFC was 40 mg/L and the adsorption time was 20 min. The cesium concentration in the effluent continuously decreased with operation time, which indicated that the used adsorbent retained adsorption capacity. To exploit this capacity, experiments on a countercurrent two-stage adsorption (CTA)-microfiltration (MF) process were carried out with CuFC adsorption combined with membrane separation. A calculation method for determining the cesium concentration in the effluent was given, and batch tests in a pressure cup were performed to verify the calculation method. The results showed that the experimental values fitted well with the calculated values in the CTA-MF process. The mean DF was 1123 when the dilution factor was 0.4, the initial cesium concentration was 98.75 μg/L and the dosage of CuFC and adsorption time were the same as those used in the lab-scale test. The DF obtained by the CTA-MF process was more than three times higher than that of single-stage adsorption in the jar test. Copyright © 2012 Elsevier B.V. All rights reserved.
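The abstract mentions a calculation method for the effluent concentration without reproducing it. The sketch below shows the general shape of such a mass balance for a countercurrent two-stage contactor, assuming a linear equilibrium isotherm q = k·c; the isotherm form and the value of k are illustrative assumptions, not the paper's model:

```python
def countercurrent_two_stage(c_feed, dose, k, tol=1e-12, max_iter=1000):
    """Fixed-point solution of a countercurrent two-stage contactor.

    Per-stage mass balance (per unit volume of water), with equilibrium
    loading q_out = k * c_out:
        c_in + dose * q_in = c_out + dose * q_out
    Feed water enters stage 1 and meets adsorbent already loaded in
    stage 2; fresh adsorbent (q_in = 0) enters stage 2.
    """
    q_from_2 = 0.0
    c1 = c2 = c_feed
    for _ in range(max_iter):
        c1 = (c_feed + dose * q_from_2) / (1.0 + k * dose)
        c2 = c1 / (1.0 + k * dose)        # fresh adsorbent in stage 2
        q_new = k * c2
        if abs(q_new - q_from_2) < tol:
            break
        q_from_2 = q_new
    return c1, c2

def decontamination_factor(c_in, c_out):
    return c_in / c_out
```

With k chosen so that a single stage alone gives DF = 5, the countercurrent scheme gives DF = 21, which illustrates qualitatively why the paper's two-stage DF beats the single-stage jar test by more than a factor of three.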

  16. Characteristics of bio-oil from the pyrolysis of palm kernel shell in a newly developed two-stage pyrolyzer

    International Nuclear Information System (INIS)

    Oh, Seung-Jin; Choi, Gyung-Goo; Kim, Joo-Sik

    2016-01-01

    Pyrolysis of palm kernel shell (PKS) was performed using a two-stage pyrolyzer consisting of an auger reactor and a fluidized bed reactor, with the auger reactor temperature in the range of ∼290–380 °C, the fluidized bed reactor temperature at ∼520 °C, and a variable residence time of the feed material in the auger reactor. The highest bio-oil yield of the two-stage pyrolysis was ∼56 wt%. The bio-oil derived from the auger reactor contained degradation products of the hemicelluloses of PKS, such as acetic acid and furfural, whereas the fluidized bed reactor produced a bio-oil with high concentrations of acetic acid and phenol. The auger reactor temperature and the residence time of PKS in the auger reactor influenced the acetic acid concentration in the bio-oil, while changing them did not induce an observable trend in the phenol concentration of the bio-oil derived from the fluidized bed reactor. The maximum concentrations of acetic acid and phenol in bio-oil were ∼78 and ∼12 wt% (dry basis), respectively. As a result, the two-stage pyrolyzer could separately produce two different bio-oils in one operation without any costly fractionation of the bio-oils. - Highlights: • The two-stage pyrolyzer is composed of an auger and a fluidized bed reactor. • The two-stage pyrolyzer produced two different bio-oils in a single operation. • The maximum bio-oil yield of the two-stage pyrolysis was ∼56 wt%. • The maximum concentration of acetic acid in bio-oil was ∼78 wt% (dry basis). • The maximum concentration of phenol in bio-oil was ∼12 wt% (dry basis).

  17. QUICKGUN: An algorithm for estimating the performance of two-stage light gas guns

    International Nuclear Information System (INIS)

    Milora, S.L.; Combs, S.K.; Gouge, M.J.; Kincaid, R.W.

    1990-09-01

    An approximate method is described for solving the equation of motion of a projectile accelerated by a two-stage light gas gun that uses high-pressure (<100 bar) gas from a storage reservoir to drive a piston to moderate speed (<400 m/s) for the purpose of compressing the low-molecular-weight propellant gas (hydrogen or helium) to high pressure (1000 to 10,000 bar) and temperature (1000 to 10,000 K). Zero-dimensional, adiabatic (isentropic) processes are used to describe the time dependence of the ideal-gas thermodynamic properties of the storage reservoir and of the first and second stages of the system. A one-dimensional model based on an approximate method of characteristics, or wave diagram analysis, for flow with friction (nonisentropic) is used to describe the nonsteady compressible flow processes in the launch tube. Linear approximations are used for the characteristic and fluid particle trajectories by averaging the values of the flow parameters at the breech and at the base of the projectile. An assumed functional form for the Mach number at the breech provides the necessary boundary condition. Results of the calculation are compared with data obtained from two-stage light gas gun experiments at Oak Ridge National Laboratory for solid deuterium and nylon projectiles with masses ranging from 10 to 35 mg and projectile speeds between 1.6 and 4.5 km/s. The predicted and measured velocities generally agree to within 15%. 19 refs., 3 figs., 2 tabs
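The zero-dimensional adiabatic treatment of the stages boils down to the ideal-gas isentropic relations. A sketch, with an illustrative compression ratio and initial state (not values from the paper):

```python
def isentropic_compression(p1_bar, t1_kelvin, volume_ratio, gamma):
    """Ideal-gas isentropic relations: P*V**gamma and T*V**(gamma-1) are
    invariant, so compressing by volume_ratio = V1/V2 gives the final
    pressure and temperature directly from the initial state."""
    p2 = p1_bar * volume_ratio ** gamma
    t2 = t1_kelvin * volume_ratio ** (gamma - 1.0)
    return p2, t2
```

For example, a 100:1 compression of hydrogen (gamma ≈ 1.4) from 10 bar and 300 K lands near 6300 bar and 1900 K, inside the 1000 to 10,000 bar and 1000 to 10,000 K windows quoted above.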

  18. How should I regulate my emotions if I want to run faster?

    Science.gov (United States)

    Lane, Andrew M; Devonport, Tracey J; Friesen, Andrew P; Beedie, Christopher J; Fullerton, Christopher L; Stanley, Damian M

    2016-01-01

    The present study investigated the effects of emotion regulation strategies on self-reported emotions and 1600 m track running performance. In stage 1 of a three-stage study, participants (N = 15) reported emotional states associated with best, worst and ideal performance. Results indicated that the best and ideal emotional state for performance was composed of feeling happy, calm, energetic and moderately anxious, whereas the worst emotional state for performance was composed of feeling downhearted, sluggish and highly anxious. In stage 2, emotion regulation interventions were developed using online material and supported by electronic feedback. One intervention motivated participants to increase the intensity of unpleasant emotions (e.g. feel more angry and anxious). A second intervention motivated participants to reduce the intensity of unpleasant emotions (e.g. feel less angry and anxious). In stage 3, using a repeated measures design, participants used each intervention before running a 1600 m time trial. Data were compared with a no-treatment control condition. The intervention designed to increase the intensity of unpleasant emotions resulted in higher anxiety and lower calmness scores but no significant effects on 1600 m running time. The intervention designed to reduce the intensity of unpleasant emotions was associated with significantly slower times for the first 400 m. We suggest that future research should investigate emotion regulation, emotion and performance using quasi-experimental methods with performance measures that are meaningful to participants.

  19. Effect of treadmill versus overground running on the structure of variability of stride timing.

    Science.gov (United States)

    Lindsay, Timothy R; Noakes, Timothy D; McGregor, Stephen J

    2014-04-01

    Gait timing dynamics of treadmill and overground running were compared. Nine trained runners ran treadmill and track trials at 80, 100, and 120% of preferred pace for 8 min each. Stride time series were generated for each trial. To each series, detrended fluctuation analysis (DFA), power spectral density (PSD) analysis, and multiscale entropy (MSE) analysis were applied to infer the regime of control along the randomness-regularity axis. Compared to overground running, treadmill running exhibited higher DFA and PSD scaling exponents, as well as lower entropy at non-preferred speeds. This indicates more ordered control during treadmill running, especially at non-preferred speeds. The results suggest that the treadmill itself imposes greater constraints and requires increased voluntary control. Thus, the quantification of treadmill running gait dynamics does not necessarily reflect movement in overground settings.
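Of the three measures, DFA is the most compact to illustrate. A minimal pure-Python sketch of first-order DFA on a stride-time series follows; the window sizes and all implementation details are illustrative, and published implementations differ in detrending order and window spacing:

```python
import math

def _detrended_mse(y):
    """Mean squared residual of a least-squares line fitted to y."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    sxx = sum((x - xbar) ** 2 for x in range(n))
    sxy = sum((x - xbar) * (v - ybar) for x, v in enumerate(y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    return sum((v - (intercept + slope * x)) ** 2
               for x, v in enumerate(y)) / n

def dfa_exponent(series, window_sizes=(4, 8, 16, 32)):
    """First-order DFA: slope of log F(n) versus log n, where F(n) is the
    RMS fluctuation of the integrated series after per-window detrending."""
    mean = sum(series) / len(series)
    profile, running = [], 0.0
    for v in series:                     # integrated, mean-subtracted profile
        running += v - mean
        profile.append(running)
    points = []
    for n in window_sizes:
        mses = [_detrended_mse(profile[i:i + n])
                for i in range(0, len(profile) - n + 1, n)]
        points.append((math.log(n), 0.5 * math.log(sum(mses) / len(mses))))
    # least-squares slope through the (log n, log F) points
    xb = sum(x for x, _ in points) / len(points)
    yb = sum(y for _, y in points) / len(points)
    return (sum((x - xb) * (y - yb) for x, y in points) /
            sum((x - xb) ** 2 for x, _ in points))
```

Uncorrelated series give an exponent near 0.5, strongly trended series approach 2, and anti-persistent series fall toward 0, which is the axis the abstract uses to separate treadmill from overground control.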

  20. Preferential processing of self-relevant stimuli occurs mainly at the perceptual and conscious stages of information processing.

    Science.gov (United States)

    Tacikowski, P; Ehrsson, H H

    2016-04-01

    Self-related stimuli, such as one's own name or face, are processed faster and more accurately than other types of stimuli. However, what remains unknown is at which stage of the information processing hierarchy this preferential processing occurs. Our first aim was to determine whether preferential self-processing involves mainly perceptual stages or also post-perceptual stages. We found that self-related priming was stronger than other-related priming only because of perceptual prime-target congruency. Our second aim was to dissociate the role of conscious and unconscious factors in preferential self-processing. To this end, we compared the "self" and "other" conditions in trials where primes were masked or unmasked. In two separate experiments, we found that self-related priming was stronger than other-related priming but only in the unmasked trials. Together, our results suggest that preferential access to the self-concept occurs mainly at the perceptual and conscious stages of the stimulus processing hierarchy. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  1. Repetitive, small-bore two-stage light gas gun

    International Nuclear Information System (INIS)

    Combs, S.K.; Foust, C.R.; Fehling, D.T.; Gouge, M.J.; Milora, S.L.

    1991-01-01

    A repetitive two-stage light gas gun for high-speed pellet injection has been developed at Oak Ridge National Laboratory. In general, applications of the two-stage light gas gun have been limited to only single shots, with a finite time (at least minutes) needed for recovery and preparation for the next shot. The new device overcomes problems associated with repetitive operation, including rapidly evacuating the propellant gases, reloading the gun breech with a new projectile, returning the piston to its initial position, and refilling the first- and second-stage gas volumes to the appropriate pressure levels. In addition, some components are subjected to and must survive severe operating conditions, which include rapid cycling to high pressures and temperatures (up to thousands of bars and thousands of kelvins) and significant mechanical shocks. Small plastic projectiles (4-mm nominal size) and helium gas have been used in the prototype device, which was equipped with a 1-m-long pump tube and a 1-m-long gun barrel, to demonstrate repetitive operation (up to 1 Hz) at relatively high pellet velocities (up to 3000 m/s). The equipment is described, and experimental results are presented. 124 refs., 6 figs., 5 tabs

  2. Anaerobic mesophilic co-digestion of ensiled sorghum, cheese whey and liquid cow manure in a two-stage CSTR system: Effect of hydraulic retention time.

    Science.gov (United States)

    Dareioti, Margarita Andreas; Kornaros, Michael

    2015-01-01

    The aim of this study was to investigate the effect of hydraulic retention time (HRT) on hydrogen and methane production using a two-stage anaerobic process. Two continuously stirred tank reactors (CSTRs) were used under mesophilic conditions (37 °C) in order to enhance acidogenesis and methanogenesis. A mixture of pretreated ensiled sorghum, cheese whey and liquid cow manure (55:40:5, v/v/v) was used. The acidogenic reactor was operated at six different HRTs of 5, 3, 2, 1, 0.75 and 0.5 d, under controlled pH 5.5, whereas the methanogenic reactor was operated at three HRTs of 24, 16 and 12 d. The maximum H2 productivity (2.14 L/(LR·d)) and maximum H2 yield (0.70 mol H2/mol carbohydrates consumed) were observed at 0.5 d HRT. On the other hand, the maximum CH4 production rate of 0.90 L/(LR·d) was achieved at an HRT of 16 d, whereas at lower HRTs the process appeared to be inhibited and/or overloaded. Copyright © 2014 Elsevier Ltd. All rights reserved.
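The operating variables above are tied together by the definition HRT = V/Q. Two trivial helpers make the unit bookkeeping explicit; the reactor volume used in the check below is illustrative, not the paper's:

```python
def flow_for_hrt(reactor_volume_l, hrt_days):
    """Feed flow rate (L/d) that realises a target hydraulic retention
    time, from HRT = V / Q."""
    return reactor_volume_l / hrt_days

def volumetric_productivity(gas_l_per_day, reactor_volume_l):
    """Gas production normalised per litre of reactor per day (L/(LR*d)),
    the unit used for the H2 and CH4 rates quoted above."""
    return gas_l_per_day / reactor_volume_l
```

Shortening the HRT from 5 d to 0.5 d at fixed volume therefore means a tenfold increase in feed flow, which is why very short HRTs risk overloading the methanogenic stage.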

  3. Two-stage, high power X-band amplifier experiment

    International Nuclear Information System (INIS)

    Kuang, E.; Davis, T.J.; Ivers, J.D.; Kerslick, G.S.; Nation, J.A.; Schaechter, L.

    1993-01-01

    At output powers in excess of 100 MW the authors have noted the development of sidebands in many TWT structures. To address this problem, an experiment using a narrow-bandwidth, two-stage TWT is in progress. The TWT amplifier consists of a dielectric (ε = 5) slow-wave structure, a 30 dB sever section and an 8.8-9.0 GHz passband periodic, metallic structure. The electron beam used in this experiment is a 950 kV, 1 kA, 50 ns pencil beam propagating along an applied axial field of 9 kG. The dielectric first stage has a maximum gain of 30 dB measured at 8.87 GHz, with output powers of up to 50 MW in the TM01 mode. In these experiments the dielectric amplifier output power is about 3-5 MW and the output power of the complete two-stage device is ∼160 MW at the input frequency. The sidebands detected in earlier experiments have been eliminated. The authors also report measurements of the energy spread of the electron beam resulting from the amplification process. These experimental results are compared with MAGIC code simulations and analytic work they have carried out on such devices.

  4. Second stage gasifier in staged gasification and integrated process

    Science.gov (United States)

    Liu, Guohai; Vimalchand, Pannalal; Peng, Wan Wang

    2015-10-06

    A second stage gasification unit in a staged gasification integrated process flow scheme and operating methods are disclosed to gasify a wide range of low reactivity fuels. The inclusion of a second stage gasification unit operating at high temperatures, closer to the ash fusion temperatures in the bed, provides sufficient flexibility in unit configurations, operating conditions and methods to achieve an overall carbon conversion of over 95% for low reactivity materials such as bituminous and anthracite coals, petroleum residues and coke. The second stage gasification unit includes a stationary fluidized bed gasifier operating with a sufficiently turbulent bed of predefined inert bed material with lean char carbon content. The second stage gasifier fluidized bed is operated at relatively high temperatures, up to 1400 °C. A steam and oxidant mixture can be injected to further increase the freeboard region operating temperature to a range of approximately 50 to 100 °C above the bed temperature.

  5. An adaptive two-stage analog/regression model for probabilistic prediction of small-scale precipitation in France

    Science.gov (United States)

    Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine

    2018-01-01

    Statistical downscaling models (SDMs) are often used to produce local weather scenarios from large-scale atmospheric information. SDMs include transfer functions which are based on a statistical link identified from observations between local weather and a set of large-scale predictors. As physical processes driving surface weather vary in time, the most relevant predictors and the regression link are likely to vary in time too. This is well known for precipitation, for instance, and the link is thus often estimated after some seasonal stratification of the data. In this study, we present a two-stage analog/regression model where the regression link is estimated from atmospheric analogs of the current prediction day. Atmospheric analogs are identified from fields of geopotential heights at 1000 and 500 hPa. For the regression stage, two generalized linear models are further used to model the probability of precipitation occurrence and the distribution of non-zero precipitation amounts, respectively. The two-stage model is evaluated for the probabilistic prediction of small-scale precipitation over France. It noticeably improves the skill of the prediction for both precipitation occurrence and amount. As the analog days vary from one prediction day to another, the atmospheric predictors selected in the regression stage and the value of the corresponding regression coefficients can vary from one prediction day to another. The model thus allows for day-to-day adaptive and tailored downscaling. It can also reveal specific predictors for peculiar and non-frequent weather configurations.
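
    A toy sketch of the two-stage idea on synthetic data: stage 1 selects atmospheric analogs by Euclidean distance on the predictor fields; stage 2 is simplified here to empirical estimates from the analog sample (occurrence frequency and mean wet-day amount) rather than the paper's two fitted GLMs. All data and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic archive: 500 days of large-scale predictors (e.g. geopotential
# height anomalies at 1000 and 500 hPa, flattened to a feature vector) and
# local daily precipitation, with ~60% dry days.
archive_fields = rng.normal(size=(500, 8))
archive_precip = np.where(rng.random(500) < 0.6, 0.0,
                          rng.gamma(shape=2.0, scale=3.0, size=500))

def analog_prediction(target_field, k=50):
    """Stage 1: select the k nearest atmospheric analogs of the target day.
    Stage 2 (simplified): estimate precipitation occurrence probability and
    mean non-zero amount from the analog sample."""
    dist = np.linalg.norm(archive_fields - target_field, axis=1)
    analogs = np.argsort(dist)[:k]
    wet = archive_precip[analogs] > 0.0
    p_occ = wet.mean()
    mean_amount = archive_precip[analogs][wet].mean() if wet.any() else 0.0
    return p_occ, mean_amount

p, amt = analog_prediction(rng.normal(size=8))
print(f"P(precip) = {p:.2f}, mean wet-day amount = {amt:.1f} mm")
```

    In the paper's full model, the analog sample instead serves as the training set for a logistic occurrence model and an amount model refitted for each prediction day, which is what makes the downscaling adaptive.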

  6. Effective conversion of maize straw wastes into bio-hydrogen by two-stage process integrating H2 fermentation and MECs.

    Science.gov (United States)

    Li, Yan-Hong; Bai, Yan-Xia; Pan, Chun-Mei; Li, Wei-Wei; Zheng, Hui-Qin; Zhang, Jing-Nan; Fan, Yao-Ting; Hou, Hong-Wei

    2015-12-01

    The enhanced H2 production from maize straw was achieved through a two-stage process integrating H2 fermentation and microbial electrolysis cells (MECs) in the present work. Several key parameters affecting hydrolysis of maize straw through subcritical H2O were optimized by orthogonal design for saccharification of maize straw followed by H2 production through H2 fermentation. The maximum reducing sugar (RS) content of maize straw reached 469.7 mg/g-TS under the optimal hydrolysis condition with subcritical H2O combined with dilute HCl of 0.3% at 230 °C. The maximum H2 yield, H2 production rate, and H2 content were 115.1 mL/g-TVS, 2.6 mL/g-TVS/h, and 48.9% by H2 fermentation, respectively. In addition, the effluent from H2 fermentation was used as feedstock of MECs for additional H2 production. The maximum H2 yield of 1060 mL/g-COD appeared at an applied voltage of 0.8 V, and total COD removal reached about 35%. The overall H2 yield from maize straw reached 318.5 mL/g-TVS through the two-stage process. The structural characterization of maize straw was also carefully investigated by scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), and X-ray diffraction (XRD) spectra.

  7. Similar Running Economy With Different Running Patterns Along the Aerial-Terrestrial Continuum.

    Science.gov (United States)

    Lussiana, Thibault; Gindre, Cyrille; Hébert-Losier, Kim; Sagawa, Yoshimasa; Gimenez, Philippe; Mourot, Laurent

    2017-04-01

    No unique or ideal running pattern is the most economical for all runners. Classifying the global running patterns of individuals into 2 categories (aerial and terrestrial) using the Volodalen method could permit a better understanding of the relationship between running economy (RE) and biomechanics. The main purpose was to compare the RE of aerial and terrestrial runners. Two coaches classified 58 runners into aerial (n = 29) or terrestrial (n = 29) running patterns on the basis of visual observations. RE, muscle activity, kinematics, and spatiotemporal parameters of both groups were measured during a 5-min run at 12 km/h on a treadmill. Maximal oxygen uptake (V̇O2max) and peak treadmill speed (PTS) were assessed during an incremental running test. No differences were observed between aerial and terrestrial patterns for RE, V̇O2max, and PTS. However, at 12 km/h, aerial runners exhibited earlier gastrocnemius lateralis activation in preparation for contact, less dorsiflexion at ground contact, higher coactivation indexes, and greater leg stiffness during stance phase than terrestrial runners. Terrestrial runners had more pronounced semitendinosus activation at the start and end of the running cycle, shorter flight time, greater leg compression, and a more rear-foot strike. Different running patterns were associated with similar RE. Aerial runners appear to rely more on elastic energy utilization with a rapid eccentric-concentric coupling time, whereas terrestrial runners appear to propel the body more forward rather than upward to limit work against gravity. Excluding runners with a mixed running pattern from analyses did not affect study interpretation.

  8. What makes us think? A three-stage dual-process model of analytic engagement.

    Science.gov (United States)

    Pennycook, Gordon; Fugelsang, Jonathan A; Koehler, Derek J

    2015-08-01

    The distinction between intuitive and analytic thinking is common in psychology. However, while often being quite clear on the characteristics of the two processes ('Type 1' processes are fast, autonomous, intuitive, etc. and 'Type 2' processes are slow, deliberative, analytic, etc.), dual-process theorists have been heavily criticized for being unclear on the factors that determine when an individual will think analytically or rely on their intuition. We address this issue by introducing a three-stage model that elucidates the bottom-up factors that cause individuals to engage Type 2 processing. According to the model, multiple Type 1 processes may be cued by a stimulus (Stage 1), leading to the potential for conflict detection (Stage 2). If successful, conflict detection leads to Type 2 processing (Stage 3), which may take the form of rationalization (i.e., the Type 1 output is verified post hoc) or decoupling (i.e., the Type 1 output is falsified). We tested key aspects of the model using a novel base-rate task where stereotypes and base-rate probabilities cued the same (non-conflict problems) or different (conflict problems) responses about group membership. Our results support two key predictions derived from the model: (1) conflict detection and decoupling are dissociable sources of Type 2 processing and (2) conflict detection sometimes fails. We argue that considering the potential stages of reasoning allows us to distinguish early (conflict detection) and late (decoupling) sources of analytic thought. Errors may occur at both stages and, as a consequence, bias arises from both conflict monitoring and decoupling failures. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Continuous production of biohythane from hydrothermal liquefied cornstalk biomass via two-stage high-rate anaerobic reactors.

    Science.gov (United States)

    Si, Bu-Chun; Li, Jia-Ming; Zhu, Zhang-Bing; Zhang, Yuan-Hui; Lu, Jian-Wen; Shen, Rui-Xia; Zhang, Chong; Xing, Xin-Hui; Liu, Zhidan

    2016-01-01

    Biohythane production via two-stage fermentation is a promising direction for sustainable energy recovery from lignocellulosic biomass. However, the utilization of lignocellulosic biomass suffers from specific natural recalcitrance. Hydrothermal liquefaction (HTL) is an emerging technology for the liquefaction of biomass, but there are still several challenges for the coupling of HTL and two-stage fermentation. One particular challenge is the limited efficiency of fermentation reactors at a high solid content of the treated feedstock. Another is the conversion of potential inhibitors during fermentation. Here, we report a novel strategy for the continuous production of biohythane from cornstalk through the integration of HTL and two-stage fermentation. Cornstalk was converted to solid and liquid via HTL, and the resulting liquid could be subsequently fed into the two-stage fermentation systems. The systems consisted of two typical high-rate reactors: an upflow anaerobic sludge blanket (UASB) and a packed bed reactor (PBR). The liquid could be efficiently converted into biohythane via the UASB and PBR with a high density of microbes at a high organic loading rate. Biohydrogen production decreased from 2.34 L/L/day in UASB (1.01 L/L/day in PBR) to 0 L/L/day as the organic loading rate (OLR) of the HTL liquid products increased to 16 g/L/day. The methane production rate achieved a value of 2.53 (UASB) and 2.54 L/L/day (PBR), respectively. The energy and carbon recovery of the integrated HTL and biohythane fermentation system reached up to 79.0 and 67.7%, respectively. The fermentation inhibitors, i.e., 5-hydroxymethyl furfural (41.4-41.9% of the initial quantity detected) and furfural (74.7-85.0% of the initial quantity detected), were degraded during hydrogen fermentation. Compared with single-stage fermentation, the methane process during two-stage fermentation had a more efficient methane production rate, acetogenesis, and COD removal. The microbial distribution

  10. Confidence interval estimation of the difference between two sensitivities to the early disease stage.

    Science.gov (United States)

    Dong, Tuochuan; Kang, Le; Hutson, Alan; Xiong, Chengjie; Tian, Lili

    2014-03-01

    Although most of the statistical methods for diagnostic studies focus on disease processes with binary disease status, many diseases can be naturally classified into three ordinal diagnostic categories, that is, normal, early stage, and fully diseased. For such diseases, the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. Because the early disease stage is most likely the optimal time window for therapeutic intervention, the sensitivity to the early diseased stage has been suggested as another diagnostic measure. For the purpose of comparing the diagnostic abilities on early disease detection between two markers, it is of interest to estimate the confidence interval of the difference between sensitivities to the early diseased stage. In this paper, we present both parametric and non-parametric methods for this purpose. An extensive simulation study is carried out for a variety of settings for the purpose of evaluating and comparing the performance of the proposed methods. A real example of Alzheimer's disease (AD) is analyzed using the proposed approaches. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
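
    As a non-parametric companion to the interval estimators discussed, a percentile-bootstrap confidence interval for the difference between two early-stage sensitivities can be sketched as follows. The data are synthetic paired indicators of whether each marker correctly classified an early-stage subject; this is a generic bootstrap, not the paper's own estimator.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative paired data: for each of 200 early-stage subjects, whether
# marker A / marker B classified the subject correctly (made-up hit rates).
marker_a = rng.random(200) < 0.72
marker_b = rng.random(200) < 0.60

def bootstrap_ci_diff(a, b, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for the difference in sensitivities
    (proportion correct on early-stage cases) between two markers,
    resampling subjects so the pairing is preserved."""
    n = len(a)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)        # resample subjects with replacement
        diffs[i] = a[idx].mean() - b[idx].mean()
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return lo, hi

lo, hi = bootstrap_ci_diff(marker_a, marker_b)
print(f"95% CI for sensitivity difference: ({lo:.3f}, {hi:.3f})")
```

    Parametric alternatives of the kind compared in the paper would instead model the marker distributions in each diagnostic category and derive the interval analytically.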

  11. Two stage treatment of dairy effluent using immobilized Chlorella pyrenoidosa

    Science.gov (United States)

    2013-01-01

    Background Dairy effluents contain a high organic load, and the unscrupulous discharge of these effluents into aquatic bodies is a matter of serious concern besides deteriorating their water quality. Whilst physico-chemical treatment is the common mode of treatment, immobilized microalgae can be potentially employed to treat the high organic content, which offers numerous benefits along with waste water treatment. Methods A novel low-cost two-stage treatment was employed for the complete treatment of dairy effluent. The first stage consists of treating the dairy effluent in a photobioreactor (1 L) using immobilized Chlorella pyrenoidosa, while the second stage involves a two-column sand bed filtration technique. Results Whilst NH4+-N was completely removed, a 98% removal of PO43--P was achieved within 96 h of the two-stage purification process. The filtrate was tested for toxicity and no mortality was observed in the zebrafish, which was used as a model, at the end of the 96 h bioassay. Moreover, a significant decrease in biological oxygen demand and chemical oxygen demand was achieved by this novel method. Also, the biomass separated was tested as a biofertilizer for rice seeds and a 30% increase in terms of length of root and shoot was observed after the addition of biomass to the rice plants. Conclusions We conclude that the two-stage treatment of dairy effluent is highly effective in the removal of BOD and COD besides nutrients like nitrates and phosphates. The treatment also helps in discharging treated waste water safely into the receiving water bodies since it is non-toxic for aquatic life. Further, the algal biomass separated after the first stage of treatment was highly capable of increasing the growth of rice plants because of the nitrogen-fixation ability of the green alga and offers great potential as a biofertilizer. PMID:24355316

  12. A comparison of two procedures for verbal response time fractionation

    Directory of Open Access Journals (Sweden)

    Lotje van der Linden

    2014-10-01

    To describe the mental architecture between stimulus and response, cognitive models often divide the stimulus-response (SR) interval into stages or modules. Predictions derived from such models are typically tested by focusing on the moment of response emission, through the analysis of response time (RT) distributions. To go beyond the single response event, we recently proposed a method to fractionate verbal RTs into two physiologically defined intervals that are assumed to reflect different processing stages. The analysis of the durations of these intervals can be used to study the interaction between cognitive and motor processing during speech production. Our method is inspired by studies on decision making that used manual responses, in which RTs were fractionated into a premotor time (PMT), assumed to reflect cognitive processing, and a motor time (MT), assumed to reflect motor processing. In these studies, surface EMG activity was recorded from participants' response fingers. EMG onsets, reflecting the initiation of a motor response, were used as the point of fractionation. We adapted this method to speech-production research by measuring verbal responses in combination with EMG activity from facial muscles involved in articulation. However, in contrast to button-press tasks, the complex task of producing speech often resulted in multiple EMG bursts within the SR interval. This observation forced us to decide how to operationalize the point of fractionation: as the first EMG burst after stimulus onset (the stimulus-locked approach), or as the EMG burst that is coupled to the vocal response (the response-locked approach). The point of fractionation has direct consequences on how much of the overall task effect is captured by either interval. Therefore, the purpose of the current paper was to compare both onset-detection procedures in order to make an informed decision about which of the two is preferable. We concluded in favour of the response-locked approach.

  13. A simple two stage optimization algorithm for constrained power economic dispatch

    International Nuclear Information System (INIS)

    Huang, G.; Song, K.

    1994-01-01

    A simple two stage optimization algorithm is proposed and investigated for fast computation of constrained power economic dispatch control problems. The method is a simple demonstration of the hierarchical aggregation-disaggregation (HAD) concept. The algorithm first solves an aggregated problem to obtain an initial solution. This aggregated problem turns out to be the classical economic dispatch formulation, and it can be solved in 1% of the overall computation time. In the second stage, a linear programming method finds the optimal solution, which satisfies the power balance constraints, the generation and transmission inequality constraints, and the security constraints. Implementation of the algorithm for IEEE systems and EPRI Scenario systems shows that the two stage method obtains an average speedup ratio of 10.64 as compared to the classical LP-based method.
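
    The first (aggregated) stage reduces to classical economic dispatch, which for quadratic cost curves can be solved by bisection on the system incremental cost λ (equal-lambda criterion, clipped to generator limits). A self-contained sketch with made-up generator data follows; the LP-based second stage that enforces transmission and security constraints is not reproduced here.

```python
import numpy as np

# Illustrative 3-unit system with quadratic costs C_i(P) = a_i P^2 + b_i P
# and output limits. All data are made up for the sketch.
a = np.array([0.008, 0.010, 0.012])
b = np.array([7.0, 8.0, 9.0])
p_min = np.array([50.0, 50.0, 50.0])
p_max = np.array([300.0, 250.0, 200.0])
demand = 450.0

def stage1_dispatch(demand, tol=1e-6):
    """Classical economic dispatch by bisection on the incremental cost
    lambda: each unit runs where its marginal cost 2 a_i P + b_i equals
    lambda, clipped to its limits, and lambda is tuned to meet demand."""
    lam_lo, lam_hi = b.min(), (b + 2 * a * p_max).max()
    while lam_hi - lam_lo > tol:
        lam = 0.5 * (lam_lo + lam_hi)
        p = np.clip((lam - b) / (2 * a), p_min, p_max)
        if p.sum() < demand:
            lam_lo = lam
        else:
            lam_hi = lam
    return np.clip((lam_lo - b) / (2 * a), p_min, p_max)

p = stage1_dispatch(demand)
print("dispatch:", p.round(1), "total:", round(p.sum(), 1))
```

    In the paper's scheme this solution then seeds the second-stage LP, which repairs any violated network or security constraints.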

  14. Waiting Endurance Time Estimation of Electric Two-Wheelers at Signalized Intersections

    Directory of Open Access Journals (Sweden)

    Mei Huan

    2014-01-01

    The paper proposed a model for estimating waiting endurance times of electric two-wheelers at signalized intersections using survival analysis method. Waiting duration times were collected by video cameras and they were assigned as censored and uncensored data to distinguish between normal crossing and red-light running behavior. A Cox proportional hazard model was introduced, and variables revealing personal characteristics and traffic conditions were defined as covariates to describe the effects of internal and external factors. Empirical results show that riders do not want to wait too long to cross intersections. As signal waiting time increases, electric two-wheelers get impatient and violate the traffic signal. There are 12.8% of electric two-wheelers with negligible wait time. 25.0% of electric two-wheelers are generally nonrisk takers who can obey the traffic rules after waiting for 100 seconds. Half of electric two-wheelers cannot endure 49.0 seconds or longer at red-light phase. Red phase time, motor vehicle volume, and conformity behavior have important effects on riders’ waiting times. Waiting endurance times would decrease with the longer red-phase time, the lower traffic volume, or the bigger number of other riders who run against the red light. The proposed model may be applicable in the design, management and control of signalized intersections in other developing cities.
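
    The censored/uncensored split described above is exactly the data shape survival analysis expects: a red-light violation at time t is an observed event, while a rider still waiting when the light turns green is censored. Before fitting a Cox model one would typically inspect the non-parametric survival curve; a minimal Kaplan-Meier sketch on made-up waiting times (seconds, no ties) follows.

```python
import numpy as np

# Illustrative waiting-time data (seconds). event = 1: the rider ran the red
# light at that time; event = 0: censored (the light turned green first, so
# the true endurance time is only known to exceed the observed wait).
times  = np.array([5, 12, 20, 33, 40, 49, 55, 60, 72, 90], float)
events = np.array([1,  0,  1,  1,  0,  1,  0,  1,  0,  0])

def kaplan_meier(t, event):
    """Product-limit estimate of S(t), the probability a rider is still
    waiting (has not violated) at time t, for distinct event times."""
    order = np.argsort(t)
    t, event = t[order], event[order]
    surv, s, n = [], 1.0, len(t)
    for i in range(n):
        at_risk = n - i               # riders still waiting just before t[i]
        if event[i]:
            s *= 1.0 - 1.0 / at_risk  # survival drops only at event times
        surv.append((t[i], s))
    return surv

for ti, si in kaplan_meier(times, events):
    print(f"t = {ti:5.1f} s   S(t) = {si:.3f}")
```

    The Cox model used in the paper then layers covariates (red-phase time, traffic volume, conformity behavior) on top of this kind of censored baseline.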

  15. Waiting endurance time estimation of electric two-wheelers at signalized intersections.

    Science.gov (United States)

    Huan, Mei; Yang, Xiao-bao

    2014-01-01

    The paper proposed a model for estimating waiting endurance times of electric two-wheelers at signalized intersections using survival analysis method. Waiting duration times were collected by video cameras and they were assigned as censored and uncensored data to distinguish between normal crossing and red-light running behavior. A Cox proportional hazard model was introduced, and variables revealing personal characteristics and traffic conditions were defined as covariates to describe the effects of internal and external factors. Empirical results show that riders do not want to wait too long to cross intersections. As signal waiting time increases, electric two-wheelers get impatient and violate the traffic signal. There are 12.8% of electric two-wheelers with negligible wait time. 25.0% of electric two-wheelers are generally nonrisk takers who can obey the traffic rules after waiting for 100 seconds. Half of electric two-wheelers cannot endure 49.0 seconds or longer at red-light phase. Red phase time, motor vehicle volume, and conformity behavior have important effects on riders' waiting times. Waiting endurance times would decrease with the longer red-phase time, the lower traffic volume, or the bigger number of other riders who run against the red light. The proposed model may be applicable in the design, management and control of signalized intersections in other developing cities.

  16. Two-stage unified stretched-exponential model for time-dependence of threshold voltage shift under positive-bias-stresses in amorphous indium-gallium-zinc oxide thin-film transistors

    Science.gov (United States)

    Jeong, Chan-Yong; Kim, Hee-Joong; Hong, Sae-Young; Song, Sang-Hun; Kwon, Hyuck-In

    2017-08-01

    In this study, we show that the two-stage unified stretched-exponential model can more exactly describe the time-dependence of threshold voltage shift (ΔVTH) under long-term positive-bias-stresses compared to the traditional stretched-exponential model in amorphous indium-gallium-zinc oxide (a-IGZO) thin-film transistors (TFTs). ΔVTH is mainly dominated by electron trapping at short stress times, and the contribution of trap state generation becomes significant with an increase in the stress time. The two-stage unified stretched-exponential model can provide useful information not only for evaluating the long-term electrical stability and lifetime of the a-IGZO TFT but also for understanding the stress-induced degradation mechanism in a-IGZO TFTs.
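
    The functional form described, a fast electron-trapping term plus a slower trap-generation term, each a stretched exponential, can be written out directly. All parameter values below are illustrative, not fitted a-IGZO device data.

```python
import numpy as np

def delta_vth(t, dv1, tau1, beta1, dv2, tau2, beta2):
    """Two-stage stretched-exponential threshold-voltage shift:
    a fast electron-trapping stage plus a slower trap-generation stage,
    each saturating at its own amplitude dv_i."""
    stage1 = dv1 * (1.0 - np.exp(-(t / tau1) ** beta1))
    stage2 = dv2 * (1.0 - np.exp(-(t / tau2) ** beta2))
    return stage1 + stage2

t = np.logspace(0, 5, 6)                     # stress time, seconds
shift = delta_vth(t, dv1=1.2, tau1=5e2, beta1=0.45,
                     dv2=2.0, tau2=5e4, beta2=0.60)
for ti, si in zip(t, shift):
    print(f"t = {ti:8.0f} s   dVth = {si:.3f} V")
```

    At short times the first term dominates (electron trapping); the second term takes over at long stress times, which is the behavior the abstract attributes to trap state generation.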

  17. A stage is a stage is a stage: a direct comparison of two scoring systems.

    Science.gov (United States)

    Dawson, Theo L

    2003-09-01

    L. Kohlberg (1969) argued that his moral stages captured a developmental sequence specific to the moral domain. To explore that contention, the author compared stage assignments obtained with the Standard Issue Scoring System (A. Colby & L. Kohlberg, 1987a, 1987b) and those obtained with a generalized content-independent stage-scoring system called the Hierarchical Complexity Scoring System (T. L. Dawson, 2002a), on 637 moral judgment interviews (participants' ages ranged from 5 to 86 years). The correlation between stage scores produced with the 2 systems was .88. Although standard issue scoring and hierarchical complexity scoring often awarded different scores up to Kohlberg's Moral Stage 2/3, from his Moral Stage 3 onward, scores awarded with the two systems predominantly agreed. The author explores the implications for developmental research.

  18. Two-stage acid saccharification of fractionated Gelidium amansii minimizing the sugar decomposition.

    Science.gov (United States)

    Jeong, Tae Su; Kim, Young Soo; Oh, Kyeong Keun

    2011-11-01

    Two-stage acid hydrolysis was conducted on the easy-reacting cellulose and resistant-reacting cellulose of fractionated Gelidium amansii (f-GA). Acid hydrolysis of f-GA was performed between 170 and 200 °C for a period of 0-5 min, at an acid concentration of 2-5% (w/v, H2SO4), to determine the optimal conditions for acid hydrolysis. In the first stage of the acid hydrolysis, an optimum glucose yield of 33.7% was obtained at a reaction temperature of 190 °C, an acid concentration of 3.0%, and a reaction time of 3 min. In the second stage, a glucose yield of 34.2%, on the basis of the amount of residual cellulose from the f-GA, was obtained at a temperature of 190 °C, a sulfuric acid concentration of 4.0%, and a reaction time of 3.7 min. Finally, 68.58% of the cellulose derived from f-GA was converted into glucose through two-stage acid saccharification under the aforementioned conditions. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. A Process For Performance Evaluation Of Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Andrew J. Kornecki

    2003-12-01

    Real-time developers and engineers must not only meet the system functional requirements, but also the stringent timing requirements. One of the critical decisions leading to meeting these timing requirements is the selection of an operating system under which the software will be developed and run. Although there is ample documentation on real-time systems performance and evaluation, little can be found that combines such information into an efficient process for use by developers. As the software industry moves towards clearly defined processes, the creation of appropriate guidelines describing a process for performance evaluation of real-time systems would greatly benefit real-time developers. This technology transition research focuses on developing such a process. PROPERT (PROcess for Performance Evaluation of Real Time systems), the process described in this paper, is based upon established techniques for evaluating real-time systems. It organizes already existing real-time performance criteria and assessment techniques in a manner consistent with a well-formed process, based on the Personal Software Process concepts.

  20. Willingness-to-pay for steelhead trout fishing: Implications of two-step consumer decisions with short-run endowments

    Science.gov (United States)

    McKean, John R.; Johnson, Donn; Taylor, R. Garth

    2010-09-01

    Choice of the appropriate model of economic behavior is important for the measurement of nonmarket demand and benefits. Several travel cost demand model specifications are currently in use. Uncertainty exists over the efficacy of these approaches, and more theoretical and empirical study is warranted. Thus travel cost models with differing assumptions about labor markets and consumer behavior were applied to estimate the demand for steelhead trout sportfishing on an unimpounded reach of the Snake River near Lewiston, Idaho. We introduce a modified two-step decision model that incorporates endogenous time value using a latent index variable approach. The focus is on the importance of distinguishing between short-run and long-run consumer decision variables in a consistent manner. A modified Barnett two-step decision model was found superior to other models tested.

  1. Ethanol production from rape straw by a two-stage pretreatment under mild conditions.

    Science.gov (United States)

    Romero, Inmaculada; López-Linares, Juan C; Delgado, Yaimé; Cara, Cristóbal; Castro, Eulogio

    2015-08-01

    The growing interest in rape oil as a raw material for biodiesel production has resulted in an increasing availability of rape straw, an agricultural residue that is an attractive renewable source for the production of second-generation bioethanol. Pretreatment is one of the key steps in such a conversion process. In this work, a sequential two-stage pretreatment with dilute sulfuric acid (130 °C, 60 min, 2% w/v H2SO4) followed by H2O2 (1-5% w/v) in alkaline medium (NaOH) at low temperature (60, 90 °C) and at different pretreatment times (30-90 min) was investigated. The first (acid) stage solubilises the hemicellulose fraction into fermentable sugars. The second (alkaline peroxide) stage delignifies the solid material, and the cellulose remaining in the rape straw becomes highly digestible by cellulases. Simultaneous saccharification and fermentation with 15% (w/v) of the substrate delignified at 90 °C with 5% H2O2 for 60 min led to a maximum ethanol production of 53 g/L and a yield of 85% of the theoretical.

  2. Visualization of synchronization of the uterine contraction signals: running cross-correlation and wavelet running cross-correlation methods.

    Science.gov (United States)

    Oczeretko, Edward; Swiatecka, Jolanta; Kitlas, Agnieszka; Laudanski, Tadeusz; Pierzynski, Piotr

    2006-01-01

    In physiological research, we often study multivariate data sets containing two or more simultaneously recorded time series. The aim of this paper is to present the cross-correlation and the wavelet cross-correlation methods to assess synchronization between contractions in different topographic regions of the uterus. From a medical point of view, it is important to identify time delays between contractions, which may be of potential diagnostic significance in various pathologies. The cross-correlation was computed in a moving window with a width corresponding to approximately two or three contractions. As a result, the running cross-correlation function was obtained. The propagation% parameter assessed from this function allows a quantitative description of synchronization in bivariate time series. In general, the uterine contraction signals are very complicated. Wavelet transforms provide insight into the structure of the time series at various frequencies (scales). To show the changes of the propagation% parameter along scales, a wavelet running cross-correlation was used. First, the continuous wavelet transforms of the uterine contraction signals were obtained; afterwards, a running cross-correlation analysis was conducted for each pair of transformed time series. The findings show that running functions are very useful in the analysis of uterine contractions.
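
    The running cross-correlation step can be sketched as follows: slide a window across both series and, within each window, pick the lag that maximizes the windowed cross-correlation. Synthetic smooth signals with a known 12-sample delay stand in for the uterine recordings; the wavelet stage and the propagation% summary are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic contraction-like signals: signal_b is signal_a delayed by a
# fixed 12-sample lag plus a little noise. Values are illustrative.
delay = 12
base = np.convolve(rng.normal(size=600), np.ones(5) / 5, mode="same")
signal_a = base
signal_b = np.roll(base, delay) + 0.02 * rng.normal(size=600)

def running_delay(x, y, win=150, step=50, max_lag=30):
    """Running cross-correlation: in each window, return the lag L that
    maximizes sum_n x[n] * y[n + L] (windowed, mean-removed, not
    length-normalized). Yields (window start, estimated lag) pairs."""
    out = []
    for start in range(0, len(x) - win, step):
        xa = x[start:start + win] - x[start:start + win].mean()
        yb = y[start:start + win] - y[start:start + win].mean()
        lags = list(range(-max_lag, max_lag + 1))
        cc = [np.dot(xa[max(0, -L):win - max(0, L)],
                     yb[max(0, L):win - max(0, -L)]) for L in lags]
        out.append((start, lags[int(np.argmax(cc))]))
    return out

for start, lag in running_delay(signal_a, signal_b)[:4]:
    print(f"window at {start:3d}: estimated lag = {lag} samples")
```

    In the wavelet variant described above, the same windowed lag search is simply applied to each pair of wavelet-transformed series, one scale at a time.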

  3. A study of the process of two staged anaerobic fermentation as a possible method for purifying sewage

    Energy Technology Data Exchange (ETDEWEB)

    Inoue, Y; Kouama, K; Matsuo, T

    1983-01-01

    Great attention has recently been given to the study of the processes of anaerobic fermentation, which may become alternatives to the existing methods for purifying waste waters which use aerobic microorganisms. A series of experiments were conducted with the use of an artificially prepared liquid (fermented milk and starch) which imitates the waste to be purified, in order to explain the capabilities of the process of two-staged anaerobic fermentation (DAS) as a method for purifying waste waters. The industrial system of the process includes: a fermentation vat for acetic fermentation with recirculation of the sediment, a primary settler, a fermentation tank for methane fermentation and a secondary settler. The process was conducted at a loading speed (based on carbon) from 0.15 to 0.4 kilograms per cubic meter per day at a temperature of 38 °C. The degree of conversion of the fermented organic substances into volatile organic acids was not a function of the loading speed and was 54 to 57 percent in the acetic fermentation tank, where 95 to 97 percent of the organic material was broken down with the production of methane and carbon dioxide.

  4. Shorter Ground Contact Time and Better Running Economy: Evidence From Female Kenyan Runners.

    Science.gov (United States)

    Mooses, Martin; Haile, Diresibachew W; Ojiambo, Robert; Sang, Meshack; Mooses, Kerli; Lane, Amy R; Hackney, Anthony C

    2018-06-25

    Mooses, M, Haile, DW, Ojiambo, R, Sang, M, Mooses, K, Lane, AR, and Hackney, AC. Shorter ground contact time and better running economy: evidence from female Kenyan runners. J Strength Cond Res XX(X): 000-000, 2018-Previously, it has been concluded that the improvement in running economy (RE) might be considered as a key to the continued improvement in performance when no further increase in V̇O2max is observed. To date, RE has been extensively studied among male East African distance runners. By contrast, there is a paucity of data on the RE of female East African runners. A total of 10 female Kenyan runners performed 3 × 1,600-m steady-state run trials on a flat outdoor clay track (400-m lap) at the intensities that corresponded to their everyday training intensities for easy, moderate, and fast running. Running economy together with gait characteristics was determined. Participants showed moderate to very good RE at the first (202 ± 26 ml/kg/km) and second (188 ± 12 ml/kg/km) run trials, respectively. Correlation analysis revealed a significant relationship between ground contact time (GCT) and RE at the second run (r = 0.782; p = 0.022), which represented the intensity of the anaerobic threshold. This study is the first to report the RE and gait characteristics of East African female athletes measured under everyday training settings. We provide evidence that GCT is associated with the superior RE of female Kenyan runners.

  5. The NLstart2run study: Training-related factors associated with running-related injuries in novice runners.

    Science.gov (United States)

    Kluitenberg, Bas; van der Worp, Henk; Huisstede, Bionka M A; Hartgens, Fred; Diercks, Ron; Verhagen, Evert; van Middelkoop, Marienke

    2016-08-01

    The incidence of running-related injuries is high. Some risk factors for injury have been identified in novice runners; however, little is known about the effect of training factors on injury risk. The purpose of this study was therefore to examine the associations between training factors and running-related injuries in novice runners, taking the time-varying nature of these training-related factors into account. Prospective cohort study. 1696 participants completed weekly diaries on running exposure and injuries during a 6-week running program for novice runners. Total running volume (min), frequency and mean intensity (Rate of Perceived Exertion) were calculated for the seven days prior to each training session. The association of these time-varying variables with injury was determined in an extended Cox regression analysis. The results of the multivariable analysis showed that running at a higher intensity in the previous week was associated with a higher injury risk. Running frequency was not significantly associated with injury; however, a trend was observed towards running three times per week being more hazardous than twice per week. Finally, lower running volume was associated with a higher risk of sustaining an injury. These results suggest that running more than 60 min at a lower intensity is least injurious. This finding is contrary to our expectations and is presumably the result of other factors. The findings should therefore not be used plainly as a guideline for novices. More research is needed to establish the person-specific training patterns that are associated with injury. Copyright © 2015 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
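
    The exposure metrics described above (trailing seven-day volume, frequency and mean RPE before each session) can be sketched in a few lines; the function name and the tuple layout of the session log are illustrative, not taken from the study:

```python
from datetime import date, timedelta

def trailing_week_load(sessions, day):
    """Aggregate the seven days prior to a training session, as in the
    study's time-varying exposures: total volume (min), session
    frequency, and mean intensity (RPE).
    `sessions` is a list of (date, minutes, rpe) tuples."""
    window_start = day - timedelta(days=7)
    recent = [s for s in sessions if window_start <= s[0] < day]
    volume = sum(minutes for _, minutes, _ in recent)
    frequency = len(recent)
    intensity = sum(rpe for _, _, rpe in recent) / frequency if frequency else 0.0
    return volume, frequency, intensity

log = [(date(2016, 5, 1), 30, 4), (date(2016, 5, 3), 40, 6), (date(2016, 5, 6), 20, 5)]
print(trailing_week_load(log, date(2016, 5, 7)))  # (90, 3, 5.0)
```

    These per-session covariates would then feed an extended (time-varying) Cox model; the regression itself is omitted here.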

  6. A Novel Process for Joining Ti Alloy and Al Alloy using Two-Stage Sintering Powder Metallurgy

    Science.gov (United States)

    Long, Luping; Liu, Wensheng; Ma, Yunzhu; Wu, Lei; Liu, Chao

    2018-04-01

    The major challenges in conventional diffusion bonding of Ti alloy to Al alloy are the undesirable interfacial reaction and the low strength of the matrixes and the joint. To avoid these problems, a novel two-stage sintering powder metallurgy process is developed. In the present work, the interface characterization and joint performance of bonds obtained by powder metallurgy bonding are investigated and compared with diffusion-bonded Ti/Al joints obtained with the same and with optimized process parameters. The results show that no intermetallic compound is visible in the Ti/Al joint obtained by powder metallurgy bonding, while a new layer formed at the joint diffusion bonded with the same parameters. The maximum tensile strength of the joint obtained by diffusion bonding is 58 MPa, while a bond made by powder metallurgy bonding reaches a higher tensile strength of 111 MPa. Brittle fracture occurs in all the bonds. It is shown that powder metallurgy bonding of Ti/Al is better than diffusion bonding. The results of this study should benefit bonding quality.

  7. Reinventing the wheel: comparison of two wheel cage styles for assessing mouse voluntary running activity.

    Science.gov (United States)

    Seward, T; Harfmann, B D; Esser, K A; Schroder, E A

    2018-04-01

    Voluntary wheel cage assessment of mouse activity is commonly employed in exercise and behavioral research. Currently, no standardization for wheel cages exists resulting in an inability to compare results among data from different laboratories. The purpose of this study was to determine whether the distance run or average speed data differ depending on the use of two commonly used commercially available wheel cage systems. Two different wheel cages with structurally similar but functionally different wheels (electromechanical switch vs. magnetic switch) were compared side-by-side to measure wheel running data differences. Other variables, including enrichment and cage location, were also tested to assess potential impacts on the running wheel data. We found that cages with the electromechanical switch had greater inherent wheel resistance and consistently led to greater running distance per day and higher average running speed. Mice rapidly, within 1-2 days, adapted their running behavior to the type of experimental switch used, suggesting these running differences are more behavioral than due to intrinsic musculoskeletal, cardiovascular, or metabolic limits. The presence of enrichment or location of the cage had no detectable impact on voluntary wheel running. These results demonstrate that mice run differing amounts depending on the type of cage and switch mechanism used and thus investigators need to report wheel cage type/wheel resistance and use caution when interpreting distance/speed run across studies. NEW & NOTEWORTHY The results of this study highlight that mice will run different distances per day and average speed based on the inherent resistance present in the switch mechanism used to record data. Rapid changes in running behavior for the same mouse in the different cages demonstrate that a strong behavioral factor contributes to classic exercise outcomes in mice. Caution needs to be taken when interpreting mouse voluntary wheel running activity to

  8. Running Boot Camp

    CERN Document Server

    Toporek, Chuck

    2008-01-01

    When Steve Jobs jumped on stage at Macworld San Francisco 2006 and announced the new Intel-based Macs, the question wasn't if, but when someone would figure out a hack to get Windows XP running on these new "Mactels." Enter Boot Camp, a new system utility that helps you partition and install Windows XP on your Intel Mac. Boot Camp does all the heavy lifting for you. You won't need to open the Terminal and hack on system files or wave a chicken bone over your iMac to get XP running. This free program makes it easy for anyone to turn their Mac into a dual-boot Windows/OS X machine. Running Bo

  9. FIRST DIRECT EVIDENCE OF TWO STAGES IN FREE RECALL

    Directory of Open Access Journals (Sweden)

    Eugen Tarnow

    2015-12-01

    I find that exactly two stages can be seen directly in sequential free recall distributions. These distributions show that the first three recalls come from the emptying of working memory, recalls 6 and above come from a second stage, and the 4th and 5th recalls are mixtures of the two. A discontinuity, a rounded step function, is shown to exist in the fitted linear slope of the recall distributions as the recall shifts from the emptying of working memory (positive slope) to the second stage (negative slope). The discontinuity leads to a first estimate of the capacity of working memory at 4-4.5 items. The total recall is shown to be a linear combination of the content of working memory and items recalled in the second stage, with 3.0-3.9 items coming from working memory, a second estimate of the capacity of working memory. A third, separate upper limit on the capacity of working memory is found (3.06 items), corresponding to the requirement that the content of working memory cannot exceed the total recall, item by item. This third limit is presumably the best limit on the average capacity of unchunked working memory. The second stage of recall is shown to be reactivation: the average times to retrieve additional items in free recall obey a linear relationship as a function of the recall probability, which mimics recognition and cued recall, both mechanisms using reactivation (Tarnow, 2008).

  10. Two-Stage Load Shedding for Secondary Control in Hierarchical Operation of Islanded Microgrids

    DEFF Research Database (Denmark)

    Zhou, Quan; Li, Zhiyi; Wu, Qiuwei

    2018-01-01

    A two-stage load shedding scheme is presented to cope with the severe power deficit caused by microgrid islanding. Coordinated with the fast response of inverter-based distributed energy resources (DERs), load shedding at each stage and the resulting power flow redistribution are estimated. The first stage of load shedding ceases the rapid frequency decline; the measured frequency deviation is employed to guide the load shedding level and process. Once a new steady state is reached, the second stage is activated, which performs load shedding according to the priorities of loads...
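
    The two stages described above can be sketched as follows; the proportional frequency rule and the priority encoding are hypothetical illustrations, since the abstract does not give the exact mappings:

```python
def shed_stage1(f_meas, total_load, f_nom=50.0, df_max=0.5):
    """Stage 1: shed load in proportion to the measured frequency
    deviation to arrest the rapid frequency decline (assumed rule)."""
    deviation = max(0.0, f_nom - f_meas)
    return min(1.0, deviation / df_max) * total_load

def shed_stage2(deficit_mw, loads):
    """Stage 2: at the new steady state, shed loads by priority
    (smaller number = less critical, shed first) until the remaining
    power deficit is covered."""
    shed = []
    for name, (power, priority) in sorted(loads.items(), key=lambda kv: kv[1][1]):
        if deficit_mw <= 0:
            break
        shed.append(name)
        deficit_mw -= power
    return shed

loads = {"heating": (2.0, 1), "lighting": (0.5, 3), "ev_charging": (1.5, 2)}
print(shed_stage2(2.5, loads))  # ['heating', 'ev_charging']
```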

  11. Attainability and minimum energy of single-stage membrane and membrane/distillation hybrid processes

    KAUST Repository

    Alshehri, Ali

    2014-12-01

    As an energy-efficient separation method, membrane technology has attracted increasing attention in many challenging separation processes. The attainability and the energy consumption of a membrane process are the two fundamental questions that need to be answered. This report uses process simulations to find: (1) at what conditions a single-stage membrane process can meet a separation task defined by product purity and recovery ratio and (2) what the most important parameters are that determine the energy consumption. To perform a certain separation task, it was found that both membrane selectivity and pressure ratio exhibit a minimum value that is defined only by product purity and recovery ratio. The membrane/distillation hybrid system was used to study the energy consumption. A shortcut method was developed to calculate the minimum practical separation energy (MPSE) of the membrane process and the distillation process. It was found that the MPSE of the hybrid system is determined only by the membrane selectivity and the applied transmembrane pressure ratio, in three stages. In the first stage, when selectivity is low, the membrane process is not competitive with the distillation process: adding a membrane unit to a distillation tower will not help reduce energy. In the second, medium-selectivity stage, the membrane/distillation hybrid system can help reduce the energy consumption, and the higher the membrane selectivity, the lower the energy. The energy savings improve further as the pressure ratio increases. In the third stage, when both selectivity and pressure ratio are high, the hybrid system reduces to a single-stage membrane unit, and this change causes a significant reduction in energy consumption. The energy at this stage keeps decreasing with selectivity at a slow rate, but increases slightly with pressure ratio. Overall, the higher the membrane selectivity, the more energy is saved. Therefore, the two
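
    The existence of minimum selectivity and minimum pressure ratio can be illustrated with the standard limiting formulas for a binary membrane separation at vanishing stage cut (i.e. ignoring the recovery-ratio dependence the report accounts for); this is a textbook sketch, not the report's shortcut method:

```python
def min_selectivity(y_p, x_f):
    """Limiting selectivity to reach permeate purity y_p from feed mole
    fraction x_f at infinite pressure ratio and vanishing stage cut,
    from y/(1-y) = alpha * x/(1-x)."""
    return (y_p * (1 - x_f)) / (x_f * (1 - y_p))

def min_pressure_ratio(y_p, x_f):
    """Limiting pressure ratio phi in the pressure-ratio-limited regime
    (infinite selectivity), where y_p <= phi * x_f."""
    return y_p / x_f

print(min_selectivity(0.95, 0.5))     # ≈ 19
print(min_pressure_ratio(0.95, 0.5))  # ≈ 1.9
```

    Any real separation with finite recovery needs a selectivity and pressure ratio above these limits, which is the qualitative behaviour the report quantifies.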

  12. An adaptive two-stage analog/regression model for probabilistic prediction of small-scale precipitation in France

    Directory of Open Access Journals (Sweden)

    J. Chardon

    2018-01-01

    Statistical downscaling models (SDMs) are often used to produce local weather scenarios from large-scale atmospheric information. SDMs include transfer functions which are based on a statistical link identified from observations between local weather and a set of large-scale predictors. As physical processes driving surface weather vary in time, the most relevant predictors and the regression link are likely to vary in time too. This is well known for precipitation, for instance, and the link is thus often estimated after some seasonal stratification of the data. In this study, we present a two-stage analog/regression model where the regression link is estimated from atmospheric analogs of the current prediction day. Atmospheric analogs are identified from fields of geopotential heights at 1000 and 500 hPa. For the regression stage, two generalized linear models are further used to model the probability of precipitation occurrence and the distribution of non-zero precipitation amounts, respectively. The two-stage model is evaluated for the probabilistic prediction of small-scale precipitation over France. It noticeably improves the skill of the prediction for both precipitation occurrence and amount. As the analog days vary from one prediction day to another, the atmospheric predictors selected in the regression stage and the values of the corresponding regression coefficients can vary from one prediction day to another. The model thus allows for day-to-day adaptive and tailored downscaling. It can also reveal specific predictors for peculiar and non-frequent weather configurations.
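
    The analog stage can be sketched as a nearest-neighbour search over the large-scale fields; the empirical occurrence frequency below is a crude stand-in for the paper's GLM regression stage, and all names and data are illustrative:

```python
import math

def nearest_analogs(target_field, archive, k=3):
    """Select the k archive days whose large-scale fields (e.g. flattened
    geopotential heights at 1000 and 500 hPa) are closest to the target
    day in Euclidean distance -- the analog stage of the two-stage model."""
    def dist(field):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(field, target_field)))
    return sorted(archive, key=lambda day: dist(day["field"]))[:k]

def occurrence_probability(analogs):
    """Crude regression-stage stand-in: empirical precipitation occurrence
    frequency among the analog days (the paper fits a GLM instead)."""
    wet = sum(1 for d in analogs if d["precip"] > 0)
    return wet / len(analogs)

archive = [
    {"field": [0.0, 1.0], "precip": 0.0},
    {"field": [0.1, 1.1], "precip": 2.5},
    {"field": [5.0, 5.0], "precip": 0.0},
    {"field": [0.2, 0.9], "precip": 1.0},
]
best = nearest_analogs([0.0, 1.0], archive, k=3)
print(occurrence_probability(best))  # 2 of the 3 analogs are wet days
```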

  13. The design of the run Clever randomized trial: running volume, -intensity and running-related injuries.

    Science.gov (United States)

    Ramskov, Daniel; Nielsen, Rasmus Oestergaard; Sørensen, Henrik; Parner, Erik; Lind, Martin; Rasmussen, Sten

    2016-04-23

    Injury incidence and prevalence in running populations have been investigated and documented in several studies. However, knowledge about injury etiology and prevention is needed. Training errors in running are modifiable risk factors, and people engaged in recreational running need evidence-based running schedules to minimize the risk of injury. The existing literature on running volume, running intensity and the development of injuries shows conflicting results. This may be related to previously applied study designs, the methods used to quantify the performed running and the statistical analysis of the collected data. The aim of the Run Clever trial is to investigate whether a focus on running intensity compared with a focus on running volume in a running schedule influences the overall injury risk differently. The Run Clever trial is a randomized trial with a 24-week follow-up. Healthy recreational runners between 18 and 65 years with an average of 1-3 running sessions per week over the past 6 months are included. Participants are randomized into two intervention groups: running Schedule-I and Schedule-V. Schedule-I emphasizes a progression in running intensity by increasing the weekly volume of running at a hard pace, while Schedule-V emphasizes a progression in running volume by increasing the weekly overall volume. Data on the running performed are collected by GPS. Participants who sustain running-related injuries are diagnosed by a diagnostic team of physiotherapists using standardized diagnostic criteria. The members of the diagnostic team are blinded. The study design, procedures and informed consent were approved by the Ethics Committee Northern Denmark Region (N-20140069). The Run Clever trial will provide insight into possible differences in injury risk between running schedules emphasizing either running intensity or running volume. The risk of sustaining volume- and intensity-related injuries will be compared in the two intervention groups using a competing

  14. The time course of attentional modulation on emotional conflict processing.

    Science.gov (United States)

    Zhou, Pingyan; Yang, Guochun; Nan, Weizhi; Liu, Xun

    2016-01-01

    Cognitive conflict resolution is critical to human survival in a rapidly changing environment. However, emotional conflict processing seems to be particularly important for human interactions. This study examined whether the time course of attentional modulation on emotional conflict processing was different from cognitive conflict processing during a flanker task. Results showed that emotional N200 and P300 effects, similar to colour conflict processing, appeared only during the relevant task. However, the emotional N200 effect preceded the colour N200 effect, indicating that emotional conflict can be identified earlier than cognitive conflict. Additionally, a significant emotional N100 effect revealed that emotional valence differences could be perceived during early processing based on rough aspects of input. The present data suggest that emotional conflict processing is modulated by top-down attention, similar to cognitive conflict processing (reflected by N200 and P300 effects). However, emotional conflict processing seems to have more time advantages during two different processing stages.

  15. Tritium recovery from tritiated water with a two-stage palladium membrane reactor

    International Nuclear Information System (INIS)

    Birdsell, S.A.; Willms, R.S.

    1997-01-01

    A process to recover tritium from tritiated water has been successfully demonstrated at TSTA. The 2-stage palladium membrane reactor (PMR) is capable of recovering tritium from water without generating additional waste. This device can be used to recover tritium from the substantial amount of tritiated water that is expected to be generated in the International Thermonuclear Experimental Reactor, both from torus exhaust and auxiliary operations. A large quantity of tritiated waste water exists worldwide because the predominant method of cleaning up tritiated streams is to oxidize tritium to tritiated water, which can be collected with high efficiency for subsequent disposal. The PMR is a combined catalytic reactor/permeator. Cold (non-tritium) water processing experiments were run in preparation for the tritiated water processing tests. Tritium was recovered from a container of molecular sieve loaded with 2,050 g (2,550 std. L) of water and 4.5 g of tritium. During this experiment, 27% (694 std. L) of the water was processed, resulting in recovery of 1.2 g of tritium. The maximum water processing rate for the PMR system used was determined to be 0.5 slpm. This correlates well with the maximum processing rate determined from the smaller PMR system on the cold test bench and has resulted in valuable scale-up and design information.
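
    As a quick consistency check of the figures in this record, the recovered tritium should scale with the fraction of water processed (a back-of-envelope sketch, assuming the tritium is uniformly distributed in the water inventory):

```python
water_total_std_L = 2550.0   # water inventory on the molecular sieve, std. L
tritium_total_g = 4.5        # tritium loaded with the water

fraction_processed = 694.0 / water_total_std_L       # the reported 27%
tritium_expected_g = fraction_processed * tritium_total_g

print(round(fraction_processed, 2))  # 0.27
print(round(tritium_expected_g, 2))  # 1.22, consistent with the 1.2 g recovered
```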

  16. Tritium recovery from tritiated water with a two-stage palladium membrane reactor

    Energy Technology Data Exchange (ETDEWEB)

    Birdsell, S.A.; Willms, R.S.

    1997-04-01

    A process to recover tritium from tritiated water has been successfully demonstrated at TSTA. The 2-stage palladium membrane reactor (PMR) is capable of recovering tritium from water without generating additional waste. This device can be used to recover tritium from the substantial amount of tritiated water that is expected to be generated in the International Thermonuclear Experimental Reactor, both from torus exhaust and auxiliary operations. A large quantity of tritiated waste water exists worldwide because the predominant method of cleaning up tritiated streams is to oxidize tritium to tritiated water, which can be collected with high efficiency for subsequent disposal. The PMR is a combined catalytic reactor/permeator. Cold (non-tritium) water processing experiments were run in preparation for the tritiated water processing tests. Tritium was recovered from a container of molecular sieve loaded with 2,050 g (2,550 std. L) of water and 4.5 g of tritium. During this experiment, 27% (694 std. L) of the water was processed, resulting in recovery of 1.2 g of tritium. The maximum water processing rate for the PMR system used was determined to be 0.5 slpm. This correlates well with the maximum processing rate determined from the smaller PMR system on the cold test bench and has resulted in valuable scale-up and design information.

  17. LHCb : Novel real-time alignment and calibration of the LHCb Detector in Run2

    CERN Multimedia

    Tobin, Mark

    2015-01-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run 2. Data collected at the start of the fill will be processed in a few minutes and used to update the alignment, while the calibration constants will be evaluated for each run. This procedure will improve the quality of the online alignment. For example, the vertex locator is retracted and reinserted for stable beam collisions in each fill to be centred on the primary vertex position in the transverse plane. Consequently its position changes on a fill-by-fill basis. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The online calibration facilitates the use of hadronic particle identification using the RICH detectors at the trigger level. T...

  18. Performance study of a heat pump driven and hollow fiber membrane-based two-stage liquid desiccant air dehumidification system

    International Nuclear Information System (INIS)

    Zhang, Ning; Yin, Shao-You; Zhang, Li-Zhi

    2016-01-01

    Graphical abstract: A heat pump driven, hollow fiber membrane-based two-stage liquid desiccant air dehumidification system. - Highlights: • A two-stage hollow fiber membrane-based air dehumidification system is proposed. • It is a heat pump driven liquid desiccant system. • Performance is improved by 20% over a single-stage system. • The optimal first to second stage dehumidification area ratio is 1.4. - Abstract: A novel compression heat pump driven and hollow fiber membrane-based two-stage liquid desiccant air dehumidification system is presented. The liquid desiccant droplets are prevented from crossing over into the process air by the semi-permeable membranes. The isoenthalpic processes are changed to quasi-isothermal processes by the two-stage dehumidification. The system is set up and a model is proposed for simulation. Heat and mass capacities in the system, including the membrane modules, the condenser, the evaporator and the heat exchangers, are modeled in detail. The model is also validated experimentally. Compared with a single-stage dehumidification system, the two-stage system has a lower solution concentration exiting from the dehumidifier and a lower condensing temperature. Thus, a better thermodynamic system performance is realized and the COP can be increased by about 20% under the typical hot and humid conditions of Southern China. The allocations of heat and mass transfer areas in the system are also investigated. It is found that the optimal regeneration to dehumidification area ratio is 1.33. The optimal first to second stage dehumidification area ratio is 1.4; and the optimal first to second stage regeneration area ratio is 1.286.
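
    Given the optimal area ratios reported above (regeneration/dehumidification 1.33, first/second dehumidification 1.4, first/second regeneration 1.286), splitting a total membrane area between stages is simple arithmetic; the 10 m² total below is an assumed example value, not from the paper:

```python
def split_by_ratio(total, ratio):
    """Split `total` into (first, second) stages with first/second = ratio."""
    second = total / (1.0 + ratio)
    return total - second, second

A_dehum = 10.0                           # assumed total dehumidification area, m^2
A_regen = 1.33 * A_dehum                 # optimal regeneration/dehumidification ratio
d1, d2 = split_by_ratio(A_dehum, 1.4)    # first/second dehumidifier stage areas
r1, r2 = split_by_ratio(A_regen, 1.286)  # first/second regenerator stage areas
print(round(d1, 2), round(d2, 2))        # 5.83 4.17
```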

  19. Stuart oil shale project stage two: executive summary: draft environmental impact statement

    International Nuclear Information System (INIS)

    1999-09-01

    The project is an oil shale open pit mine and processing operation that is currently being commissioned 15 km north of Gladstone, Queensland, Australia, and is owned as a joint venture by Southern Pacific Petroleum N.L., Central Pacific Minerals N.L. and Suncor Energy Inc., a leading integrated Canadian energy company. The report presents the results of a comprehensive investigation of the potential environmental impacts of the project, as described in the Draft Environmental Impact Statement (EIS). Stage two includes expansion of the existing mine as well as the construction of an additional process plant based around a larger commercial-scale ATP oil shale processing plant. The new stage two operation will be developed next to, and integrated with, the services and infrastructure provided for stage one. Described are: the assessment process, regulatory framework and the project area, the need for and alternatives to the project, the proposal itself, the existing natural, social and economic environment, and the environmental impacts as well as plans for their mitigation. The appendices include a draft environmental management overview strategy and an environmental management plan. The elements covered in the report, by section, are: background, need for the project, the proponent, legislation and approvals, project description, environmental issues and impact management

  20. One-stage versus two-stage exchange arthroplasty for infected total knee arthroplasty: a systematic review.

    Science.gov (United States)

    Nagra, Navraj S; Hamilton, Thomas W; Ganatra, Sameer; Murray, David W; Pandit, Hemant

    2016-10-01

    Infection complicating total knee arthroplasty (TKA) has serious implications. Traditionally the debate on whether one- or two-stage exchange arthroplasty is the optimum management of infected TKA has favoured two-stage procedures; however, a paradigm shift in opinion is emerging. This study aimed to establish whether current evidence supports one-stage revision for managing infected TKA based on reinfection rates and functional outcomes post-surgery. MEDLINE/PubMed and CENTRAL databases were reviewed for studies that compared one- and two-stage exchange arthroplasty TKA in more than ten patients with a minimum 2-year follow-up. From an initial sample of 796, five cohort studies with a total of 231 patients (46 single-stage/185 two-stage; median patient age 66 years, range 61-71 years) met inclusion criteria. Overall, there were no significant differences in risk of reinfection following one- or two-stage exchange arthroplasty (OR -0.06, 95 % confidence interval -0.13, 0.01). Subgroup analysis revealed that in studies published since 2000, one-stage procedures have a significantly lower reinfection rate. One study investigated functional outcomes and reported that one-stage surgery was associated with superior functional outcomes. Scarcity of data, inconsistent study designs, surgical technique and antibiotic regime disparities limit recommendations that can be made. Recent studies suggest one-stage exchange arthroplasty may provide superior outcomes, including lower reinfection rates and superior function, in select patients. Clinically, for some patients, one-stage exchange arthroplasty may represent optimum treatment; however, patient selection criteria and key components of surgical and post-operative anti-microbial management remain to be defined. III.

  1. Operating Characteristics of a Continuous Two-Stage Bubbling Fluidized-Bed Process

    International Nuclear Information System (INIS)

    Youn, Pil-Sang; Choi, Jeong-Hoo

    2014-01-01

    Flow characteristics and the operating range of gas velocity were investigated for a two-stage bubbling fluidized-bed (0.1 m i.d., 1.2 m high) that had continuous solids feed and discharge. Solids were fed into the upper fluidized-bed and overflowed into the bed section of the lower fluidized-bed through a standpipe (0.025 m i.d.). The standpipe was simply a dense solids bed with no mechanical or non-mechanical valves. The solids overflowed the lower bed for discharge. The fluidizing gas was fed to the lower fluidized-bed and the exit gas was also used to fluidize the upper bed. Air was used as the fluidizing gas, and a mixture of coarse (<1000 μm in diameter and 3090 kg/m³ in apparent density) and fine (<100 μm in diameter and 4400 kg/m³ in apparent density) particles was used as the bed material. The proportion of fine particles was employed as the experimental variable. The collapse velocity was defined as the gas velocity of the lower fluidized-bed at which the standpipe was emptied by upflow gas bypassing from the lower fluidized-bed; it can be used as the maximum operating velocity of the present process. The collapse velocity decreased after an initial increase as the proportion of fine particles increased, with the maximum at a fine-particle proportion of 30%. The trend of the collapse velocity was similar to that of the standpipe pressure drop. The collapse velocity was expressed as a function of the bulk density of the particles and the voidage of the static bed: it increased with an increase of bulk density but decreased with an increase of the voidage of the static bed.

  2. DataStager: scalable data staging services for petascale applications

    International Nuclear Information System (INIS)

    Wolf, Matthew D.; Eisenhauer, Greg S.; Klasky, Scott A.

    2009-01-01

    Known challenges for petascale machines are that (1) the costs of I/O for high performance applications can be substantial, especially for output tasks like checkpointing, and (2) noise from I/O actions can inject undesirable delays into the runtimes of such codes on individual compute nodes. This paper introduces the flexible 'DataStager' framework for data staging, and alternative services within it, that jointly address (1) and (2). Data staging services that move output data from compute nodes to staging or I/O nodes prior to storage are used to reduce the I/O overheads on applications' total processing times, and explicit management of data staging offers reduced perturbation when extracting output data from a petascale machine's compute partition. Experimental evaluations of DataStager on the Cray XT machine at Oak Ridge National Laboratory establish both the necessity of intelligent data staging and the high performance of our approach, using the GTC fusion modeling code and benchmarks running on 1000+ processors.
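
    The core idea of asynchronous staging (compute tasks hand output buffers to a staging service instead of writing to storage directly) can be sketched with a bounded queue and a drain thread; this is a toy single-process model, not the DataStager API:

```python
import queue
import threading

def compute_node(out_q, n_chunks):
    """Compute task: hand output buffers to the stager instead of writing
    to storage directly, so I/O noise does not perturb the compute loop."""
    for i in range(n_chunks):
        out_q.put(f"chunk-{i}")
    out_q.put(None)  # end-of-stream marker

def staging_node(in_q, sink):
    """Staging task: drain buffers asynchronously and persist them."""
    while True:
        item = in_q.get()
        if item is None:
            break
        sink.append(item)  # stands in for a write to the I/O nodes

q = queue.Queue(maxsize=4)  # bounded queue: backpressure on the producer
stored = []
t = threading.Thread(target=staging_node, args=(q, stored))
t.start()
compute_node(q, 8)
t.join()
print(len(stored))  # 8
```

    The bounded queue is the design point: it limits how much memory staging may consume on the compute side while still decoupling compute from storage latency.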

  3. Effects of earthworm casts and zeolite on the two-stage composting of green waste

    International Nuclear Information System (INIS)

    Zhang, Lu; Sun, Xiangyang

    2015-01-01

    Highlights: • Earthworm casts (EWCs) and clinoptilolite (CL) were used in green waste composting. • Addition of EWCs + CL improved physico-chemical and microbiological properties. • Addition of EWCs + CL extended the duration of thermophilic periods during composting. • Addition of EWCs + CL enhanced humification, cellulose degradation, and nutrients. • Combined addition of 0.30% EWCs + 25% CL reduced composting time to 21 days. - Abstract: Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90–270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL

  4. Effects of earthworm casts and zeolite on the two-stage composting of green waste

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lu, E-mail: zhanglu1211@gmail.com; Sun, Xiangyang, E-mail: xysunbjfu@gmail.com

    2015-05-15

    Highlights: • Earthworm casts (EWCs) and clinoptilolite (CL) were used in green waste composting. • Addition of EWCs + CL improved physico-chemical and microbiological properties. • Addition of EWCs + CL extended the duration of thermophilic periods during composting. • Addition of EWCs + CL enhanced humification, cellulose degradation, and nutrients. • Combined addition of 0.30% EWCs + 25% CL reduced composting time to 21 days. - Abstract: Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90–270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL.

  5. Staging of gastric adenocarcinoma using two-phase spiral CT: correlation with pathologic staging

    International Nuclear Information System (INIS)

    Seo, Tae Seok; Lee, Dong Ho; Ko, Young Tae; Lim, Joo Won

    1998-01-01

To correlate the preoperative staging of gastric adenocarcinoma using two-phase spiral CT with pathologic staging. One hundred and eighty patients with gastric cancers confirmed during surgery underwent two-phase spiral CT and were evaluated retrospectively. CT scans were obtained in the prone position after ingestion of water. Scans were performed 35 and 80 seconds after the start of infusion of 120 mL of non-ionic contrast material at a speed of 3 mL/sec. Five mm collimation, 7 mm/sec table feed, and a 5 mm reconstruction interval were used. T- and N-stage were determined using spiral CT images, without knowledge of the pathologic results. Pathologic staging was later compared with CT staging. Pathologic T-stage was T1 in 70 cases (38.9%), T2 in 33 (18.3%), T3 in 73 (40.6%), and T4 in 4 (2.2%). Type-I or IIa elevated lesions accounted for 10 of 70 T1 cases (14.3%) and flat or depressed lesions (type IIb, IIc, or III) for 60 (85.7%). Pathologic N-stage was N0 in 85 cases (47.2%), N1 in 42 (23.3%), N2 in 31 (17.2%), and N3 in 22 (12.2%). The detection rate of early gastric cancer using two-phase spiral CT was 100.0% (10 of 10 cases) among elevated lesions and 78.3% (47 of 60 cases) among flat or depressed lesions. With regard to T-stage, there was good correlation between CT image and pathology in 86 of 180 cases (47.8%). Overstaging occurred in 23.3% (42 of 180 cases) and understaging in 28.9% (52 of 180 cases). With regard to N-stage, good correlation between CT image and pathology was noted in 94 of 180 cases (52.2%). The rate of understaging (31.7%, 57 of 180 cases) was higher than that of overstaging (16.1%, 29 of 180 cases) (p<0.001). The overall detection rate of early gastric cancer using two-phase spiral CT was 81.4%, and there was no significant difference in detectability between elevated and depressed lesions. Two-phase spiral CT for determining the T- and N-stage of gastric cancer was not effective; it was accurate in about 50% of cases, and understaging tended to occur.
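The agreement figures in this record reduce to simple proportions over the 180 cases. A minimal sketch, using the T-stage counts reported above (86 concordant, 42 overstaged, 52 understaged); the helper function itself is hypothetical, not from the paper:

```python
# Concordance/overstaging/understaging rates from raw counts
# (counts taken from the record: 86, 42, 52 out of 180 T-stage cases).
def staging_rates(concordant, overstaged, understaged):
    total = concordant + overstaged + understaged
    pct = lambda n: round(100.0 * n / total, 1)
    return pct(concordant), pct(overstaged), pct(understaged)

t_stage = staging_rates(86, 42, 52)
print(t_stage)  # (47.8, 23.3, 28.9), matching the percentages in the abstract
```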

  6. Robust Frequency-Domain Constrained Feedback Design via a Two-Stage Heuristic Approach.

    Science.gov (United States)

    Li, Xianwei; Gao, Huijun

    2015-10-01

    Based on a two-stage heuristic method, this paper is concerned with the design of robust feedback controllers with restricted frequency-domain specifications (RFDSs) for uncertain linear discrete-time systems. Polytopic uncertainties are assumed to enter all the system matrices, while RFDSs are motivated by the fact that practical design specifications are often described in restricted finite frequency ranges. Dilated multipliers are first introduced to relax the generalized Kalman-Yakubovich-Popov lemma for output feedback controller synthesis and robust performance analysis. Then a two-stage approach to output feedback controller synthesis is proposed: at the first stage, a robust full-information (FI) controller is designed, which is used to construct a required output feedback controller at the second stage. To improve the solvability of the synthesis method, heuristic iterative algorithms are further formulated for exploring the feedback gain and optimizing the initial FI controller at the individual stage. The effectiveness of the proposed design method is finally demonstrated by the application to active control of suspension systems.

  7. Design of an EEG-based brain-computer interface (BCI) from standard components running in real-time under Windows.

    Science.gov (United States)

    Guger, C; Schlögl, A; Walterspacher, D; Pfurtscheller, G

    1999-01-01

    An EEG-based brain-computer interface (BCI) is a direct connection between the human brain and the computer. Such a communication system is needed by patients with severe motor impairments (e.g. late stage of Amyotrophic Lateral Sclerosis) and has to operate in real-time. This paper describes the selection of the appropriate components to construct such a BCI and focuses also on the selection of a suitable programming language and operating system. The multichannel system runs under Windows 95, equipped with a real-time Kernel expansion to obtain reasonable real-time operations on a standard PC. Matlab controls the data acquisition and the presentation of the experimental paradigm, while Simulink is used to calculate the recursive least square (RLS) algorithm that describes the current state of the EEG in real-time. First results of the new low-cost BCI show that the accuracy of differentiating imagination of left and right hand movement is around 95%.
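The recursive least square (RLS) algorithm named in this record is a standard adaptive estimator. A textbook sketch follows; the forgetting factor and the toy two-channel signal are invented for illustration and are not taken from the BCI's Simulink implementation:

```python
import math

# One textbook RLS update: weights w, inverse-correlation matrix P,
# input vector x, target d, forgetting factor lam.
def rls_step(w, P, x, d, lam=0.99):
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [v / denom for v in Px]                    # gain vector
    err = d - sum(w[i] * x[i] for i in range(n))   # a priori error
    w = [w[i] + k[i] * err for i in range(n)]
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return w, P

# Identify the noise-free relation d = 2*x0 - x1 from 50 samples.
w, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]
for t in range(50):
    x = [math.sin(0.5 * t), math.cos(0.5 * t)]
    w, P = rls_step(w, P, x, 2.0 * x[0] - x[1])
print([round(v, 3) for v in w])  # close to [2.0, -1.0]
```

The forgetting factor lam < 1 discounts old samples, which is what lets such a filter track the "current state" of a nonstationary signal like EEG.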

  8. Production of poly(hydroxybutyrate-hydroxyvalerate) from waste organics by the two-stage process: focus on the intermediate volatile fatty acids.

    Science.gov (United States)

    Shen, Liang; Hu, Hongyou; Ji, Hongfang; Cai, Jiyuan; He, Ning; Li, Qingbiao; Wang, Yuanpeng

    2014-08-01

The two-stage process, coupling volatile fatty acids (VFAs) fermentation and poly(hydroxybutyrate-hydroxyvalerate) (P(HB/HV)) biosynthesis, was investigated for five waste organic materials. The overall conversion efficiencies were glycerol>starch>molasses>waste sludge>protein; meanwhile, the maximum P(HB/HV) concentration (1.674 g/L) was obtained from waste starch. Altering the waste type affected the VFA composition more than the yield in the first stage, which in turn greatly changed the yield in the second stage. Further study showed that even-carbon-number VFAs (or odd-carbon-number ones) had a good positive linear relationship with the HB (or HV) content of P(HB/HV). Additionally, the VFA-producing microbiota of the five wastes was analyzed by pyrosequencing, which indicated that specific species (e.g., Lactobacillus for protein; Ethanoligenens for starch; Ruminococcus and Limnobacter for glycerol) were dominant in the community for VFA production. Potential competition among the acidogenic bacteria involved in producing particular VFAs was proposed as well. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Frequency analysis of a two-stage planetary gearbox using two different methodologies

    Science.gov (United States)

    Feki, Nabih; Karray, Maha; Khabou, Mohamed Tawfik; Chaari, Fakher; Haddar, Mohamed

    2017-12-01

    This paper is focused on the characterization of the frequency content of vibration signals issued from a two-stage planetary gearbox. To achieve this goal, two different methodologies are adopted: the lumped-parameter modeling approach and the phenomenological modeling approach. The two methodologies aim to describe the complex vibrations generated by a two-stage planetary gearbox. The phenomenological model describes directly the vibrations as measured by a sensor fixed outside the fixed ring gear with respect to an inertial reference frame, while results from a lumped-parameter model are referenced with respect to a rotating frame and then transferred into an inertial reference frame. Two different case studies of the two-stage planetary gear are adopted to describe the vibration and the corresponding spectra using both models. Each case presents a specific geometry and a specific spectral structure.

  10. Simulative design and process optimization of the two-stage stretch-blow molding process

    Energy Technology Data Exchange (ETDEWEB)

    Hopmann, Ch.; Rasche, S.; Windeck, C. [Institute of Plastics Processing at RWTH Aachen University (IKV) Pontstraße 49, 52062 Aachen (Germany)

    2015-05-22

    The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.

  11. Simulative design and process optimization of the two-stage stretch-blow molding process

    International Nuclear Information System (INIS)

    Hopmann, Ch.; Rasche, S.; Windeck, C.

    2015-01-01

    The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress

  12. Multiple heavy metals extraction and recovery from hazardous electroplating sludge waste via ultrasonically enhanced two-stage acid leaching.

    Science.gov (United States)

    Li, Chuncheng; Xie, Fengchun; Ma, Yang; Cai, Tingting; Li, Haiying; Huang, Zhiyuan; Yuan, Gaoqing

    2010-06-15

An ultrasonically enhanced two-stage acid leaching process for extracting and recovering multiple heavy metals from actual electroplating sludge was studied in lab tests. It provided an effective technique for separating valuable metals (Cu, Ni and Zn) from less valuable metals (Fe and Cr) in electroplating sludge. The efficiency of the process was measured by the leaching efficiencies and recovery rates of the metals. Enhanced by ultrasonic power, the first-stage acid leaching demonstrated leaching rates of 96.72%, 97.77%, 98.00%, 53.03%, and 0.44% for Cu, Ni, Zn, Cr, and Fe respectively, effectively separating half of the Cr and almost all of the Fe from the mixed metals. The subsequent second-stage leaching achieved leaching rates of 75.03%, 81.05%, 81.39%, 1.02%, and 0% for Cu, Ni, Zn, Cr, and Fe, which further separated Cu, Ni, and Zn from the mixed metals. With the stabilized two-stage ultrasonically enhanced leaching, the resulting overall recovery rates of Cu, Ni, Zn, Cr and Fe from electroplating sludge reached 97.42%, 98.46%, 98.63%, 98.32% and 100% respectively, with Cr and Fe in solids and the rest of the metals in an aqueous solution discharged from the leaching system. The process performance parameters studied were pH, ultrasonic power, and contact time. The results were also confirmed in an industrial pilot-scale test, and the same high metal recoveries were achieved. Copyright 2010 Elsevier B.V. All rights reserved.
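For two leaching stages in series, a textbook combination rule says stage 2 can only act on what stage 1 left behind. The sketch below applies that rule to the Cu rates quoted above; it is a simple series model for intuition, not the paper's own calculation (the reported overall recoveries also reflect recirculation in the stabilized process):

```python
# Series model: stage 1 dissolves fraction r1; stage 2 dissolves
# fraction r2 of the residue, so combined extraction = r1 + (1-r1)*r2.
def two_stage_extraction(r1, r2):
    return r1 + (1.0 - r1) * r2

cu = two_stage_extraction(0.9672, 0.7503)  # Cu rates from the record
print(round(cu, 4))  # 0.9918: over 99% of the Cu dissolved in one pass
```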

  13. The design and performance of the ATLAS Inner Detector trigger for Run 2

    CERN Document Server

    Penc, Ondrej; The ATLAS collaboration

    2016-01-01

The design and performance of the ATLAS Inner Detector (ID) trigger algorithms running online on the high level trigger (HLT) processor farm with the early LHC Run 2 data are discussed. The redesign of the ID trigger, which took place during the 2013-15 long shutdown in order to satisfy the demands of the higher-energy LHC Run 2 operation, is described. The ID trigger HLT algorithms are essential for nearly all trigger signatures within the ATLAS trigger. The performance of the tracking algorithms with the early Run 2 data for the different trigger signatures is presented, including the detailed timing performance for the algorithms running on the redesigned single-stage ATLAS HLT farm. Comparisons with the Run 1 strategy are made and demonstrate the superior performance of the strategy adopted for Run 2.

  14. Experimental studies of two-stage centrifugal dust concentrator

    Science.gov (United States)

    Vechkanova, M. V.; Fadin, Yu M.; Ovsyannikov, Yu G.

    2018-03-01

The article presents experimental results for a two-stage centrifugal dust concentrator, describes its design, and outlines the development of an engineering calculation method and the laboratory investigations. Quartz dust, ceramic dust, and slag were used in the experiments. A dispersion analysis of the dust particles was obtained by the sedimentation method. To build a mathematical model of the dust-collection process, a central composite rotatable design for a four-factor experiment was used. The sequence of experiments was conducted in accordance with a table of random numbers. Conclusions were drawn.

  15. ANALYSIS OF POSSIBILITY TO AVOID A RUNNING-DOWN ACCIDENT BY TIMELY BRAKING

    Directory of Open Access Journals (Sweden)

    Sarayev, A.

    2013-06-01

    Full Text Available The circumstances under which the driver can stop the vehicle by timely braking before reaching a pedestrian crossing, or can decrease its speed to a safe limit, so as to avoid a running-down accident are considered.

  16. Two-stage discrete-continuous multi-objective load optimization: An industrial consumer utility approach to demand response

    International Nuclear Information System (INIS)

    Abdulaal, Ahmed; Moghaddass, Ramin; Asfour, Shihab

    2017-01-01

    Highlights: •Two-stage model links discrete-optimization to real-time system dynamics operation. •The solutions obtained are non-dominated Pareto optimal solutions. •Computationally efficient GA solver through customized chromosome coding. •Modest to considerable savings are achieved depending on the consumer’s preference. -- Abstract: In the wake of today’s highly dynamic and competitive energy markets, optimal dispatching of energy sources requires effective demand responsiveness. Suppliers have adopted a dynamic pricing strategy in efforts to control the downstream demand. This method however requires consumer awareness, flexibility, and timely responsiveness. While residential activities are more flexible and schedulable, larger commercial consumers remain an obstacle due to the impacts on industrial performance. This paper combines methods from quadratic, stochastic, and evolutionary programming with multi-objective optimization and continuous simulation, to propose a two-stage discrete-continuous multi-objective load optimization (DiCoMoLoOp) autonomous approach for industrial consumer demand response (DR). Stage 1 defines discrete-event load shifting targets. Accordingly, controllable loads are continuously optimized in stage 2 while considering the consumer’s utility. Utility functions, which measure the loads’ time value to the consumer, are derived and weights are assigned through an analytical hierarchy process (AHP). The method is demonstrated for an industrial building model using real data. The proposed method integrates with building energy management system and solves in real-time with autonomous and instantaneous load shifting in the hour-ahead energy price (HAP) market. The simulation shows the occasional existence of multiple load management options on the Pareto frontier. Finally, the computed savings, based on the simulation analysis with real consumption, climate, and price data, ranged from modest to considerable amounts
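The record assigns weights to the consumers' utility functions through an analytical hierarchy process (AHP), which derives weights from a pairwise-comparison matrix. A minimal sketch using the geometric-mean approximation to the principal-eigenvector method; the 3x3 matrix is invented for illustration and is not from the paper:

```python
import math

# AHP weights via the row geometric-mean approximation:
# weight_i = geomean(row_i) / sum of all row geomeans.
def ahp_weights(matrix):
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# "Load A is 3x as valuable as B and 5x as valuable as C", etc.;
# entries below the diagonal are the reciprocals.
pairwise = [[1, 3, 5],
            [1 / 3, 1, 3],
            [1 / 5, 1 / 3, 1]]
weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])
```

The resulting weights sum to one and preserve the stated preference order, which is what the two-stage optimizer needs to trade the loads off against each other.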

  17. An unit cost adjusting heuristic algorithm for the integrated planning and scheduling of a two-stage supply chain

    Directory of Open Access Journals (Sweden)

    Jianhua Wang

    2014-10-01

    Full Text Available Purpose: The stable one-supplier-one-customer relationship is gradually being replaced by a dynamic multi-supplier-multi-customer relationship in the current market, and efficient scheduling techniques are important tools in establishing such dynamic supply chain relationships. This paper studies the optimization of the integrated planning and scheduling problem of a two-stage supply chain with multiple manufacturers and multiple retailers, whose manufacturers have different production capacities, holding and producing cost rates, and transportation costs to retailers, with the aim of minimizing the supply chain operating cost. Design/methodology/approach: Treating it as a complex task allocation and scheduling problem, this paper sets up an INLP model and designs a Unit Cost Adjusting (UCA) heuristic algorithm that adjusts the suppliers' supplying quantities step by step according to their unit costs. Findings: A contrasting analysis between the UCA heuristic and the Lingo solver over many numerical experiments shows that the INLP model and the UCA algorithm can obtain a near-optimal solution of the two-stage supply chain's planning and scheduling problem within very short CPU time. Research limitations/implications: The proposed UCA heuristic can easily help managers optimize two-stage supply chain scheduling problems that do not include the delivery time and batching of orders. Since two-stage supply chains are the most common form of actual commercial relationships, modifying and studying the UCA heuristic should make it possible to optimize the integrated planning and scheduling problems of supply chains with more realistic constraints. Originality/value: This research proposes an innovative UCA heuristic for optimizing the integrated planning and scheduling problem of two-stage supply chains with the constraints of suppliers' production capacity and the orders' delivering time, and has a great
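The unit-cost-adjusting idea can be pictured with a one-pass greedy baseline: order suppliers by unit cost and fill the order book cheapest-first, subject to capacity. The paper's UCA heuristic iteratively re-adjusts quantities; this simplified single pass and all data below are illustrative only:

```python
# Greedy cheapest-first allocation of a demand across capacitated
# suppliers; suppliers are (name, unit_cost, capacity) tuples, where
# unit_cost would bundle producing, holding, and transport cost rates.
def greedy_allocate(demand, suppliers):
    plan, remaining = {}, demand
    for name, cost, cap in sorted(suppliers, key=lambda s: s[1]):
        if remaining <= 0:
            break
        q = min(cap, remaining)  # take as much as the cheap supplier can make
        plan[name] = q
        remaining -= q
    return plan, remaining

plan, short = greedy_allocate(100, [("M1", 4.0, 60), ("M2", 3.5, 30), ("M3", 5.0, 50)])
print(plan, short)  # {'M2': 30, 'M1': 60, 'M3': 10} 0
```

A real two-stage instance would then re-adjust this plan against holding costs and retailer schedules, which is where the step-by-step UCA adjustment comes in.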

  18. LHCb-The LHCb trigger in Run II

    CERN Multimedia

    Michielin, Emanuele

    2016-01-01

    The LHCb trigger system has been upgraded to exploit the real-time alignment, calibration and analysis capabilities of LHCb in Run-II. An increase in the CPU and disk capacity of the event filter farm, combined with improvements to the reconstruction software, mean that efficient, exclusive selections can be made in the first stage of the High Level Trigger (HLT1). The output of HLT1 is buffered to the 5 PB of disk on the event filter farm, while the detector is aligned and calibrated in real time. The second stage, HLT2, performs complete, offline quality, event reconstruction. Physics analyses can be performed directly on this information, and for the majority of charm physics selections, a reduced event format can be written out, which permits higher event rates.

  19. Constraint-Led Changes in Internal Variability in Running

    OpenAIRE

    Haudum, Anita; Birklbauer, Jürgen; Kröll, Josef; Müller, Erich

    2012-01-01

    We investigated the effect of a one-time application of elastic constraints on movement-inherent variability during treadmill running. Eleven males ran two 35-min intervals while surface EMG was measured. In one of two 35-min intervals, after 10 min of running without tubes, elastic tubes (between hip and heels) were attached, followed by another 5 min of running without tubes. To assess variability, stride-to-stride iEMG variability was calculated. Significant increases in variability (36 % ...

  20. Biomechanical characteristics of skeletal muscles and associations between running speed and contraction time in 8- to 13-year-old children.

    Science.gov (United States)

    Završnik, Jernej; Pišot, Rado; Šimunič, Boštjan; Kokol, Peter; Blažun Vošner, Helena

    2017-02-01

    Objective To investigate associations between running speeds and contraction times in 8- to 13-year-old children. Method This longitudinal study analyzed tensiomyographic measurements of vastus lateralis and biceps femoris muscles' contraction times and maximum running speeds in 107 children (53 boys, 54 girls). Data were evaluated using multiple correspondence analysis. Results A gender difference existed between the vastus lateralis contraction times and running speeds. The running speed was less dependent on vastus lateralis contraction times in boys than in girls. Analysis of biceps femoris contraction times and running speeds revealed that running speeds of boys were much more structurally associated with contraction times than those of girls, for whom the association seemed chaotic. Conclusion Joint category plots showed that contraction times of biceps femoris were associated much more closely with running speed than those of the vastus lateralis muscle. These results provide insight into a new dimension of children's development.

  1. Preliminary technical data summary for the Defense Waste Processing Facility, Stage 1

    International Nuclear Information System (INIS)

    1980-09-01

    This Preliminary Technical Data Summary presents the technical basis for design of Stage 1 of the Staged Defense Waste Processing Facility (DWPF), a process to efficiently immobilize the radionuclides in Savannah River Plant (SRP) high-level liquid waste. The radionuclides in SRP waste are present in sludge that has settled to the bottom of waste storage tanks and in crystallized salt and salt solution (supernate). Stage 1 of the DWPF receives washed, aluminum-dissolved sludge from the waste tank farms and immobilizes it in a borosilicate glass matrix. The supernate is retained in the waste tank farms until completion of Stage 2 of the DWPF, at which time it is filtered and decontaminated by ion exchange in the Stage 2 facility. The decontaminated supernate is concentrated by evaporation and mixed with cement for burial. The radioactivity removed from the supernate is fixed in borosilicate glass along with the sludge. This document gives flowsheets, material and curie balances, material and curie balance bases, and other technical data for design of the Stage 1 DWPF.

  2. Adjuvant therapy in stage I and stage II epithelial ovarian cancer. Results of two prospective randomized trials

    International Nuclear Information System (INIS)

    Young, R.C.; Walton, L.A.; Ellenberg, S.S.; Homesley, H.D.; Wilbanks, G.D.; Decker, D.G.; Miller, A.; Park, R.; Major, F. Jr.

    1990-01-01

    About a third of patients with ovarian cancer present with localized disease; despite surgical resection, up to half the tumors recur. Since it has not been established whether adjuvant treatment can benefit such patients, we conducted two prospective, randomized national cooperative trials of adjuvant therapy in patients with localized ovarian carcinoma. All patients underwent surgical resection plus comprehensive staging and, 18 months later, surgical re-exploration. In the first trial, 81 patients with well-differentiated or moderately well differentiated cancers confined to the ovaries (Stages Iai and Ibi) were assigned to receive either no chemotherapy or melphalan (0.2 mg per kilogram of body weight per day for five days, repeated every four to six weeks for up to 12 cycles). After a median follow-up of more than six years, there were no significant differences between the patients given no chemotherapy and those treated with melphalan with respect to either five-year disease-free survival or overall survival. In the second trial, 141 patients with poorly differentiated Stage I tumors or with cancer outside the ovaries but limited to the pelvis (Stage II) were randomly assigned to treatment with either melphalan (in the same regimen as above) or a single intraperitoneal dose of 32P (15 mCi) at the time of surgery. In this trial (median follow-up, greater than 6 years) the outcomes for the two treatment groups were similar with respect to five-year disease-free survival (80 percent in both groups) and overall survival (81 percent with melphalan vs. 78 percent with 32P; P = 0.48). We conclude that in patients with localized ovarian cancer, comprehensive staging at the time of surgical resection can serve to identify those patients (as defined by the first trial) who can be followed without adjuvant chemotherapy

  3. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    Directory of Open Access Journals (Sweden)

    Yanju Chen

    2015-01-01

    Full Text Available This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of fuzzy return, the optimal value expression of the second-stage programming problem is derived. As a result, the proposed two-stage model is equivalent to a single-stage model, and the analytical optimal solution of the two-stage model is obtained, which helps us to discuss the properties of the optimal solution. Finally, some numerical experiments are performed to demonstrate the new modeling idea and the effectiveness. The computational results provided by the proposed model show that the more risk-averse investor will invest more wealth in the risk-free security. They also show that the optimal invested amount in risky security increases as the risk-free return decreases and the optimal utility increases as the risk-free return increases, whereas the optimal utility increases as the transaction costs decrease. In most instances the utilities provided by the proposed two-stage model are larger than those provided by the single-stage model.

  4. Wide-bandwidth bilateral control using two-stage actuator system

    International Nuclear Information System (INIS)

    Kokuryu, Saori; Izutsu, Masaki; Kamamichi, Norihiro; Ishikawa, Jun

    2015-01-01

    This paper proposes a two-stage actuator system that consists of a coarse actuator driven by a ball screw with an AC motor (the first stage) and a fine actuator driven by a voice coil motor (the second stage). The proposed two-stage actuator system is applied to make a wide-bandwidth bilateral control system without needing expensive high-performance actuators. In the proposed system, the first stage has a wide moving range with a narrow control bandwidth, and the second stage has a narrow moving range with a wide control bandwidth. By consolidating these two inexpensive actuators with different control bandwidths in a complementary manner, a wide-bandwidth bilateral control system can be constructed based on mechanical impedance control. To show the validity of the proposed method, a prototype of the two-stage actuator system was developed and its basic performance was evaluated by experiment. The experimental results showed that a light mechanical impedance with a mass of 10 g and a damping coefficient of 2.5 N/(m/s), which is an important factor in establishing good transparency in bilateral control, was successfully achieved, and also showed that better force and position responses between master and slave are achieved by using the proposed two-stage actuator system compared with a narrow-bandwidth case using a single ball screw system. (author)
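The complementary coarse/fine idea can be sketched as a command split: a low-pass filter routes slow, large motions to the ball-screw stage and the high-frequency residual to the voice-coil stage, so the two commands always sum to the original. The first-order filter and its constant are illustrative assumptions, not the paper's controller:

```python
# Split a sampled position command into complementary coarse and fine
# parts: coarse = low-pass(cmd), fine = cmd - coarse.
def split_command(cmd, alpha=0.1):
    coarse, fine, state = [], [], cmd[0]
    for c in cmd:
        state += alpha * (c - state)   # first-order low-pass: slow coarse stage
        coarse.append(state)
        fine.append(c - state)         # residual for the wide-bandwidth stage
    return coarse, fine

cmd = [0.0, 1.0, 1.0, 1.0, 0.5]
coarse, fine = split_command(cmd)
# The split is exactly complementary by construction:
assert all(abs(c + f - x) < 1e-12 for c, f, x in zip(coarse, fine, cmd))
```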

  5. An efficient and accurate two-stage fourth-order gas-kinetic scheme for the Euler and Navier-Stokes equations

    Science.gov (United States)

    Pan, Liang; Xu, Kun; Li, Qibing; Li, Jiequan

    2016-12-01

    provides a dynamic process of evolution from the kinetic scale particle free transport to the hydrodynamic scale wave propagation, which provides the physics for the non-equilibrium numerical shock structure construction to the near equilibrium NS solution. As a result, with the implementation of the fifth-order WENO initial reconstruction, in the smooth region the current two-stage GKS provides an accuracy of O ((Δx) 5 ,(Δt) 4) for the Euler equations, and O ((Δx) 5 ,τ2 Δt) for the NS equations, where τ is the time between particle collisions. Many numerical tests, including difficult ones for the Navier-Stokes solvers, have been used to validate the current method. Perfect numerical solutions can be obtained from the high Reynolds number boundary layer to the hypersonic viscous heat conducting flow. Following the two-stage time-stepping framework, the third-order GKS flux function can be used as well to construct a fifth-order method with the usage of both first-order and second-order time derivatives of the flux function. The use of time-accurate flux function may have great advantages on the development of higher-order CFD methods.
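The two-stage fourth-order temporal framework referred to above advances the solution using the flux and its time derivative at t_n and at a half step. The update below is a commonly quoted form of such two-stage fourth-order schemes, applied here to the scalar model problem u' = u (where dF/dt = F'(u)F(u) = u) purely as a stand-in for the gas-kinetic flux; matching the paper's exact discretization is an assumption:

```python
import math

# Two-stage fourth-order step for u' = F(u), given F and dF = dF/dt:
#   u*      = u + (dt/2) F(u) + (dt^2/8) dF(u)
#   u_{n+1} = u + dt F(u) + (dt^2/6) (dF(u) + 2 dF(u*))
def two_stage_step(u, dt, F, dF):
    un_t, un_tt = F(u), dF(u)
    u_star = u + 0.5 * dt * un_t + dt ** 2 / 8.0 * un_tt
    return u + dt * un_t + dt ** 2 / 6.0 * (un_tt + 2.0 * dF(u_star))

u, dt = 1.0, 0.1
u = two_stage_step(u, dt, F=lambda v: v, dF=lambda v: v)
print(abs(u - math.exp(dt)))  # one-step error is O(dt^5), here ~1e-7
```

Expanding the update for this model reproduces the exponential series through the dt^4 term, which is the sense in which two flux evaluations buy fourth-order accuracy in time.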

  6. Spread and Control of Mobile Benign Worm Based on Two-Stage Repairing Mechanism

    Directory of Open Access Journals (Sweden)

    Meng Wang

    2014-01-01

    Full Text Available In both traditional social networks and mobile network environments, worms are a serious threat, and this threat is growing all the time. Mobile smartphones generally promote the development of mobile networks, while traditional antivirus technologies have become powerless when facing mobile networks. The development of benign worms, especially active benign worms and passive benign worms, has become a new network security measure. In this paper, we focus on the spread of worms in the mobile environment and propose a benign-worm control and repair mechanism. The control process of mobile benign worms is divided into two stages: the first stage is rapid repair control, which uses an active benign worm to deal with the malicious worm in the mobile network; when the network is relatively stable, the process enters the second stage of post-repair and uses the passive mode to optimize the environment for the purpose of controlling the mobile network. Considering whether benign worms are present, we simplified the model and analyzed four situations. Finally, we use simulation to verify the model. This control mechanism for benign worm propagation is of guiding significance for controlling network security.
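The two-stage switch can be pictured with a toy recurrence: an aggressive active-repair rate while infections are high, then a gentler passive rate once the network has stabilized. Every rate and the threshold below are invented for illustration; the paper's model is an epidemic model, not this simple sketch:

```python
# Toy two-stage repair dynamics: infections grow by `spread` and shrink
# by the current `repair` rate each step; the repair rate switches from
# active (stage 1) to passive (stage 2) once infections fall below a
# threshold. All parameters are hypothetical.
def simulate(steps, infected=1000.0, spread=0.30, active=0.60,
             passive=0.32, threshold=50.0):
    path = [infected]
    for _ in range(steps):
        repair = active if infected > threshold else passive
        infected *= max(0.0, 1.0 + spread - repair)
        path.append(infected)
    return path

path = simulate(30)
print(round(path[-1], 1))  # infections are driven below the threshold and held down
```

With these numbers stage 1 shrinks infections by 30% per step and stage 2 by 2%, so the count decays fast, then settles, which mirrors the rapid-repair/post-repair division described above.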

  7. Real-time configuration changes of the ATLAS High Level Trigger

    CERN Document Server

    Winklmeier, F

    2010-01-01

    The ATLAS High Level Trigger (HLT) is a distributed real-time software system that performs the final online selection of events produced during proton-proton collisions at the Large Hadron Collider (LHC). It is designed as a two-stage trigger and event filter running on a farm of commodity PC hardware. Currently the system consists of about 850 processing nodes and will be extended incrementally following the expected increase in luminosity of the LHC to about 2000 nodes. The event selection within the HLT applications is carried out by specialized reconstruction algorithms. The selection can be controlled via properties that are stored in a central database and are retrieved at the startup of the HLT processes, which then usually run continuously for many hours. To be able to respond to changes in the LHC beam conditions, it is essential that the algorithms can be re-configured without disrupting data taking while ensuring a consistent and reproducible configuration across the entire HLT farm. The technique...

  8. Instrument Front-Ends at Fermilab During Run II

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Thomas; Slimmer, David; Voy, Duane; /Fermilab

    2011-07-13

    The optimization of an accelerator relies on the ability to monitor the behavior of the beam in an intelligent and timely fashion. The use of processor-driven front-ends allowed for the deployment of smart systems in the field for improved data collection and analysis during Run II. This paper describes the implementation of the two main systems used: National Instruments LabVIEW running on PCs, and Wind River's VxWorks real-time operating system running in a VME crate processor.

  9. Instrument front-ends at Fermilab during Run II

    International Nuclear Information System (INIS)

    Meyer, T; Slimmer, D; Voy, D

    2011-01-01

    The optimization of an accelerator relies on the ability to monitor the behavior of the beam in an intelligent and timely fashion. The use of processor-driven front-ends allowed for the deployment of smart systems in the field for improved data collection and analysis during Run II. This paper describes the implementation of the two main systems used: National Instruments LabVIEW running on PCs, and Wind River's VxWorks real-time operating system running in a VME crate processor.

  10. Instrument Front-Ends at Fermilab During Run II

    International Nuclear Information System (INIS)

    Meyer, Thomas; Slimmer, David; Voy, Duane

    2011-01-01

    The optimization of an accelerator relies on the ability to monitor the behavior of the beam in an intelligent and timely fashion. The use of processor-driven front-ends allowed for the deployment of smart systems in the field for improved data collection and analysis during Run II. This paper describes the implementation of the two main systems used: National Instruments LabVIEW running on PCs, and Wind River's VxWorks real-time operating system running in a VME crate processor.

  11. Two-stage dental implants inserted in a one-stage procedure : a prospective comparative clinical study

    NARCIS (Netherlands)

    Heijdenrijk, Kees

    2002-01-01

    The results of this study indicate that dental implants designed for a submerged implantation procedure can be used in a single-stage procedure and may be as predictable as one-stage implants. Although one-stage implant systems and two-stage ...

  12. How to run ions in the future?

    International Nuclear Information System (INIS)

    Küchler, D; Manglunki, D; Scrivens, R

    2014-01-01

    In the light of different running scenarios, potential source improvements will be discussed (e.g. one month every year versus two months every other year, and the impact of the different running options [e.g. an extended ion run] on the source). As the oven refills cause most of the down time, the oven design and refilling strategies will be presented. A test stand for off-line developments will be taken into account. Also the implications on the necessary manpower for extended runs will be discussed.

  13. A Novel Two-Stage Dynamic Spectrum Sharing Scheme in Cognitive Radio Networks

    Institute of Scientific and Technical Information of China (English)

    Guodong Zhang; Wei Heng; Tian Liang; Chao Meng; Jinming Hu

    2016-01-01

    In order to enhance the efficiency of spectrum utilization and reduce communication overhead in the spectrum sharing process, we propose a two-stage dynamic spectrum sharing scheme in which cooperative and noncooperative modes are analyzed in both stages. In particular, the existence and the uniqueness of Nash Equilibrium (NE) strategies for the noncooperative mode are proved. In addition, a distributed iterative algorithm is proposed to obtain the optimal solutions of the scheme. Simulation studies are carried out to show the performance comparison between the two modes as well as the system revenue improvement of the proposed scheme compared with a conventional scheme without a virtual price control factor.

  14. Dynamics in two-elevator traffic system with real-time information

    Energy Technology Data Exchange (ETDEWEB)

    Nagatani, Takashi, E-mail: wadokeioru@yahoo.co.jp

    2013-12-17

    We study the dynamics of a traffic system with two elevators using an elevator-choice scenario. The two-elevator traffic system with real-time information is similar to the two-route vehicular traffic system. The dynamics of the two-elevator traffic system is described by a two-dimensional nonlinear map. An elevator runs a neck-and-neck race with the other elevator. The motion of the two elevators displays complex behavior, such as quasi-periodic motion. The return map of the two-dimensional map shows a piecewise map.
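    The flavor of such a two-dimensional map can be sketched with a minimal model, assuming passengers always board the elevator whose displayed tour time is shorter, which lengthens that elevator's next tour. The base time, stop penalty, and relaxation factor are invented for demonstration; this is not the paper's exact map.

```python
# Illustrative 2D map for two elevators with real-time information.
# Parameters are assumed, not taken from the paper.
def iterate_map(steps=200):
    t1, t2 = 60.0, 65.0     # current round-trip (tour) times in seconds
    base = 50.0             # empty-car tour time (assumption)
    stop_penalty = 8.0      # extra time per boarding group (assumption)
    relax = 0.5             # fraction of the extra load carried over each tour
    orbit = []
    for _ in range(steps):
        if t1 <= t2:        # real-time info: riders pick the faster elevator
            t1, t2 = base + relax * (t1 - base) + stop_penalty, base + relax * (t2 - base)
        else:
            t1, t2 = base + relax * (t1 - base), base + relax * (t2 - base) + stop_penalty
        orbit.append((t1, t2))
    return orbit
```

    Iterating this map, the two tour times settle into an alternating "neck-and-neck race": whichever elevator is momentarily faster attracts the next group and becomes the slower one on its next tour.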

  15. An Investigation on the Formation of Carbon Nanotubes by Two-Stage Chemical Vapor Deposition

    Directory of Open Access Journals (Sweden)

    M. S. Shamsudin

    2012-01-01

    Full Text Available A high density of carbon nanotubes (CNTs) has been synthesized from an agricultural hydrocarbon, camphor oil, using a one-hour synthesis time and a titanium dioxide sol-gel catalyst. The pyrolysis temperature is studied in the range of 700–900°C at increments of 50°C. The synthesis process is done using a custom-made two-stage catalytic chemical vapor deposition apparatus. The CNT characteristics are investigated by field emission scanning electron microscopy and micro-Raman spectroscopy. The experimental results showed that the structural properties of the CNTs are highly dependent on pyrolysis temperature changes.

  16. Constraint-led changes in internal variability in running.

    Science.gov (United States)

    Haudum, Anita; Birklbauer, Jürgen; Kröll, Josef; Müller, Erich

    2012-01-01

    We investigated the effect of a one-time application of elastic constraints on movement-inherent variability during treadmill running. Eleven males ran two 35-min intervals while surface EMG was measured. In one of the two 35-min intervals, after 10 min of running without tubes, elastic tubes (between hip and heels) were attached, followed by another 5 min of running without tubes. To assess variability, stride-to-stride iEMG variability was calculated. Significant increases in variability (36% to 74%) were observed during tube running, whereas running without tubes after the tube running block showed no significant differences. Results show that elastic tubes affect variability on a muscular level despite the constant environmental conditions, and underline the nervous system's adaptability to cope with somewhat unpredictable constraints, since stride duration was unaltered.
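    The stride-to-stride iEMG variability measure used above can be made concrete: integrate rectified EMG over each stride, then express variability as the coefficient of variation across strides. The synthetic signal, sample counts, and gain spread below are assumptions for illustration.

```python
# Sketch of stride-to-stride iEMG variability (CV of per-stride integrated EMG).
import math
import random

def iemg_variability(emg, stride_len):
    """CV (%) of per-stride integrated, rectified EMG."""
    strides = [emg[i:i + stride_len]
               for i in range(0, len(emg) - stride_len + 1, stride_len)]
    iemgs = [sum(abs(x) for x in s) for s in strides]
    mean = sum(iemgs) / len(iemgs)
    sd = math.sqrt(sum((v - mean) ** 2 for v in iemgs) / (len(iemgs) - 1))
    return 100.0 * sd / mean

random.seed(1)
# 20 synthetic strides of 500 samples; amplitude varies slightly stride to stride
signal = []
for _ in range(20):
    gain = random.gauss(1.0, 0.1)
    signal += [gain * random.gauss(0.0, 1.0) for _ in range(500)]
print(round(iemg_variability(signal, 500), 1))
```

    With a 10% stride-to-stride gain spread, the CV lands near 10%, so the 36–74% increases reported above represent a substantial change on this scale.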

  17. Stages of processing in associative recognition: evidence from behavior, EEG, and classification.

    Science.gov (United States)

    Borst, Jelmer P; Schneider, Darryl W; Walsh, Matthew M; Anderson, John R

    2013-12-01

    In this study, we investigated the stages of information processing in associative recognition. We recorded EEG data while participants performed an associative recognition task that involved manipulations of word length, associative fan, and probe type, which were hypothesized to affect the perceptual encoding, retrieval, and decision stages of the recognition task, respectively. Analyses of the behavioral and EEG data, supplemented with classification of the EEG data using machine-learning techniques, provided evidence that generally supported the sequence of stages assumed by a computational model developed in the Adaptive Control of Thought-Rational cognitive architecture. However, the results suggested a more complex relationship between memory retrieval and decision-making than assumed by the model. Implications of the results for modeling associative recognition are discussed. The study illustrates how a classifier approach, in combination with focused manipulations, can be used to investigate the timing of processing stages.

  18. Optimising the refrigeration cycle with a two-stage centrifugal compressor and a flash intercooler

    Energy Technology Data Exchange (ETDEWEB)

    Roeyttae, Pekka; Turunen-Saaresti, Teemu; Honkatukia, Juha [Lappeenranta University of Technology, Laboratory of Energy and Environmental Technology, PO Box 20, 53851 Lappeenranta (Finland)

    2009-09-15

    The optimisation of a refrigeration process with a two-stage centrifugal compressor and flash intercooler is presented in this paper. The two-stage centrifugal compressor stages are on the same shaft and the electric motor is cooled with the refrigerant. The performance of the centrifugal compressor is evaluated based on semi-empirical specific-speed curves and the effect of the Reynolds number, surface roughness and tip clearance have also been taken into account. The thermodynamic and transport properties of the working fluids are modelled with a real-gas model. The condensing and evaporation temperatures, the temperature after the flash intercooler, and cooling power have been chosen as fixed values in the process. The aim is to gain a maximum coefficient of performance (COP). The method of optimisation, the operation of the compressor and flash intercooler, and the method for estimating the electric motor cooling are also discussed in the article. (author)
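    A classic result underlying two-stage compression with intercooling is worth making explicit: for an ideal gas compressed isentropically in two stages, with intercooling back to the inlet temperature, total work is minimized when the intermediate pressure is the geometric mean of the suction and discharge pressures. The gas properties and pressures below are illustrative assumptions, not the paper's refrigerant data or real-gas model.

```python
# Minimum-work intermediate pressure for ideal two-stage compression
# with perfect intercooling. Gas properties are assumed, illustrative values.
import math

def two_stage_work(p1, p_mid, p2, T1=280.0, k=1.3, R=488.0):
    """Specific isentropic compression work (J/kg) of two stages."""
    c = k / (k - 1) * R * T1
    w1 = c * ((p_mid / p1) ** ((k - 1) / k) - 1)   # stage 1
    w2 = c * ((p2 / p_mid) ** ((k - 1) / k) - 1)   # stage 2, after intercooling
    return w1 + w2

p1, p2 = 1.0e5, 9.0e5
# scan intermediate pressures and locate the minimum-work choice
candidates = [p1 + i * (p2 - p1) / 2000 for i in range(1, 2000)]
best = min(candidates, key=lambda pm: two_stage_work(p1, pm, p2))
print(best, math.sqrt(p1 * p2))  # the minimum sits at the geometric mean
```

    The real cycle optimised in the paper uses a flash intercooler and real-gas properties, so its optimum shifts away from the geometric mean, but this ideal-gas result explains why an intermediate pressure between the extremes pays off.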

  19. Mass-transfer in extraction and reextraction as a single-stage process

    International Nuclear Information System (INIS)

    Rodriguez del Cerro, M.; Trilleros, J.A.; Otero de la Gandara, J.L.

    1987-01-01

    The rate of mass transfer between water and naphthenic acid or tributyl phosphate in kerosene is studied in both directions, to and from water. The two insoluble phases are brought into intimate contact with dispersed-phase droplets in a single-stage process. The evolution of the equilibrium distribution of solute is taken into consideration. (author)

  20. Condensate from a two-stage gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Henriksen, Ulrik Birk; Hindsgaul, Claus

    2000-01-01

    Condensate, produced when gas from a downdraft biomass gasifier is cooled, contains organic compounds that inhibit nitrifiers. Treatment with activated carbon removes most of the organics and makes the condensate far less inhibitory. The condensate from an optimised two-stage gasifier is so clean that the organic compounds and the inhibition effect are very low even before treatment with activated carbon. The moderate inhibition effect relates to a high content of ammonia in the condensate. The nitrifiers become tolerant to the condensate after a few weeks of exposure. The level of organic compounds and the level of inhibition are so low that condensate from the optimised two-stage gasifier can be led to the public sewer.

  1. Target tracking system based on preliminary and precise two-stage compound cameras

    Science.gov (United States)

    Shen, Yiyan; Hu, Ruolan; She, Jun; Luo, Yiming; Zhou, Jie

    2018-02-01

    Early detection of targets and high-precision target tracking are two important performance indicators that need to be balanced in a practical target search and tracking system. This paper proposes a target tracking system that compounds a preliminary and a precise stage. The system uses a large field of view to search for the target; after the target is found and confirmed, it switches to a small field of view for tracking. In this system, an appropriate field-switching strategy is the key to achieving tracking. At the same time, two groups of PID parameters are added to the system to reduce tracking error. This preliminary-precise two-stage compound approach can extend the detection range and improve target tracking accuracy, and the method has practical value.

  2. Evidence of two-stage melting of Wigner solids

    Science.gov (United States)

    Knighton, Talbot; Wu, Zhe; Huang, Jian; Serafin, Alessandro; Xia, J. S.; Pfeiffer, L. N.; West, K. W.

    2018-02-01

    Ultralow carrier concentrations of two-dimensional holes down to p = 1 × 10^9 cm^-2 are realized. Remarkable insulating states are found below a critical density of p_c = 4 × 10^9 cm^-2 or r_s ≈ 40. Sensitive dc V-I measurement as a function of temperature and electric field reveals a two-stage phase transition supporting the melting of a Wigner solid as a two-stage first-order transition.

  3. Real time analysis with the upgraded LHCb trigger in Run III

    Science.gov (United States)

    Szumlak, Tomasz

    2017-10-01

    The current LHCb trigger system consists of a hardware level, which reduces the LHC bunch-crossing rate of 40 MHz to 1.1 MHz, a rate at which the entire detector is read out. A second level, implemented in a farm of around 20k parallel-processing CPUs, reduces the event rate to around 12.5 kHz. The LHCb experiment plans a major upgrade of the detector and DAQ system in the LHC long shutdown II (2018-2019). In this upgrade, a purely software based trigger system is being developed and it will have to process the full 30 MHz of bunch crossings with inelastic collisions. LHCb will also receive a factor of 5 increase in the instantaneous luminosity, which further contributes to the challenge of reconstructing and selecting events in real time with the CPU farm. We discuss the plans and progress towards achieving efficient reconstruction and selection with a 30 MHz throughput. Another challenge is to exploit the increased signal rate that results from removing the 1.1 MHz readout bottleneck, combined with the higher instantaneous luminosity. Many charm hadron signals can be recorded at up to 50 times higher rate. LHCb is implementing a new paradigm in the form of real time data analysis, in which abundant signals are recorded in a reduced event format that can be fed directly to the physics analyses. These data do not need any further offline event reconstruction, which allows a larger fraction of the grid computing resources to be devoted to Monte Carlo productions. We discuss how this real-time analysis model is absolutely critical to the LHCb upgrade, and how it will evolve during Run-II.

  4. Two-stage exchange knee arthroplasty: does resistance of the infecting organism influence the outcome?

    Science.gov (United States)

    Kurd, Mark F; Ghanem, Elie; Steinbrecher, Jill; Parvizi, Javad

    2010-08-01

    Periprosthetic joint infection after TKA is a challenging complication. Two-stage exchange arthroplasty is the accepted standard of care, but reported failure rates are increasing. It has been suggested this is due to the increased prevalence of methicillin-resistant infections. We asked the following questions: (1) What is the reinfection rate after two-stage exchange arthroplasty? (2) Which risk factors predict failure? (3) Which variables are associated with acquiring a resistant organism periprosthetic joint infection? This was a case-control study of 102 patients with infected TKA who underwent a two-stage exchange arthroplasty. Ninety-six patients were followed for a minimum of 2 years (mean, 34.5 months; range, 24-90.1 months). Cases were defined as failures of two-stage exchange arthroplasty. Two-stage exchange arthroplasty was successful in controlling the infection in 70 patients (73%). Patients who failed two-stage exchange arthroplasty were 3.37 times more likely to have been originally infected with a methicillin-resistant organism. Older age, higher body mass index, and history of thyroid disease were predisposing factors to infection with a methicillin-resistant organism. Innovative interventions are needed to improve the effectiveness of two-stage exchange arthroplasty for TKA infection with a methicillin-resistant organism as current treatment protocols may not be adequate for control of these virulent pathogens. Level IV, prognostic study. See Guidelines for Authors for a complete description of levels of evidence.

  5. Product prioritization in a two-stage food production system with intermediate storage

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter

    2007-01-01

    In the food-processing industry, usually a limited number of storage tanks for intermediate storage is available, which are used for different products. The market sometimes requires extremely short lead times for some products, leading to prioritization of these products, partly through the dedication of a storage tank. This type of situation has hardly been investigated, although planners struggle with it in practice. This paper aims at investigating the fundamental effect of prioritization and dedicated storage in a two-stage production system, for various product mixes. We show the performance improvements for the prioritized product, as well as the negative effects for the other products. We also show how the effect decreases with more storage tanks, and increases with more products.

  6. Solving no-wait two-stage flexible flow shop scheduling problem with unrelated parallel machines and rework time by the adjusted discrete Multi Objective Invasive Weed Optimization and fuzzy dominance approach

    Energy Technology Data Exchange (ETDEWEB)

    Jafarzadeh, Hassan; Moradinasab, Nazanin; Gerami, Ali

    2017-07-01

    An adjusted discrete Multi-Objective Invasive Weed Optimization (DMOIWO) algorithm, which uses a fuzzy dominance approach for ordering, has been proposed to solve the no-wait two-stage flexible flow shop scheduling problem. Design/methodology/approach: The no-wait two-stage flexible flow shop scheduling problem, considering sequence-dependent setup times and probable rework in both stations, different ready times for all jobs, rework times for both stations, and unrelated parallel machines, with regard to the simultaneous minimization of maximum job completion time and average latency functions, has been investigated in a multi-objective manner. In this study, the parameter setting has been carried out using the Taguchi method based on a quality indicator for better performance of the algorithm. Findings: The results of this algorithm have been compared with those of conventional multi-objective algorithms to show the better performance of the proposed algorithm. The results clearly indicated the greater performance of the proposed algorithm. Originality/value: This study provides an efficient method for solving the multi-objective no-wait two-stage flexible flow shop scheduling problem considering sequence-dependent setup times, probable rework in both stations, different ready times for all jobs, rework times for both stations, and unrelated parallel machines, which are the real constraints.
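    The core object being optimized above, a no-wait two-stage flow shop schedule, can be illustrated with a minimal evaluator for a fixed job sequence. This sketch assumes a single machine per stage and omits the paper's setups, rework, ready times, and parallel machines.

```python
# Minimal no-wait two-stage flow shop: evaluate a fixed job sequence.
# Single machine per stage; setups/rework/parallel machines omitted.
def no_wait_schedule(seq, p1, p2):
    """Return (stage-1 start times, makespan) for a sequence of job indices."""
    m1_free, m2_free = 0.0, 0.0
    starts = []
    for j in seq:
        # no-wait: stage 2 must begin exactly when stage 1 finishes,
        # so delay the stage-1 start until both machines line up
        s = max(m1_free, m2_free - p1[j])
        starts.append(s)
        m1_free = s + p1[j]
        m2_free = s + p1[j] + p2[j]
    return starts, m2_free

starts, cmax = no_wait_schedule([0, 1], p1=[3, 2], p2=[2, 4])
print(starts, cmax)  # -> [0.0, 3.0] 9.0
```

    Note how job 1's stage-1 start is delayed to time 3 so that its stage-2 operation can begin the instant stage 1 ends; this coupling is what makes no-wait sequencing hard and motivates metaheuristics such as DMOIWO.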

  7. Solving no-wait two-stage flexible flow shop scheduling problem with unrelated parallel machines and rework time by the adjusted discrete Multi Objective Invasive Weed Optimization and fuzzy dominance approach

    International Nuclear Information System (INIS)

    Jafarzadeh, Hassan; Moradinasab, Nazanin; Gerami, Ali

    2017-01-01

    An adjusted discrete Multi-Objective Invasive Weed Optimization (DMOIWO) algorithm, which uses a fuzzy dominance approach for ordering, has been proposed to solve the no-wait two-stage flexible flow shop scheduling problem. Design/methodology/approach: The no-wait two-stage flexible flow shop scheduling problem, considering sequence-dependent setup times and probable rework in both stations, different ready times for all jobs, rework times for both stations, and unrelated parallel machines, with regard to the simultaneous minimization of maximum job completion time and average latency functions, has been investigated in a multi-objective manner. In this study, the parameter setting has been carried out using the Taguchi method based on a quality indicator for better performance of the algorithm. Findings: The results of this algorithm have been compared with those of conventional multi-objective algorithms to show the better performance of the proposed algorithm. The results clearly indicated the greater performance of the proposed algorithm. Originality/value: This study provides an efficient method for solving the multi-objective no-wait two-stage flexible flow shop scheduling problem considering sequence-dependent setup times, probable rework in both stations, different ready times for all jobs, rework times for both stations, and unrelated parallel machines, which are the real constraints.

  8. Individual differences influence two-digit number processing, but not their analog magnitude processing: a large-scale online study.

    Science.gov (United States)

    Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba

    2017-12-23

    Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study of 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing (apart from age, which increases the decade distance effect), they generally influence performance on a two-digit number comparison task.
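    The unit-decade compatibility effect discussed above concerns pairs of two-digit numbers: a pair is compatible when the decade and unit comparisons point the same way, and incompatible pairs typically produce slower responses. A small helper makes the stimulus classification concrete; the requirement of distinct decades and units reflects common designs and is an assumption here.

```python
# Classify two-digit number pairs for the unit-decade compatibility effect.
def compatible(a, b):
    """True if the decade and unit comparisons of a and b agree."""
    da, ua = divmod(a, 10)
    db, ub = divmod(b, 10)
    # classic designs use pairs with distinct decades and distinct units
    assert da != db and ua != ub, "ambiguous pair for this classification"
    return (da < db) == (ua < ub)

print(compatible(42, 57))  # decades 4 < 5 and units 2 < 7 agree -> True
print(compatible(47, 62))  # decades 4 < 6 but units 7 > 2 -> False
```

    In an incompatible pair like 47 vs 62, the irrelevant unit comparison (7 > 2) conflicts with the decisive decade comparison (4 < 6), which is what slows responses.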

  9. Changes in Running Mechanics During a 6-Hour Running Race.

    Science.gov (United States)

    Giovanelli, Nicola; Taboga, Paolo; Lazzer, Stefano

    2017-05-01

    To investigate changes in running mechanics during a 6-h running race. Twelve ultraendurance runners (age 41.9 ± 5.8 y, body mass 68.3 ± 12.6 kg, height 1.72 ± 0.09 m) were asked to run as many 874-m flat loops as possible in 6 h. Running speed, contact time (tc), and aerial time (ta) were measured in the first lap and every 30 ± 2 min during the race. Peak vertical ground-reaction force (Fmax), stride length (SL), vertical downward displacement of the center of mass (Δz), leg-length change (ΔL), vertical stiffness (kvert), and leg stiffness (kleg) were then estimated. Mean distance covered by the athletes during the race was 62.9 ± 7.9 km. Compared with the 1st lap, running speed decreased significantly from 4 h 30 min onward (mean -5.6% ± 0.3%, P < .05). tc increased during running, reaching the maximum difference after 5 h 30 min (+6.1%, P = .015). Conversely, kvert decreased after 4 h, reaching the lowest value after 5 h 30 min (-6.5%, P = .008); ta and Fmax decreased after 4 h 30 min through to the end of the race (mean -29.2% and -5.1%, respectively, P < .05). Most of these changes emerged after about 4 h of running, suggesting a possible time threshold that could affect performance regardless of absolute running speed.
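    The spring-mass quantities estimated above (Fmax, Δz, kvert, kleg) can be computed from contact time, aerial time, and speed using the sine-wave spring-mass method of Morin and colleagues. The body mass below matches the cohort mean reported in the record, but the contact time, aerial time, speed, and leg length are assumed example values, not data from this race.

```python
# Spring-mass estimates from temporal gait variables (sine-wave method).
# Input values are illustrative assumptions, not the race data.
import math

def spring_mass(m, tc, ta, v, leg_len, g=9.81):
    fmax = m * g * (math.pi / 2) * (ta / tc + 1)             # peak vertical GRF (N)
    dz = fmax * tc**2 / (m * math.pi**2) - g * tc**2 / 8     # CoM vertical drop (m)
    kvert = fmax / dz                                        # vertical stiffness (N/m)
    dl = leg_len - math.sqrt(leg_len**2 - (v * tc / 2)**2) + dz  # leg compression (m)
    kleg = fmax / dl                                         # leg stiffness (N/m)
    return fmax, dz, kvert, kleg

fmax, dz, kvert, kleg = spring_mass(m=68.3, tc=0.25, ta=0.10, v=2.9, leg_len=0.92)
print(round(fmax), round(dz, 3), round(kvert), round(kleg))
```

    With these inputs, the model returns a peak force near 2.2 body weights and stiffness values in the usual distance-running range, which shows why a longer tc with a shorter ta (as observed late in the race) mechanically implies lower Fmax and kvert.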

  10. W-026 integrated engineering cold run operational test report for balance of plant (BOP)

    International Nuclear Information System (INIS)

    Kersten, J.K.

    1998-01-01

    This Cold Run test is designed to demonstrate the functionality of systems necessary to move waste drums throughout the plant using approved procedures, and the compatibility of these systems to function as an integrated process. This test excludes all internal functions of the gloveboxes. In the interest of efficiency and support of the facility schedule, the initial revision of the test (rev 0) was limited to the following: Receipt and storage of eight overpacked drums, four LLW and four TRU; Receipt, routing, and staging of eleven empty drums to the process area where they will be used later in this test; Receipt, processing, and shipping of two verification drums (Route 9); Receipt, processing, and shipping of two verification drums (Route 1). The above-listed operations were tested using the rev 0 test document, through Section 5.4.25. The document was later revised to include movement of all staged drums to and from the LLW and TRU process and RWM gloveboxes. This testing was performed using Sections 5.5 through 5.11 of the rev 1 test document. The primary focus of this test is to prove the functionality of automatic operations for all mechanical and control processes listed. When necessary, the test demonstrates manual mode operations as well. Though the gloveboxes are listed, only waste and empty drum movement to, from, and between the gloveboxes was tested

  11. One-stage and two-stage penile buccal mucosa urethroplasty

    African Journals Online (AJOL)

    G. Barbagli

    2015-12-02

    There also seems to be a trend of decreasing urethritis and an increase of instrumentation- and catheter-related strictures in these countries as well [4–6]. The repair of penile urethral strictures may require one- or two-stage urethroplasty [7–10]. Certainly, sexual function can be placed at risk by any surgery ...

  12. Two-staged management for all types of congenital pouch colon

    Directory of Open Access Journals (Sweden)

    Rajendra K Ghritlaharey

    2013-01-01

    Full Text Available Background: The aim of this study was to review our experience with two-staged management for all types of congenital pouch colon (CPC). Patients and Methods: This retrospective study included CPC cases that were managed with two-staged procedures in the Department of Paediatric Surgery over a period of 12 years, from 1 January 2000 to 31 December 2011. Results: CPC comprised 13.71% (97 of 707) of all anorectal malformations (ARM) and 28.19% (97 of 344) of high ARM. Eleven CPC cases (all males) were managed with two-staged procedures. The distribution of cases (Narsimha Rao et al.'s classification) into types I, II, III, and IV was 1, 2, 6, and 2, respectively. Initial operative procedures performed were window colostomy (n = 6), colostomy proximal to pouch (n = 4), and ligation of colovesical fistula and end colostomy (n = 1). As definitive procedures, pouch excision with abdomino-perineal pull-through (APPT) of colon was performed in eight cases, and pouch excision with APPT of ileum in three. The mean age at the time of the definitive procedures was 15.6 months (range 3 to 53 months) and the mean weight was 7.5 kg (range 4 to 11 kg). Good fecal continence was observed in six and fair in two cases during follow-up, while three of our cases were lost to follow-up. There was no mortality following the definitive procedures among the above 11 cases. Conclusions: Two-staged procedures for all types of CPC can be performed safely with good results. Most importantly, the definitive procedure is done without a protective stoma, thus avoiding stoma closure, stoma-related complications, and the cost and hospital stay associated with stoma closure.

  13. Study on MnCl_2/CaCl_2–NH_3 two-stage solid sorption freezing cycle for refrigerated trucks at low engine load in summer

    International Nuclear Information System (INIS)

    Gao, P.; Zhang, X.F.; Wang, L.W.; Wang, R.Z.; Li, D.P.; Liang, Z.W.; Cai, A.F.

    2016-01-01

    Graphical abstract: A MnCl_2/CaCl_2–NH_3 two-stage solid sorption freezing cycle driven by engine exhaust gas is proposed for refrigerated trucks. - Highlights: • A two-stage adsorption freezing system is designed and constructed for the refrigerated truck. • Composite adsorbents of MnCl_2 and CaCl_2 with the matrix of ENG-TSA are developed. • An average refrigerating capacity of 2.2 kW in the adsorption process is obtained. • The chilled-air outlet temperature of the evaporator is controlled at about −5 °C. • The COP is 0.13 when the heating and refrigerating temperatures are 230 °C and −5 °C. - Abstract: A novel MnCl_2/CaCl_2–NH_3 two-stage solid sorption freezing cycle is designed and established for a refrigerated truck with a rated power of 80 kW. The conventional sorption/desorption process and the resorption process are combined in the two-stage cycle. Theoretical analysis shows that such a cycle could adapt very well to the low heat source temperature and the high cooling temperature of the sorption beds, which is quite essential for the truck when the running speed and the load are low in summer. Expanded natural graphite treated with sulfuric acid (ENG-TSA) is chosen as the matrix, and composite adsorbents of MnCl_2/ENG-TSA and CaCl_2/ENG-TSA are developed. Hot air heated by an electric heater is used to simulate the engine exhaust gas to drive the system. When the hot air, ambient air and refrigerating temperatures are 230 °C, 30 °C and −5 °C, respectively, the average refrigerating capacity is 2.2 kW in the sorption process. Correspondingly, the COP and SCP are 0.13 and 91.7 W/kg, respectively. An average refrigerating capacity of 1.1 kW over one cycle is obtained, which could meet the required refrigerating capacity of a light refrigerated truck at low engine load in summer.
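    The quoted performance figures can be tied together with a quick consistency check, assuming SCP is refrigerating power per kilogram of adsorbent and COP is refrigerating output over driving heat input. The implied adsorbent mass and heat input below are derived quantities, not values reported in the record.

```python
# Consistency check of the reported sorption-freezing figures.
# Derived quantities only; definitions of SCP and COP are assumed.
cooling_kw = 2.2       # average refrigerating capacity in the sorption process
scp_w_per_kg = 91.7    # specific cooling power
cop = 0.13             # coefficient of performance

adsorbent_mass = cooling_kw * 1000 / scp_w_per_kg   # implied kg of composite adsorbent
heat_input_kw = cooling_kw / cop                    # implied driving heat from exhaust
print(round(adsorbent_mass, 1), round(heat_input_kw, 1))  # -> 24.0 16.9
```

    Roughly 17 kW of driving heat against an 80 kW rated engine is plausible for exhaust-gas recovery, which is the point of running the cycle from waste heat at low engine load.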

  14. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    OpenAIRE

    Liu, Peng; Wang, Xiaoli

    2017-01-01

    A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due ...

  15. A time-domain digitally controlled oscillator composed of a free running ring oscillator and flying-adder

    International Nuclear Information System (INIS)

    Liu Wei; Zhang Shengdong; Wang Yangyuan; Li Wei; Ren Peng; Lin Qinglong

    2009-01-01

    A time-domain digitally controlled oscillator (DCO) is proposed. The DCO is composed of a free-running ring oscillator (FRO) and a flying-adder (FA) with two integrated lap selectors. With a coiled cell array that allows uniform loading capacitances of the delay cells, the FRO produces 32 outputs with consistent tap spacing for the FA as reference clocks. The FA uses the outputs from the FRO to generate the output of the DCO according to the control number, resulting in a linear dependence of the output period, rather than the frequency, on the digital control word input. Thus the proposed DCO ensures good conversion linearity in the time domain, and is suitable for time-domain all-digital phase-locked loop applications. The DCO was implemented in a standard 0.13 μm digital logic CMOS process. The measurement results show that the DCO has a linear and monotonic tuning curve with a gain variation of less than 10%, and a very low root-mean-square period jitter of 9.3 ps in the output clocks. The DCO works well at supply voltages ranging from 0.6 to 1.2 V, and consumes 4 mW of power with a 500 MHz frequency output at a 1.2 V supply voltage.
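    The period-linearity claim above follows from how a flying-adder builds its output: each edge is placed by jumping ahead a fixed number of taps of the multi-phase reference, so the synthesized period scales with the control word times the tap spacing. The sketch below assumes the word sets each half-period (implementations differ) and uses invented numbers, not the paper's silicon measurements.

```python
# Why a flying-adder is linear in *period*: edges are placed `word` taps apart.
# Assumes the control word sets each half-period; values are illustrative.
def fa_output_period(word, t_ref_ns=2.0, taps=32):
    """Average output period (ns) for an integer control word."""
    tap_delay = t_ref_ns / taps        # uniform tap spacing from the ring (ns)
    return 2 * word * tap_delay        # two half-cycles per output period

# period grows linearly with the control word; frequency (1/period) does not
for w in (8, 16, 24):
    print(w, fa_output_period(w))
```

    Equal steps in the control word give equal steps in period, which is exactly the property a time-domain all-digital PLL wants from its DCO.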

  16. On the optimal use of a slow server in two-stage queueing systems

    Science.gov (United States)

    Papachristos, Ioannis; Pandelis, Dimitrios G.

    2017-07-01

    We consider two-stage tandem queueing systems with a dedicated server in each queue and a slower flexible server that can attend both queues. We assume Poisson arrivals, exponential service times, and linear holding costs for jobs present in the system. We study the optimal dynamic assignment of servers to jobs, assuming that two servers cannot collaborate on the same job and that preemptions are not allowed. We formulate the problem as a Markov decision process and derive properties of the optimal allocation for the dedicated (fast) servers. Specifically, we show that the downstream dedicated server should never idle, and that the same holds for the upstream one when holding costs are larger upstream. The optimal allocation of the slow server is investigated through extensive numerical experiments that lead to conjectures on the structure of the optimal policy.

  17. Two-stage Lagrangian modeling of ignition processes in ignition quality tester and constant volume combustion chambers

    KAUST Repository

    Alfazazi, Adamu; Kuti, Olawole Abiola; Naser, Nimal; Chung, Suk-Ho; Sarathy, Mani

    2016-01-01

    The ignition characteristics of isooctane and n-heptane in an ignition quality tester (IQT) were simulated using a two-stage Lagrangian (TSL) model, which is a zero-dimensional (0-D) reactor network method. The TSL model was also used to simulate

  18. Optimisation of the ATLAS Track Reconstruction Software for Run-2

    CERN Document Server

    Salzburger, Andreas; The ATLAS collaboration

    2015-01-01

    Track reconstruction is one of the most complex elements of the reconstruction of events recorded by ATLAS from collisions delivered by the LHC, and it is the most time-consuming reconstruction component in high-luminosity environments. The flat budget projections for computing resources for Run-2 of the LHC, together with the demands of reconstructing higher pile-up collision data at rates more than double those of Run-1 (an increase from 400 Hz to 1 kHz in trigger output), have put stringent requirements on the track reconstruction software. The ATLAS experiment performed a two-year software campaign aimed at reducing the reconstruction time per event by a factor of three to meet the resource limitations for Run-2; the majority of the changes needed to achieve this were improvements to the track reconstruction software. The CPU processing time of ATLAS track reconstruction was reduced by more than a factor of three during this campaign without any loss of output information of the track reconstruction. We present the ...

  19. One-stage exchange with antibacterial hydrogel coated implants provides similar results to two-stage revision, without the coating, for the treatment of peri-prosthetic infection.

    Science.gov (United States)

    Capuano, Nicola; Logoluso, Nicola; Gallazzi, Enrico; Drago, Lorenzo; Romanò, Carlo Luca

    2018-03-16

    The aim of this study was to verify the hypothesis that a one-stage exchange procedure, performed with an antibiotic-loaded, fast-resorbable hydrogel coating, provides an infection recurrence rate similar to that of a two-stage procedure without the coating in patients affected by peri-prosthetic joint infection (PJI). In this two-center case-control study, 22 patients treated with a one-stage procedure, using implants coated with an antibiotic-loaded hydrogel [defensive antibacterial coating (DAC)], were compared with 22 retrospectively matched controls treated with a two-stage revision procedure without the coating. At a mean follow-up of 29.3 ± 5.0 months, two patients (9.1%) in the DAC group showed an infection recurrence, compared to three patients (13.6%) in the two-stage group. Clinical scores were similar between groups, while average hospital stay and antibiotic treatment duration were significantly reduced after one-stage compared to two-stage revision (18.9 ± 2.9 versus 35.8 ± 3.4 days and 23.5 ± 3.3 versus 53.7 ± 5.6 days, respectively). Although drawn from a relatively limited series of patients, our data show a similar infection recurrence rate after one-stage exchange with DAC-coated implants compared to two-stage revision without the coating, with reduced overall hospitalization time and antibiotic treatment duration. These findings warrant further studies of possible applications of antibacterial coating technologies to treat implant-related infections. Level of evidence: III.

  20. Three Stages and Two Systems of Visual Processing

    Science.gov (United States)

    1989-01-01

    ...as squaring do not, in and of themselves, imply second-order processing. For example, Adelson and Bergen's (1985) detector of directional motion... rectification; half-wave rectification is a second-order processing scheme. Figure 8. Stimuli for analyzing second-order processing. (a) An x,y,t representation of...

  1. High-yield production of vanillin from ferulic acid by a coenzyme-independent decarboxylase/oxygenase two-stage process.

    Science.gov (United States)

    Furuya, Toshiki; Miura, Misa; Kuroiwa, Mari; Kino, Kuniki

    2015-05-25

    Vanillin is one of the world's most important flavor and fragrance compounds in foods and cosmetics. Recently, we demonstrated that vanillin could be produced from ferulic acid via 4-vinylguaiacol in a coenzyme-independent manner using the decarboxylase Fdc and the oxygenase Cso2. In this study, we investigated a new two-pot bioprocess for vanillin production using the whole-cell catalyst of Escherichia coli expressing Fdc in the first stage and that of E. coli expressing Cso2 in the second stage. We first optimized the second-step Cso2 reaction from 4-vinylguaiacol to vanillin, a rate-determining step for the production of vanillin. Addition of FeCl2 to the cultivation medium enhanced the activity of the resulting E. coli cells expressing Cso2, an iron protein belonging to the carotenoid cleavage oxygenase family. Furthermore, a butyl acetate-water biphasic system was effective in improving the production of vanillin. Under the optimized conditions, we attempted to produce vanillin from ferulic acid by a two-pot bioprocess on a flask scale. In the first stage, E. coli cells expressing Fdc rapidly decarboxylated ferulic acid and completely converted 75 mM of this substrate to 4-vinylguaiacol within 2 h at pH 9.0. After the first-stage reaction, cells were removed from the reaction mixture by centrifugation, and the pH of the resulting supernatant was adjusted to 10.5, the optimal pH for Cso2. This solution was subjected to the second-stage reaction. In the second stage, E. coli cells expressing Cso2 efficiently oxidized 4-vinylguaiacol to vanillin. The concentration of vanillin reached 52 mM (7.8 g L(-1)) in 24 h, which is the highest level attained to date for the biotechnological production of vanillin using recombinant cells. Copyright © 2015 Elsevier B.V. All rights reserved.
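
    The reported titer and yield are easy to cross-check (a quick arithmetic sketch using the standard molar mass of vanillin, ~152.15 g/mol):

```python
# Quick consistency check of the reported figures: 52 mM vanillin
# corresponds to ~7.9 g/L, and the molar yield from 75 mM ferulic
# acid is ~69%.
MW_VANILLIN = 152.15                       # g/mol (standard value)
ferulic_mM, vanillin_mM = 75.0, 52.0       # substrate in, product out

titer_g_per_L = vanillin_mM / 1000 * MW_VANILLIN
molar_yield_pct = 100 * vanillin_mM / ferulic_mM

print(round(titer_g_per_L, 1))    # -> 7.9 (reported as 7.8 g/L)
print(round(molar_yield_pct))     # -> 69 (% molar yield)
```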

  2. Wastewater treatment and biodiesel production by Scenedesmus obliquus in a two-stage cultivation process.

    Science.gov (United States)

    Álvarez-Díaz, P D; Ruiz, J; Arbib, Z; Barragán, J; Garrido-Pérez, M C; Perales, J A

    2015-04-01

    The microalga Scenedesmus obliquus was cultured in two stages: (1) in batch with real wastewater; (2) maintained in the stationary phase under different conditions of CO2, light, and salinity according to a factorial design, in order to improve the lipid content. The presence of the three factors increased lipid content from 35.8% to 49% at the end of the second stage; CO2 had the largest direct effect on increasing lipid content, followed by light and salt. The ω-3 fatty acid content increased when CO2 and light acted in isolation; nevertheless, when both factors acted together, the interaction effect was negative. The ω-3 eicosapentaenoic acid content of the oil from S. obliquus slightly exceeded the 1% maximum permitted for a biodiesel source under EU regulations. Blending with other oils, or selective extraction of the ω-3 fatty acids from S. obliquus oil, is therefore suggested. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Two stages of directed forgetting: Electrophysiological evidence from a short-term memory task.

    Science.gov (United States)

    Gao, Heming; Cao, Bihua; Qi, Mingming; Wang, Jing; Zhang, Qi; Li, Fuhong

    2016-06-01

    In this study, a short-term memory test was used to investigate the temporal course and neural mechanism of directed forgetting under different memory loads. Within each trial, two memory items with high or low load were presented sequentially, followed by a cue indicating whether the presented items should be remembered. After an interval, subjects were asked to respond to the probe stimuli. The ERPs locked to the cues showed that (a) the effect of cue type was initially observed during the P2 (160-240 ms) time window, with more positive ERPs for remembering relative to forgetting cues; (b) load effects were observed during the N2-P3 (250-500 ms) time window, with more positive ERPs for the high-load than low-load condition; (c) the cue effect was also observed during the N2-P3 time window, with more negative ERPs for forgetting versus remembering cues. These results demonstrated that directed forgetting involves two stages: task-relevance identification and information discarding. The cue effects during the N2 epoch supported the view that directed forgetting is an active process. © 2016 Society for Psychophysiological Research.

  4. Divergent methylation pattern in adult stage between two forms of Tetranychus urticae (Acari: Tetranychidae).

    Science.gov (United States)

    Yang, Si-Xia; Guo, Chao; Zhao, Xiu-Ting; Sun, Jing-Tao; Hong, Xiao-Yue

    2017-02-19

    The two-spotted spider mite, Tetranychus urticae Koch, has two forms: a green form and a red form. Understanding the molecular basis of how these two forms became established without divergent genetic backgrounds is an intriguing question. As a well-known epigenetic process, DNA methylation plays particularly important roles in gene regulation and developmental variation across diverse organisms without altering the genetic background. Here, to investigate whether DNA methylation could be associated with the different phenotypic consequences in the two forms of T. urticae, we surveyed the genome-wide cytosine methylation status and the expression level of DNA methyltransferase 3 (Tudnmt3) throughout their entire life cycle. Methylation-sensitive amplification polymorphism (MSAP) analyses of 585 loci revealed variable methylation patterns in the different developmental stages. In particular, principal coordinates analysis (PCoA) indicates a significant epigenetic differentiation between female adults of the two forms. Expression of Tudnmt3 was detected in all examined developmental stages and differed significantly in the adult stage of the two forms. Together, our results reveal the epigenetic distance between the two forms of T. urticae, suggesting that DNA methylation might be implicated in different developmental demands and contribute to the different phenotypes in the adult stage of these two forms. © 2017 Institute of Zoology, Chinese Academy of Sciences.

  5. Supporting Multiprocessors in the Icecap Safety-Critical Java Run-Time Environment

    DEFF Research Database (Denmark)

    Zhao, Shuai; Wellings, Andy; Korsholm, Stephan Erbs

    The current version of the Safety Critical Java (SCJ) specification defines three compliance levels. Level 0 targets single-processor programs, while Levels 1 and 2 can support multiprocessor platforms. Level 1 programs must be fully partitioned, but Level 2 programs can also be more globally scheduled. As of yet, there is no official Reference Implementation for SCJ. However, the icecap project has produced a Safety-Critical Java Run-time Environment based on the Hardware-near Virtual Machine (HVM). This supports SCJ at all compliance levels and provides an implementation of the safety-critical Java (javax.safetycritical) package. This is still work-in-progress and lacks certain key features, among these the ability to support multiprocessor platforms. In this paper, we explore two possible options for adding multiprocessor support to this environment: the “green thread” and the “native...

  6. Micellar casein concentrate production with a 3X, 3-stage, uniform transmembrane pressure ceramic membrane process at 50°C.

    Science.gov (United States)

    Hurt, E; Zulewska, J; Newbold, M; Barbano, D M

    2010-12-01

    The production of serum protein (SP) and micellar casein from skim milk can be accomplished using microfiltration (MF). Potential commercial applications exist for both SP and micellar casein. Our research objective was to determine the total SP removal and SP removal for each stage, and the composition of retentates and permeates, for a 3×, continuous bleed-and-feed, 3-stage, uniform transmembrane pressure (UTP) system with 0.1-μm ceramic membranes, when processing pasteurized skim milk at 50°C with 2 stages of water diafiltration. For each of 4 replicates, about 1,100 kg of skim milk was pasteurized (72°C, 16s) and processed at 3× through the UTP MF system. Retentate from stage 1 was cooled to <4°C and stored until the next processing day, when it was diluted with reverse osmosis water back to a 1× concentration and again processed through the MF system (stage 2) to a 3× concentration. The retentate from stage 2 was stored at <4°C, and, on the next processing day, was diluted with reverse osmosis water back to a 1× concentration, before running through the MF system at 3× for a total of 3 stages. The retentate and permeate from each stage were analyzed for total nitrogen, noncasein nitrogen, and nonprotein nitrogen using Kjeldahl methods; sodium dodecyl sulfate-PAGE analysis was also performed on the retentates from each stage. Theoretically, a 3-stage, 3× MF process could remove 97% of the SP from skim milk, with a cumulative SP removal of 68 and 90% after the first and second stages, respectively. The cumulative SP removal using a 3-stage, 3× MF process with a UTP system with 0.01-μm ceramic membranes in this experiment was 64.8 ± 0.8, 87.8 ± 1.6, and 98.3 ± 2.3% for the first, second, and third stages, respectively, when calculated using the mass of SP removed in the permeate of each stage. Various methods of calculation of SP removal were evaluated. Given the analytical limitations in the various methods for measuring SP removal, calculation
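
    The theoretical removal figures follow from a simple idealized mass balance (our own simplification, assuming serum protein permeates freely and diafiltration restores the 1× concentration between stages):

```python
# Idealized mass balance behind the "theoretical 97%" figure: at a 3x
# concentration factor, 2/3 of the feed volume leaves as permeate in
# each stage, so a freely permeating serum protein is reduced to
# (1/3)**n of the original after n stages. Real removals differ
# slightly, as discussed in the text.
def cumulative_sp_removal(n_stages, concentration_factor=3.0):
    return 1.0 - (1.0 / concentration_factor) ** n_stages

for n in (1, 2, 3):
    print(n, round(100 * cumulative_sp_removal(n), 1))
# -> 66.7, 88.9 and 96.3%, close to the quoted 68/90/97% figures
```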

  7. Information processing theory in the early design stages

    DEFF Research Database (Denmark)

    Cash, Philip; Kreye, Melanie

    2014-01-01

    ...suggestions for improvements and support. One theory that may be particularly applicable to the early design stages is Information Processing Theory (IPT), as it is linked to the design process with regard to the key concepts considered. IPT states that designers search for information if they perceive uncertainty with regard to the knowledge necessary to solve a design challenge. They then process this information and compare whether the new knowledge they have gained covers the previous knowledge gap. In engineering design, uncertainty plays a key role, particularly in the early design stages, which has been... The new knowledge is shared between the design team to reduce ambiguity with regard to its meaning and to build a shared understanding, reducing perceived uncertainty. Thus, we propose that Information Processing Theory is suitable to describe designer activity in the early design stages...

  8. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk–run–rest mixtures

    Science.gov (United States)

    Long, Leroy L.; Srinivasan, Manoj

    2013-01-01

    On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192
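
    The non-convexity argument can be reproduced with a toy calculation (the power curves below are invented for illustration, not the paper's measured data):

```python
# Toy version of the energy-minimization argument: with a non-convex
# metabolic power P(v), covering distance D in time T by mixing two
# speeds can cost less than moving steadily at the average speed D/T.
def power(v):                       # metabolic rate, arbitrary units
    if v == 0.0:
        return 1.5                  # resting
    return min(2.0 + 1.2 * v * v,   # walking branch (convex)
               3.0 + 2.0 * v)       # running branch -> non-convex min

def mixture_energy(v1, v2, D, T):
    """Energy of spending t1 at v1 and T - t1 at v2 (None if infeasible)."""
    if v1 == v2:
        return power(v1) * T if abs(v1 * T - D) < 1e-9 else None
    t1 = (v2 * T - D) / (v2 - v1)
    if not 0.0 <= t1 <= T:
        return None
    return t1 * power(v1) + (T - t1) * power(v2)

D, T = 1000.0, 500.0                # 1 km in 500 s -> 2 m/s average
steady = power(D / T) * T
speeds = [i / 10 for i in range(41)]
best = min(e for v1 in speeds for v2 in speeds
           if (e := mixture_energy(v1, v2, D, T)) is not None)
print(round(steady), round(best))   # the two-speed mixture wins
```

    The grid search finds a slow-walk/fast-run pair whose total energy undercuts steady locomotion at the 2 m/s average, mirroring the paper's argument that mixtures become optimal whenever the energy curve is non-convex.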

  9. Implications of the two stage clonal expansion model to radiation risk estimation

    International Nuclear Information System (INIS)

    Curtis, S.B.; Hazelton, W.D.; Luebeck, E.G.; Moolgavkar, S.H.

    2003-01-01

    The Two Stage Clonal Expansion Model of carcinogenesis has been applied to the analysis of several cohorts of persons chronically exposed to high- and low-LET radiation. The results of these analyses are: (1) the importance of radiation-induced initiation is small and, if present at all, contributes to cancers only late in life and only if exposure begins early in life; (2) radiation-induced promotion dominates and produces the majority of cancers by accelerating the proliferation of already-initiated cells; and (3) radiation-induced malignant conversion is important only during exposure and immediately after it ceases, and tends to dominate only late in life, acting on already initiated and promoted cells. Two populations, the Colorado Plateau miners (high-LET, radon exposed) and the Canadian radiation workers (low-LET, gamma-ray exposed), are used as examples to show the time dependence of the hazard function and the relative importance of the three hypothesized processes (initiation, promotion and malignant conversion) for each radiation quality.
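
    The initiation-promotion-conversion structure can be sketched as a small Gillespie-type simulation (a minimal caricature with invented rate constants, not the fitted epidemiological model):

```python
# Minimal stochastic sketch of the two-stage clonal expansion idea:
# initiation events arrive at rate nu; each initiated cell divides
# (alpha), dies (beta), or undergoes malignant conversion (mu).
# We record the time of the first malignant cell per simulated subject.
import random

def time_to_malignancy(nu=0.2, alpha=1.0, beta=1.0, mu=0.005,
                       t_max=300.0, rng=random):
    t, n = 0.0, 0                             # n = initiated cells
    while t < t_max:
        total = nu + (alpha + beta + mu) * n  # total event rate
        t += rng.expovariate(total)           # Gillespie waiting time
        u = rng.random() * total
        if u < nu:
            n += 1                            # initiation
        elif u < nu + alpha * n:
            n += 1                            # clonal expansion (promotion)
        elif u < nu + (alpha + beta) * n:
            n -= 1                            # cell death
        else:
            return t                          # first malignant conversion
    return None                               # censored at t_max

random.seed(1)
times = [time_to_malignancy() for _ in range(100)]
converted = [x for x in times if x is not None]
print(len(converted), round(sum(converted) / len(converted), 1))
```

    Raising alpha during an "exposure window" is the promotion mechanism the abstract identifies as dominant; the hazard then rises because existing clones proliferate faster.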

  10. A Multi-stage Representation of Cell Proliferation as a Markov Process.

    Science.gov (United States)

    Yates, Christian A; Ford, Matthew J; Mort, Richard L

    2017-12-01

    The stochastic simulation algorithm commonly known as Gillespie's algorithm (originally derived for modelling well-mixed systems of chemical reactions) is now used ubiquitously in the modelling of biological processes in which stochastic effects play an important role. In well-mixed scenarios at the sub-cellular level it is often reasonable to assume that times between successive reaction/interaction events are exponentially distributed and can be appropriately modelled as a Markov process and hence simulated by the Gillespie algorithm. However, Gillespie's algorithm is routinely applied to model biological systems for which it was never intended. In particular, processes in which cell proliferation is important (e.g. embryonic development, cancer formation) should not be simulated naively using the Gillespie algorithm, since the history-dependent nature of the cell cycle breaks the Markov property. The variance in experimentally measured cell cycle times is far less than in an exponential cell cycle time distribution with the same mean. Here we suggest a method of modelling the cell cycle that restores the memoryless property to the system and is therefore consistent with simulation via the Gillespie algorithm. By breaking the cell cycle into a number of independent exponentially distributed stages, we can restore the Markov property at the same time as more accurately approximating the appropriate cell cycle time distributions. The consequences of our revised mathematical model are explored analytically as far as possible. We demonstrate the importance of employing the correct cell cycle time distribution by recapitulating the results from two models incorporating cellular proliferation (one spatial and one non-spatial) and demonstrating that changing the cell cycle time distribution makes quantitative and qualitative differences to the outcome of the models. Our adaptation will allow modellers and experimentalists alike to appropriately represent cellular...
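
    The multi-stage representation is easy to demonstrate numerically (a minimal sketch, not the authors' code): splitting the cycle into k exponential stages of rate k·λ keeps the mean at 1/λ but shrinks the variance by a factor of k, while each stage remains memoryless.

```python
# Erlang cell cycle: one cycle = sum of k exponential stages, each
# with rate k*lam, so the mean stays 1/lam and the variance becomes
# 1/(k*lam**2) -- matching the narrow measured cycle-time spread.
import random

def cycle_time(k, lam, rng=random):
    """One cell cycle = sum of k exponential stages of rate k*lam."""
    return sum(rng.expovariate(k * lam) for _ in range(k))

random.seed(0)
lam, n = 1.0, 20000
for k in (1, 5, 20):
    samples = [cycle_time(k, lam) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    print(k, round(mean, 2), round(var, 3))
# the mean stays near 1/lam while the variance falls roughly as 1/k
```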

  11. Vernier But Not Grating Acuity Contributes to an Early Stage of Visual Word Processing.

    Science.gov (United States)

    Tan, Yufei; Tong, Xiuhong; Chen, Wei; Weng, Xuchu; He, Sheng; Zhao, Jing

    2018-03-28

    The process of reading words depends heavily on efficient visual skills, including analyzing and decomposing basic visual features. Surprisingly, previous reading-related studies have almost exclusively focused on gross aspects of visual skills, while only very few have investigated the role of finer skills. The present study filled this gap and examined the relations of two finer visual skills measured by grating acuity (the ability to resolve periodic luminance variations across space) and Vernier acuity (the ability to detect/discriminate relative locations of features) to Chinese character-processing as measured by character form-matching and lexical decision tasks in skilled adult readers. The results showed that Vernier acuity was significantly correlated with performance in character form-matching but not visual symbol form-matching, while no correlation was found between grating acuity and character processing. Interestingly, we found no correlation of the two visual skills with lexical decision performance. These findings provide for the first time empirical evidence that the finer visual skills, particularly as reflected in Vernier acuity, may directly contribute to an early stage of hierarchical word processing.

  12. Operating Security System Support for Run-Time Security with a Trusted Execution Environment

    DEFF Research Database (Denmark)

    Gonzalez, Javier

    Software services have become an integral part of our daily life. Cyber-attacks have thus become a problem of increasing importance not only for the IT industry, but for society at large. A way to contain cyber-attacks is to guarantee the integrity of IT systems at run-time. Put differently, it is safe to assume that any complex software is compromised. The problem is then to monitor and contain it when it executes in order to protect sensitive data and other sensitive assets. To really have an impact, any solution to this problem should be integrated in commodity operating systems... in the Linux operating system. We are in the process of making this driver part of the mainline Linux kernel.

  13. The RTD measurement of two stage anaerobic digester using radiotracer in WWTP

    International Nuclear Information System (INIS)

    Jin-Seop, Kim; Jong-Bum, Kim; Sung-Hee, Jung

    2006-01-01

    The aims of this study are to assess the existence and location of stagnant zones by estimating the MRT (mean residence time) of the two-stage anaerobic digester, with the results to be used as informative clues for its better operation.
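
    The MRT estimate at the heart of an RTD measurement can be sketched from first principles (trapezoidal integration of an invented tracer-response curve): MRT = ∫t·C(t)dt / ∫C(t)dt.

```python
# Mean residence time from a sampled tracer response C(t):
# MRT = integral(t*C) / integral(C), via trapezoidal integration.
# The data points below are invented for illustration.
def mean_residence_time(t, c):
    def trapz(y):
        return sum((y[i] + y[i + 1]) * (t[i + 1] - t[i]) / 2
                   for i in range(len(t) - 1))
    return trapz([ti * ci for ti, ci in zip(t, c)]) / trapz(c)

t = [0, 1, 2, 3, 4, 5, 6, 8, 10]             # sampling times (h)
c = [0, 5, 12, 15, 11, 7, 4, 1, 0]           # detector counts
print(round(mean_residence_time(t, c), 2))   # -> 3.53 h
```

    Comparing the tracer-derived MRT with the nominal residence time V/Q is what flags a stagnant (dead) zone: a measured MRT well below V/Q indicates short-circuiting around unused volume.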

  14. Two-stage distraction lengthening of the forearm.

    Science.gov (United States)

    Taghinia, Amir H; Al-Sheikh, Ayman A; Panossian, Andre E; Upton, Joseph

    2013-01-01

    Single-stage lengthening of the forearm using callus distraction is well described; however, forearm lengthening using a 2-stage technique of distraction followed by bone grafting has received less attention. A 2-stage technique can be a better alternative in cases where the surgeon desires extensive lengthening. A retrospective review was undertaken of eleven 2-stage forearm lengthening procedures performed by 1 surgeon over a 15-year period. Indications were radial longitudinal deficiency (8 patients), neonatal ischemic contractures (2 patients), and septic growth arrest (1 patient). Average follow-up was 2.8 years. Patients underwent an average of 82 mm of distraction over an average duration of 24 weeks. Average time to union from the time of distractor removal and grafting was 87 days. The average healing index was 32.1 d/cm. Distraction problems were common and related to the length of time that the distractor was in place; they included pain, pin-related infections, and multiple mechanical device difficulties. Three patients had nonunion, and another had delayed union; however, additional procedures resulted in ultimate bony union in all patients. Demineralized bone matrix and autologous corticocancellous bone grafts yielded predictable healing and good functional results in short-distance distractions. For longer distractions, free vascularized fibula transfer produced the best outcomes. Intercalary cortical allografts did not heal well. Patients with neonatal Volkmann contractures had the most difficulty with distraction and healing, ultimately obtaining little to no lengthening and poor functional outcomes.

  15. INFLUENCE OF FUNCTIONAL ABILITY IN RUNNING 400 AND 800 METERS

    Directory of Open Access Journals (Sweden)

    Abdulla Elezi

    2013-07-01

    The goal of the research was to determine the relation between variables assessing the functional ability of the cardio-respiratory system and running results, on the basis of the collected data. Basic statistical indicators of the physiological variables and of the running results were calculated. Regression analysis in the manifest space was used to determine the relation. The criterion variable (the 400-meter run) did not show a statistically significant coefficient of multiple correlation with the predictor variables: the time span of a 400-meter run is too short to engage the mechanisms that supply and transform energy through oxidative processes. The criterion variable (the 800-meter run) showed a statistically significant coefficient of multiple correlation with the predictor variables, with a value of 0.377 as tested by the F-test. This is understandable given the longer time during which the systems responsible for the transfer and transformation of energy are engaged, compared to the time needed for the 400-meter run.

  16. Personal best marathon time and longest training run, not anthropometry, predict performance in recreational 24-hour ultrarunners.

    Science.gov (United States)

    Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Lepers, Romuald

    2011-08-01

    In recent studies, a relationship between both low body fat and low thicknesses of selected skinfolds and running performance has been demonstrated for distances from 100 m to the marathon, but not for ultramarathons. We investigated the association of anthropometric and training characteristics with race performance in 63 male recreational ultrarunners in a 24-hour run using bi- and multivariate analysis. The athletes achieved an average distance of 146.1 (43.1) km. In the bivariate analysis, body mass (r = -0.25), the sum of 9 skinfolds (r = -0.32), the sum of upper body skinfolds (r = -0.34), body fat percentage (r = -0.32), weekly kilometers run (r = 0.31), longest training session before the 24-hour run (r = 0.56), and personal best marathon time (r = -0.58) were related to race performance. Stepwise multiple regression showed that both the longest training session before the 24-hour run (p = 0.0013) and the personal best marathon time (p = 0.0015) had the best correlation with race performance. Performance in these 24-hour runners may be predicted (r2 = 0.46) by the following equation: performance in a 24-hour run (km) = 234.7 + 0.481 × (longest training session before the 24-hour run, km) - 0.594 × (personal best marathon time, minutes). For practical applications, training variables such as volume and intensity were associated with performance, but anthropometric variables were not. To achieve maximum kilometers in a 24-hour run, recreational ultrarunners should have a personal best marathon time of ∼3 hours 20 minutes and complete a long training run of ∼60 km before the race, whereas anthropometric characteristics such as low body fat or low skinfold thicknesses showed no association with performance.
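
    The reported regression can be wrapped as a function for quick what-if checks (coefficients exactly as quoted in the abstract; only meaningful within the range of the studied recreational runners):

```python
# Predicted 24-hour race distance from the two retained predictors.
def predicted_24h_km(longest_run_km, marathon_pb_min):
    return 234.7 + 0.481 * longest_run_km - 0.594 * marathon_pb_min

# A 60 km longest training run and a 3 h 20 min (200 min) marathon PB:
print(round(predicted_24h_km(60, 200), 1))   # -> 144.8 km
```

    This is consistent with the practical recommendation above: those inputs predict roughly the cohort's average distance of 146 km.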

  17. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
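
    The spatial-filter step can be made concrete (a pure-Python sketch of the matrix product the GPU parallelizes; the filter weights and data shapes are invented):

```python
# The first GPU-accelerated step, the spatial filter, is a
# matrix-matrix product Y = W @ X: W maps raw electrode channels to
# filtered channels, and X holds one buffer of samples.
def spatial_filter(W, X):
    """Y[i][t] = sum_c W[i][c] * X[c][t]."""
    return [[sum(row[c] * X[c][t] for c in range(len(row)))
             for t in range(len(X[0]))]
            for row in W]

W = [[1.0, -0.5, -0.5],   # e.g. a small Laplacian-style re-reference
     [0.0,  1.0, -1.0]]   # a bipolar channel
X = [[1, 2, 3, 4],        # 3 electrodes x 4 samples
     [1, 1, 1, 1],
     [0, 2, 0, 2]]
print(spatial_filter(W, X))
# -> [[0.5, 0.5, 2.5, 2.5], [1.0, -1.0, 1.0, -1.0]]
```

    Every output element is independent of the others, which is exactly why this step maps so well onto the GPU's many parallel threads.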

  18. A two-stage ceramic tile grout sealing process using a high power diode laser—Grout development and materials characteristics

    Science.gov (United States)

    Lawrence, J.; Li, L.; Spencer, J. T.

    1998-04-01

    Work has been conducted using a 60 W cw high power diode laser (HPDL) in order to determine the feasibility and characteristics of sealing the void between adjoining ceramic tiles with a specially developed grout material having an impermeable enamel surface glaze. A two-stage process has been developed using a new grout material which consists of two distinct components: an amalgamated compound substrate and a glazed enamel surface; the amalgamated compound seal provides a tough, heat-resistant bulk substrate, whilst the enamel provides an impervious surface. HPDL processing has resulted in crack-free seals produced in normal atmospheric conditions. The basic process phenomena are investigated and the laser effects in terms of seal morphology, composition and microstructure are presented. The resultant heat effects are also analysed and described, as are the effects of the shielding gases, O2 and Ar, during laser processing. Tiles were successfully sealed with power densities as low as 500 W/cm2 and at rates up to 600 mm/min. Contact angle measurements revealed that, owing to the wettability characteristics of the amalgamated oxide compound grout (AOCG), laser surface treatment was necessary in order to alter the surface from a polycrystalline to a semi-amorphous structure, thus allowing the enamel to adhere. Bonding of the enamel to the AOCG and the ceramic tiles was identified as being principally due to van der Waals forces and, on a very small scale, some of the base AOCG material dissolving into the glaze.

  19. Adaptive Urban Stormwater Management Using a Two-stage Stochastic Optimization Model

    Science.gov (United States)

    Hung, F.; Hobbs, B. F.; McGarity, A. E.

    2014-12-01

    In many older cities, stormwater results in combined sewer overflows (CSOs) and consequent water quality impairments. Because of the expense of traditional approaches for controlling CSOs, cities are considering the use of green infrastructure (GI) to reduce runoff and pollutants. Examples of GI include tree trenches, rain gardens, green roofs, and rain barrels. However, the cost and effectiveness of GI are uncertain, especially at the watershed scale. We present a two-stage stochastic extension of the Stormwater Investment Strategy Evaluation (StormWISE) model (A. McGarity, JWRPM, 2012, 111-24) to explicitly model and optimize these uncertainties in an adaptive management framework. A two-stage model represents the immediate commitment of resources ("here & now") followed by later investment and adaptation decisions ("wait & see"). A case study is presented for Philadelphia, which intends to extensively deploy GI over the next two decades (PWD, "Green City, Clean Water - Implementation and Adaptive Management Plan," 2011). After first-stage decisions are made, the model updates the stochastic objective and constraints (learning). We model two types of "learning" about GI cost and performance. One assumes that learning occurs over time, is automatic, and does not depend on what has been done in stage one (basic model). The other considers learning resulting from active experimentation and learning-by-doing (advanced model). Both require expert probability elicitations, and learning from research and monitoring is modelled by Bayesian updating (as in S. Jacobi et al., JWRPM, 2013, 534-43). The model allocates limited financial resources to GI investments over time to achieve multiple objectives with a given reliability. Objectives include minimizing construction and O&M costs; achieving nutrient, sediment, and runoff volume targets; and community concerns, such as aesthetics, CO2 emissions, heat islands, and recreational values. CVaR (Conditional Value at Risk) and ...
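
    The "here & now" / "wait & see" structure of a two-stage stochastic program can be sketched with a tiny scenario-based example. Everything below is a hypothetical illustration, not the StormWISE model: the cost figures, the two performance scenarios, and the simple linear recourse rule are invented, and the CVaR-style reliability constraints are omitted for brevity.

```python
# Minimal two-stage stochastic program sketch (hypothetical numbers):
# stage 1 commits a GI capacity x "here & now"; stage 2 buys recourse
# (gray) capacity per scenario "wait & see" so that a runoff-reduction
# target is met whichever scenario occurs.

SCENARIOS = [  # (probability, runoff reduction per unit of GI capacity)
    (0.5, 0.8),   # GI underperforms
    (0.5, 1.2),   # GI overperforms
]
TARGET = 10.0        # required runoff reduction (arbitrary units)
GI_COST = 1.0        # cost per unit of first-stage GI capacity
RECOURSE_COST = 3.0  # cost per unit of second-stage gray capacity

def expected_cost(x):
    """First-stage cost plus expected second-stage recourse cost."""
    recourse = sum(p * RECOURSE_COST * max(0.0, TARGET - eff * x)
                   for p, eff in SCENARIOS)
    return GI_COST * x + recourse

# Optimize the first-stage decision over a coarse grid.
grid = [i * 0.5 for i in range(41)]        # x in 0.0 .. 20.0
best_x = min(grid, key=expected_cost)
print(best_x, expected_cost(best_x))       # -> 12.5 12.5
```

Because recourse is priced higher than first-stage GI, the optimum over-builds GI just enough that even the pessimistic scenario needs no gray backup.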

  20. Unintentionality of affective attention across visual processing stages

    Directory of Open Access Journals (Sweden)

    Andero Uusberg

    2013-12-01

    Full Text Available Affective attention involves bottom-up perceptual selection that prioritizes motivationally significant stimuli. To clarify the extent to which this process is automatic, we investigated the dependence of affective attention on the intention to process emotional meaning. Affective attention was manipulated by presenting IAPS images with variable arousal, and intentionality by requiring participants to make affective and non-affective evaluations. Polytomous rather than binary decisions were required from the participants in order to elicit relatively deep emotional processing. The temporal dynamics of prioritized processing were assessed using the Early Posterior Negativity (EPN, 175-300 ms) as well as the P3-like (P3, 300-500 ms) and Slow Wave (SW, 500-1500 ms) portions of the Late Positive Potential. All analysed components were differentially sensitive to stimulus categories, suggesting that they indeed reflect distinct stages of motivational significance encoding. The intention to perceive emotional meaning had no effect on the EPN, an additive effect on the P3, and an interactive effect on the SW. We concluded that affective attention went from completely unintentional during the EPN to partially unintentional during the P3 and SW, where top-down signals, respectively, complemented and modulated bottom-up differences in stimulus prioritization. The findings were interpreted in light of two-stage models of visual perception by associating the EPN with large-capacity initial relevance detection and the P3 as well as SW with capacity-limited consolidation and elaboration of affective stimuli.

  1. The ATLAS Trigger System : Ready for Run-2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00211007; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger has been successfully collecting collision data during the first run of the LHC between 2009-2013 at a centre-of-mass energy between 900 GeV and 8 TeV. The trigger system consists of a hardware based Level-1 (L1) and a software based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the course of the ongoing Run-2 data-taking campaign at 13 TeV centre-of-mass energy the trigger rates will be approximately 5 times higher compared to Run-1. In these proceedings we briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter and muon trigger system, the introduction of a new L1 topological trigger subsystem and the merging of the previously two-level HLT system into a single ev...

  2. Managing uncertainty - a qualitative study of surgeons' decision-making for one-stage and two-stage revision surgery for prosthetic hip joint infection.

    Science.gov (United States)

    Moore, Andrew J; Blom, Ashley W; Whitehouse, Michael R; Gooberman-Hill, Rachael

    2017-04-12

    where there was uncertainty about the best treatment option. Surgeons highlighted the need for evidence to support their choice of revision. Some surgeons' willingness to consider one-stage revision for infection had increased over time, largely influenced by evidence of successful one-stage revisions. Custom-made articulating spacers also enabled surgeons to manage uncertainty about the superiority of surgical techniques. Surgeons thought that a prospective randomised controlled trial comparing one-stage with two-stage joint replacement is needed and that randomisation would be feasible.

  3. Highly coherent free-running dual-comb chip platform.

    Science.gov (United States)

    Hébert, Nicolas Bourbeau; Lancaster, David G; Michaud-Belleau, Vincent; Chen, George Y; Genest, Jérôme

    2018-04-15

    We characterize the frequency noise performance of a free-running dual-comb source based on an erbium-doped glass chip running two adjacent mode-locked waveguide lasers. This compact laser platform, contained in only a 1.2 L volume, rejects common-mode environmental noise by 20 dB thanks to the proximity of the two laser cavities. Furthermore, it displays a remarkably low mutual frequency noise floor around 10 Hz²/Hz, which is enabled by its large-mode-area waveguides and low Kerr nonlinearity. As a result, it reaches a free-running mutual coherence time of 1 s, since mode-resolved dual-comb spectra are generated even on this time scale. This design greatly simplifies dual-comb interferometers by enabling mode-resolved measurements without any phase lock.

  4. A Dual-Stage Two-Phase Model of Selective Attention

    Science.gov (United States)

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  5. Biodiesel production from microalgae Spirulina maxima by two step process: Optimization of process variable

    Directory of Open Access Journals (Sweden)

    M.A. Rahman

    2017-04-01

    Full Text Available Biodiesel from green energy sources is gaining tremendous attention for its eco-friendly and economical aspects. In this investigation, a two-step process was developed for the production of biodiesel from the microalgae Spirulina maxima, and the best operating conditions for the steps were determined. In the first stage, acid esterification was conducted to lessen the acid value (AV) of the feedstock from 10.66 to 0.51 mg KOH/g, and optimal conditions for maximum esterified oil yield were found at molar ratio 12:1, temperature 60°C, 1% (wt%) H2SO4, and mixing intensity 400 rpm for a reaction time of 90 min. The second-stage alkali transesterification was carried out for maximum biodiesel yield (86.1%), and optimal conditions were found at molar ratio 9:1, temperature 65°C, mixing intensity 600 rpm, and catalyst concentration 0.75% (wt%) KOH for a reaction time of 20 min. The biodiesel was analyzed according to ASTM standards and the results were within the standard limits. These results will be helpful for producing third-generation algal biodiesel from the microalgae Spirulina maxima in an efficient manner.

  6. Breast cancer stage at diagnosis: is travel time important?

    Science.gov (United States)

    Henry, Kevin A; Boscoe, Francis P; Johnson, Christopher J; Goldberg, Daniel W; Sherman, Recinda; Cockburn, Myles

    2011-12-01

    Recent studies have produced inconsistent results in their examination of the potential association between proximity to healthcare or mammography facilities and breast cancer stage at diagnosis. Using a multistate dataset, we re-examine this issue by investigating whether travel time to a patient's diagnosing facility or nearest mammography facility impacts breast cancer stage at diagnosis. We studied 161,619 women 40 years and older diagnosed with invasive breast cancer from ten state population based cancer registries in the United States. For each woman, we calculated travel time to their diagnosing facility and nearest mammography facility. Logistic multilevel models of late versus early stage were fitted, and odds ratios were calculated for travel times, controlling for age, race/ethnicity, census tract poverty, rural/urban residence, health insurance, and state random effects. Seventy-six percent of women in the study lived less than 20 min from their diagnosing facility, and 93 percent lived less than 20 min from the nearest mammography facility. Late stage at diagnosis was not associated with increasing travel time to diagnosing facility or nearest mammography facility. Diagnosis age under 50, Hispanic and Non-Hispanic Black race/ethnicity, high census tract poverty, and no health insurance were all significantly associated with late stage at diagnosis. Travel time to diagnosing facility or nearest mammography facility was not a determinant of late stage of breast cancer at diagnosis, and better geographic proximity did not assure more favorable stage distributions. Other factors beyond geographic proximity that can affect access should be evaluated more closely, including facility capacity, insurance acceptance, public transportation, and travel costs.

  7. Performance Evaluation of Staged Bosch Process for CO2 Reduction to Produce Life Support Consumables

    Science.gov (United States)

    Vilekar, Saurabh A.; Hawley, Kyle; Junaedi, Christian; Walsh, Dennis; Roychoudhury, Subir; Abney. Morgan B.; Mansell, James M.

    2012-01-01

    Utilizing carbon dioxide to produce water and hence oxygen is critical for sustained manned missions in space, and to support both NASA's cabin Atmosphere Revitalization System (ARS) and In-Situ Resource Utilization (ISRU) concepts. For long term missions beyond low Earth orbit, where resupply is significantly more difficult and costly, open loop ARS, like Sabatier, consume inputs such as hydrogen. The Bosch process, on the other hand, has the potential to achieve complete loop closure and is hence a preferred choice. However, current single stage Bosch reactor designs suffer from a large recycle penalty due to slow reaction rates and the inherent limitation in approaching thermodynamic equilibrium. Developmental efforts are seeking to improve upon the efficiency (hence reducing the recycle penalty) of current single stage Bosch reactors which employ traditional steel wool catalysts. Precision Combustion, Inc. (PCI), with support from NASA, has investigated the potential for utilizing catalysts supported over short-contact time Microlith substrates for the Bosch reaction to achieve faster reaction rates, higher conversions, and a reduced recycle flows. Proof-of-concept testing was accomplished for a staged Bosch process by splitting the chemistry in two separate reactors, first being the reverse water-gas-shift (RWGS) and the second being the carbon formation reactor (CFR) via hydrogenation and/or Boudouard. This paper presents the results from this feasibility study at various operating conditions. Additionally, results from two 70 hour durability tests for the RWGS reactor are discussed.

  8. An Integrated Simulation, Inference and Optimization Approach for Groundwater Remediation with Two-stage Health-Risk Assessment

    Directory of Open Access Journals (Sweden)

    Aili Yang

    2018-05-01

    Full Text Available In this study, an integrated simulation, inference and optimization approach with two-stage health risk assessment (i.e., ISIO-THRA) is developed for supporting groundwater remediation for a petroleum-contaminated site in western Canada. Both environmental standards and health risk are considered as the constraints in the ISIO-THRA model. The health risk includes two parts: (1) the health risk during the remediation process and (2) the health risk in the natural attenuation period after remediation. In the ISIO-THRA framework, the relationship between contaminant concentrations and time is expressed through first-order decay models. The results demonstrate that: (1) stricter environmental standards and health-risk levels would require larger pumping rates for the same remediation duration; (2) higher health risk may occur during the remediation process; (3) for the same environmental standard and acceptable health-risk level, the remediation techniques that take the shortest time would be chosen. ISIO-THRA can help to systematically analyze the interaction among contaminant transport, remediation duration, and environmental and health concerns, and further provide useful supportive information for decision makers.
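
    The first-order decay relationship between contaminant concentration and time mentioned above can be written out explicitly. The rate constant, concentrations, and units below are hypothetical placeholders, not values from the study.

```python
import math

# First-order decay sketch (hypothetical numbers): C(t) = C0 * exp(-k*t).
# Given an environmental standard C_std, the time needed to reach it is
#   t = ln(C0 / C_std) / k.

C0 = 50.0      # initial contaminant concentration (mg/L)
k = 0.2        # first-order decay constant (1/year)
C_std = 5.0    # environmental standard (mg/L)

def concentration(t):
    """Concentration after t years of first-order decay."""
    return C0 * math.exp(-k * t)

t_required = math.log(C0 / C_std) / k
print(round(t_required, 2))   # -> 11.51
```

A stricter standard (smaller C_std) or a slower process (smaller k) lengthens t_required, which mirrors the trade-off between standards and remediation duration described in the abstract.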

  9. Stages in the research process.

    Science.gov (United States)

    Gelling, Leslie

    2015-03-04

    Research should be conducted in a systematic manner, allowing the researcher to progress from a general idea or clinical problem to scientifically rigorous research findings that enable new developments to improve clinical practice. Using a research process helps guide this process. This article is the first in a 26-part series on nursing research. It examines the process that is common to all research, and provides insights into ten different stages of this process: developing the research question, searching and evaluating the literature, selecting the research approach, selecting research methods, gaining access to the research site and data, pilot study, sampling and recruitment, data collection, data analysis, and dissemination of results and implementation of findings.

  10. Methods, media and systems for managing a distributed application running in a plurality of digital processing devices

    Science.gov (United States)

    Laadan, Oren; Nieh, Jason; Phung, Dan

    2012-10-02

    Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.

  11. [Comparison research on two-stage sequencing batch MBR and one-stage MBR].

    Science.gov (United States)

    Yuan, Xin-Yan; Shen, Heng-Gen; Sun, Lei; Wang, Lin; Li, Shi-Feng

    2011-01-01

    Aiming at resolving problems in MBR operation, such as low nitrogen and phosphorus removal efficiency and severe membrane fouling, comparative research on a two-stage sequencing batch MBR (TSBMBR) and a one-stage aerobic MBR has been done in this paper. The results indicated that TSBMBR owned the advantages of an SBR in removing nitrogen and phosphorus, which could make up the deficiency of the traditional one-stage aerobic MBR in nitrogen and phosphorus removal. During the steady operation period, the average effluent NH4+-N, TN and TP concentrations were 2.83, 12.20 and 0.42 mg/L, which could meet the requirements of domestic scenic environment reuse. From the membrane fouling control point of view, TSBMBR had lower SMP in the supernatant, a lower specific trans-membrane flux decline rate, and lower membrane fouling resistance than the one-stage aerobic MBR. The sedimentation and gel layer resistances of TSBMBR were only 6.5% and 33.12% of those of the one-stage aerobic MBR. Besides high efficiency in removing nitrogen and phosphorus, TSBMBR could effectively reduce sedimentation and gel layer fouling on the membrane surface. Compared with the one-stage MBR, TSBMBR could operate with higher trans-membrane flux, lower membrane fouling rate and better pollutant removal effects.

  12. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2017-01-01

    Full Text Available Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems, and uses a two-stage single-counter queue method M/M/1→M/D/1 to model the urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are the total cost of construction and operation and the overall sojourn time of stormwater. An improved genetic algorithm is redesigned to solve this complex nondeterministic problem, incorporating the stochastic and fuzzy characteristics of the whole drainage process. A numerical example in Shanghai illustrates how to implement the model, and comparisons with alternative algorithms show its performance in computational flexibility and efficiency. Discussions on the sensitivity of four main parameters, that is, the number of pump stations, drainage pipe diameter, rainstorm precipitation intensity, and confidence levels, are also presented to provide guidance for designing urban drainage systems.
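
    The mean sojourn time in an M/M/1→M/D/1 tandem can be sketched with the standard textbook formulas: W = 1/(μ−λ) for M/M/1, and the Pollaczek-Khinchine mean for M/D/1. The rates below are hypothetical, the Poisson arrivals at stage 2 are justified by Burke's theorem (departures of a stable M/M/1 queue are Poisson), and the paper's fuzziness and cost terms are omitted.

```python
# Mean sojourn time sketch for an M/M/1 -> M/D/1 tandem queue
# (hypothetical rates; standard formulas only).

lam = 4.0    # arrival rate of stormwater batches (per hour)
mu1 = 6.0    # stage-1 service rate (exponential service, M/M/1)
mu2 = 5.0    # stage-2 service rate (deterministic service, M/D/1)

def mm1_sojourn(lam, mu):
    """Mean time in an M/M/1 system: W = 1 / (mu - lam)."""
    assert lam < mu, "queue must be stable"
    return 1.0 / (mu - lam)

def md1_sojourn(lam, mu):
    """Mean time in an M/D/1 system (Pollaczek-Khinchine):
    W = 1/mu + rho / (2 * mu * (1 - rho)), with rho = lam/mu."""
    rho = lam / mu
    assert rho < 1, "queue must be stable"
    return 1.0 / mu + rho / (2.0 * mu * (1.0 - rho))

total = mm1_sojourn(lam, mu1) + md1_sojourn(lam, mu2)
print(round(total, 3))   # -> 1.1
```

Note how the deterministic server halves the queueing delay relative to an exponential one at the same utilization, which is why the drainage (second) stage is modelled as M/D/1.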

  13. Space-time-modulated stochastic processes

    Science.gov (United States)

    Giona, Massimiliano

    2017-10-01

    Starting from the physical problem associated with the Lorentzian transformation of a Poisson-Kac process in inertial frames, the concept of space-time-modulated stochastic processes is introduced for processes possessing finite propagation velocity. This class of stochastic processes provides a two-way coupling between the stochastic perturbation acting on a physical observable and the evolution of the physical observable itself, which in turn influences the statistical properties of the stochastic perturbation during its evolution. The definition of space-time-modulated processes requires the introduction of two functions: a nonlinear amplitude modulation, controlling the intensity of the stochastic perturbation, and a time-horizon function, which modulates its statistical properties, providing irreducible feedback between the stochastic perturbation and the physical observable influenced by it. The latter property is the peculiar fingerprint of this class of models that makes them suitable for extension to generic curved-space times. Considering Poisson-Kac processes as prototypical examples of stochastic processes possessing finite propagation velocity, the balance equations for the probability density functions associated with their space-time modulations are derived. Several examples highlighting the peculiarities of space-time-modulated processes are thoroughly analyzed.
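
    A plain (unmodulated) Poisson-Kac process is simple enough to simulate directly, which makes the "finite propagation velocity" property concrete: the particle moves at speed c and only its direction flips, at Poisson rate a, so |x(T)| ≤ cT always. The parameters are illustrative, and the space-time modulation of amplitude and statistics introduced in the paper is deliberately omitted.

```python
import random

# Sketch of a plain Poisson-Kac (telegraph) process: dx/dt = c * s(t),
# where the direction s(t) flips between +1 and -1 at Poisson rate a.
# The space-time modulation described in the paper is omitted here.

def simulate_telegraph(c=1.0, a=2.0, T=10.0, seed=42):
    """Return x(T) for one sample path, using exact exponential
    waiting times between direction reversals."""
    rng = random.Random(seed)
    t, x, s = 0.0, 0.0, 1
    while t < T:
        tau = rng.expovariate(a)   # time until the next direction flip
        dt = min(tau, T - t)       # do not overshoot the time horizon
        x += c * s * dt
        t += dt
        s = -s
    return x

x_final = simulate_telegraph()
print(abs(x_final) <= 1.0 * 10.0)   # finite propagation velocity -> True
```

Unlike Brownian motion, whose paths are unbounded on any interval, every sample path here is Lipschitz with constant c, which is the structural property the space-time-modulated construction preserves.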

  14. Energy demand in Portuguese manufacturing: a two-stage model

    International Nuclear Information System (INIS)

    Borges, A.M.; Pereira, A.M.

    1992-01-01

    We use a two-stage model of factor demand to estimate the parameters determining energy demand in Portuguese manufacturing. In the first stage, a capital-labor-energy-materials framework is used to analyze the substitutability between energy as a whole and other factors of production. In the second stage, total energy demand is decomposed into oil, coal and electricity demands. The two stages are fully integrated since the energy composite used in the first stage and its price are obtained from the second stage energy sub-model. The estimates obtained indicate that energy demand in manufacturing responds significantly to price changes. In addition, estimation results suggest that there are important substitution possibilities among energy forms and between energy and other factors of production. The role of price changes in energy-demand forecasting, as well as in energy policy in general, is clearly established. (author)

  15. Refinements of the column generation process for the Vehicle Routing Problem with Time Windows

    DEFF Research Database (Denmark)

    Larsen, Jesper

    2004-01-01

    interval denoted the time window. The objective is to determine routes for the vehicles that minimize the accumulated cost (or distance) with respect to the above mentioned constraints. Currently the best approaches for determining optimal solutions are based on column generation and Branch-and-Bound, also known as Branch-and-Price. This paper presents two ideas for run-time improvements of the Branch-and-Price framework for the Vehicle Routing Problem with Time Windows. Both ideas reveal a significant potential for using run-time refinements when speeding up an exact approach without compromising...

  16. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-03-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer support, and one resolution enhancing step with nonsmooth mixed regularization. The first step is strictly direct and of sampling type, and it faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer size as well as intensities. The nonsmooth model can be efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the new approach is accurate, computationally efficient, and robust with respect to data noise. © 2012 Elsevier Inc.

  17. Success Run Waiting Times and Fuss-Catalan Numbers

    Directory of Open Access Journals (Sweden)

    S. J. Dilworth

    2015-01-01

    Full Text Available We present power series expressions for all the roots of the auxiliary equation of the recurrence relation for the distribution of the waiting time for the first run of k consecutive successes in a sequence of independent Bernoulli trials, that is, the geometric distribution of order k. We show that the series coefficients are Fuss-Catalan numbers and write the roots in terms of the generating function of the Fuss-Catalan numbers. Our main result is a new exact expression for the distribution, which is more concise than previously published formulas. Our work extends the analysis by Feller, who gave asymptotic results. We obtain quantitative improvements of the error estimates obtained by Feller.
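
    The distribution discussed above (the geometric distribution of order k) can be cross-checked numerically with a short dynamic program over the current run length, independent of the Fuss-Catalan series representation derived in the paper. The values p = 0.5 and k = 2 below are illustrative.

```python
# Waiting time for the first run of k consecutive successes
# (geometric distribution of order k), via dynamic programming over
# the current run length.  A numerical cross-check, not the paper's
# Fuss-Catalan series formula.

def run_waiting_pmf(p, k, n_max):
    """Return [P(T = 1), ..., P(T = n_max)], where T is the trial at
    which the first run of k successes is completed."""
    q = 1.0 - p
    state = [0.0] * k      # state[i] = P(current run length == i)
    state[0] = 1.0
    pmf = []
    for _ in range(n_max):
        absorbed = state[k - 1] * p          # run completed this trial
        nxt = [0.0] * k
        nxt[0] = sum(state) * q              # a failure resets the run
        for i in range(k - 1):
            nxt[i + 1] = state[i] * p        # a success extends the run
        pmf.append(absorbed)
        state = nxt
    return pmf

pmf = run_waiting_pmf(p=0.5, k=2, n_max=200)
print(round(pmf[1], 4))                                       # P(T=2) -> 0.25
print(round(sum((n + 1) * m for n, m in enumerate(pmf)), 2))  # E[T] -> 6.0
```

The mean 6.0 agrees with the classical closed form E[T] = (1 − p^k) / (q p^k) for p = 0.5, k = 2, the kind of identity Feller's asymptotic analysis and the exact series in the paper both recover.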

  18. Possible two-stage 87Sr evolution in the Stockdale Rhyolite

    International Nuclear Information System (INIS)

    Compston, W.; McDougall, I.; Wyborn, D.

    1982-01-01

    The Rb-Sr total-rock data for the Stockdale Rhyolite, of significance for the Palaeozoic time scale, are more scattered about a single-stage isochron than expected from experimental error. Two-stage 87Sr evolution for several of the samples is explored to explain this, as an alternative to variation in the initial 87Sr/86Sr which is customarily used in single-stage dating models. The deletion of certain samples having very high Rb/Sr removes most of the excess scatter and leads to an estimate of 430 ± 7 m.y. for the age of extrusion. There is a younger alignment of Rb-Sr data within each sampling site at 412 ± 7 m.y. We suggest that the Stockdale Rhyolite is at least 430 m.y. old, that its original range in Rb/Sr was smaller than now observed, and that it experienced a net loss in Sr during later hydrothermal alteration at ca. 412 m.y. (orig.)

  19. Massively parallel signal processing using the graphics processing unit for real-time brain-computer interface feature extraction

    Directory of Open Access Journals (Sweden)

    J. Adam Wilson

    2009-07-01

    Full Text Available The clock speeds of modern computer processors have nearly plateaued in the past five years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card (GPU) was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a CPU-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
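
    The first step of the chain above, the spatial filter, is just a matrix-matrix product Y = W X with W the filter matrix and X the channels-by-samples data block. The sketch below uses a common average reference (CAR) filter purely for illustration; the abstract does not say which spatial filter the BCI system uses. Each output cell of the product is independent, which is exactly what makes this step GPU-friendly.

```python
# Spatial filtering as a matrix-matrix product Y = W @ X.
# A common average reference (CAR) filter, W = I - 1/N, subtracts the
# instantaneous mean of all channels from each channel.

def matmul(A, B):
    """Naive matrix-matrix product.  Every output element is computed
    independently, so a GPU can assign one thread per element."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def car_filter(n_channels):
    """CAR filter matrix: identity minus the channel-mean operator."""
    inv_n = 1.0 / n_channels
    return [[(1.0 if i == j else 0.0) - inv_n for j in range(n_channels)]
            for i in range(n_channels)]

# Toy data: 4 channels x 3 samples; channel 0 carries a ramp, the
# remaining channels are flat "common-mode" background.
X = [[5.0, 6.0, 7.0],
     [1.0, 1.0, 1.0],
     [1.0, 1.0, 1.0],
     [1.0, 1.0, 1.0]]
Y = matmul(car_filter(4), X)
print([round(v, 2) for v in Y[0]])   # -> [3.0, 3.75, 4.5]
```

A CUDA version would launch one thread per element of Y; the arithmetic is identical, only the scheduling changes.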

  20. Robust iterative learning control for multi-phase batch processes: an average dwell-time method with 2D convergence indexes

    Science.gov (United States)

    Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong

    2018-01-01

    In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme of iterative learning control combined with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method integrated with the related switching conditions to give sufficient conditions ensuring stable running for the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving the linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control for ensuring the steady-state tracking error to converge rapidly. The application on an injection molding process displays the effectiveness and superiority of the proposed strategy.

  1. The influence of magnetic field strength in ionization stage on ion transport between two stages of a double stage Hall thruster

    International Nuclear Information System (INIS)

    Yu Daren; Song Maojiang; Li Hong; Liu Hui; Han Ke

    2012-01-01

    Designing a special ionization stage for a double stage Hall thruster is futile if the ionized ions cannot enter the acceleration stage. Based on this viewpoint, the ion transport under different magnetic field strengths in the ionization stage is investigated, and the physical mechanisms affecting the ion transport are analyzed in this paper. With a combined experimental and particle-in-cell simulation study, it is found that the ion transport between the two stages is chiefly affected by the potential well, the potential barrier, and the potential drop at the bottom of the potential well. With the increase of magnetic field strength in the ionization stage, a larger potential well produces a larger plasma density. Furthermore, the potential barrier near the intermediate electrode declines first and then rises up, while the potential drop at the bottom of the potential well rises up first and then declines, as the magnetic field strength increases in the ionization stage. Consequently, both the ion current entering the acceleration stage and the total ion current ejected from the thruster rise up first and then decline as the magnetic field strength increases in the ionization stage. Therefore, there is an optimal magnetic field strength in the ionization stage to guide the ion transport between the two stages.

  2. An adaptive three-stage extended Kalman filter for nonlinear discrete-time system in presence of unknown inputs.

    Science.gov (United States)

    Xiao, Mengli; Zhang, Yongbo; Wang, Zhihua; Fu, Huimin

    2018-04-01

    Considering that the performance of the conventional Kalman filter may seriously degrade when it suffers stochastic faults and unknown inputs, which are very common in engineering problems, a new type of adaptive three-stage extended Kalman filter (AThSEKF) is proposed to solve state and fault estimation in nonlinear discrete-time systems under these conditions. The three-stage UV transformation and adaptive forgetting factor are introduced for derivation, and by comparing with the adaptive augmented state extended Kalman filter, it is proven to be uniformly asymptotically stable. Furthermore, the adaptive three-stage extended Kalman filter is applied to a two-dimensional radar tracking scenario to illustrate its effectiveness, and the performance is compared with that of the conventional three-stage extended Kalman filter (ThSEKF) and the adaptive two-stage extended Kalman filter (ATEKF). The results show that the adaptive three-stage extended Kalman filter is more effective than these two filters when facing nonlinear discrete-time systems in which information about the unknown inputs is not perfectly known. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
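
    The "adaptive forgetting factor" idea can be illustrated in its simplest possible setting: a scalar fading-memory Kalman filter, where a factor λ ≥ 1 inflates the predicted covariance so old data is discounted. This is a toy sketch of the general mechanism only, not the paper's three-stage AThSEKF, and all numbers are invented.

```python
# Scalar fading-memory Kalman filter sketch: the forgetting factor lam
# inflates the predicted covariance each step, keeping the filter
# responsive to faults and unmodelled inputs (toy illustration only).

def fading_kf(measurements, lam=1.05, q=0.01, r=1.0, x0=0.0, p0=10.0):
    """Estimate a scalar constant state from noisy measurements.
    lam: forgetting factor (lam = 1 recovers the standard filter),
    q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = lam * (p + q)          # time update with forgetting factor
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # measurement update
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Constant true state 5.0 observed with fixed offsets standing in for noise.
zs = [5.0 + d for d in (0.4, -0.3, 0.2, -0.1, 0.0, 0.1, -0.2, 0.0)]
est = fading_kf(zs)
print(round(est[-1], 2))
```

With λ > 1 the covariance p is bounded away from zero, so the gain never collapses; that is what lets an adaptive filter keep tracking after a fault, at the cost of slightly noisier steady-state estimates.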

  3. On-line, real-time monitoring for petrochemical and pipeline process control applications

    Energy Technology Data Exchange (ETDEWEB)

    Kane, Russell D.; Eden, D.C.; Cayard, M.S.; Eden, D.A.; Mclean, D.T. [InterCorr International, Inc., 14503 Bammel N. Houston, Suite 300, Houston Texas 77014 (United States); Kintz, J. [BASF Corporation, 602 Copper Rd., Freeport, Texas 77541 (United States)

    2004-07-01

    Corrosion problems in petroleum and petrochemical plants and pipelines may be inherent to the processes, but costly and damaging equipment losses are not. With the continual drive to increase productivity, while protecting product quality, safety and the environment, corrosion must become a variable that can be continuously monitored and assessed. This millennium has seen the introduction of new 'real-time', online measurement technologies and vast improvements in methods of electronic data handling. The 'replace when it fails' approach is receding into a distant memory; facilities management today is embracing new technology, and rapidly appreciating the value it has to offer. It offers the capability to increase system run time between major inspections, reduce the time and expense associated with turnaround or in-line inspections, and reduce major upsets which cause unplanned shutdowns. The end result is the ability to know, on a practical basis, how 'hard' facilities can be pushed before excessive corrosion damage will result, so that process engineers can understand the impact of their process control actions and implement true asset management. This paper makes reference to the use of an online, real-time electrochemical corrosion monitoring system (SmartCET) in a plant running a mostly organic process medium. It also highlights other pertinent examples where similar systems have been used to provide useful real-time information to detect system upsets, which would not have been possible otherwise. This monitoring/process control approach has allowed operators and engineers to see, for the first time, changes in corrosion behavior caused by specific variations in process parameters. Process adjustments have been identified that reduce corrosion rates while maintaining acceptable yields and quality. The monitoring system has provided a new window into the chemistry of the process, helping chemical engineers improve their process...

  4. The Effect of Effluent Recirculation in a Semi-Continuous Two-Stage Anaerobic Digestion System

    Directory of Open Access Journals (Sweden)

    Karthik Rajendran

    2013-06-01

    The effect of recirculation in increasing the organic loading rate (OLR) and decreasing the hydraulic retention time (HRT) in a semi-continuous two-stage anaerobic digestion system using a continuous stirred tank reactor (CSTR) and an upflow anaerobic sludge bed (UASB) was evaluated. Two parallel processes were in operation for 100 days, one with recirculation (closed system) and the other without recirculation (open system). For this purpose, two structurally different carbohydrate-based substrates were used: starch and cotton. The digestion of starch and cotton in the closed system resulted in production of 91% and 80% of the theoretical methane yield during the first 60 days. In contrast, in the open system the methane yield decreased to 82% and 56% of the theoretical value, for starch and cotton, respectively. The OLR could successfully be increased to 4 gVS/L/day for cotton and 10 gVS/L/day for starch. It is concluded that the recirculation supports the microorganisms in effective hydrolysis of polyhydrocarbons in the CSTR and preserves the nutrients in the system at higher OLRs, thereby improving the overall performance and stability of the process.

  5. Backward running or absence of running from Creutz ratios

    International Nuclear Information System (INIS)

    Giedt, Joel; Weinberg, Evan

    2011-01-01

    We extract the running coupling based on Creutz ratios in SU(2) lattice gauge theory with two Dirac fermions in the adjoint representation. Depending on how the extrapolation to zero fermion mass is performed, either backward running or an absence of running is observed at strong bare coupling. This behavior is consistent with other findings which indicate that this theory has an infrared fixed point.

  6. The medial prefrontal cortex and nucleus accumbens mediate the motivation for voluntary wheel running in the rat.

    Science.gov (United States)

    Basso, Julia C; Morrell, Joan I

    2015-08-01

    Voluntary wheel running in rats provides a preclinical model of exercise motivation in humans. We hypothesized that rats run because this activity has positive incentive salience in both the acquisition and habitual stages of wheel running and that gender differences might be present. Additionally, we sought to determine which forebrain regions are essential for the motivational processes underlying wheel running in rats. The motivation for voluntary wheel running in male and female Sprague-Dawley rats was investigated during the acquisition (Days 1-7) and habitual phases (after Day 21) of running using conditioned place preference (CPP) and the reinstatement (rebound) response after forced abstinence, respectively. Both genders displayed a strong CPP for the acquisition phase and a strong rebound response to wheel deprivation during the habitual phase, suggesting that both phases of wheel running are rewarding for both sexes. Female rats showed a 1.5 times greater rebound response than males to wheel deprivation in the habitual phase of running, while during the acquisition phase, no gender differences in CPP were found. We transiently inactivated the medial prefrontal cortex (mPFC) or the nucleus accumbens (NA), hypothesizing that because these regions are involved in the acquisition and reinstatement of self-administration of both natural and pharmacological stimuli, they might also serve a role in the motivation to wheel run. Inactivation of either structure decreased the rebound response in the habitual phase of running, demonstrating that these structures are involved in the motivation for this behavior. (c) 2015 APA, all rights reserved.

  7. High Performance Gasification with the Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Gøbel, Benny; Hindsgaul, Claus; Henriksen, Ulrik Birk

    2002-01-01

    …air preheating and pyrolysis, whereby very high energy efficiencies can be achieved. Encouraging results are obtained at a 100 kWth laboratory facility. The tar content in the raw gas is measured to be below 25 mg/Nm3, and around 5 mg/Nm3 after gas cleaning with a traditional baghouse filter. Furthermore, a cold gas efficiency exceeding 90% is obtained. In the original design of the two-stage gasification process, the pyrolysis unit consists of a screw conveyor with external heating, and the char unit is a fixed bed gasifier. This design is well proven during more than 1000 hours of testing with various fuels, and is a suitable design for medium size gasifiers.

  8. A statistical approach for segregating cognitive task stages from multivariate fMRI BOLD time series

    Directory of Open Access Journals (Sweden)

    Charmaine eDemanuele

    2015-10-01

    Multivariate pattern analysis can reveal new information from neuroimaging data to illuminate human cognition and its disturbances. Here, we develop a methodological approach, based on multivariate statistical/machine learning and time series analysis, to discern cognitive processing stages from fMRI blood oxygenation level dependent (BOLD) time series. We apply this method to data recorded from a group of healthy adults whilst performing a virtual reality version of the delayed win-shift radial arm maze task. This task has frequently been used to study working memory and decision making in rodents. Using linear classifiers and multivariate test statistics in conjunction with time series bootstraps, we show that different cognitive stages of the task, as defined by the experimenter, namely the encoding/retrieval, choice, reward and delay stages, can be statistically discriminated from the BOLD time series in brain areas relevant for decision making and working memory. Discrimination of these task stages was significantly reduced during poor behavioral performance in dorsolateral prefrontal cortex (DLPFC), but not in the primary visual cortex (V1). Experimenter-defined dissection of time series into class labels based on task structure was confirmed by an unsupervised, bottom-up approach based on Hidden Markov Models. Furthermore, we show that different groupings of recorded time points into cognitive event classes can be used to test hypotheses about the specific cognitive role of a given brain region during task execution. We found that whilst the DLPFC strongly differentiated between task stages associated with different memory loads but not between different visual-spatial aspects, the reverse was true for V1. Our methodology illustrates how different aspects of cognitive information processing during one and the same task can be separated and attributed to specific brain regions based on information contained in multivariate patterns of voxel
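
    The stage-discrimination step above can be caricatured with a toy nearest-centroid classifier on synthetic "voxel" patterns. Everything below is an illustrative stand-in (two invented stage labels, 20 synthetic voxels, a deliberately simple classifier); the study itself used linear classifiers, multivariate test statistics, and time-series bootstraps on real BOLD data.

    ```python
    import random

    random.seed(0)

    def sample(stage):
        # Synthetic 20-"voxel" pattern; the first 10 voxels carry a
        # stage-dependent mean shift, the rest are pure noise.
        base = 1.0 if stage == "encoding" else -1.0
        return [(base if i < 10 else 0.0) + random.gauss(0.0, 0.5)
                for i in range(20)]

    def centroid(vectors):
        # Mean pattern over all time points labeled with one stage.
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

    def classify(x, cents):
        # Assign a time point to the stage with the nearest centroid.
        d2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return min(cents, key=lambda stage: d2(x, cents[stage]))

    stages = ("encoding", "delay")
    train = {s: [sample(s) for _ in range(30)] for s in stages}
    cents = {s: centroid(v) for s, v in train.items()}
    test_set = [(s, sample(s)) for s in stages for _ in range(20)]
    acc = sum(classify(x, cents) == s for s, x in test_set) / len(test_set)
    print(f"held-out stage-classification accuracy: {acc:.2f}")
    ```

    When the stage-dependent signal is strong relative to the noise, as here, held-out time points are assigned to the correct stage almost perfectly; the interesting empirical question in the paper is how far above chance this discrimination sits for real BOLD data, and in which regions.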

  9. Single-stage Acetabular Revision During Two-stage THA Revision for Infection is Effective in Selected Patients.

    Science.gov (United States)

    Fink, Bernd; Schlumberger, Michael; Oremek, Damian

    2017-08-01

    The treatment of periprosthetic infections of hip arthroplasties typically involves use of either a single- or two-stage (with implantation of a temporary spacer) revision surgery. In patients with severe acetabular bone deficiencies, either already present or after component removal, spacers cannot be safely implanted. In such hips where it is impossible to use spacers and yet a two-stage revision of the prosthetic stem is recommended, we have combined a two-stage revision of the stem with a single revision of the cup. To our knowledge, this approach has not been reported before. (1) What proportion of patients treated with single-stage acetabular reconstruction as part of a two-stage revision for an infected THA remain free from infection at 2 or more years? (2) What are the Harris hip scores after the first stage and at 2 years or more after the definitive reimplantation? Between June 2009 and June 2014, we treated all patients undergoing surgical treatment for an infected THA using a single-stage acetabular revision as part of a two-stage THA exchange if the acetabular defect classification was Paprosky Types 2B, 2C, 3A, 3B, or pelvic discontinuity and a two-stage procedure was preferred for the femur. The procedure included removal of all components, joint débridement, definitive acetabular reconstruction (with a cage to bridge the defect, and a cemented socket), and a temporary cemented femoral component at the first stage; the second stage consisted of repeat joint and femoral débridement and exchange of the femoral component to a cementless device. During the period noted, 35 patients met those definitions and were treated with this approach. No patients were lost to followup before 2 years; mean followup was 42 months (range, 24-84 months). The clinical evaluation was performed with the Harris hip scores and resolution of infection was assessed by the absence of clinical signs of infection and a C-reactive protein level less than 10 mg/L. All

  10. Hydrogen and methane production from desugared molasses using a two‐stage thermophilic anaerobic process

    DEFF Research Database (Denmark)

    Kongjan, Prawit; O-Thong, Sompong; Angelidaki, Irini

    2013-01-01

    Hydrogen and methane production from desugared molasses by a two-stage thermophilic anaerobic process was investigated in a series of two up-flow anaerobic sludge blanket (UASB) reactors. The first reactor was dominated by the hydrogen-producing bacterium Thermoanaerobacterium thermosaccharolyticum… Furthermore, the mixed gas, with a volumetric content of 16.5% H2, 38.7% CO2, and 44.8% CH4, and containing approximately 15% of its energy as hydrogen, is viable as bio-hythane.

  11. Enhanced coproduction of hydrogen and methane from cornstalks by a three-stage anaerobic fermentation process integrated with alkaline hydrolysis.

    Science.gov (United States)

    Cheng, Xi-Yu; Liu, Chun-Zhao

    2012-01-01

    A three-stage anaerobic fermentation process including H(2) fermentation I, H(2) fermentation II, and methane fermentation was developed for the coproduction of hydrogen and methane from cornstalks. Hydrogen production from cornstalks using direct microbial conversion by Clostridium thermocellum 7072 was markedly enhanced in the two-stage thermophilic hydrogen fermentation process integrated with alkaline treatment. The highest total hydrogen yield from cornstalks in the two-stage fermentation process reached 74.4 mL/g-cornstalk. The hydrogen fermentation effluents and alkaline hydrolyzate were further used for methane fermentation by anaerobic granular sludge, and the total methane yield reached 205.8 mL/g-cornstalk. The total energy recovery in the three-stage anaerobic fermentation process integrated with alkaline hydrolysis reached 70.0%. Copyright © 2011 Elsevier Ltd. All rights reserved.
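
    The energy split behind those two gas yields can be checked with a back-of-the-envelope calculation. The yields are the values reported above; the volumetric lower heating values (10.8 kJ/L for H2 and 35.8 kJ/L for CH4 at standard conditions) are textbook assumptions, not numbers from the paper.

    ```python
    # Back-of-the-envelope check of the energy split between the two gases.
    h2_yield_l = 74.4 / 1000    # L H2 per g cornstalk (reported yield)
    ch4_yield_l = 205.8 / 1000  # L CH4 per g cornstalk (reported yield)
    LHV_H2, LHV_CH4 = 10.8, 35.8  # assumed volumetric LHVs, kJ/L

    e_h2 = h2_yield_l * LHV_H2     # kJ per g cornstalk from hydrogen
    e_ch4 = ch4_yield_l * LHV_CH4  # kJ per g cornstalk from methane
    total = e_h2 + e_ch4
    print(f"H2 {e_h2:.2f} kJ/g, CH4 {e_ch4:.2f} kJ/g, "
          f"CH4 share {e_ch4 / total:.0%}")
    ```

    Under these assumed heating values, methane carries roughly 90% of the recovered energy, which is why the methane stage dominates the reported 70.0% overall energy recovery.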

  12. Instrument front-ends at Fermilab during Run II

    Science.gov (United States)

    Meyer, T.; Slimmer, D.; Voy, D.

    2011-11-01

    The optimization of an accelerator relies on the ability to monitor the behavior of the beam in an intelligent and timely fashion. The use of processor-driven front-ends allowed for the deployment of smart systems in the field for improved data collection and analysis during Run II. This paper describes the implementation of the two main systems used: National Instruments LabVIEW running on PCs, and WindRiver's VxWorks real-time operating system running in a VME crate processor. Work supported by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.

  13. Control strategy research of two stage topology for pulsed power supply

    International Nuclear Information System (INIS)

    Shi Chunfeng; Wang Rongkun; Huang Yuzhen; Chen Youxin; Yan Hongbin; Gao Daqing

    2013-01-01

    A pulsed power supply for HIRFL-CSR is introduced. The ripple and the current error of the power supply's topological structure during operation are analyzed, and a two-stage topology for the pulsed power supply is given. The control strategy was simulated and experiments were performed on a digital power platform. The results show that the main circuit structure and control method are feasible. (authors)

  14. Influence of Cu(NO3)2 initiation additive in two-stage mode conditions of coal pyrolytic decomposition

    Directory of Open Access Journals (Sweden)

    Larionov Kirill

    2017-01-01

    Two-stage (pyrolysis and oxidation) pyrolytic decomposition of a brown coal sample with Cu(NO3)2 additive was studied. The additive was introduced using the capillary wetness impregnation method at 5% mass concentration. Sample reactivity was studied by thermogravimetric analysis with staged gaseous medium supply (argon and air) at a heating rate of 10 °C/min and intermediate isothermal soaking. Introduction of the initiating additive was found to significantly reduce the volatile release temperature and accelerate thermal decomposition of the sample. Mass-spectral analysis results reveal that the significant difference in process characteristics is connected with the volatile matter release stage, which is initiated by nitrous oxide produced during copper nitrate decomposition.

  15. Spatial statistics of magnetic field in two-dimensional chaotic flow in the resistive growth stage

    Energy Technology Data Exchange (ETDEWEB)

    Kolokolov, I.V., E-mail: igor.kolokolov@gmail.com [Landau Institute for Theoretical Physics RAS, 119334, Kosygina 2, Moscow (Russian Federation); NRU Higher School of Economics, 101000, Myasnitskaya 20, Moscow (Russian Federation)

    2017-03-18

    The correlation tensors of magnetic field in a two-dimensional chaotic flow of conducting fluid are studied. It is shown that there is a stage of resistive evolution where the field correlators grow exponentially with time. The two- and four-point field correlation tensors are computed explicitly in this stage in the framework of Batchelor–Kraichnan–Kazantsev model. They demonstrate strong temporal intermittency of the field fluctuations and high level of non-Gaussianity in spatial field distribution.

  16. Mixing times in quantum walks on two-dimensional grids

    International Nuclear Information System (INIS)

    Marquezino, F. L.; Portugal, R.; Abal, G.

    2010-01-01

    Mixing properties of discrete-time quantum walks on two-dimensional grids with toruslike boundary conditions are analyzed, focusing on their connection to the complexity of the corresponding abstract search algorithm. In particular, an exact expression for the stationary distribution of the coherent walk over odd-sided lattices is obtained after solving the eigenproblem for the evolution operator for this particular graph. The limiting distribution and mixing time of a quantum walk with a coin operator modified as in the abstract search algorithm are obtained numerically. On the basis of these results, the relation between the mixing time of the modified walk and the running time of the corresponding abstract search algorithm is discussed.

  17. Two-stage soil infiltration treatment system for treating ammonium wastewaters of low COD/TN ratios.

    Science.gov (United States)

    Lei, Zhongfang; Wu, Ting; Zhang, Yi; Liu, Xiang; Wan, Chunli; Lee, Duu-Jong; Tay, Joo-Hwa

    2013-01-01

    Soil infiltration treatment (SIT) is ineffective for treating ammonium wastewaters of total nitrogen (TN) > 100 mg l^-1. This study applied a novel two-stage SIT process for effective TN removal from wastewaters of TN > 100 mg l^-1 and of chemical oxygen demand (COD)/TN ratio of 3.2-8.6. The wastewater was first fed into a soil column (stage 1) at a hydraulic loading rate (HLR) of 0.06 m^3 m^-2 d^-1 for COD removal and total phosphorus (TP) immobilization. Then the effluent from stage 1 was fed individually into four soil columns (stage 2) at an HLR of 0.02 m^3 m^-2 d^-1 with different proportions of raw wastewater as additional carbon source. Over the one-year field test, balanced nitrification and denitrification in the two-stage SIT revealed excellent TN removal (>90%) from the tested wastewaters. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. High-speed double-disc TMP [thermomechanical pulp] from northern and southern softwoods: One or two refining stages

    Energy Technology Data Exchange (ETDEWEB)

    Sabourin, M.J. (Andritz Sprout-Bauer, Inc., Springfield, OH (United States)); Cort, J.B.; Musselman, R.L. (Andritz Sprout-Bauer, Inc., Muncy, PA (United States))

    1994-01-01

    Pilot-plant studies were carried out to evaluate one- and two-stage high-speed refining processes for production of thermomechanical pulp (TMP) at minimal energy consumption. Both northern (black spruce/balsam fir) and southern (loblolly pine) wood species were tested. Preliminary results indicate both one- and two-stage high-speed refining are suitable for the production of TMP from spruce and fir. Single-stage, high-speed refining of spruce/fir resulted in over 25% energy savings compared to conventional TMP production. The resulting TMP had improved optical and shive content properties, with slightly reduced pulp strength and long fiber content. Two stages of refining were necessary to optimize pulp quality from the loblolly pine furnish. A 15% energy reduction was obtained when comparing high-speed and conventional TMP pulping of loblolly pine at similar operating conditions. The high-speed pine TMP had comparable bonding strength, shive content, and lower tear than conventional two-stage loblolly pine TMP. 14 refs., 11 figs., 6 tabs.

  19. Passenger Sharing of the High-Speed Railway from Sensitivity Analysis Caused by Price and Run-time Based on the Multi-Agent System

    Directory of Open Access Journals (Sweden)

    Ma Ning

    2013-09-01

    Purpose: Nowadays, governments around the world are actively constructing high-speed railways. It is therefore significant to do research on this increasingly prevalent mode of transport. Design/methodology/approach: In this paper, we simulate the process of the passenger's travel mode choice by adjusting the ticket fare and the run-time, based on a multi-agent system (MAS). Findings: From the research we draw the conclusion that increasing the run-time appropriately and reducing the ticket fare to some extent are effective ways to enhance the passenger share of the high-speed railway. Originality/value: We hope it can provide policy recommendations for the railway sectors in developing long-term plans for high-speed railways in the future.
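
    A minimal sketch of the kind of choice rule such a multi-agent simulation might give each passenger agent: a binomial logit over high-speed rail versus a single conventional alternative, with fare and run-time entering a linear utility. The alternatives, coefficients, and values below are hypothetical, not the paper's calibrated model.

    ```python
    import math

    def hsr_share(fare, run_time, alt_fare=60.0, alt_time=5.0,
                  beta_fare=-0.02, beta_time=-0.6):
        # Binomial logit share of agents choosing high-speed rail (HSR) over
        # one conventional alternative; utilities are linear in fare and
        # run-time. All coefficients and values are hypothetical.
        u_hsr = beta_fare * fare + beta_time * run_time
        u_alt = beta_fare * alt_fare + beta_time * alt_time
        return math.exp(u_hsr) / (math.exp(u_hsr) + math.exp(u_alt))

    base = hsr_share(fare=150.0, run_time=2.0)
    cheaper = hsr_share(fare=120.0, run_time=2.0)  # fare cut -> higher share
    slower = hsr_share(fare=150.0, run_time=2.5)   # longer trip -> lower share
    print(round(base, 2), round(cheaper, 2), round(slower, 2))
    ```

    In this standard specification both a higher fare and a longer run-time reduce the HSR share; the paper's finding about "increasing the run-time appropriately" arises because fare and run-time are varied jointly in its scenarios, trading speed against price.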

  20. Performance of the ATLAS muon trigger in run 2

    CERN Document Server

    Morgenstern, Marcus; The ATLAS collaboration

    2017-01-01

    Triggering on muons is a crucial ingredient to fulfill the physics program of the ATLAS experiment. The ATLAS trigger system deploys a two-stage strategy: a hardware-based Level-1 trigger and a software-based high-level trigger select events of interest at a suitable recording rate. Both stages underwent upgrades to cope with the challenges of Run-2 data-taking at a centre-of-mass energy of 13 TeV and instantaneous luminosities up to 2x10$^{34}$ cm$^{-2}$ s$^{-1}$. The design of the ATLAS muon triggers and their performance in proton-proton collisions at 13 TeV are presented.

  1. Quintessence as a run-away dilaton

    CERN Document Server

    Gasperini, M; Veneziano, Gabriele

    2002-01-01

    We consider a late-time cosmological model based on a recent proposal that the infinite-bare-coupling limit of superstring/M-theory exists and has good phenomenological properties, including a vanishing cosmological constant, and a massless, decoupled dilaton. As it runs away to $+ \\infty$, the dilaton can play the role of the quintessence field recently advocated to drive the late-time accelerated expansion of the Universe. If, as suggested by some string theory examples, appreciable deviations from General Relativity persist even today in the dark matter sector, the Universe may smoothly evolve from an initial "focussing" stage, lasting till radiation-matter equality, to a "dragging" regime, which eventually gives rise to an accelerated expansion with frozen $\\Omega(\\rm{dark energy})/\\Omega(\\rm{dark matter})$.

  2. Palantiri: a distributed real-time database system for process control

    International Nuclear Information System (INIS)

    Tummers, B.J.; Heubers, W.P.J.

    1992-01-01

    The medium-energy accelerator MEA, located in Amsterdam, is controlled by a heterogeneous computer network. A large real-time database contains the parameters involved in the control of the accelerator and the experiments. This database system was implemented about ten years ago and has since been extended several times. In response to increased needs the database system has been redesigned. The new database environment, as described in this paper, consists of two new concepts: (1) A Palantir, which is a per-machine process that stores the locally declared data and forwards all non-local requests for data access to the appropriate machine. It acts as a storage device for data and a looking glass upon the world. (2) Golems: working units that define the data within the Palantir, and that have knowledge of the hardware they control. Applications access the data of a Golem by name (names resemble Unix path names). The Palantir that runs on the same machine as the application handles the distribution of access requests. This paper focuses on the Palantir concept as a distributed data storage and event handling device for process control. (author)
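
    The Palantir/Golem division of labor can be caricatured in a few lines. The class and method names below are invented for illustration (the real system was not written in Python); the point is the lookup rule: data live under Unix-like path names, and a request whose first path component names another machine is forwarded to that machine's Palantir.

    ```python
    # Toy sketch of the Palantir idea: each machine's Palantir stores locally
    # declared data and forwards non-local path lookups to the owning machine.
    class Palantir:
        def __init__(self, machine, network):
            self.machine = machine   # this machine's name, e.g. "mea1"
            self.local = {}          # path -> value for locally declared data
            self.network = network   # machine name -> Palantir (the "world")
            network[machine] = self

        def declare(self, path, value):
            # A Golem declares its data under a Unix-like path name.
            self.local[path] = value

        def read(self, path):
            owner = path.split("/")[1]   # first component names the machine
            if owner == self.machine:
                return self.local[path]
            # Non-local request: forward to the owning machine's Palantir.
            return self.network[owner].read(path)

    net = {}
    a, b = Palantir("mea1", net), Palantir("mea2", net)
    a.declare("/mea1/magnet/current", 42.0)
    print(b.read("/mea1/magnet/current"))  # resolved via mea1's Palantir
    ```

    Applications only ever talk to their local Palantir; whether the data is local or remote is hidden behind the path-naming convention.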

  3. Integrating software testing and run-time checking in an assertion verification framework

    OpenAIRE

    Mera, E.; López García, Pedro; Hermenegildo, Manuel V.

    2009-01-01

    We have designed and implemented a framework that unifies unit testing and run-time verification (as well as static verification and static debugging). A key contribution of our approach is that a unified assertion language is used for all of these tasks. We first propose methods for compiling runtime checks for (parts of) assertions which cannot be verified at compile-time via program transformation. This transformation allows checking preconditions and postconditions, including conditional...

  4. Building fast well-balanced two-stage numerical schemes for a model of two-phase flows

    Science.gov (United States)

    Thanh, Mai Duc

    2014-06-01

    We present a set of well-balanced two-stage schemes for an isentropic model of two-phase flows arising from the modeling of deflagration-to-detonation transition in granular materials. The first stage absorbs the source term in nonconservative form into equilibria. In the second stage, these equilibria are composed into a numerical flux formed by a convex combination of the numerical flux of a stable Lax-Friedrichs-type scheme and that of a higher-order Richtmyer-type scheme. Numerical schemes constructed in this way are expected to have an interesting property: they are fast and stable. Tests show that the method remains stable up to the parameter value CFL, and so any value of the parameter between zero and this value is expected to work as well. All the schemes in this family are shown to capture stationary waves and preserve the positivity of the volume fractions. The special values of the parameter 0, 1/2, 1/(1+CFL), and CFL in this family define the Lax-Friedrichs-type, FAST1, FAST2, and FAST3 schemes, respectively. These schemes are shown to give a desirable accuracy. The errors and the CPU time of these schemes and the Roe-type scheme are calculated and compared. The constructed schemes are shown to be well-balanced and faster than the Roe-type scheme.
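
    The second-stage flux construction (a convex combination of a Lax-Friedrichs-type and a Richtmyer-type flux, weighted by a parameter theta) can be illustrated on the scalar Burgers equation. The problem, grid, and time step below are illustrative stand-ins, not the paper's two-phase model, and the source-term/equilibrium stage is omitted.

    ```python
    # One time step of the blended flux theta*F_Richtmyer + (1-theta)*F_LaxFriedrichs
    # for the scalar Burgers equation u_t + (u^2/2)_x = 0, periodic boundaries.
    def step(u, dx, dt, theta):
        f = lambda v: 0.5 * v * v
        n = len(u)
        flux = [0.0] * n                     # flux[i] lives at interface i+1/2
        for i in range(n):
            ul, ur = u[i], u[(i + 1) % n]
            # Lax-Friedrichs-type flux: stable but diffusive.
            f_lf = 0.5 * (f(ul) + f(ur)) - 0.5 * dx / dt * (ur - ul)
            # Richtmyer (two-step Lax-Wendroff) flux: higher order.
            u_half = 0.5 * (ul + ur) - 0.5 * dt / dx * (f(ur) - f(ul))
            f_rm = f(u_half)
            flux[i] = theta * f_rm + (1.0 - theta) * f_lf
        return [u[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

    n, dx, dt = 100, 0.01, 0.004             # dt/dx = 0.4 keeps CFL < 1 for |u| <= 1
    u = [1.0 if 0.25 <= i * dx < 0.5 else 0.0 for i in range(n)]
    mass0 = sum(u) * dx
    for _ in range(50):
        u = step(u, dx, dt, theta=0.5)       # theta = 1/2 is the FAST1 choice
    print(f"mass drift after 50 steps: {abs(sum(u) * dx - mass0):.1e}")
    ```

    Because the update is in conservation form, the interface fluxes telescope and total mass is preserved to rounding for any theta; per the abstract, theta = 0, 1/2, 1/(1+CFL), and CFL correspond to the Lax-Friedrichs-type, FAST1, FAST2, and FAST3 schemes.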

  5. Five training sessions improves 3000 meter running performance.

    Science.gov (United States)

    Riiser, A; Ripe, S; Aadland, E

    2015-12-01

    The primary aim of the present study was to evaluate the effect of two weeks of endurance training on 3000-meter running performance. Secondarily, we wanted to assess the relationship between baseline running performance and change in running performance over the intervention period. We assigned 36 military recruits to a training group (N.=28) and a control group. The training group was randomly allocated to one of three sub-groups: 1) a 3000-meter group (test race); 2) a 4x4-minute high-intensity interval group; 3) a continuous training group. The training group exercised five times over a two-week period. The training group improved its 3000-meter running performance by 50 seconds (6%) compared to the control group (P=0.003). Moreover, all sub-groups improved their performance by 37 to 73 seconds (4-8%) compared to the control group (P…). We conclude that five endurance training sessions improved 3000-meter running performance, and the slowest runners achieved the greatest improvement in running performance.

  6. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    International Nuclear Information System (INIS)

    Yang, Won Sik; Lin, C. S.; Hader, J. S.; Park, T. K.; Deng, P.; Yang, G.; Jung, Y. S.; Kim, T. K.; Stauff, N. E.

    2016-01-01

    This report presents the performance characteristics of two 'two-stage' fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supporting LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not directly sent to ADS, but partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option of transuranic (TRU) elements

  7. Two-stage electrolysis to enrich tritium in environmental water

    International Nuclear Information System (INIS)

    Shima, Nagayoshi; Muranaka, Takeshi

    2007-01-01

    We present a two-stage electrolysis procedure to enrich tritium in environmental waters. Tritium is first enriched rapidly in a commercially-available electrolyser with a large 50 A current, and then in a newly-designed electrolyser that avoids the memory effect, with a 6 A current. The tritium recovery factor obtained by such two-stage electrolysis was greater than that obtained when using the commercially-available device alone. Water samples collected in 2006 in lakes and along the Pacific coast of Aomori prefecture, Japan, were electrolyzed using the two-stage method. Tritium concentrations in these samples ranged from 0.2 to 0.9 Bq/L and were half or less of those in samples collected at the same sites in 1992. (author)

  8. HOSPITAL SITE SELECTION USING TWO-STAGE FUZZY MULTI-CRITERIA DECISION MAKING PROCESS

    Directory of Open Access Journals (Sweden)

    Ali Soltani

    2011-06-01

    Site selection for the siting of urban activities/facilities is one of the crucial policy-related decisions taken by urban planners and policy makers. The process of site selection is inherently complicated. A careless choice of site imposes exorbitant costs on the city budget and inevitably damages the environment. Nowadays, multi-attribute decision making approaches are suggested to improve the precision of decision making and reduce surplus side effects. Two well-known techniques, the analytical hierarchy process and the analytical network process, are among the multi-criteria decision making systems which can easily accommodate both quantitative and qualitative criteria. These have also been developed into fuzzy analytical hierarchy process and fuzzy analytical network process systems, which are capable of accommodating the inherent uncertainty and vagueness in multi-criteria decision-making. This paper reports the process and results of a hospital site selection within Region 5 of the Shiraz metropolitan area, Iran, using a fuzzy analytical network process integrated with a Geographic Information System (GIS). The weights of the alternatives were calculated using the fuzzy analytical network process. Then a sensitivity analysis was conducted to measure the elasticity of the decision with regard to different criteria. This study contributes to planning practice by suggesting a more comprehensive decision making tool for site selection.
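
    The fuzzy weighting step can be sketched with Buckley's geometric-mean method on triangular fuzzy pairwise judgments, defuzzified by centroid. The three criteria and all judgment values below are invented for illustration, and the sketch omits the criteria interdependence that a full fuzzy ANP models.

    ```python
    # Buckley's fuzzy-geometric-mean weighting for a 3x3 pairwise-comparison
    # matrix of triangular fuzzy numbers (l, m, u). Hypothetical criteria.
    criteria = ["access", "land", "env"]
    M = {
        ("access", "access"): (1, 1, 1),
        ("access", "land"):   (2, 3, 4),        # access > land cost
        ("access", "env"):    (4, 5, 6),        # access >> environment
        ("land", "access"):   (1/4, 1/3, 1/2),  # reciprocal judgments
        ("land", "land"):     (1, 1, 1),
        ("land", "env"):      (1, 2, 3),
        ("env", "access"):    (1/6, 1/5, 1/4),
        ("env", "land"):      (1/3, 1/2, 1),
        ("env", "env"):       (1, 1, 1),
    }

    def geo_mean(triples):
        # Component-wise geometric mean of triangular numbers.
        n = len(triples)
        prod = [1.0, 1.0, 1.0]
        for l, m, u in triples:
            prod[0] *= l; prod[1] *= m; prod[2] *= u
        return tuple(p ** (1.0 / n) for p in prod)

    rows = {c: geo_mean([M[(c, d)] for d in criteria]) for c in criteria}
    tot = [sum(rows[c][k] for c in criteria) for k in range(3)]
    # Fuzzy weight of each row: divide by the total with flipped bounds (l/u, m/m, u/l).
    fuzzy = {c: (rows[c][0] / tot[2], rows[c][1] / tot[1], rows[c][2] / tot[0])
             for c in criteria}
    crisp = {c: sum(t) / 3 for c, t in fuzzy.items()}  # centroid defuzzification
    s = sum(crisp.values())
    weights = {c: v / s for c, v in crisp.items()}     # normalize to sum to 1
    print({c: round(w, 3) for c, w in weights.items()})
    ```

    With these judgments "access" dominates, as expected from the pairwise matrix; in a real siting study the resulting weights would then be combined with GIS suitability layers for each alternative.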

  10. Thermodynamics analysis of a modified dual-evaporator CO2 transcritical refrigeration cycle with two-stage ejector

    International Nuclear Information System (INIS)

    Bai, Tao; Yan, Gang; Yu, Jianlin

    2015-01-01

    In this paper, a modified dual-evaporator CO2 transcritical refrigeration cycle with a two-stage ejector (MDRC) is proposed. In the MDRC, the two-stage ejector is employed to recover expansion work from the cycle's throttling processes, enhancing system performance and providing dual-temperature refrigeration simultaneously. The effects of some key parameters on the thermodynamic performance of the modified cycle are theoretically investigated based on energetic and exergetic analyses. The simulation results for the modified cycle show that the two-stage ejector improves system performance more effectively than a single ejector in a CO2 dual-temperature refrigeration cycle, and the improvements in the maximum system COP (coefficient of performance) and system exergy efficiency reach 37.61% and 31.9% over those of the conventional dual-evaporator cycle under the given operating conditions. The exergetic analysis of each component at optimum discharge pressure indicates that the gas cooler, compressor, two-stage ejector and expansion valves contribute the main portion of the total system exergy destruction, and the exergy destruction caused by the two-stage ejector amounts to 16.91% of the exergy input. The performance characteristics of the proposed cycle show its promise in dual-evaporator refrigeration systems. - Highlights: • A two-stage ejector is used in a dual-evaporator CO2 transcritical refrigeration cycle. • Energetic and exergetic methods are used to analyze the system performance. • The modified cycle can provide dual-temperature refrigeration simultaneously. • The two-stage ejector effectively improves system COP and exergy efficiency
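
    The two figures of merit used in the analysis above are the standard ones for a dual-evaporator cycle; written out with the usual textbook symbols (this notation is assumed, not taken from the paper):

    ```latex
    % COP: total refrigeration effect of both evaporators per unit compressor work
    \mathrm{COP} = \frac{\dot{Q}_{\mathrm{evap},1} + \dot{Q}_{\mathrm{evap},2}}
                        {\dot{W}_{\mathrm{comp}}}

    % Exergy efficiency: exergy of the two cooling loads over the work input,
    % with T_0 the dead-state (ambient) temperature and T_{r,i} the temperature
    % of each refrigerated space
    \eta_{\mathrm{ex}} = \frac{\sum_{i=1}^{2} \dot{Q}_{\mathrm{evap},i}
                               \left( \frac{T_0}{T_{r,i}} - 1 \right)}
                              {\dot{W}_{\mathrm{comp}}}
    ```

    The exergy-destruction shares quoted for the gas cooler, compressor, ejector and valves are the component-wise breakdown of the gap between this exergy output and the work input.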

  11. Parameters affecting the stability of the digestate from a two-stage anaerobic process treating the organic fraction of municipal solid waste

    International Nuclear Information System (INIS)

    Trzcinski, Antoine P.; Stuckey, David C.

    2011-01-01

    This paper focused on the factors affecting the respiration rate of the digestate taken from a continuous anaerobic two-stage process treating the organic fraction of municipal solid waste (OFMSW). The process involved a hydrolytic reactor (HR) that produced a leachate fed to a submerged anaerobic membrane bioreactor (SAMBR). It was found that a volatile solids (VS) removal in the range 40-75% and an operating temperature in the HR between 21 and 35 °C resulted in digestates with similar respiration rates, with all digestates requiring 17 days of aeration before satisfying the British Standard Institution stability threshold of 16 mg CO₂ g VS⁻¹ day⁻¹. Sanitization of the digestate at 65 °C for 7 days allowed a mature digestate to be obtained. At 4 g VS L⁻¹ d⁻¹ and Solid Retention Times (SRT) greater than 70 days, all the digestates emitted CO₂ at a rate lower than 25 mg CO₂ g VS⁻¹ d⁻¹ after 3 days of aeration, while at SRT lower than 20 days all the digestates displayed a respiration rate greater than 25 mg CO₂ g VS⁻¹ d⁻¹. The compliance criteria for Class I digestate set by the European Commission (EC) and British Standard Institution (BSI) could not be met because of nickel and chromium contamination, which was probably due to attrition of the stainless steel stirrer in the HR.

  12. Habitat Fragmentation Drives Plant Community Assembly Processes across Life Stages

    Science.gov (United States)

    Hu, Guang; Feeley, Kenneth J.; Yu, Mingjian

    2016-01-01

    Habitat fragmentation is one of the principal causes of biodiversity loss and hence understanding its impacts on community assembly and disassembly is an important topic in ecology. We studied the relationships between fragmentation and community assembly processes in the land-bridge island system of Thousand Island Lake in East China. We focused on the changes in species diversity and phylogenetic diversity that occurred between life stages of woody plants growing on these islands. The observed diversities were compared with the expected diversities from random null models to characterize assembly processes. Regression tree analysis was used to illustrate the relationships between island attributes and community assembly processes. We found that different assembly processes predominate in the seedlings-to-saplings life-stage transition (SS) vs. the saplings-to-trees transition (ST). Island area was the main attribute driving the assembly process in SS. In ST, island isolation was more important. Within a fragmented landscape, the factors driving community assembly processes were found to differ between life stage transitions. Environmental filtering had a strong effect on the seedlings-to-saplings life-stage transition. Habitat isolation and dispersal limitation influenced all plant life stages, but had a weaker effect on communities than area. These findings add to our understanding of the processes driving community assembly and species coexistence in the context of pervasive and widespread habitat loss and fragmentation. PMID:27427960
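
The null-model comparison described above can be sketched as follows. The species pool, trait values, and "observed" community are synthetic, and mean pairwise trait distance stands in for the study's diversity measures:

```python
import random

# Toy null-model sketch: an observed community metric is compared with the
# distribution obtained by drawing random communities from the species
# pool. All data here are synthetic.

random.seed(5)
pool = [random.uniform(0, 10) for _ in range(100)]   # trait values in the pool
observed = sorted(pool)[:10]                         # a strongly "filtered" community

def mean_pairwise_distance(traits):
    n = len(traits)
    return sum(abs(a - b) for i, a in enumerate(traits)
               for b in traits[i + 1:]) / (n * (n - 1) / 2)

obs = mean_pairwise_distance(observed)
null = [mean_pairwise_distance(random.sample(pool, 10)) for _ in range(999)]
# One-sided p-value: how often a random community is as clustered as observed.
p = (1 + sum(m <= obs for m in null)) / (len(null) + 1)
print(f"observed MPD {obs:.2f}, null mean {sum(null)/len(null):.2f}, p = {p:.3f}")
```

An observed metric far below the null distribution, as here, is the signature of environmental filtering; one far above would suggest limiting similarity.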

  13. A two-stage planning and control model toward Economically Adapted Power Distribution Systems using analytical hierarchy processes and fuzzy optimization

    Energy Technology Data Exchange (ETDEWEB)

    Schweickardt, Gustavo [Instituto de Economia Energetica, Fundacion Bariloche, Centro Atomico Bariloche - Pabellon 7, Av. Bustillo km 9500, 8400 Bariloche (Argentina); Miranda, Vladimiro [INESC Porto, Instituto de Engenharia de Sistemas e Computadores do Porto and FEUP, Faculdade de Engenharia da Universidade do Porto, R. Dr. Roberto Frias, 378, 4200-465 Porto (Portugal)

    2009-07-15

    This work presents a model to evaluate Distribution System Dynamic De-adaptation with respect to its planning for a given Tariff Control period. The starting point for the modeling is the set of results from a multi-criteria method based on Fuzzy Dynamic Programming and Analytic Hierarchy Processes applied over a mid/short-term horizon (stage 1). Then, decision-making activities using Analytic Hierarchy Processes allow defining, for the Control of System De-adaptation (stage 2), a vector to evaluate the System Dynamic Adaptation. This vector is directly associated with an eventual series of imbalances that take place during the system's evolution. (author)
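
The Analytic Hierarchy Process computation at the core of such a model can be sketched as follows, assuming a hypothetical 3x3 pairwise comparison matrix: weights come from the principal eigenvector (via power iteration) and consistency is checked with Saaty's ratio.

```python
# AHP sketch: derive criterion weights from a pairwise comparison matrix
# and check Saaty's consistency ratio. The matrix values are hypothetical.

def ahp_weights(matrix, iterations=100):
    """Principal-eigenvector weights of a pairwise comparison matrix."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_new)
        w = [x / total for x in w_new]
    return w

def consistency_ratio(matrix, w):
    """CR = CI / RI, using Saaty's random index for n = 3."""
    n = len(matrix)
    lam = sum(sum(matrix[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / 0.58  # RI for n = 3

# Hypothetical comparisons among three planning criteria.
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]

w = ahp_weights(A)
print([round(x, 3) for x in w], "CR =", round(consistency_ratio(A, w), 3))
```

A CR below 0.1 is the conventional threshold for accepting the judgments as consistent.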

  14. On A Two-Stage Supply Chain Model In The Manufacturing Industry ...

    African Journals Online (AJOL)

    We model a two-stage supply chain where the upstream stage (stage 2) always meets demand from the downstream stage (stage 1). Demand is stochastic, hence shortages will occasionally occur at stage 2. Stage 2 must fill these shortages by expediting using overtime production and/or backordering. We derive optimal ...
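
The stage-2 trade-off between overtime expediting and backordering can be illustrated with a toy Monte Carlo search for a cost-minimizing stock level. The demand distribution, overtime capacity, and all costs below are assumptions for illustration, not the paper's model:

```python
import random

# Toy Monte Carlo sketch: stage 2 holds stock against stochastic demand;
# shortages are expedited via overtime (up to a capacity) and any remainder
# is backordered. All numbers are hypothetical.

random.seed(1)

HOLD, OVERTIME, BACKORDER = 1.0, 4.0, 6.0  # cost per unit (assumed)
OVERTIME_CAP = 5                           # max units expedited per period

def period_cost(stock, demand):
    if demand <= stock:
        return HOLD * (stock - demand)
    shortage = demand - stock
    expedited = min(shortage, OVERTIME_CAP)
    return OVERTIME * expedited + BACKORDER * (shortage - expedited)

def expected_cost(stock, trials=20000):
    # Demand assumed uniform on {0, ..., 20}.
    return sum(period_cost(stock, random.randint(0, 20)) for _ in range(trials)) / trials

best = min(range(0, 21), key=expected_cost)
print("cost-minimizing stage-2 stock level:", best)
```

As in the newsvendor logic, the optimum rises with the shortage (overtime/backorder) costs relative to the holding cost.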

  15. Optimization of a two stage process for biodiesel production from shea butter using response surface methodology

    Directory of Open Access Journals (Sweden)

    E.O. Ajala

    2017-12-01

    Full Text Available The challenges of biodiesel production from shea butter (SB) with high free fatty acid (FFA) content necessitated this study. The reduction of the %FFA of SB by esterification and its subsequent utilization in transesterification for biodiesel production in a two-stage process was investigated for optimization using response surface methodology (RSM) based on a central composite design (CCD). Four operating conditions were investigated to reduce the %FFA of SB and increase the %yield of shea biodiesel (SBD): temperature (40–60 °C), agitation speed (200–1400 rpm), methanol (MeOH):oil mole ratio (2:1–6:1 (w/w) for esterification and 4:1–8:1 (w/w) for transesterification) and catalyst loading (1–2%; H2SO4 (v/v) for esterification and KOH (w/w) for transesterification). The significance of the parameters obtained in linear and non-linear form from the models was determined using analysis of variance (ANOVA). The optimal operating conditions that gave a minimum FFA of 0.26% were 52.19 °C, 200 rpm, 2:1 (w/w) and 1.5% (v/v), while those that gave a maximum yield of 92.16% SBD were 40 °C, 800 rpm, 7:1 (w/w) and 1% (w/w). The p-value of <0.0001 for each of the stages showed that the models were significant, with an R2 of 0.96 each. These results indicate the reproducibility of the models and show that RSM is suitable to optimize the esterification and transesterification of SB for SBD production. Therefore, RSM is a useful tool that can be employed in industrial-scale production of SBD from high-FFA SB.
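
Once a response-surface model has been fitted from the CCD runs, the optimum is located on that fitted surface. A sketch with a hypothetical quadratic %FFA model over two of the factors (the coefficients are illustrative, not the paper's fitted model):

```python
# Sketch of exploiting a fitted response surface: grid-search a hypothetical
# second-order %FFA model over temperature and catalyst loading.

def ffa_model(temp_c, catalyst_pct):
    """Hypothetical quadratic response surface for %FFA (coded variables)."""
    t = (temp_c - 50.0) / 10.0          # coded variable for 40-60 degC
    c = (catalyst_pct - 1.5) / 0.5      # coded variable for 1-2 % (v/v)
    return 0.30 + 0.05 * t - 0.04 * c + 0.08 * t * t + 0.06 * c * c - 0.02 * t * c

best = min(
    ((t, c) for t in range(40, 61) for c in (1.0, 1.25, 1.5, 1.75, 2.0)),
    key=lambda p: ffa_model(*p),
)
print("approx. optimum (degC, % catalyst):", best, "-> %FFA", round(ffa_model(*best), 4))
```

In practice the stationary point is found analytically from the fitted coefficients; the grid search above is just the simplest way to see the surface's minimum.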

  16. Sodium pool combustion test for small-scale leakage. Run-F7-4 and Run-F8-2

    International Nuclear Information System (INIS)

    Futagami, Satoshi; Ohno, Shuji

    2003-06-01

    Since 1998, the Run-F7 test series has been performed to acquire fundamental knowledge about sodium pool growth and floor liner temperature in the case of small-scale sodium leakage, and the Run-F8 test series has been performed to investigate the floor liner corrosion mechanism under high-moisture conditions. In both test series, these influences were investigated by taking the sodium leak rate and the moisture content of the supplied air as the main parameters. As the last tests, (1) Run-F7-4 (June 28, 2000) and (2) Run-F8-2 (January 26, 2000) were carried out. From these experiments and the results of the earlier Run-F7 and Run-F8 series, the following conclusions were obtained for small-scale sodium leakage (about 10 kg/h). The peak temperature of the catch pan tends to become lower as the sodium leak rate decreases; the height of the leak point and the moisture conditions are also factors that raise the catch pan peak temperature. The sodium pool grows almost in proportion to time in the early stages of leakage, but growth stops during the leakage, and the final pool area is roughly proportional to the sodium leak rate. The measured catch pan corrosion thickness and a material analysis suggested that the dominant corrosion mechanism was the relatively slow Na-Fe double-oxidation type of corrosion, even under the high moisture condition of 4.6 to 4.8%. The chemical analysis of the deposits also suggested that the catch pan material was in an environment in which molten-salt type corrosion was unlikely to occur. (author)

  17. Fracture processes studied in CRESST

    International Nuclear Information System (INIS)

    Astroem, J.; Proebst, F.; Di Stefano, P.C.F.; Stodolsky, L.; Timonen, J.; Bucci, C.; Cooper, S.; Cozzini, C.; Feilitzsch, F. von; Kraus, H.; Marchese, J.; Meier, O.; Nagel, U.; Ramachers, Y.; Seidel, W.; Sisti, M.; Uchaikin, S.; Zerle, L.

    2006-01-01

    In the early stages of running of the CRESST dark matter search with sapphire crystals as detectors, an unexpectedly high rate of signal pulses appeared. Their origin was finally traced to fracture events in the sapphire due to the very tight clamping of the detectors. During extensive runs the energy and time of each event was recorded, providing large data sets for such phenomena. We believe this is the first time that the energy release in fracture has been accurately measured on a microscopic event-by-event basis. The energy distributions appear to follow a power law, dN/dE ∝ E⁻β, similar to the Gutenberg-Richter power law for earthquake magnitudes, and after appropriate translation, with a similar exponent. In the time domain, the autocorrelation function shows time correlations lasting for substantial parts of an hour. Some remarks are made concerning the possible role of such mechanical stress release processes in the noise of sensitive cryodetectors
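
An exponent like β in dN/dE ∝ E⁻β is typically estimated by maximum likelihood rather than by binning. A sketch on synthetic event energies (the Hill estimator for a continuous power law above a threshold):

```python
import math
import random

# Estimate the exponent of dN/dE ~ E^-beta from event energies using the
# continuous maximum-likelihood (Hill) estimator:
#   beta_hat = 1 + n / sum(ln(E_i / E_min)).
# The data here are synthetic, drawn with a known true exponent.

random.seed(0)
beta_true, e_min = 1.9, 1.0

# Inverse-transform sampling from a Pareto density ~ E^-beta for E >= e_min.
energies = [e_min * (1 - random.random()) ** (-1 / (beta_true - 1)) for _ in range(50000)]

def hill_estimator(xs, x_min):
    tail = [x for x in xs if x >= x_min]
    return 1 + len(tail) / sum(math.log(x / x_min) for x in tail)

beta_hat = hill_estimator(energies, e_min)
print("estimated beta:", round(beta_hat, 2))
```

The choice of the lower cutoff E_min matters in practice; real analyses scan it and test goodness of fit.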

  18. Evidence that viral RNAs have evolved for efficient, two-stage packaging.

    Science.gov (United States)

    Borodavka, Alexander; Tuma, Roman; Stockley, Peter G

    2012-09-25

    Genome packaging is an essential step in virus replication and a potential drug target. Single-stranded RNA viruses have been thought to encapsidate their genomes by gradual co-assembly with capsid subunits. In contrast, using a single molecule fluorescence assay to monitor RNA conformation and virus assembly in real time, with two viruses from differing structural families, we have discovered that packaging is a two-stage process. Initially, the genomic RNAs undergo rapid and dramatic (approximately 20-30%) collapse of their solution conformations upon addition of cognate coat proteins. The collapse occurs with a substoichiometric ratio of coat protein subunits and is followed by a gradual increase in particle size, consistent with the recruitment of additional subunits to complete a growing capsid. Equivalently sized nonviral RNAs, including high copy potential in vivo competitor mRNAs, do not collapse. They do support particle assembly, however, but yield many aberrant structures in contrast to viral RNAs that make only capsids of the correct size. The collapse is specific to viral RNA fragments, implying that it depends on a series of specific RNA-protein interactions. For bacteriophage MS2, we have shown that collapse is driven by subsequent protein-protein interactions, consistent with the RNA-protein contacts occurring in defined spatial locations. Conformational collapse appears to be a distinct feature of viral RNA that has evolved to facilitate assembly. Aspects of this process mimic those seen in ribosome assembly.

  19. Haemoglobin mass and running time trial performance after recombinant human erythropoietin administration in trained men.

    Directory of Open Access Journals (Sweden)

    Jérôme Durussel

    Full Text Available UNLABELLED: Recombinant human erythropoietin (rHuEpo) increases haemoglobin mass (Hbmass) and maximal oxygen uptake (VO2max). PURPOSE: This study defined the time course of changes in Hbmass and VO2max as well as running time trial performance following 4 weeks of rHuEpo administration to determine whether the laboratory observations would translate into actual improvements in running performance in the field. METHODS: 19 trained men received rHuEpo injections of 50 IU·kg⁻¹ body mass every two days for 4 weeks. Hbmass was determined weekly using the optimized carbon monoxide rebreathing method until 4 weeks after administration. VO2max and 3,000 m time trial performance were measured pre, post administration and at the end of the study. RESULTS: Relative to baseline, running performance significantly improved by ∼6% after administration (10:30±1:07 min:sec vs. 11:08±1:15 min:sec, p<0.001) and remained significantly enhanced by ∼3% 4 weeks after administration (10:46±1:13 min:sec, p<0.001), while VO2max was also significantly increased post administration (60.7±5.8 mL·min⁻¹·kg⁻¹ vs. 56.0±6.2 mL·min⁻¹·kg⁻¹, p<0.001) and remained significantly increased 4 weeks after rHuEpo (58.0±5.6 mL·min⁻¹·kg⁻¹, p = 0.021). Hbmass was significantly increased at the end of administration compared to baseline (15.2±1.5 g·kg⁻¹ vs. 12.7±1.2 g·kg⁻¹, p<0.001). The rate of decrease in Hbmass toward baseline values post rHuEpo was similar to that of the increase during administration (-0.53 g·kg⁻¹·wk⁻¹, 95% confidence interval (CI) (-0.68, -0.38) vs. 0.54 g·kg⁻¹·wk⁻¹, CI (0.46, 0.63)), but Hbmass was still significantly elevated 4 weeks after administration compared to baseline (13.7±1.1 g·kg⁻¹, p<0.001). CONCLUSION: Running performance was improved following 4 weeks of rHuEpo and remained elevated 4 weeks after administration compared to baseline. These field performance effects coincided with r

  20. Icelandic Public Pensions: Why time is running out

    Directory of Open Access Journals (Sweden)

    Ólafur Ísleifsson

    2011-12-01

    Full Text Available The aim of this paper is to analyse the Icelandic public sector pension system enjoying a third party guarantee. Defined benefit funds fundamentally differ from defined contribution pension funds without a third party guarantee as is the case with the Icelandic general labour market pension funds. We probe the special nature of the public sector pension funds and make a comparison to the defined contribution pension funds of the general labour market. We explore the financial and economic effects of the third party guarantee of the funds, their investment performance and other relevant factors. We seek an answer to the question why time is running out for the country’s largest pension fund that currently faces the prospect of becoming empty by the year 2022.

  1. Evaluation of the effect of one stage versus two stage full mouth disinfection on C-reactive protein and leucocyte count in patients with chronic periodontitis.

    Science.gov (United States)

    Pabolu, Chandra Mohan; Mutthineni, Ramesh Babu; Chintala, Srikanth; Naheeda; Mutthineni, Navya

    2013-07-01

    Conventional non-surgical periodontal therapy is carried out on a quadrant basis with a 1-2 week interval. This time lag may result in re-infection of instrumented pockets and may impair healing. Therefore, a new approach to full-mouth non-surgical therapy, completed within two consecutive days with full-mouth disinfection (FMD), has been suggested. In periodontitis, leukocyte counts and levels of C-reactive protein (CRP) are likely to be slightly elevated, indicating the presence of infection or inflammation. The aim of this study was to compare the efficacy of one-stage and two-stage non-surgical therapy on clinical parameters along with CRP levels and total white blood cell (TWBC) count. A total of 20 patients were selected and divided into two groups: Group 1 received one-stage full-mouth disinfection and Group 2 received two-stage FMD. Plaque index, sulcus bleeding index, probing depth, clinical attachment loss, serum CRP and TWBC count were evaluated for both groups at baseline and at 1 month post-treatment. The results were analyzed using the Student t-test. Both treatment modalities led to a significant improvement of the clinical and hematological parameters; however, comparison between the two groups showed no significant difference after 1 month. The therapeutic intervention may have a systemic effect on blood counts in periodontitis patients. Though one-stage FMD had limited benefits over two-stage FMD, the therapy can be accomplished in a shorter duration.

  2. The running pattern and its importance in long-distance running

    Directory of Open Access Journals (Sweden)

    Jarosław Hoffman

    2017-07-01

    Full Text Available The running pattern is individual for each runner, regardless of distance. We can characterize it as the sum of the runner's data (age, height, training time, etc.) and the parameters of his run. Building a proper technique should focus first and foremost on movement coordination and the runner's strength. In training correct running steps we can use tools similar to those used when working on deep (proprioceptive) sensation. The aim of this paper was to define what we can call a running pattern, what its influence is in long-distance running, and the relationship between technique training and the running pattern. The importance of a running pattern in long-distance racing is immense: the more it departs from the norm, the greater the harm its repetition causes to the body over a long run. Including training exercises that shape technique is therefore very important and significantly affects the running pattern.

  3. Uniqueness of human running coordination: The integration of modern and ancient evolutionary innovations

    Directory of Open Access Journals (Sweden)

    John Kiely

    2016-04-01

    Full Text Available Running is a pervasive activity across human cultures and a cornerstone of contemporary health, fitness and sporting activities. Yet for the overwhelming majority of human existence running was an essential prerequisite for survival. A means to hunt, and a means to escape when hunted. In a very real sense humans have evolved to run. Yet curiously, perhaps due to running’s cultural ubiquity and the natural ease with which we learn to run, we rarely consider the uniqueness of human bipedal running within the animal kingdom. Our unique upright, single stance, bouncing running gait imposes a unique set of coordinative difficulties. Challenges demanding we precariously balance our fragile brains in the very position where they are most vulnerable to falling injury while simultaneously retaining stability, steering direction of travel, and powering the upcoming stride: all within the abbreviated time-frames afforded by short, violent ground contacts separated by long flight times. These running coordination challenges are solved through the tightly-integrated blending of primitive evolutionary legacies, conserved from reptilian and vertebrate lineages, and comparatively modern, more exclusively human, innovations. The integrated unification of these top-down and bottom-up control processes bestows humans with an agile control system, enabling us to readily modulate speeds, change direction, negotiate varied terrains and to instantaneously adapt to changing surface conditions. The seamless integration of these evolutionary processes is facilitated by pervasive, neural and biological, activity-dependent adaptive plasticity. Over time, and with progressive exposure, this adaptive plasticity shapes neural and biological structures to best cope with regularly imposed movement challenges. This pervasive plasticity enables the gradual construction of a robust system of distributed coordinated control, comprised of processes that are so deeply

  4. Shoe cleat position during cycling and its effect on subsequent running performance in triathletes.

    Science.gov (United States)

    Viker, Tomas; Richardson, Matt X

    2013-01-01

    Research with cyclists suggests a decreased load on the lower limbs when the shoe cleat is placed more posteriorly, which may benefit subsequent running in a triathlon. This study investigated the effect of shoe cleat position during cycling on subsequent running. Following bike-run training sessions with both aft and traditional cleat positions, 13 well-trained triathletes completed a 30 min simulated draft-legal triathlon cycling leg, followed by a maximal 5 km run, on two occasions: once with aft-placed and once with traditionally placed cleats. Oxygen consumption, breath frequency, heart rate, cadence and power output were measured during cycling, while heart rate, contact time, 200 m lap time and total time were measured during running. Cardiovascular measures did not differ between aft and traditional cleat placement during the cycling protocol. The 5 km run time was similar for aft and traditional cleat placement, at 1084 ± 80 s and 1072 ± 64 s, respectively, as were contact time during km 1 and 5, and heart rate and running speed for km 5 for the two cleat positions. Running speed during km 1 was 2.1% ± 1.8 faster, suggesting beneficial effects of an aft cleat position on subsequent running in a short distance triathlon.

  5. NDDP multi-stage flash desalination process simulator design process optimization

    International Nuclear Information System (INIS)

    Sashi Kumar, G.N.; Mahendra, A.K.; Sanyal, A.; Gouthaman, G.

    2009-03-01

    The improvement of the NDDP-MSF plant's performance ratio (PR) from the design value of 9.0 to 13.1 was achieved by optimizing the plant's operating parameters within the feasible zone of operation. This plant has 20% excess heat transfer area over the design condition, which helped us obtain a PR of 15.1 after optimization. Thus we have obtained: (1) a 45% increase in output over the design value by the optimization carried out with the design heat transfer area; (2) a 68% increase in output over the design value by the optimization carried out with the increased heat transfer area. This report discusses the approach, methodology and results of the optimization study carried out. A simulator, MSFSIM, which predicts the performance of a multi-stage flash (MSF) desalination plant, has been coupled with a Genetic Algorithm (GA) optimizer. Exhaustive optimization case studies have been conducted on this plant with the objective of increasing the performance ratio (PR). The steady-state optimization performed was based on obtaining the best stage-wise pressure profile to enhance thermal efficiency, which in turn improves the performance ratio. Apart from this, the recirculating brine flow rate was also optimized. This optimization study enabled us to increase the PR of the NDDP-MSF plant from the design value of 9.0 to an optimized value of 13.1. The actual plant is provided with 20% additional heat transfer area over and above the design heat transfer area. Optimization with this additional heat transfer area took the PR to 15.1. A desire to maintain equal flashing rates in all of the stages (a feature required for long plant life and to avoid a cascading effect of non-flashing triggered by any stage) of the MSF plant has also been achieved. The deviation in the flashing rates within stages has been reduced. The startup characteristics of the plant (i.e. the variation of stage pressure and the variation of recirculation flow rate with time) have been optimized with a target to minimize the
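
A minimal genetic algorithm of the kind coupled to such a simulator can be sketched as follows. The fitness function here is a toy objective that, like the study's goal, rewards equal flashing rates (a uniform stage pressure-drop profile); it is not MSFSIM, and all parameters are illustrative:

```python
import random

# Minimal GA sketch: evolve a 20-stage pressure-drop profile (normalized to
# a fixed total) toward a toy fitness that rewards equal per-stage drops.

random.seed(42)
STAGES, POP, GENS = 20, 40, 120

def normalize(profile):
    s = sum(profile)
    return [p / s for p in profile]

def fitness(profile):
    # Negative variance: equal per-stage drops maximize this toy objective.
    mean = 1.0 / STAGES
    return -sum((p - mean) ** 2 for p in profile) / STAGES

def mutate(profile):
    child = [max(1e-6, p + random.gauss(0, 0.005)) for p in profile]
    return normalize(child)

population = [normalize([random.random() for _ in range(STAGES)]) for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]            # elitist selection
    population = parents + [mutate(random.choice(parents)) for _ in parents]

best = max(population, key=fitness)
print("max/min stage fraction:", round(max(best), 4), round(min(best), 4))
```

A real run would replace the toy fitness with a call to the plant simulator and add crossover and constraint handling.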

  6. Running Technique is an Important Component of Running Economy and Performance

    Science.gov (United States)

    FOLLAND, JONATHAN P.; ALLEN, SAM J.; BLACK, MATTHEW I.; HANDSAKER, JOSEPH C.; FORRESTER, STEPHANIE E.

    2017-01-01

    ABSTRACT Despite an intuitive relationship between technique and both running economy (RE) and performance, and the diverse techniques used by runners to achieve forward locomotion, the objective importance of overall technique and the key components therein remain to be elucidated. Purpose This study aimed to determine the relationship between individual and combined kinematic measures of technique with both RE and performance. Methods Ninety-seven endurance runners (47 females) of diverse competitive standards performed a discontinuous protocol of incremental treadmill running (4-min stages, 1-km·h−1 increments). Measurements included three-dimensional full-body kinematics, respiratory gases to determine energy cost, and velocity of lactate turn point. Five categories of kinematic measures (vertical oscillation, braking, posture, stride parameters, and lower limb angles) and locomotory energy cost (LEc) were averaged across 10–12 km·h−1 (the highest common velocity < velocity of lactate turn point). Performance was measured as season's best (SB) time converted to a sex-specific z-score. Results Numerous kinematic variables were correlated with RE and performance (LEc, 19 variables; SB time, 11 variables). Regression analysis found three variables (pelvis vertical oscillation during ground contact normalized to height, minimum knee joint angle during ground contact, and minimum horizontal pelvis velocity) explained 39% of LEc variability. In addition, four variables (minimum horizontal pelvis velocity, shank touchdown angle, duty factor, and trunk forward lean) combined to explain 31% of the variability in performance (SB time). Conclusions This study provides novel and robust evidence that technique explains a substantial proportion of the variance in RE and performance. We recommend that runners and coaches are attentive to specific aspects of stride parameters and lower limb angles in part to optimize pelvis movement, and ultimately enhance performance

  7. Improvements of the ALICE HLT data transport framework for LHC Run 2

    Science.gov (United States)

    Rohr, David; Krzwicki, Mikolaj; Engel, Heiko; Lehrbach, Johannes; Lindenstruth, Volker; ALICE Collaboration

    2017-10-01

    The ALICE HLT uses a data transport framework based on the publisher-subscriber message principle, which transparently handles the communication between processing components over the network and between processing components on the same node via shared memory with a zero copy approach. We present an analysis of the performance in terms of maximum achievable data rates and event rates as well as processing capabilities during Run 1 and Run 2. Based on this analysis, we present new optimizations we have developed for ALICE in Run 2. These include support for asynchronous transport via ZeroMQ, which enables loops in the reconstruction chain graph and which is used to ship QA histograms to DQM. We have added asynchronous processing capabilities in order to support long-running tasks besides the event-synchronous reconstruction tasks in normal HLT operation. These asynchronous components run in an isolated process such that the HLT as a whole is resilient even to fatal errors in these asynchronous components. In this way, we can ensure that new developments cannot break data taking. On top of that, we have tuned the processing chain to cope with the higher event and data rates expected from the new TPC readout electronics (RCU2), and we have improved the configuration procedure and the startup time in order to increase the time during which ALICE can take physics data. We analyze the maximum achievable data processing rates taking into account the processing capabilities of CPUs and GPUs, buffer sizes, network bandwidth, the incoming links from the detectors, and the outgoing links to data acquisition.
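
The publisher-subscriber, zero-copy idea can be illustrated with a small in-process sketch. This is a toy single-process analogue using threads and queues, not the actual HLT framework, which works across processes and nodes via shared memory and the network:

```python
import queue
import threading

# Toy publisher-subscriber sketch: the publisher hands each subscriber a
# reference to one shared, immutable event buffer instead of copying it per
# consumer ("zero copy" within a process).

subscribers = [queue.Queue() for _ in range(3)]

def publish(event_buffer):
    for q in subscribers:
        q.put(event_buffer)  # enqueue a reference, not a copy

def subscriber(q, results):
    buf = q.get()
    results.append(id(buf))  # record buffer identity to show no copy happened

event = bytes(1024)  # one event payload
publish(event)

results = []
threads = [threading.Thread(target=subscriber, args=(q, results)) for q in subscribers]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(all(r == id(event) for r in results))  # True: all saw the same buffer
```

Across process boundaries the same effect is achieved by placing the buffer in shared memory and passing only descriptors, which is what makes the approach scale to high data rates.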

  8. The representation of time course events in visual arts and the development of the concept of time in children: a preliminary study.

    Science.gov (United States)

    Actis-Grosso, Rossana; Zavagno, Daniele

    2008-01-01

    By means of a careful search we found several representations of dynamic contents of events that show how the depiction of the passage of time in the visual arts has evolved gradually through a series of modifications and adaptations. The general hypothesis we started to investigate is that the evolution of the representation of the time course in visual arts is mirrored in the evolution of the concept of time in children, who, according to Piaget (1946), undergo three stages in their ability to conceptualize time. Crucial for our hypothesis is Stage II, in which children become progressively able to link the different phases of an event, but vacillate between what Piaget termed 'intuitive regulations', not being able to understand all the different aspects of a given situation. We found several pictorial representations - mainly dating from the 14th to 15th century - that seem to fit within Stage II of children's comprehension of time. According to our hypothesis, this type of pictorial representation should be immediately understood only by those children who are at Piaget's Stage II of time conceptualization. This implies that children at Stages I and III should not be able to understand the representation of time courses in the aforementioned paintings. An experiment was run to verify the agreement between children's placement within Piaget's three stages - as indicated by an adaptation of Piaget's original experiment - and their understanding of pictorial representations that should be considered Stage II representations of time courses. Despite the small sample of children examined so far, results seem to support our hypothesis. A follow-up (Experiment 2) on the same children was also run one year later in order to verify other possible explanations. Results from the two experiments suggest that the study of the visual arts can aid our understanding of the development of the concept of time, and it can also help to distinguish between the

  9. System and Component Software Specification, Run-time Verification and Automatic Test Generation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The following background technology is described in Part 5: Run-time Verification (RV), White Box Automatic Test Generation (WBATG). Part 5 also describes how WBATG...

  10. Low contrast volume run-off CT angiography with optimized scan time based on double-level test bolus technique – feasibility study

    International Nuclear Information System (INIS)

    Baxa, Jan; Vendiš, Tomáš; Moláček, Jiří; Štěpánková, Lucie; Flohr, Thomas; Schmidt, Bernhard; Korporaal, Johannes G.; Ferda, Jiří

    2014-01-01

    Purpose: To verify the technical feasibility of low contrast volume (40 mL) run-off CT angiography (run-off CTA) with individual scan time optimization based on a double-level test bolus technique. Materials and methods: A prospective study of 92 consecutive patients who underwent run-off CTA performed with 40 mL of contrast medium (injection rate of 6 mL/s) and optimized scan times on a second-generation dual-source CT. Individual optimized scan times were calculated from aortopopliteal transit times obtained using the double-level test bolus technique – a single injection of a 10 mL test bolus and dynamic acquisitions at two levels (abdominal aorta and popliteal arteries). Intraluminal attenuation (HU) was measured at 6 levels (aorta, iliac, femoral and popliteal arteries, middle and distal lower legs) and subjective quality (3-point score) was assessed. The relations between image quality, test bolus parameters and arterial circulation involvement were analyzed. Results: High mean attenuation values (HU: 468; 437; 442; 440; 342; 274) and high quality scores were achieved at all monitored levels. In 91 patients (0.99), sufficient diagnostic quality (score 1–2) in the aorta, iliac and femoral arteries was achieved. A total of 6 patients (0.07) were not evaluable in the distal lower legs. Only a weak inverse correlation between image quality and test-bolus parameters was found at the iliac, femoral and popliteal levels (r values: −0.263, −0.298 and −0.254). A statistically significant difference in test-bolus parameters and image quality was found between patients with occlusive and aneurysmal disease. Conclusion: We demonstrated the technical feasibility and sufficient quality of run-off CTA with a low volume of contrast medium and scan time optimized according to the aortopopliteal transit time calculated from a double-level test bolus

  11. Two-boundary first exit time of Gauss-Markov processes for stochastic modeling of acto-myosin dynamics.

    Science.gov (United States)

    D'Onofrio, Giuseppe; Pirozzi, Enrica

    2017-05-01

    We consider a stochastic differential equation in a strip, with coefficients suitably chosen to describe the acto-myosin interaction subject to time-varying forces. By simulating trajectories of the stochastic dynamics via an Euler discretization-based algorithm, we fit experimental data and determine the values of the involved parameters. The steps of the myosin are represented by the exit events from the strip. Motivated by these results, we propose a specific stochastic model based on the corresponding time-inhomogeneous Gauss-Markov diffusion process evolving between two absorbing boundaries. We specify the mean and covariance functions of the stochastic modeling process, taking into account time-dependent forces including the effect of an external load. We accurately determine the probability density function (pdf) of the first exit time (FET) from the strip by solving a system of two nonsingular Volterra integral equations of the second kind via numerical quadrature. We provide numerical estimates of the mean FET as approximations of the dwell time of the protein dynamics. The percentage of backward steps is given in agreement with experimental data. Numerical and simulation results are compared and discussed.
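    The Euler-discretized simulation of exit events from a strip described above can be sketched for an Ornstein-Uhlenbeck (Gauss-Markov) process; all parameter values below are illustrative placeholders, not the paper's fitted acto-myosin parameters.

```python
import math
import random

def mean_first_exit_time(x0=0.0, lower=-1.0, upper=1.0,
                         theta=1.0, mu=0.0, sigma=1.0,
                         dt=1e-2, n_paths=300, t_max=20.0, seed=1):
    """Monte Carlo estimate of the mean first exit time (FET) of an
    Ornstein-Uhlenbeck process dX = theta*(mu - X) dt + sigma dW from
    the strip (lower, upper), via Euler-Maruyama discretization.
    Parameter values are illustrative, not the paper's fits."""
    rng = random.Random(seed)
    total, exits = 0.0, 0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while t < t_max:
            x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
            if x <= lower or x >= upper:  # absorption at either boundary
                total += t
                exits += 1
                break
    return total / exits if exits else float("inf")
```

    The Monte Carlo mean serves as a rough cross-check of the FET mean obtained from the Volterra-equation approach; the simulated exit events play the role of the myosin steps.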

  12. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    OpenAIRE

    He, Xinhua; Hu, Wenfa

    2017-01-01

    Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems, and uses a two-stage single-counter queue method M/M/1→M/D/1 to model the urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are total c...
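    The tandem M/M/1→M/D/1 structure admits closed-form mean sojourn times: by Burke's theorem the departure process of a stable M/M/1 queue is again Poisson, so the second stage also sees Poisson arrivals and the stage delays add. A minimal sketch of the standard formulas (rates are illustrative, not taken from the paper):

```python
def mm1_sojourn(lam, mu):
    """Mean time in system for an M/M/1 queue (stable when lam < mu)."""
    assert lam < mu, "unstable queue"
    return 1.0 / (mu - lam)

def md1_sojourn(lam, mu):
    """Mean time in system for an M/D/1 queue with deterministic service
    time 1/mu: Pollaczek-Khinchine mean wait plus the service time."""
    assert lam < mu, "unstable queue"
    rho = lam / mu
    return rho / (2.0 * mu * (1.0 - rho)) + 1.0 / mu

def tandem_sojourn(lam, mu1, mu2):
    """Total mean time through the two-stage M/M/1 -> M/D/1 system:
    by Burke's theorem the M/M/1 departures are Poisson(lam), so the
    two stage sojourn times simply add."""
    return mm1_sojourn(lam, mu1) + md1_sojourn(lam, mu2)
```

    Note that for equal utilization the deterministic-service stage waits less than the exponential one, which is one reason the M/D/1 model fits the second (engineered, fixed-capacity) draining stage.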

  13. Operating Security System Support for Run-Time Security with a Trusted Execution Environment

    DEFF Research Database (Denmark)

    Gonzalez, Javier

    Software services have become an integral part of our daily life. Cyber-attacks have thus become a problem of increasing importance not only for the IT industry, but for society at large. A way to contain cyber-attacks is to guarantee the integrity of IT systems at run-time. Put differently......, it is safe to assume that any complex software is compromised. The problem is then to monitor and contain it when it executes in order to protect sensitive data and other sensitive assets. To really have an impact, any solution to this problem should be integrated in commodity operating systems...... sensitive assets at run-time that we denote split-enforcement, and provide an implementation for ARM-powered devices using ARM TrustZone security extensions. We design, build, and evaluate a prototype Trusted Cell that provides trusted services. We also present the first generic TrustZone driver...

  14. Possible two-stage ⁸⁷Sr evolution in the Stockdale Rhyolite

    Energy Technology Data Exchange (ETDEWEB)

    Compston, W.; McDougall, I. (Australian National Univ., Canberra. Research School of Earth Sciences); Wyborn, D. (Department of Minerals and Energy, Canberra (Australia). Bureau of Mineral Resources)

    1982-12-01

    The Rb-Sr total-rock data for the Stockdale Rhyolite, of significance for the Palaeozoic time scale, are more scattered about a single-stage isochron than expected from experimental error. Two-stage ⁸⁷Sr evolution for several of the samples is explored to explain this, as an alternative to variation in the initial ⁸⁷Sr/⁸⁶Sr which is customarily used in single-stage dating models. The deletion of certain samples having very high Rb/Sr removes most of the excess scatter and leads to an estimate of 430 ± 7 m.y. for the age of extrusion. There is a younger alignment of Rb-Sr data within each sampling site at 412 ± 7 m.y. We suggest that the Stockdale Rhyolite is at least 430 m.y. old, that its original range in Rb/Sr was smaller than now observed, and that it experienced a net loss in Sr during later hydrothermal alteration at ca. 412 m.y.

  15. Two-stage gas-phase bioreactor for the combined removal of hydrogen sulphide, methanol and alpha-pinene.

    Science.gov (United States)

    Rene, Eldon R; Jin, Yaomin; Veiga, María C; Kennes, Christian

    2009-11-01

    Biological treatment systems have emerged as cost-effective and eco-friendly techniques for treating waste gases from process industries at moderately high gas flow rates and low pollutant concentrations. In this study, we have assessed the performance of a two-stage bioreactor, namely a biotrickling filter packed with pall rings (BTF, 1st stage) and a perlite + pall ring mixed biofilter (BF, 2nd stage) operated in series, for handling a complex mixture of hydrogen sulphide (H2S), methanol (CH3OH) and alpha-pinene (C10H16). It has been reported that the presence of H2S can reduce the biofiltration efficiency of volatile organic compounds (VOCs) when both are present in the gas mixture. Hydrogen sulphide and methanol were removed in the first stage BTF, previously inoculated with H2S-adapted populations and a culture containing Candida boidinii, an acid-tolerant yeast, whereas, in the second stage, alpha-pinene was removed predominantly by the fungus Ophiostoma stenoceras. Experiments were conducted in five different phases, corresponding to inlet loading rates varying between 2.1 and 93.5 g m(-3) h(-1) for H2S, 55.3 and 1260.2 g m(-3) h(-1) for methanol, and 2.8 and 161.1 g m(-3) h(-1) for alpha-pinene. Empty bed residence times were varied between 83.4 and 10 s in the first stage and 146.4 and 17.6 s in the second stage. The BTF, working at a pH as low as 2.7 as a result of H2S degradation, removed most of the H2S and methanol but only very little alpha-pinene. On the other hand, the BF, at a pH around 6.0, removed the rest of the H2S, the non-degraded methanol and most of the alpha-pinene vapours. Attempts were originally made to remove the three pollutants in a single acidophilic bioreactor, but the Ophiostoma strain was hardly active at low pH. The maximum elimination capacities (ECs) reached by the two-stage bioreactor for individual pollutants were 894.4 g m(-3) h(-1) for methanol, 45.1 g m(-3) h(-1) for H2S and 138.1 g m(-3) h(-1) for alpha-pinene. The results from this

  16. Late Financial Distress Process Stages and Financial Ratios

    DEFF Research Database (Denmark)

    Sormunen, Nina; Laitinen, Teija

    2012-01-01

    stage affects the classification ability of single financial ratios and financial distress prediction models in short-term financial distress prediction. The study shows that the auditor's GC task could be supported by paying attention to the financial distress process stage. The implications...... of these findings for auditors and every stakeholder of business firms are considered....

  17. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Won Sik [Purdue Univ., West Lafayette, IN (United States); Lin, C. S. [Purdue Univ., West Lafayette, IN (United States); Hader, J. S. [Purdue Univ., West Lafayette, IN (United States); Park, T. K. [Purdue Univ., West Lafayette, IN (United States); Deng, P. [Purdue Univ., West Lafayette, IN (United States); Yang, G. [Purdue Univ., West Lafayette, IN (United States); Jung, Y. S. [Purdue Univ., West Lafayette, IN (United States); Kim, T. K. [Argonne National Lab. (ANL), Argonne, IL (United States); Stauff, N. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-30

    This report presents the performance characteristics of two “two-stage” fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supporting LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of the ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not directly sent to the ADS, but are partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option of transuranic (TRU) elements.

  18. A deteriorating two-system with two repair modes and sojourn times phase-type distributed

    International Nuclear Information System (INIS)

    Montoro-Cazorla, Delia; Perez-Ocon, Rafael

    2006-01-01

    We study a two-unit cold standby system in steady-state. The online unit goes through a finite number of stages of successive degradation preceding the failure. The units are reparable, there is a repairman and two types of maintenance are considered, preventive and corrective. The preventive repair aims to improve the degradation of a unit being operative. The corrective repair is necessary when the unit fails. We will assume that the preventive repair will be interrupted in favour of a corrective repair in order to increase the availability of the system. The random operational and repair times follow phase-type distributions. For this system, the stationary probability vector, the replacement times, and the involved costs are calculated. An optimisation problem is illustrated by a numerical example. In this, the optimal degradation stage for the preventive repair of the online unit is determined by taking into account the system availability and the incurred costs

  19. A deteriorating two-system with two repair modes and sojourn times phase-type distributed

    Energy Technology Data Exchange (ETDEWEB)

    Montoro-Cazorla, Delia [Departamento de Estadistica e I.O., Escuela Politecnica de Linares, Universidad de Jaen, 23700 Linares, Jaen (Spain); Perez-Ocon, Rafael [Departamento de Estadistica e I.O., Facultad de Ciencias, Universidad de Granada, Granada 18071 (Spain)]. E-mail: rperezo@ugr.es

    2006-01-01

    We study a two-unit cold standby system in steady-state. The online unit goes through a finite number of stages of successive degradation preceding the failure. The units are reparable, there is a repairman and two types of maintenance are considered, preventive and corrective. The preventive repair aims to improve the degradation of a unit being operative. The corrective repair is necessary when the unit fails. We will assume that the preventive repair will be interrupted in favour of a corrective repair in order to increase the availability of the system. The random operational and repair times follow phase-type distributions. For this system, the stationary probability vector, the replacement times, and the involved costs are calculated. An optimisation problem is illustrated by a numerical example. In this, the optimal degradation stage for the preventive repair of the online unit is determined by taking into account the system availability and the incurred costs.

  20. Triathlon: running injuries.

    Science.gov (United States)

    Spiker, Andrea M; Dixit, Sameer; Cosgarea, Andrew J

    2012-12-01

    The running portion of the triathlon represents the final leg of the competition and, by some reports, the most important part in determining a triathlete's overall success. Although most triathletes spend most of their training time on cycling, running injuries are the most common injuries encountered. Common causes of running injuries include overuse, lack of rest, and activities that aggravate biomechanical predisposers of specific injuries. We discuss the running-associated injuries in the hip, knee, lower leg, ankle, and foot of the triathlete, and the causes, presentation, evaluation, and treatment of each.

  1. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    Science.gov (United States)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter, Hubble-sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity, leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes, reducing the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increases the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.
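    The slew-screening technique described above can be sketched as a simple filter: recompute environmental fluxes only when a new orientation differs enough from the last recomputed one. Reducing each orientation to a single slew angle and the 5° threshold are illustrative assumptions of this sketch, not the paper's actual criterion.

```python
def select_flux_recalc_points(slew_angles_deg, threshold_deg=5.0):
    """Given a sequence of observing orientations (each reduced to one
    slew angle, an illustrative simplification), return the indices at
    which environmental fluxes must be recomputed; all other points
    reuse the fluxes of the last recomputed orientation."""
    recalc = []
    last = None
    for i, ang in enumerate(slew_angles_deg):
        if last is None or abs(ang - last) > threshold_deg:
            recalc.append(i)
            last = ang
    return recalc
```

    The payoff is that radiation calculations, usually the dominant cost in an orbital thermal model, run only at the retained indices.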

  2. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    Science.gov (United States)

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time consuming to solve fractional differential equations. The computational complexity of the two-dimensional time-fractional diffusion equation (2D-TFDE) with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and data layout with a virtual boundary are designed for this parallel algorithm. The experimental results show that the parallel algorithm's results compare well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed-memory cluster system. We think that parallel computing technology will become a basic method for computationally intensive fractional applications in the near future.
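    The speedup and relative parallel-efficiency figures quoted above follow from the standard definitions; a sketch, with hypothetical timings chosen only to illustrate the calculation (the abstract reports efficiency of 81 processes relative to a 9-process baseline, not to a serial run):

```python
def speedup(t_serial, t_parallel):
    """Classic speedup of a parallel run over a serial baseline."""
    return t_serial / t_parallel

def relative_efficiency(t_ref, p_ref, t_p, p):
    """Parallel efficiency of p processes measured against a p_ref-process
    baseline: E = (t_ref * p_ref) / (t_p * p). With p_ref = 1 this
    reduces to the usual efficiency, speedup / p."""
    return (t_ref * p_ref) / (t_p * p)
```

    For example, a hypothetical 9-process run of 100 s and an 81-process run of 12.6 s give a relative efficiency near the paper's 88.24% figure.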

  3. Process and operating device for an apparatus using a running liquid film and application to separation of Zr and Hf tetrachlorides

    International Nuclear Information System (INIS)

    Brun, R.

    1989-01-01

    A process is claimed that maintains a thin film in a running-film exchanger by increasing the flow rate for a short time to establish a film over the whole surface. It is applied to the continuous condensation of zirconium and hafnium tetrachlorides from the separation column, by absorption in a liquid solvent made of potassium chloroaluminate.

  4. Energy production from agricultural residues: High methane yields in pilot-scale two-stage anaerobic digestion

    International Nuclear Information System (INIS)

    Parawira, W.; Read, J.S.; Mattiasson, B.; Bjoernsson, L.

    2008-01-01

    There is a large, unutilised energy potential in agricultural waste fractions. In this pilot-scale study, the efficiency of a simple two-stage anaerobic digestion process was investigated for stabilisation and biomethanation of solid potato waste and sugar beet leaves, both separately and in co-digestion. A good phase separation between hydrolysis/acidification and methanogenesis was achieved, as indicated by the high carbon dioxide production, high volatile fatty acid concentration and low pH in the acidogenic reactors. Digestion of the individual substrates gave gross energy yields of 2.1-3.4 kWh/kg VS in the form of methane. Co-digestion, however, gave up to 60% higher methane yield, indicating that co-digestion resulted in improved methane production due to the positive synergism established in the digestion liquor. The integrity of the methane filters (MFs) was maintained throughout the period of operation, producing biogas with 60-78% methane content. A stable effluent pH showed that the methanogenic reactors had a good ability to withstand the variations in load and volatile fatty acid concentrations that occurred in the two-stage process. The results of this pilot-scale study show that the two-stage anaerobic digestion system is suitable for effective conversion of semi-solid agricultural residues such as potato waste and sugar beet leaves.

  5. Two stage, low temperature, catalyzed fluidized bed incineration with in situ neutralization for radioactive mixed wastes

    International Nuclear Information System (INIS)

    Wade, J.F.; Williams, P.M.

    1995-01-01

    A two-stage, low-temperature, catalyzed fluidized bed incineration process is proving successful at incinerating hazardous wastes containing nuclear material. The process operates at 550 °C and 650 °C in its two stages. Acid gas neutralization takes place in situ using sodium carbonate as a sorbent in the first-stage bed. The feed material to the incinerator is hazardous waste (as defined by the Resource Conservation and Recovery Act) mixed with radioactive materials. The radioactive materials are plutonium, uranium, and americium that are byproducts of nuclear weapons production. Despite its low-temperature operation, this system successfully destroyed polychlorinated biphenyls at a 99.99992% destruction and removal efficiency. Radionuclides and volatile heavy metals leave the fluidized beds and enter the air pollution control system in minimal amounts. Recently collected modeling and experimental data show the process minimizes dioxin and furan production. The report also discusses air pollution, ash solidification, and other data collected from pilot- and demonstration-scale testing. The testing took place at the Rocky Flats Environmental Technology Site, a US Department of Energy facility, in the 1970s, 1980s, and 1990s.

  6. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling.

    Science.gov (United States)

    Terza, Joseph V; Basu, Anirban; Rathouz, Paul J

    2008-05-01

    The paper focuses on two estimation methods that have been widely used to address endogeneity in empirical research in health economics and health services research: two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI). 2SPS is the rote extension (to nonlinear models) of the popular linear two-stage least squares estimator. The 2SRI estimator is similar except that in the second-stage regression, the endogenous variables are not replaced by first-stage predictors. Instead, first-stage residuals are included as additional regressors. In a generic parametric framework, we show that 2SRI is consistent and 2SPS is not. Results from a simulation study and an illustrative example also recommend against 2SPS and favor 2SRI. Our findings are important given that there are many prominent examples of the application of inconsistent 2SPS in the recent literature. This study can be used as a guide by future researchers in health economics who are confronted with endogeneity in their empirical work.

  7. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts.

    Science.gov (United States)

    Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey

    2009-01-27

    The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. Participants were 99 conscripted soldiers whose educational levels were senior high school or lower. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Logistic regression analysis showed the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off points of CLR. The optimum single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Comparing the two-stage window screening with the two-stage positive screening, the area under the curve and the positive predictive value increased. Moreover, the cost of the two-stage window screening decreased by 59%. The two-stage window screening is thus more accurate and economical than the two-stage positive screening. Our results provide an example of the use of two-stage screening and the possibility of the WCST replacing the WAIS-R in large-scale screenings for ID in the future.
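    The window rule implied by the two cut-off points can be sketched as a simple triage function. The direction (lower CLR taken to indicate poorer performance) and the exact handling of the boundary values are assumptions of this sketch, not stated in the abstract.

```python
def two_stage_window_screen(clr, low=49, high=66):
    """Two-stage window screening sketch using the reported CLR
    cut-offs (49 and 66); direction is an assumption of this sketch.
      clr >= high -> screen negative (no second stage needed)
      clr <  low  -> screen positive (ID suspected directly)
      otherwise   -> ambiguous window: refer for full WAIS-R testing"""
    if clr >= high:
        return "negative"
    if clr < low:
        return "positive"
    return "second stage: WAIS-R"
```

    The economy of the approach comes from the middle branch: only scores inside the 49-66 window trigger the expensive full WAIS-R assessment.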

  8. Frequency of hepatitis E virus, rotavirus and porcine enteric calicivirus at various stages of pork carcass processing in two pork processing plants.

    Science.gov (United States)

    Jones, Tineke H; Muehlhauser, Victoria

    2017-10-16

    Hepatitis E virus (HEV), rotavirus (RV), and porcine enteric calicivirus (PEC) infections are common in swine and raise concerns about the potential for zoonotic transmission through undercooked meat products. Enteric viruses can potentially contaminate carcasses during meat processing operations. There is a lack of information on the prevalence and control of enteric viruses in the pork processing chain. This study compared the incidence and levels of contamination of hog carcasses with HEV, RV and PEC at different stages of the dressing process. A total of 1000 swabs were collected from 2 pork processing plants on 10 separate occasions over the span of a year. The samples were obtained from random sites on hog carcasses at 4 dressing stages (plant A: bleeding, dehairing, pasteurization, and evisceration; plant B: bleeding, skinning, evisceration, and washing) and from meat cuts. Numbers of genome copies (gc) of HEV, RV and PEC were determined by RT-qPCR. RV and PEC were detected in 100% and 18% of samples, respectively, after bleeding for plant A, and in 98% and 36% of samples, respectively, after bleeding for plant B. After evisceration, RV and PEC were detected in 21% and 3% of samples, respectively, for plant A, and in 1% and 0% of samples, respectively, for plant B. RV and PEC were detected on 1% and 5% of pork cuts, respectively, for plant A, and on 0% and 0% of pork cuts, respectively, for plant B. HEV was not detected in any pork carcass or retail pork samples from plants A or B. The frequency of PEC and RV on pork is progressively reduced along the pork processing chain, but the viruses were not completely eliminated. The findings suggest that consumers could be at risk when consuming undercooked meat contaminated with pathogenic enteric viruses. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  9. Mining manufacturing data for discovery of high productivity process characteristics.

    Science.gov (United States)

    Charaniya, Salim; Le, Huong; Rangwala, Huzefa; Mills, Keri; Johnson, Kevin; Karypis, George; Hu, Wei-Shou

    2010-06-01

    Modern manufacturing facilities for bioproducts are highly automated with advanced process monitoring and data archiving systems. The time dynamics of hundreds of process parameters and outcome variables over a large number of production runs are archived in the data warehouse. This vast amount of data is a vital resource to comprehend the complex characteristics of bioprocesses and enhance production robustness. Cell culture process data from 108 'trains' comprising production as well as inoculum bioreactors from Genentech's manufacturing facility were investigated. Each run comprises over one hundred on-line and off-line temporal parameters. A kernel-based approach combined with a maximum margin-based support vector regression algorithm was used to integrate all the process parameters and develop predictive models for a key cell culture performance parameter. The model was also used to identify and rank process parameters according to their relevance in predicting process outcome. Evaluation of cell culture stage-specific models indicates that production performance can be reliably predicted days prior to harvest. Strong associations between several temporal parameters at various manufacturing stages and final process outcome were uncovered. This model-based data mining represents an important step forward in establishing process data-driven knowledge discovery in bioprocesses. Implementation of this methodology on the manufacturing floor can facilitate real-time decision making and thereby improve the robustness of large-scale bioprocesses. 2010 Elsevier B.V. All rights reserved.

  10. The Two-Word Stage: Motivated by Linguistic or Cognitive Constraints?

    Science.gov (United States)

    Berk, Stephanie; Lillo-Martin, Diane

    2012-01-01

    Child development researchers often discuss a "two-word" stage during language acquisition. However, there is still debate over whether the existence of this stage reflects primarily cognitive or linguistic constraints. Analyses of longitudinal data from two Deaf children, Mei and Cal, not exposed to an accessible first language (American Sign…

  11. Near Real-Time Processing and Archiving of GPS Surveys for Crustal Motion Monitoring

    Science.gov (United States)

    Crowell, B. W.; Bock, Y.

    2008-12-01

    We present an inverse instantaneous RTK method for rapidly processing and archiving GPS data for crustal motion surveys that gives positional accuracy similar to traditional post-processing methods. We first stream 1 Hz data from GPS receivers over Bluetooth to Verizon XV6700 smartphones equipped with Geodetics, Inc. RTD Rover software. The smartphone transmits raw receiver data to a real-time server at the Scripps Orbit and Permanent Array Center (SOPAC) running RTD Pro. At the server, instantaneous positions are computed every second relative to the three closest base stations in the California Real Time Network (CRTN), using ultra-rapid orbits produced by SOPAC, the NOAATrop real-time tropospheric delay model, and ITRF2005 coordinates computed by SOPAC for the CRTN stations. The raw data are converted on-the-fly to RINEX format at the server. Data in both formats are stored on the server along with a file of instantaneous positions, computed independently at each observation epoch. The single-epoch instantaneous positions are continuously transmitted back to the field surveyor's smartphone, where RTD Rover computes a median position and interquartile range for each new epoch of observation. The best-fit solution is the last median position and is available as soon as the survey is completed. We describe how we used this method to process 1 Hz data from the February, 2008 Imperial Valley GPS survey of 38 geodetic monuments established by Imperial College, London in the 1970's, and previously measured by SOPAC using rapid-static GPS methods in 1993, 1999 and 2000, as well as 14 National Geodetic Survey (NGS) monuments. For redundancy, each monument was surveyed for about 15 minutes at least twice and at staggered intervals using two survey teams operating autonomously. Archiving of data and the overall project at SOPAC is performed using the PGM software, developed by the California Spatial Reference Center (CSRC) for the National Geodetic Survey (NGS). The
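    The per-epoch summary step described above (a running median position and interquartile range over the independently computed instantaneous solutions) can be sketched as follows; this is the generic statistic, not Geodetics' actual RTD Rover implementation, and the function names are invented.

```python
import statistics

def epoch_position_summary(coords):
    """Median and interquartile range (IQR) for one coordinate of a set
    of single-epoch instantaneous positions. The median is the robust
    'best-fit' value; the IQR flags epochs with scattered solutions."""
    med = statistics.median(coords)
    q1, _, q3 = statistics.quantiles(coords, n=4, method="inclusive")
    return med, q3 - q1
```

    Because each epoch's position is computed independently, the median over epochs is robust to occasional bad fixes, which is why the last median can be reported as the final solution as soon as the survey ends.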

  12. A new view of responses to first-time barefoot running.

    OpenAIRE

    Wilkinson, Mick; Caplan, Nick; Akenhead, Richard; Hayes, Phil

    2015-01-01

    We examined acute alterations in gait and oxygen cost from shod-to-barefoot running in habitually-shod well-trained runners with no prior experience of running barefoot. Thirteen runners completed six-minute treadmill runs shod and barefoot on separate days at a mean speed of 12.5 km·h⁻¹. Steady-state oxygen cost in the final minute was recorded. Kinematic data were captured from 30 consecutive strides. Mean differences between conditions were estimated with 90% confidence intervals. When bar...

  13. Performance and microbial community analysis of two-stage process with extreme thermophilic hydrogen and thermophilic methane production from hydrolysate in UASB reactors

    DEFF Research Database (Denmark)

    Kongjan, Prawit; O-Thong, Sompong; Angelidaki, Irini

    2011-01-01

    The two-stage process for extreme thermophilic hydrogen and thermophilic methane production from wheat straw hydrolysate was investigated in up-flow anaerobic sludge bed (UASB) reactors. Specific hydrogen and methane yields of 89 ml-H2/g-VS (190 ml-H2/g-sugars) and 307 ml-CH4/g-VS, respectively were...... energy of 13.4 kJ/g-VS. Dominant hydrogen-producing bacteria in the H2-UASB reactor were Thermoanaerobacter wiegelii, Caldanaerobacter subteraneus, and Caloramator fervidus. Meanwhile, the CH4-UASB reactor was dominated with methanogens of Methanosarcina mazei and Methanothermobacter defluvii. The results...

  14. Chromium (VI) removal from aqueous solutions through powdered activated carbon countercurrent two-stage adsorption.

    Science.gov (United States)

    Wang, Wenqiang

    2018-01-01

    To exploit the adsorption capacity of commercial powdered activated carbon (PAC) and to improve the efficiency of Cr(VI) removal from aqueous solutions, the adsorption of Cr(VI) by commercial PAC and the countercurrent two-stage adsorption (CTA) process was investigated. Different adsorption kinetics models and isotherms were compared, and the pseudo-second-order model and the Langmuir and Freundlich models fit the experimental data well. The Cr(VI) removal efficiency was >80% and was improved by 37% through the CTA process compared with the conventional single-stage adsorption process when the initial Cr(VI) concentration was 50 mg/L with a PAC dose of 1.250 g/L and a pH of 3. A calculation method for calculating the effluent Cr(VI) concentration and the PAC dose was developed for the CTA process, and the validity of the method was confirmed by a deviation of <5%. Copyright © 2017. Published by Elsevier Ltd.
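    A sketch of the kind of stage-wise mass-balance calculation the abstract describes, assuming Freundlich equilibrium q = K·C^(1/n) in each stage. In the countercurrent arrangement, fresh PAC enters stage 2 (the cleaner water) and the partially loaded PAC is reused in stage 1; the constants K and n below are placeholders, not the paper's fitted values.

```python
def cta_effluent(c0, dose, K=5.0, n=2.0, tol=1e-10):
    """Countercurrent two-stage adsorption (CTA) sketch. Water passes
    stage 1 then stage 2; fresh PAC enters stage 2 and the partially
    loaded PAC is reused in stage 1. Freundlich equilibrium
    q = K * C**(1/n) closes each stage's mass balance:
      stage 1:  c0 - c1 = dose * (q1 - q2),  q1 = K * c1**(1/n)
      stage 2:  c1 - c2 = dose * q2,         q2 = K * c2**(1/n)
    Solved by bisection on the final effluent c2 (mg/L); dose in g/L.
    K and n are illustrative placeholders."""
    def residual(c2):
        q2 = K * c2 ** (1.0 / n)
        c1 = c2 + dose * q2                   # stage-2 balance
        q1 = K * c1 ** (1.0 / n)
        return (c0 - c1) - dose * (q1 - q2)   # stage-1 balance
    lo, hi = 0.0, c0                          # residual > 0 at lo, < 0 at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    c2 = 0.5 * (lo + hi)
    c1 = c2 + dose * K * c2 ** (1.0 / n)
    return c1, c2
```

    Summing the two stage balances gives the overall check c0 − c2 = dose·q1: all removed chromium leaves on the PAC exiting stage 1, which is why the countercurrent scheme extracts more capacity from the same dose than a single-stage contact.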

  15. The reinforcing property and the rewarding aftereffect of wheel running in rats: a combination of two paradigms.

    Science.gov (United States)

    Belke, Terry W; Wagner, Jason P

    2005-02-28

    Wheel running reinforces the behavior that generates it and produces a preference for the context that follows it. The goal of the present study was to demonstrate both of these effects in the same animals. Twelve male Wistar rats were first exposed to a fixed-interval 30 s schedule of wheel-running reinforcement. The operant was lever-pressing and the reinforcer was the opportunity to run for 45 s. Following this phase, the method of place conditioning was used to test for a rewarding aftereffect following operant sessions. On alternating days, half the rats responded for wheel-running reinforcement while the other half remained in their home cage. Upon completion of the wheel-running reinforcement sessions, rats that ran and rats that remained in their home cages were placed into a chamber of a conditioned place preference (CPP) apparatus for 30 min. Each animal received six pairings of a distinctive context with wheel running and six pairings of a different context with their home cage. On the test day, animals were free to move between the chambers for 10 min. Results showed a conditioned place preference for the context associated with wheel running; however, time spent in the context associated with running was not related to wheel-running rate, lever-pressing rate, or post-reinforcement pause duration. (c) 2004 Elsevier B.V. All rights reserved.

  16. Responding for sucrose and wheel-running reinforcement: effect of pre-running.

    Science.gov (United States)

    Belke, Terry W

    2006-01-10

    Six male albino Wistar rats were placed in running wheels and exposed to a fixed-interval 30-s schedule that produced either a drop of 15% sucrose solution or the opportunity to run for 15 s as reinforcing consequences for lever pressing. Each reinforcer type was signaled by a different stimulus. To assess the effect of pre-running, animals were allowed to run for 1 h prior to a session of responding for sucrose and running. Results showed that, after pre-running, response rates in the later segments of the 30-s schedule decreased in the presence of a wheel-running stimulus and increased in the presence of a sucrose stimulus. Wheel-running rates were not affected. Analysis of mean post-reinforcement pauses (PRP) broken down by transitions between successive reinforcers revealed that pre-running lengthened pausing in the presence of the stimulus signaling wheel running and shortened pauses in the presence of the stimulus signaling sucrose. No effect was observed on local response rates. Changes in pausing in the presence of stimuli signaling the two reinforcers were consistent with a decrease in the reinforcing efficacy of wheel running and an increase in the reinforcing efficacy of sucrose. Pre-running decreased motivation to respond for running, but increased motivation to work for food.

  17. Efficacy of single-stage and two-stage Fowler–Stephens laparoscopic orchidopexy in the treatment of intraabdominal high testis

    Directory of Open Access Journals (Sweden)

    Chang-Yuan Wang

    2017-11-01

    Conclusion: In the case of testis with good collateral circulation, single-stage F-S laparoscopic orchidopexy had the same safety and efficacy as the two-stage F-S procedure. Surgical options should be based on comprehensive consideration of intraoperative testicular location, the testicular ischemia test, and the collateral circulation surrounding the testes. Under the appropriate conditions, we propose that single-stage F-S laparoscopic orchidopexy be preferred. It may be appropriate to avoid unnecessary application of the two-stage procedure, which has a higher cost and causes more pain for patients.

  18. A two stage data envelopment analysis model with undesirable output

    Science.gov (United States)

    Shariff Adli Aminuddin, Adam; Izzati Jaini, Nur; Mat Kasim, Maznah; Nawawi, Mohd Kamal Mohd

    2017-09-01

    The dependent relationship among decision making units (DMUs) is usually assumed to be non-existent in the development of Data Envelopment Analysis (DEA) models. Dependency can be represented by a multi-stage DEA model, where the outputs from the preceding stage become the inputs for the following stage. The multi-stage DEA model evaluates both the efficiency score of each stage and the overall efficiency of the whole process. Existing multi-stage DEA models do not focus on integration with undesirable outputs, for which, unlike normal desirable outputs, higher values are worse. This research attempts to address the inclusion of such undesirable outputs and investigates the theoretical implications and potential applications for the development of the multi-stage DEA model.
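The chaining of stages can be illustrated in the simplest possible setting. In the single-input/single-output case, CCR-DEA efficiency reduces to each DMU's output/input ratio normalized by the best ratio, and one common multiplicative decomposition takes the overall score as the product of stage scores. The data, the reciprocal treatment of the undesirable output, and the decomposition choice are illustrative assumptions, not the model proposed in the paper.

```python
def ccr_ratio_efficiency(inputs, outputs):
    """Single-input/single-output DEA: each DMU's ratio over the best ratio."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

x = [10.0, 8.0, 12.0]   # stage-1 input per DMU
z = [6.0, 5.0, 9.0]     # intermediate measure: stage-1 output = stage-2 input
y = [4.0, 4.5, 5.0]     # stage-2 desirable output
b = [2.0, 1.0, 3.0]     # stage-2 undesirable output (e.g. a pollutant)

stage1 = ccr_ratio_efficiency(x, z)
# One common trick for an undesirable output: score the desirable output
# per unit of undesirable output, so less pollutant looks better.
stage2 = ccr_ratio_efficiency(z, [yi / bi for yi, bi in zip(y, b)])
overall = [s1 * s2 for s1, s2 in zip(stage1, stage2)]
```

The key structural point is that `z` appears as an output of stage 1 and an input of stage 2, so a DMU can be efficient in one stage and inefficient overall.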

  19. Habitual Minimalist Shod Running Biomechanics and the Acute Response to Running Barefoot.

    Science.gov (United States)

    Tam, Nicholas; Darragh, Ian A J; Divekar, Nikhil V; Lamberts, Robert P

    2017-09-01

    The aim of the study was to determine whether habitual minimalist shoe runners present with purportedly favorable running biomechanics that reduce running injury risk, such as initial loading rate. Eighteen minimalist and 16 traditionally cushioned shod runners were assessed when running both in their preferred training shoe and barefoot. Ankle and knee joint kinetics and kinematics, initial rate of loading, and footstrike angle were measured. Sagittal ankle and knee joint stiffness were also calculated. A two-factor ANOVA showed no group difference in initial rate of loading when participants were running either shod or barefoot; however, the initial loading rate increased for both groups when running barefoot (p=0.008). Differences in footstrike angle were observed between groups when running shod, but not when barefoot (minimalist: 8.71±8.99 vs. traditional: 17.32±11.48 degrees, p=0.002). Lower ankle joint stiffness was found in both groups when running barefoot (p=0.025). These findings illustrate that risk factors for injury potentially differ between the two groups. Differences in shoe construction do change mechanical demands; however, once runners are habituated to the demands of a given shoe condition, certain acute favorable or unfavorable responses may be moderated. The purported benefits of minimalist running shoes in mimicking habitual barefoot running are therefore questioned, and the risk of injury may not be attenuated. © Georg Thieme Verlag KG Stuttgart · New York.

  20. On bi-criteria two-stage transportation problem: a case study

    Directory of Open Access Journals (Sweden)

    Ahmad MURAD

    2010-01-01

    The study of the optimum distribution of goods between sources and destinations is one of the important topics in project economics. This importance stems from minimizing transportation cost, deterioration, time, etc. The classical transportation problem constitutes one of the major areas of application for linear programming. Its aim is to obtain the optimum distribution of goods from different sources to different destinations that minimizes the total transportation cost. In practice, transportation problems may differ from the classical form: they may contain one or more objective functions, one or more transport stages, and one or more types of commodity with one or more means of transport. The aim of this paper is to construct an optimization model of the transportation problem for a mill-stones company. The model is formulated as a bi-criteria two-stage transportation problem with a special structure depending on the capacities of the suppliers and warehouses and the requirements of the destinations. A solution algorithm is introduced to solve this class of bi-criteria two-stage transportation problems, obtaining the set of non-dominated extreme points and the efficient solutions accompanying each one, which enables the decision maker to choose the best one. The solution algorithm is mainly based on methods for treating transportation problems, the duality theory of linear programming, and methods for solving bi-criteria linear programming problems.
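As a concrete illustration of the classical single-stage problem that such algorithms build on, the north-west corner rule constructs an initial basic feasible shipment plan for a balanced transportation problem. The cost matrix, supplies, and demands below are made-up numbers, and the bi-criteria two-stage extension from the paper is not reproduced here.

```python
def northwest_corner(supply, demand):
    """Build an initial basic feasible plan: row sums match supply, column sums match demand."""
    supply, demand = supply[:], demand[:]      # work on copies
    x = [[0.0] * len(demand) for _ in supply]  # shipment matrix
    i = j = 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])          # ship as much as possible at (i, j)
        x[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1  # source exhausted: move down to the next row
        else:
            j += 1  # destination satisfied: move right to the next column
    return x

cost   = [[4, 8, 8], [16, 24, 16], [8, 16, 24]]  # unit transportation costs
supply = [76, 82, 77]                            # source capacities (total 235)
demand = [72, 102, 61]                           # destination requirements (total 235)
plan = northwest_corner(supply, demand)
total = sum(cost[i][j] * plan[i][j] for i in range(3) for j in range(3))
```

The rule ignores costs entirely, so the resulting plan is feasible but generally not optimal; methods such as the stepping-stone or MODI procedure then improve it, and the paper's algorithm layers the second stage and the second criterion on top of such machinery.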