WorldWideScience

Sample records for efficient two-stage approach

  1. A two-stage DEA approach for environmental efficiency measurement.

    Science.gov (United States)

    Song, Malin; Wang, Shuhong; Liu, Wei

    2014-05-01

The slacks-based measure (SBM) model based on constant returns to scale has achieved good results in addressing undesirable outputs, such as waste water and waste gas, in measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out systematic research on the SBM model considering undesirable outputs, and further expands the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also calculate desirable and undesirable outputs separately. The latter advantage resolves the "dependence" problem of outputs, namely that the desirable outputs cannot be increased without producing any undesirable outputs. The following illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of the decision making units.
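
    As illustrative background only (not the SBM model with undesirable outputs described above), the sketch below shows how DEA-type efficiency scores are obtained by solving one linear program per decision making unit; the data and the choice of an input-oriented CCR formulation are assumptions made for the example.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical data: rows = DMUs, columns = inputs / outputs.
        X = np.array([[4.0, 140.0], [7.0, 150.0], [8.0, 160.0], [4.0, 180.0], [2.0, 200.0]])  # inputs
        Y = np.array([[2.0], [3.0], [5.0], [1.0], [2.0]])                                     # desirable outputs

        def ccr_efficiency(X, Y, j0):
            """Input-oriented CCR efficiency of DMU j0 (envelopment form)."""
            n, m = X.shape          # n DMUs, m inputs
            s = Y.shape[1]          # s outputs
            # Decision variables: [theta, lambda_1, ..., lambda_n]
            c = np.zeros(1 + n)
            c[0] = 1.0              # minimize theta
            A_ub, b_ub = [], []
            for i in range(m):      # sum_j lambda_j * x_ij <= theta * x_i,j0
                A_ub.append(np.concatenate(([-X[j0, i]], X[:, i])))
                b_ub.append(0.0)
            for r in range(s):      # sum_j lambda_j * y_rj >= y_r,j0
                A_ub.append(np.concatenate(([0.0], -Y[:, r])))
                b_ub.append(-Y[j0, r])
            bounds = [(None, None)] + [(0.0, None)] * n
            res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
            return res.fun

        for j in range(X.shape[0]):
            print(f"DMU {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")

    Units on the efficient frontier score 1.0; the SBM and network extensions discussed in the paper replace this radial measure with slack-based ones.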

  2. The Sources of Efficiency of the Nigerian Banking Industry: A Two-Stage Approach

    Directory of Open Access Journals (Sweden)

    Frances Obafemi

    2013-11-01

Full Text Available The paper employed a two-stage Data Envelopment Analysis (DEA) approach to examine the sources of technical efficiency in the Nigerian banking sub-sector. Using a cross section of commercial and merchant banks, the study showed that the Nigerian banking industry was not efficient in either the pre- or post-liberalization era. The study further revealed that market share was the strongest determinant of technical efficiency in the Nigerian banking industry. Thus, appropriate macroeconomic policy, institutional development and structural reforms must accompany financial liberalization to create the stable environment required for it to succeed. Hence, the present bank consolidation and reforms by the Central Bank of Nigeria, which started with Soludo and continued with Sanusi, are considered necessary, especially in the areas of e-banking and reorganizing the management of banks.

  3. A two-stage value chain model for vegetable marketing chain efficiency evaluation: A transaction cost approach

    OpenAIRE

    Lu Hualiang

    2006-01-01

    We applied a two-stage value chain model to investigate the effects of input application and occasional transaction costs on vegetable marketing chain efficiencies with a farm household-level data set. In the first stage, the production efficiencies with the combination of resource endowments, capital and managerial inputs, and production techniques were evaluated; then at the second stage, the marketing technical efficiencies were determined under the marketing value of the vegetables for th...

  4. Two stage approach to dynamic soil structure interaction

    International Nuclear Information System (INIS)

    Nelson, I.

    1981-01-01

A two-stage approach is used to reduce the effective size of the soil island required to solve dynamic soil structure interaction problems. The fictitious boundaries of the conventional soil island are chosen sufficiently far from the structure so that the presence of the structure causes only a slight perturbation of the soil response near the boundaries. While the resulting finite element model of the soil structure system can be solved, it requires a formidable computational effort. Currently, a two-stage approach is used to reduce this effort. The combined soil structure system has many frequencies and wavelengths. For a stiff structure, the lowest frequencies are those associated with the motion of the structure as a rigid body. In the soil, these modes have the longest wavelengths and attenuate most slowly. The higher frequency deformational modes of the structure have shorter wavelengths and their effect attenuates more rapidly with distance from the structure. The difference in soil response between a computation with a refined structural model and one with a crude model tends towards zero within a very short distance from the structure. In the current work, the 'crude model' is a rigid structure with the same geometry and inertial properties as the refined model. Preliminary calculations indicated that a rigid structure would be a good low frequency approximation to the actual structure, provided the structure was much stiffer than the native soil. (orig./RW)

  5. Does integration of HIV and sexual and reproductive health services improve technical efficiency in Kenya and Swaziland? An application of a two-stage semi parametric approach incorporating quality measures.

    Science.gov (United States)

    Obure, Carol Dayo; Jacobs, Rowena; Guinness, Lorna; Mayhew, Susannah; Vassall, Anna

    2016-02-01

    Theoretically, integration of vertically organized services is seen as an important approach to improving the efficiency of health service delivery. However, there is a dearth of evidence on the effect of integration on the technical efficiency of health service delivery. Furthermore, where technical efficiency has been assessed, there have been few attempts to incorporate quality measures within efficiency measurement models particularly in sub-Saharan African settings. This paper investigates the technical efficiency and the determinants of technical efficiency of integrated HIV and sexual and reproductive health (SRH) services using data collected from 40 health facilities in Kenya and Swaziland for 2008/2009 and 2010/2011. Incorporating a measure of quality, we estimate the technical efficiency of health facilities and explore the effect of integration and other environmental factors on technical efficiency using a two-stage semi-parametric double bootstrap approach. The empirical results reveal a high degree of inefficiency in the health facilities studied. The mean bias corrected technical efficiency scores taking quality into consideration varied between 22% and 65% depending on the data envelopment analysis (DEA) model specification. The number of additional HIV services in the maternal and child health unit, public ownership and facility type, have a positive and significant effect on technical efficiency. However, number of additional HIV and STI services provided in the same clinical room, proportion of clinical staff to overall staff, proportion of HIV services provided, and rural location had a negative and significant effect on technical efficiency. The low estimates of technical efficiency and mixed effects of the measures of integration on efficiency challenge the notion that integration of HIV and SRH services may substantially improve the technical efficiency of health facilities. The analysis of quality and efficiency as separate dimensions of

  6. A two-stage stochastic programming approach for operating multi-energy systems

    DEFF Research Database (Denmark)

    Zeng, Qing; Fang, Jiakun; Chen, Zhe

    2017-01-01

This paper provides a two-stage stochastic programming approach for the joint operation of multi-energy systems under uncertainty. Simulation is carried out in a test system to demonstrate the feasibility and efficiency of the proposed approach. The test energy system includes a gas subsystem with a gas...
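
    The record above is truncated, but the general shape of a two-stage stochastic program can be illustrated with a toy deterministic equivalent solved with scipy; the prices, demand scenarios and probabilities below are invented for the sketch and are unrelated to the paper's test system.

        import numpy as np
        from scipy.optimize import linprog

        # Toy two-stage problem: buy x units of energy day-ahead at unit cost c,
        # then buy recourse y_s in real time at the higher unit cost q in each scenario s.
        c, q = 20.0, 35.0                         # hypothetical day-ahead / balancing prices
        demand = np.array([90.0, 110.0, 130.0])   # hypothetical demand scenarios
        prob = np.array([0.3, 0.4, 0.3])          # scenario probabilities
        S = len(demand)

        # Variables: [x, y_1, ..., y_S]; minimize c*x + sum_s prob_s * q * y_s
        obj = np.concatenate(([c], q * prob))
        # Coverage constraints: x + y_s >= demand_s  ->  -x - y_s <= -demand_s
        A_ub = np.zeros((S, 1 + S))
        A_ub[:, 0] = -1.0
        A_ub[np.arange(S), 1 + np.arange(S)] = -1.0
        b_ub = -demand

        res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + S), method="highs")
        print(f"first-stage purchase x = {res.x[0]:.1f}")
        print("recourse per scenario  =", np.round(res.x[1:], 1))
        print(f"expected total cost    = {res.fun:.1f}")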

  7. Assessing efficiency and effectiveness of Malaysian Islamic banks: A two stage DEA analysis

    Science.gov (United States)

    Kamarudin, Norbaizura; Ismail, Wan Rosmanira; Mohd, Muhammad Azri

    2014-06-01

Islamic banks in Malaysia are indispensable players in the financial industry with the growing need for a syariah-compliant system. In the banking industry, most recent studies have concerned themselves only with operational efficiency and rarely with operational effectiveness. Since the production process of the banking industry can be described as a two-stage process, two-stage Data Envelopment Analysis (DEA) can be applied to measure bank performance. This study was designed to measure the overall performance, in terms of efficiency and effectiveness, of Islamic banks in Malaysia using a two-stage DEA approach. This paper presents the analysis of a DEA model which splits efficiency and effectiveness in order to evaluate the performance of ten selected Islamic banks in Malaysia for the financial year ended 2011. The analysis shows that the average efficiency score is higher than the average effectiveness score, so Malaysian Islamic banks can be said to have been more efficient than effective. Furthermore, none of the banks exhibits best practice in both stages, which shows that a bank with better efficiency does not necessarily have better effectiveness at the same time.

  8. Two-stage decision approach to material accounting

    International Nuclear Information System (INIS)

    Opelka, J.H.; Sutton, W.B.

    1982-01-01

The validity of the alarm threshold 4σ has been checked for hypothetical large and small facilities using a two-stage decision model in which the diverter's strategic variable is the quantity diverted, and the defender's strategic variables are the alarm threshold and the effectiveness of the physical security and material control systems in the possible presence of a diverter. For large facilities, the material accounting system inherently appears not to be a particularly useful system for the deterrence of diversions, and essentially no improvement can be made by lowering the alarm threshold below 4σ. For small facilities, reduction of the threshold to 2σ or 3σ is a cost effective change for the accounting system, but is probably less cost effective than making improvements in the material control and physical security systems.
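
    As a purely illustrative calculation (not taken from the paper), the trade-off behind the alarm threshold can be made concrete by assuming the material balance is approximately normal with standard deviation σ: lowering the threshold from 4σ to 2σ raises the detection probability for a given diverted quantity at the price of a higher false-alarm rate.

        from scipy.stats import norm

        sigma = 1.0                        # measurement uncertainty of the material balance (arbitrary units)
        diversions = [0.0, 2.0, 4.0, 6.0]  # hypothetical diverted quantities, in multiples of sigma

        for k in (2.0, 3.0, 4.0):          # alarm thresholds k*sigma
            print(f"threshold = {k:.0f} sigma")
            for d in diversions:
                # Alarm if the observed material balance exceeds k*sigma;
                # with a true diversion d the balance is ~ Normal(d, sigma^2).
                p_alarm = norm.sf((k * sigma - d) / sigma)
                label = "false-alarm prob" if d == 0.0 else f"detection prob (d = {d:.0f} sigma)"
                print(f"  {label}: {p_alarm:.4f}")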

  9. Two staged incentive contract focused on efficiency and innovation matching in critical chain project management

    Directory of Open Access Journals (Sweden)

    Min Zhang

    2014-09-01

Full Text Available Purpose: The purpose of this paper is to define the relative optimal incentive contract to effectively encourage employees to improve work efficiency while actively implementing innovative behavior. Design/methodology/approach: This paper analyzes a two-staged incentive contract coordinating efficiency and innovation in Critical Chain Project Management using learning real options, based on principal-agent theory. A situational experiment is used to analyze the validity of the basic model. Findings: The two-staged incentive scheme is more suitable for encouraging employees to create and implement learning real options, so that they engage in the innovation process efficiently in Critical Chain Project Management. We prove that the combination of tolerance for early failure and reward for long-term success is effective in motivating innovation. Research limitations/implications: We do not include the individual characteristics of uncertain perception, which might affect the consistency of external validity. The basic model and the experiment design need to be improved. Practical implications: Project managers should pay closer attention to early innovation behavior and to monitoring feedback of competition time in the implementation of Critical Chain Project Management. Originality/value: The central contribution of this paper is the theoretical and experimental analysis of incentive schemes for innovation in Critical Chain Project Management using principal-agent theory, to encourage the completion of CCPM methods as well as imitative free-riding on the creative ideas of other members in the team.

  10. A Two-Stage DEA to Analyze the Effect of Entrance Deregulation on Iranian Insurers: A Robust Approach

    OpenAIRE

    Jalali Naini, Seyed Gholamreza; Nouralizadeh, Hamid Reza

    2012-01-01

    We use two-stage data envelopment analysis (DEA) model to analyze the effects of entrance deregulation on the efficiency in the Iranian insurance market. In the first stage, we propose a robust optimization approach in order to overcome the sensitivity of DEA results to any uncertainty in the output parameters. Hence, the efficiency of each ongoing insurer is estimated using our proposed robust DEA model. The insurers are then ranked based on their relative efficiency scores for an eight-year...

  11. Development and testing of a two stage granular filter to improve collection efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Rangan, R.S.; Prakash, S.G.; Chakravarti, S.; Rao, S.R.

    1999-07-01

A circulating bed granular filter (CBGF) with a single filtration stage was tested with a PFB combustor in the Coal Research Facility of BHEL R and D in Hyderabad during the years 1993--95. Filter outlet dust loading varied between 20--50 mg/Nm{sup 3} for an inlet dust loading of 5--8 g/Nm{sup 3}. The results were reported in Fluidized Bed Combustion-Volume 2, ASME 1995. Though the outlet consists of predominantly fine particulates below 2 microns, it is still beyond present day gas turbine specifications for particulate concentration. In order to enhance the collection efficiency, a two-stage granular filtration concept was evolved, wherein the filter depth is divided between two stages, accommodated in two separate vertically mounted units. The design also incorporates BHEL's scale-up concept of multiple parallel stages. The two-stage concept minimizes reentrainment of captured dust by providing clean granules in the upper stage, from where gases finally exit the filter. The design ensures that dusty gases come in contact with granules having a higher dust concentration at the bottom of the two-stage unit, where most of the cleaning is completed. A second filtration stage of cleaned granules is provided in the top unit (where the granules are returned to the system after dedusting) minimizing reentrainment. Tests were conducted to determine the optimum granule to dust ratio (G/D ratio) which decides the granule circulation rate required for the desired collection efficiency. The data brings out the importance of pre-separation and the limitation on inlet dust loading for any continuous system of granular filtration. Collection efficiencies obtained were much higher (outlet dust being 3--9 mg/Nm{sup 3}) than in the single stage filter tested earlier for similar dust loading at the inlet. The results indicate that two-stage granular filtration has a high potential for HTHT application with fewer risks as compared to other systems under development.

  12. EVALUATION OF A TWO-STAGE PASSIVE TREATMENT APPROACH FOR MINING INFLUENCE WATERS

    Science.gov (United States)

    A two-stage passive treatment approach was assessed at bench-scale using two Colorado Mining Influenced Waters (MIWs). The first-stage was a limestone drain with the purpose of removing iron and aluminum and mitigating the potential effects of mineral acidity. The second stage w...

  13. A Two-Stage Approach to Civil Conflict: Contested Incompatibilities and Armed Violence

    DEFF Research Database (Denmark)

    Bartusevicius, Henrikas; Gleditsch, Kristian Skrede

    2017-01-01

    conflict origination but have no clear effect on militarization, whereas other features emphasized as shaping the risk of civil war, such as refugee flows and soft state power, strongly influence militarization but not incompatibilities. We posit that a two-stage approach to conflict analysis can help...

  14. Advancing early detection of autism spectrum disorder by applying an integrated two-stage screening approach

    NARCIS (Netherlands)

    Oosterling, Iris J.; Wensing, Michel; Swinkels, Sophie H.; van der Gaag, Rutger Jan; Visser, Janne C.; Woudenberg, Tim; Minderaa, Ruud; Steenhuis, Mark-Peter; Buitelaar, Jan K.

    Background: Few field trials exist on the impact of implementing guidelines for the early detection of autism spectrum disorders (ASD). The aims of the present study were to develop and evaluate a clinically relevant integrated early detection programme based on the two-stage screening approach of

  15. Efficiency assessment of wind farms in China using two-stage data envelopment analysis

    International Nuclear Information System (INIS)

    Wu, Yunna; Hu, Yong; Xiao, Xinli; Mao, Chunyu

    2016-01-01

    Highlights: • The efficiency of China’s wind farms is assessed by data envelopment analysis. • Tobit model is used to analyze the impact of uncontrollable factors on efficiency. • Sensitivity analysis is conducted to verify the stability of evaluation results. • Efficiency levels of Chinese wind farms are relatively high in general. • Age and wind curtailment rate negatively affect the productive efficiency. - Abstract: China has been the world’s leader in wind power capacity due to the promotion of favorable policies. Given the rare research on the efficiency of China’s wind farms, this study analyzes the productive efficiency of 42 large-scale wind farms in China using a two-stage analysis. In the first stage, efficiency scores of wind farms are determined with data envelopment analysis and the sensitivity analysis is conducted to verify the robustness of efficiency calculation results. In the second stage, the Tobit regression is employed to explore the relationship between the efficiency scores and the environment variables that are beyond the control of wind farms. According to the results, all wind farms studied operate at an acceptable level. However, 50% of them overinvest in the installed capacity and about 48% have the electricity-saving potential. The most important factors affecting the efficiency of wind farms are the installed capacity and the wind power density. In addition, the age of the wind farm and the wind curtailment rate have a negative effect on productive efficiency, whereas the ownership of the wind farm has no significant effect. Findings from this study may be helpful for stakeholders in the wind industry to select wind power projects, optimize operational strategies and make related policies.
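
    A hedged sketch of the second-stage Tobit regression used above: since statsmodels ships no ready-made Tobit, the censored log-likelihood for efficiency scores bounded above at 1 can be maximized directly with scipy. All data below are synthetic and the covariates are placeholders, not the paper's environment variables.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(0)

        # Hypothetical data: DEA efficiency scores (censored at 1) and two environment variables.
        n = 200
        X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
        latent = X @ np.array([0.7, 0.1, -0.15]) + rng.normal(scale=0.15, size=n)
        y = np.minimum(latent, 1.0)                      # observed scores, right-censored at 1
        censored = y >= 1.0

        def negloglik(params):
            beta, log_sigma = params[:-1], params[-1]
            sigma = np.exp(log_sigma)
            xb = X @ beta
            ll_unc = norm.logpdf((y - xb) / sigma) - np.log(sigma)   # uncensored observations
            ll_cen = norm.logsf((1.0 - xb) / sigma)                  # observations censored at 1
            return -(ll_unc[~censored].sum() + ll_cen[censored].sum())

        start = np.concatenate([np.linalg.lstsq(X, y, rcond=None)[0], [np.log(y.std())]])
        fit = minimize(negloglik, start, method="BFGS")
        print("Tobit coefficients:", np.round(fit.x[:-1], 3),
              " sigma:", round(float(np.exp(fit.x[-1])), 3))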

  16. Optimization of Two-Stage Peltier Modules: Structure and Exergetic Efficiency

    Directory of Open Access Journals (Sweden)

    Cesar Ramirez-Lopez

    2012-08-01

Full Text Available In this paper we undertake the theoretical analysis of a two-stage semiconductor thermoelectric module (TEM) which contains an arbitrary and different number of thermocouples, n1 and n2, in each stage (pyramid-styled TEM). The analysis is based on a dimensionless entropy balance set of equations. We study the effects of n1 and n2, the electric currents flowing through each stage, the applied temperatures and the thermoelectric properties of the semiconductor materials on the exergetic efficiency. Our main result implies that the electric currents flowing in each stage must necessarily be different, with a ratio of about 4.3, if the best thermal performance and the highest possible temperature difference between the cold and hot sides of the device are pursued. This fact had not been pointed out before for pyramid-styled two-stage TEMs. The ratio n1/n2 should be about 8.

  17. Neuroscience and approach/avoidance personality traits: a two stage (valuation-motivation) approach.

    Science.gov (United States)

    Corr, Philip J; McNaughton, Neil

    2012-11-01

    Many personality theories link specific traits to the sensitivities of the neural systems that control approach and avoidance. But there is no consensus on the nature of these systems. Here we combine recent advances in economics and neuroscience to provide a more solid foundation for a neuroscience of approach/avoidance personality. We propose a two-stage integration of valuation (loss/gain) sensitivities with motivational (approach/avoidance/conflict) sensitivities. Our key conclusions are: (1) that valuation of appetitive and aversive events (e.g. gain and loss as studied by behavioural economists) is an independent perceptual input stage--with the economic phenomenon of loss aversion resulting from greater negative valuation sensitivity compared to positive valuation sensitivity; (2) that valuation of an appetitive stimulus then interacts with a contingency of presentation or omission to generate a motivational 'attractor' or 'repulsor', respectively (vice versa for an aversive stimulus); (3) the resultant behavioural tendencies to approach or avoid have distinct sensitivities to those of the valuation systems; (4) while attractors and repulsors can reinforce new responses they also, more usually, elicit innate or previously conditioned responses and so the perception/valuation-motivation/action complex is best characterised as acting as a 'reinforcer' not a 'reinforcement'; and (5) approach-avoidance conflict must be viewed as activating a third motivation system that is distinct from the basic approach and avoidance systems. We provide examples of methods of assessing each of the constructs within approach-avoidance theories and of linking these constructs to personality measures. We sketch a preliminary five-element reinforcer sensitivity theory (RST-5) as a first step in the integration of existing specific approach-avoidance theories into a coherent neuroscience of personality. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Efficiency of primary care in rural Burkina Faso. A two-stage DEA analysis.

    Science.gov (United States)

    Marschall, Paul; Flessa, Steffen

    2011-07-20

Providing health care services in Africa is hampered by severe scarcity of personnel, medical supplies and financial funds. Consequently, managers of health care institutions are called to measure and improve the efficiency of their facilities in order to provide the best possible services with their resources. However, very little is known about the efficiency of health care facilities in Africa, and instruments of performance measurement are hardly applied in this context. This study determines the relative efficiency of primary care facilities in Nouna, a rural health district in Burkina Faso. Furthermore, it analyses the factors influencing the efficiency of these institutions. We apply a two-stage Data Envelopment Analysis (DEA) based on data from a comprehensive provider and household information system. In the first stage, the relative efficiency of each institution is calculated by a traditional DEA model. In the second stage, we identify the reasons for being inefficient by regression techniques. The DEA projections suggest that inefficiency is mainly a result of poor utilization of health care facilities, as they were either too big or the demand was too low. Regression results showed that distance is an important factor influencing the efficiency of a health care institution. Compared to the findings of existing one-stage DEA analyses of health facilities in Africa, the share of relatively efficient units is slightly higher. The difference might be explained by a rather homogenous structure of the primary care facilities in the Burkina Faso sample. The study also indicates that improving the accessibility of primary care facilities will have a major impact on the efficiency of these institutions. Thus, health decision-makers are called to overcome the demand-side barriers in accessing health care.

  19. Train Stop Scheduling in a High-Speed Rail Network by Utilizing a Two-Stage Approach

    Directory of Open Access Journals (Sweden)

    Huiling Fu

    2012-01-01

Full Text Available Among the most commonly used methods of scheduling train stops are practical experience and various “one-step” optimal models. These methods face problems of direct transferability and computational complexity when considering a large-scale high-speed rail (HSR) network such as the one in China. This paper introduces a two-stage approach for train stop scheduling with a goal of efficiently organizing passenger traffic into a rational train stop pattern combination while retaining features of regularity, connectivity, and rapidity (RCR). Based on a three-level station classification definition, a mixed integer programming model and a train operating tactics descriptive model along with the computing algorithm are developed and presented for the two stages. A real-world numerical example is presented using the Chinese HSR network as the setting. The performance of the train stop schedule and the applicability of the proposed approach are evaluated from the perspective of maintaining RCR.

  20. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts.

    Science.gov (United States)

    Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey

    2009-01-27

The purpose of this study was to apply a two-stage screening method to the large-scale intelligence screening of military conscripts. We recruited 99 conscripted soldiers whose educational level was senior high school or lower as participants. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Logistic regression analysis showed the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimal single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Compared with the two-stage positive screening, the two-stage window screening increased the area under the curve and the positive predictive value. Moreover, the cost of the two-stage window screening decreased by 59%. The two-stage window screening is more accurate and economical than the two-stage positive screening. Our results provide an example for the use of two-stage screening and the possibility of the WCST replacing the WAIS-R in large-scale screenings for ID in the future.
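
    The cut-off selection step can be reproduced in outline with scikit-learn: the ROC curve enumerates every candidate cut-off and Youden's J (sensitivity + specificity - 1) picks the one that balances the two error types. The scores and labels below are simulated, not the study's data.

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(1)

        # Hypothetical screening data: higher scores indicate higher risk of the condition.
        n_pos, n_neg = 30, 70
        scores = np.concatenate([rng.normal(60, 10, n_pos), rng.normal(45, 10, n_neg)])
        labels = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])     # 1 = condition present

        fpr, tpr, thresholds = roc_curve(labels, scores)
        youden_j = tpr - fpr                          # sensitivity + specificity - 1
        best = np.argmax(youden_j)

        print(f"AUC             : {roc_auc_score(labels, scores):.3f}")
        print(f"optimal cut-off : {thresholds[best]:.1f}")
        print(f"sensitivity     : {tpr[best]:.2f}")
        print(f"specificity     : {1 - fpr[best]:.2f}")

    A two-cut-off "window" screen, as in the study, would simply apply a second, lower threshold to route intermediate scores to the confirmatory test.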

  1. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts

    Directory of Open Access Journals (Sweden)

    Chia-Chang Chien

    2009-01-01

Full Text Available Chia-Chang Chien1, Shu-Fen Huang1,2,3,4, For-Wey Lung1,2,3,4. 1Department of Psychiatry, Kaohsiung Armed Forces General Hospital, Kaohsiung, Taiwan; 2Graduate Institute of Behavioral Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan; 3Department of Psychiatry, National Defense Medical Center, Taipei, Taiwan; 4Calo Psychiatric Center, Pingtung County, Taiwan. Objective: The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. Methods: We collected 99 conscripted soldiers whose educational levels were senior high school level or lower to be the participants. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Results: Logistic regression analysis showed the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimum one cut-off point of CLR was 66; the two cut-off points were 49 and 66. Comparing the two-stage window screening with the two-stage positive screening, the area under the curve and the positive predictive value increased. Moreover, the cost of the two-stage window screening decreased by 59%. Conclusion: The two-stage window screening is more accurate and economical than the two-stage positive screening. Our results provide an example for the use of two-stage screening and the possibility of the WCST to replace WAIS-R in large-scale screenings for ID in the future. Keywords: intellectual disability, intelligence screening, two-stage positive screening, Wisconsin Card Sorting Test, Wechsler Adult Intelligence Scale-Revised

  2. Comparisons of single-stage and two-stage approaches to genomic selection.

    Science.gov (United States)

    Schulz-Streeck, Torben; Ogutu, Joseph O; Piepho, Hans-Peter

    2013-01-01

    Genomic selection (GS) is a method for predicting breeding values of plants or animals using many molecular markers that is commonly implemented in two stages. In plant breeding the first stage usually involves computation of adjusted means for genotypes which are then used to predict genomic breeding values in the second stage. We compared two classical stage-wise approaches, which either ignore or approximate correlations among the means by a diagonal matrix, and a new method, to a single-stage analysis for GS using ridge regression best linear unbiased prediction (RR-BLUP). The new stage-wise method rotates (orthogonalizes) the adjusted means from the first stage before submitting them to the second stage. This makes the errors approximately independently and identically normally distributed, which is a prerequisite for many procedures that are potentially useful for GS such as machine learning methods (e.g. boosting) and regularized regression methods (e.g. lasso). This is illustrated in this paper using componentwise boosting. The componentwise boosting method minimizes squared error loss using least squares and iteratively and automatically selects markers that are most predictive of genomic breeding values. Results are compared with those of RR-BLUP using fivefold cross-validation. The new stage-wise approach with rotated means was slightly more similar to the single-stage analysis than the classical two-stage approaches based on non-rotated means for two unbalanced datasets. This suggests that rotation is a worthwhile pre-processing step in GS for the two-stage approaches for unbalanced datasets. Moreover, the predictive accuracy of stage-wise RR-BLUP was higher (5.0-6.1%) than that of componentwise boosting.
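
    A stripped-down sketch of the classical two-stage workflow described above, on synthetic data: stage one averages phenotypes over replicates to obtain adjusted genotype means, and stage two regresses those means on the markers. sklearn's Ridge with a fixed penalty is used here as a simplified stand-in for RR-BLUP, which would normally derive the shrinkage from estimated variance components.

        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(7)

        # Synthetic example: 100 genotypes, 500 biallelic markers coded 0/1/2, 3 replicates each.
        n_geno, n_markers, n_reps = 100, 500, 3
        M = rng.integers(0, 3, size=(n_geno, n_markers)).astype(float)
        true_effects = rng.normal(scale=0.05, size=n_markers)
        pheno = np.repeat(M @ true_effects, n_reps) + rng.normal(scale=1.0, size=n_geno * n_reps)
        geno_id = np.repeat(np.arange(n_geno), n_reps)

        # Stage 1: adjusted means per genotype (here a simple mean over replicates).
        adj_means = np.bincount(geno_id, weights=pheno) / np.bincount(geno_id)

        # Stage 2: ridge regression of adjusted means on markers (RR-BLUP-like shrinkage).
        train = np.arange(n_geno) < 80                   # simple holdout split
        model = Ridge(alpha=50.0).fit(M[train], adj_means[train])
        gebv = model.predict(M[~train])                  # genomic breeding value predictions

        print("predictive correlation on holdout:",
              round(float(np.corrcoef(gebv, M[~train] @ true_effects)[0, 1]), 3))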

  3. Forecasting long memory series subject to structural change: A two-stage approach

    DEFF Research Database (Denmark)

    Papailias, Fotis; Dias, Gustavo Fruet

    2015-01-01

A two-stage forecasting approach for long memory time series is introduced. In the first step, we estimate the fractional exponent and, by applying the fractional differencing operator, obtain the underlying weakly dependent series. In the second step, we produce multi-step-ahead forecasts for the weakly dependent series and obtain their long memory counterparts by applying the fractional cumulation operator. The methodology applies to both stationary and nonstationary cases. Simulations and an application to seven time series provide evidence that the new methodology is more robust to structural...
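
    A minimal sketch of the two steps, under the simplifying assumptions that the memory parameter d is already known (the paper estimates it in the first step) and that a least-squares AR(1) stands in for the short-memory forecaster:

        import numpy as np

        def frac_weights(d, n):
            """Truncated weights of the fractional filter (1 - L)**d."""
            w = np.ones(n)
            for k in range(1, n):
                w[k] = w[k - 1] * (k - 1 - d) / k
            return w

        def apply_filter(x, w):
            """y_t = sum_k w_k * x_{t-k}, assuming zeros before the sample start."""
            return np.array([np.dot(w[:t + 1], x[t::-1]) for t in range(len(x))])

        rng = np.random.default_rng(3)
        d, n, h = 0.4, 400, 10                   # d assumed known here; h = forecast horizon
        y = apply_filter(rng.normal(size=n), frac_weights(-d, n))   # simulate a long-memory series

        # Step 1: fractional differencing gives the weakly dependent series z.
        z = apply_filter(y, frac_weights(d, n))

        # Step 2: forecast z with an AR(1) fitted by least squares, then cumulate back.
        phi = np.dot(z[:-1], z[1:]) / np.dot(z[:-1], z[:-1])
        z_fc = [z[-1]]
        for _ in range(h):
            z_fc.append(phi * z_fc[-1])
        z_ext = np.concatenate([z, z_fc[1:]])
        y_fc = apply_filter(z_ext, frac_weights(-d, n + h))[-h:]    # long-memory forecasts
        print("h-step forecasts:", np.round(y_fc, 3))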

  4. An Efficient Robust Solution to the Two-Stage Stochastic Unit Commitment Problem

    DEFF Research Database (Denmark)

    Blanco, Ignacio; Morales González, Juan Miguel

    2017-01-01

This paper proposes a reformulation of the scenario-based two-stage unit commitment problem under uncertainty that allows finding unit-commitment plans that perform reasonably well both in expectation and for the worst-case realization of the uncertainties. The proposed reformulation is based on part...

  5. Two-stage autotransplantation of human submandibular gland: a novel approach to treat postradiogenic xerostomia.

    Science.gov (United States)

    Hagen, Rudolf; Scheich, Matthias; Kleinsasser, Norbert; Burghartz, Marc

    2016-08-01

Xerostomia is a persistent side effect of radiotherapy (RT), which severely reduces the quality of life of the patients affected. Besides drug treatment and new irradiation strategies, surgical procedures aim at tissue protection of the submandibular gland. Using a new surgical approach, the submandibular gland was autotransplanted in 6 patients to the patient's forearm for the period of RT and reimplanted into the floor of the mouth 2-3 months after completion of RT. Saxon's test was performed at different time points to evaluate the patients' saliva production. Furthermore, patients had to answer the EORTC QLQ-HN35 questionnaire and a visual analog scale. Following this two-stage autotransplantation, xerostomia in the patients was markedly reduced due to improved saliva production of the reimplanted gland. Whether this promising novel approach is a reliable treatment option for RT patients in general should be evaluated in further studies.

  6. Robust Frequency-Domain Constrained Feedback Design via a Two-Stage Heuristic Approach.

    Science.gov (United States)

    Li, Xianwei; Gao, Huijun

    2015-10-01

    Based on a two-stage heuristic method, this paper is concerned with the design of robust feedback controllers with restricted frequency-domain specifications (RFDSs) for uncertain linear discrete-time systems. Polytopic uncertainties are assumed to enter all the system matrices, while RFDSs are motivated by the fact that practical design specifications are often described in restricted finite frequency ranges. Dilated multipliers are first introduced to relax the generalized Kalman-Yakubovich-Popov lemma for output feedback controller synthesis and robust performance analysis. Then a two-stage approach to output feedback controller synthesis is proposed: at the first stage, a robust full-information (FI) controller is designed, which is used to construct a required output feedback controller at the second stage. To improve the solvability of the synthesis method, heuristic iterative algorithms are further formulated for exploring the feedback gain and optimizing the initial FI controller at the individual stage. The effectiveness of the proposed design method is finally demonstrated by the application to active control of suspension systems.

  7. Mediastinal Bronchogenic Cyst With Acute Cardiac Dysfunction: Two-Stage Surgical Approach.

    Science.gov (United States)

    Smail, Hassiba; Baste, Jean Marc; Melki, Jean; Peillon, Christophe

    2015-10-01

    We describe a two-stage surgical approach in a patient with cardiac dysfunction and hemodynamic compromise resulting from a massive and compressive mediastinal bronchogenic cyst. To drain this cyst, video-assisted mediastinoscopy was performed as an emergency procedure, which immediately improved the patient's cardiac function. Five days later and under video thoracoscopy, resection of the cyst margins was impossible because the cyst was tightly adherent to the left atrium. We performed deroofing of this cyst through a right thoracotomy. The patient had an uncomplicated postoperative recovery, and no recurrence was observed at the long-term follow-up visit. Copyright © 2015 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  8. A cause and effect two-stage BSC-DEA method for measuring the relative efficiency of organizations

    Directory of Open Access Journals (Sweden)

    Seyed Esmaeel Najafi

    2011-01-01

Full Text Available This paper presents an integration of the balanced scorecard (BSC) with two-stage data envelopment analysis (DEA). The model proposed in this paper uses different financial and non-financial perspectives to evaluate the performance of decision making units in different BSC stages. At each stage, a two-stage DEA method is implemented to measure the relative efficiency of decision making units and the results are monitored using the cause and effect relationships. An empirical study for a banking sector is also performed using the method developed in this paper and the results are briefly analyzed.

  9. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts

    OpenAIRE

    Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey

    2009-01-01

Chia-Chang Chien1, Shu-Fen Huang1,2,3,4, For-Wey Lung1,2,3,4. 1Department of Psychiatry, Kaohsiung Armed Forces General Hospital, Kaohsiung, Taiwan; 2Graduate Institute of Behavioral Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan; 3Department of Psychiatry, National Defense Medical Center, Taipei, Taiwan; 4Calo Psychiatric Center, Pingtung County, Taiwan. Objective: The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of militar...

  10. A two-stage approach to the depot shunting driver assignment problem with workload balance considerations.

    Science.gov (United States)

    Wang, Jiaxi; Gronalt, Manfred; Sun, Yan

    2017-01-01

Due to its environmentally sustainable and energy-saving characteristics, railway transportation nowadays plays a fundamental role in delivering passengers and goods. Having emerged in the area of transportation planning, the crew (workforce) sizing problem and the crew scheduling problem have attracted great attention from the railway industry and the scientific community. In this paper, we aim to solve the two problems by proposing a novel two-stage optimization approach in the context of the electric multiple units (EMU) depot shunting driver assignment problem. Given a predefined depot shunting schedule, the first stage of the approach focuses on determining an optimal size of shunting drivers. The second stage is formulated as a bi-objective optimization model in which we comprehensively consider the objectives of minimizing the total walking distance and maximizing the workload balance. We then combine the normalized normal constraint method with a modified Pareto filter algorithm to obtain Pareto solutions for the bi-objective optimization problem. Furthermore, we conduct a series of numerical experiments to demonstrate the proposed approach. Based on the computational results, the regression analysis yields a driver size predictor and the sensitivity analysis gives some interesting insights that are useful for decision makers.

  11. A two-stage approach to the depot shunting driver assignment problem with workload balance considerations.

    Directory of Open Access Journals (Sweden)

    Jiaxi Wang

Full Text Available Due to its environmentally sustainable and energy-saving characteristics, railway transportation nowadays plays a fundamental role in delivering passengers and goods. Having emerged in the area of transportation planning, the crew (workforce) sizing problem and the crew scheduling problem have attracted great attention from the railway industry and the scientific community. In this paper, we aim to solve the two problems by proposing a novel two-stage optimization approach in the context of the electric multiple units (EMU) depot shunting driver assignment problem. Given a predefined depot shunting schedule, the first stage of the approach focuses on determining an optimal size of shunting drivers. The second stage is formulated as a bi-objective optimization model in which we comprehensively consider the objectives of minimizing the total walking distance and maximizing the workload balance. We then combine the normalized normal constraint method with a modified Pareto filter algorithm to obtain Pareto solutions for the bi-objective optimization problem. Furthermore, we conduct a series of numerical experiments to demonstrate the proposed approach. Based on the computational results, the regression analysis yields a driver size predictor and the sensitivity analysis gives some interesting insights that are useful for decision makers.

  12. A New Two-Stage Approach to Short Term Electrical Load Forecasting

    Directory of Open Access Journals (Sweden)

    Dragan Tasić

    2013-04-01

Full Text Available In the deregulated energy market, the accuracy of load forecasting has a significant effect on the planning and operational decision making of utility companies. Electric load is a random non-stationary process influenced by a number of factors which make it difficult to model. To achieve better forecasting accuracy, a wide variety of models have been proposed. These models are based on different mathematical methods and offer different features. This paper presents a new two-stage approach for short-term electrical load forecasting based on least-squares support vector machines. With the aim of improving forecasting accuracy, one more feature was added to the model feature set: the next-day average load demand. As this feature is unknown one day ahead, in the first stage the next-day average load demand is forecast and then used in the model in the second stage for next-day hourly load forecasting. The effectiveness of the presented model is shown on real data from the ISO New England electricity market. The obtained results confirm the validity and advantage of the proposed approach.
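
    The two-stage structure can be sketched as follows, with scikit-learn's SVR standing in for least-squares support vector machines and a synthetic load profile replacing the ISO New England data; the feature choices and hyperparameters are assumptions of the sketch, not the paper's model. The first model predicts the next day's average load, and that prediction is appended to the hourly feature set of the second model.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(5)

        # Synthetic hourly load for 60 days: weekly-varying daily average + daily shape + noise.
        days, hours = 60, np.arange(24)
        daily_avg = 100 + 10 * np.sin(2 * np.pi * np.arange(days) / 7) + rng.normal(0, 2, days)
        load = (daily_avg[:, None] + 20 * np.sin(2 * np.pi * (hours - 6) / 24)[None, :]
                + rng.normal(0, 3, (days, 24)))

        # Stage 1: forecast the next-day average load from the previous 3 daily averages.
        X1 = np.column_stack([daily_avg[i:days - 3 + i] for i in range(3)])
        y1 = daily_avg[3:]
        stage1 = SVR(kernel="rbf", C=100.0, gamma="scale").fit(X1[:-1], y1[:-1])
        avg_hat = stage1.predict(X1[-1:])[0]             # predicted average load for the last day

        # Stage 2: hourly model uses hour-of-day, previous-day load and the day's average load
        # (actual averages in training, the stage-1 forecast at prediction time).
        t_train = np.arange(4, days - 1)                 # target days for training; last day held out
        X2 = np.column_stack([np.tile(hours, len(t_train)),
                              load[t_train - 1].ravel(),
                              np.repeat(daily_avg[t_train], 24)])
        y2 = load[t_train].ravel()
        stage2 = SVR(kernel="rbf", C=100.0, gamma="scale").fit(X2, y2)

        X_test = np.column_stack([hours, load[-2], np.full(24, avg_hat)])
        hourly_fc = stage2.predict(X_test)
        print("mean abs error on last day:", round(float(np.mean(np.abs(hourly_fc - load[-1]))), 2))

    In practice one would standardize the features and tune the kernel parameters; the point here is only the hand-off of the stage-1 forecast into the stage-2 feature set.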

  13. Optimization of Removal Efficiency and Minimum Contact Time for Cadmium and Zinc Removal onto Iron-modified Zeolite in a Two-stage Batch Sorption Reactor

    Directory of Open Access Journals (Sweden)

    M. Ugrina

    2018-01-01

Full Text Available In highly congested industrial sites where significant volumes of effluents have to be treated in the minimum contact time, the application of a multi-stage batch reactor is suggested. To achieve a better balance between capacity utilization and cost efficiency in design optimization, a two-stage batch reactor is usually the optimal solution. Thus, in this paper, a two-stage batch sorption design approach was applied to experimental data on cadmium and zinc uptake onto iron-modified zeolite. The optimization approach involves the application of Vermeulen's approximation model and a mass balance equation to the kinetic data. A design analysis method was developed to optimize the removal efficiency and the minimum total contact time by combining the times required in the two stages, in order to achieve the maximum percentage of cadmium and zinc removal using a fixed mass of zeolite. The benefits and limitations of the two-stage design approach have been investigated and discussed.

  14. TWO-STAGE CHARACTER CLASSIFICATION : A COMBINED APPROACH OF CLUSTERING AND SUPPORT VECTOR CLASSIFIERS

    NARCIS (Netherlands)

    Vuurpijl, L.; Schomaker, L.

    2000-01-01

    This paper describes a two-stage classification method for (1) classification of isolated characters and (2) verification of the classification result. Character prototypes are generated using hierarchical clustering. For those prototypes known to sometimes produce wrong classification results, a

  15. Efficiency and productivity (TFP) of the Turkish electricity distribution companies: An application of two-stage (DEA and Tobit) analysis

    International Nuclear Information System (INIS)

    Çelen, Aydın

    2013-01-01

In this study, we analyze the efficiency performances of 21 Turkish electricity distribution companies during the period of 2002–2009. For this aim, we employ a two-stage analysis in order to take into account the business environment variables which are beyond the control of distribution companies. We determine the efficiency performances of the electricity distribution companies with the help of DEA in the first stage. Then, in the second stage, using these calculated efficiency scores as the dependent variable, we utilize a Tobit model to determine the business environment variables which may explain the efficiency scores. According to the results, the customer density of the region and private ownership affect the efficiencies positively. Thus, the best strategy to improve efficiency in the market is privatizing the public distribution companies. - Highlights: • We analyze the efficiencies of 21 Turkish electricity distribution companies. • A two-stage analysis is employed to take into account environmental variables. • We first calculate the efficiencies of the companies with DEA; then a Tobit model is used to determine the effects of the variables. • Customer density and ownership type affect the efficiencies positively. • Privatization is a good strategy to improve efficiencies

  16. Two-stage laparoscopic approaches for high anorectal malformation: transumbilical colostomy and anorectoplasty.

    Science.gov (United States)

    Yang, Li; Tang, Shao-Tao; Li, Shuai; Aubdoollah, T H; Cao, Guo-Qing; Lei, Hai-Yan; Wang, Xin-Xing

    2014-11-01

Trans-umbilical colostomy (TUC) has previously been created in patients with Hirschsprung's disease and intermediate anorectal malformation (ARM), but not in patients with high-ARM. The purposes of this study were to assess the feasibility, safety, complications and cosmetic results of TUC in a divided fashion, with stoma closure and laparoscopic-assisted anorectoplasty (LAARP) subsequently completed simultaneously by using the colostomy site for a laparoscopic port in high-ARM patients. Twenty male patients with high-ARMs were chosen for this two-stage procedure. The first stage consisted of creating the TUC as a double-barreled colostomy with a high chimney at the umbilicus, with the loop divided at the same time, in such a way that the two diverting ends were located at the umbilical incision with the distal end half closed and slightly higher than the proximal end. In the second stage, 3 to 7 months later, the stoma was closed through a peristomal skin incision followed by end-to-end anastomosis, and LAARP was simultaneously performed by placing a laparoscopic port at the umbilicus, which was previously the colostomy site. Umbilical wound closure was performed in a semi-opened fashion to create a deep umbilicus. TUC and LAARP were successfully performed in 20 patients. Four cases with bladder neck fistulas and 16 cases with prostatic urethra fistulas were found. Postoperative complications were rectal mucosal prolapse in three cases, anal stricture in two cases and wound dehiscence in one case. Neither umbilical ring narrowing, parastomal hernia nor obstructive symptoms were observed. Neither umbilical nor perineal wound infection was observed. Stoma care was easily carried out by attaching a stoma bag. Healing of the umbilical wounds after the second stage was excellent. Early functional stooling outcomes were satisfactory. The umbilicus may be an alternative stoma site for double-barreled colostomy in high-ARM patients. The two-stage laparoscopic

  17. HOUSEHOLD FOOD DEMAND IN INDONESIA: A TWO-STAGE BUDGETING APPROACH

    Directory of Open Access Journals (Sweden)

    Agus Widarjono

    2016-05-01

Full Text Available A two-stage budgeting approach was applied to analyze the food demand in urban areas separated by geographical areas and classified by income groups. The demographically augmented Quadratic Almost Ideal Demand System (QUAIDS) was employed to estimate the demand elasticity. Data from the National Social and Economic Survey of Households (SUSENAS) in 2011 were used. The demand system is a censored model because the data contains zero expenditures and is estimated by employing the consistent two-step estimation procedure to solve biased estimation. The results show that price and income elasticities become less elastic from poor households to rich households. Demand by urban households in Java is more responsive to price but less responsive to income than urban households outside of Java. Simulation policies indicate that an increase in food prices would have more adverse impacts than a decrease in income levels. Poor families would suffer more than rich families from rising food prices and/or decreasing incomes. More importantly, urban households on Java are more vulnerable to an economic crisis, and would respond by reducing their food consumption. Economic policies to stabilize food prices are better than income policies, such as the cash transfer, to maintain the well-being of the population in Indonesia.

  18. A Two-Stage DEA to Analyze the Effect of Entrance Deregulation on Iranian Insurers: A Robust Approach

    Directory of Open Access Journals (Sweden)

    Seyed Gholamreza Jalali Naini

    2012-01-01

Full Text Available We use a two-stage data envelopment analysis (DEA) model to analyze the effects of entrance deregulation on the efficiency in the Iranian insurance market. In the first stage, we propose a robust optimization approach in order to overcome the sensitivity of DEA results to any uncertainty in the output parameters. Hence, the efficiency of each ongoing insurer is estimated using our proposed robust DEA model. The insurers are then ranked based on their relative efficiency scores for an eight-year period from 2003 to 2010. In the second stage, a comprehensive statistical analysis using generalized estimating equations (GEE) is conducted to analyze some other factors which could possibly affect the efficiency scores. The first results from the DEA model indicate a decline in efficiency over the entrance deregulation period, while further statistical analysis confirms that the solvency ignorance which is a widespread paradigm among state owned companies is one of the main drivers of efficiency in the Iranian insurance market.

  19. Evidence that viral RNAs have evolved for efficient, two-stage packaging.

    Science.gov (United States)

    Borodavka, Alexander; Tuma, Roman; Stockley, Peter G

    2012-09-25

    Genome packaging is an essential step in virus replication and a potential drug target. Single-stranded RNA viruses have been thought to encapsidate their genomes by gradual co-assembly with capsid subunits. In contrast, using a single molecule fluorescence assay to monitor RNA conformation and virus assembly in real time, with two viruses from differing structural families, we have discovered that packaging is a two-stage process. Initially, the genomic RNAs undergo rapid and dramatic (approximately 20-30%) collapse of their solution conformations upon addition of cognate coat proteins. The collapse occurs with a substoichiometric ratio of coat protein subunits and is followed by a gradual increase in particle size, consistent with the recruitment of additional subunits to complete a growing capsid. Equivalently sized nonviral RNAs, including high copy potential in vivo competitor mRNAs, do not collapse. They do support particle assembly, however, but yield many aberrant structures in contrast to viral RNAs that make only capsids of the correct size. The collapse is specific to viral RNA fragments, implying that it depends on a series of specific RNA-protein interactions. For bacteriophage MS2, we have shown that collapse is driven by subsequent protein-protein interactions, consistent with the RNA-protein contacts occurring in defined spatial locations. Conformational collapse appears to be a distinct feature of viral RNA that has evolved to facilitate assembly. Aspects of this process mimic those seen in ribosome assembly.

  20. Two-Stage Power Factor Corrected Power Supplies: The Low Component-Stress Approach

    DEFF Research Database (Denmark)

    Petersen, Lars; Andersen, Michael Andreas E.

    2002-01-01

The discussion concerning the use of single-stage versus two-stage PFC solutions has been going on for the last decade and it continues. The purpose of this paper is to direct the focus back on how the power is processed and not so much on the number of stages or the amount of power processed...

  1. A Two-Stage Multi-Agent Based Assessment Approach to Enhance Students' Learning Motivation through Negotiated Skills Assessment

    Science.gov (United States)

    Chadli, Abdelhafid; Bendella, Fatima; Tranvouez, Erwan

    2015-01-01

    In this paper we present an Agent-based evaluation approach in a context of Multi-agent simulation learning systems. Our evaluation model is based on a two stage assessment approach: (1) a Distributed skill evaluation combining agents and fuzzy sets theory; and (2) a Negotiation based evaluation of students' performance during a training…

  2. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    Science.gov (United States)

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-12-15

Storage is important for flood mitigation and non-point source pollution control. However, seeking a cost-effective design scheme for storage tanks is very complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains the preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible, and the optimization is fast when based on the preliminary scheme. The optimized scheme is better than the preliminary scheme for reducing runoff and pollutant loads under a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.
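
    The AHP step in the analytical module can be illustrated on its own, independently of SWMM: given a pairwise comparison matrix (the 3x3 matrix below is a made-up example, not the paper's two-indicator case), the priority weights are the normalized principal eigenvector and the consistency ratio checks whether the judgements are acceptably coherent.

        import numpy as np

        # Hypothetical pairwise comparison matrix for three criteria (Saaty's 1-9 scale).
        A = np.array([[1.0,   3.0, 5.0],
                      [1/3.0, 1.0, 2.0],
                      [1/5.0, 1/2.0, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()                      # AHP priority vector

        n = A.shape[0]
        lam_max = eigvals[k].real
        ci = (lam_max - n) / (n - 1)                  # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
        print("weights:", np.round(weights, 3))
        print("consistency ratio:", round(ci / ri, 3))   # < 0.10 is conventionally acceptable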

  3. Optimizing the Steel Plate Storage Yard Crane Scheduling Problem Using a Two Stage Planning/Scheduling Approach

    DEFF Research Database (Denmark)

    Hansen, Anders Dohn; Clausen, Jens

    This paper presents the Steel Plate Storage Yard Crane Scheduling Problem. The task is to generate a schedule for two gantry cranes sharing tracks. The schedule must comply with a number of constraints and at the same time be cost efficient. We propose some ideas for a two stage planning...

  4. Two-stage discrete-continuous multi-objective load optimization: An industrial consumer utility approach to demand response

    International Nuclear Information System (INIS)

    Abdulaal, Ahmed; Moghaddass, Ramin; Asfour, Shihab

    2017-01-01

Highlights: • Two-stage model links discrete optimization to real-time system dynamics operation. • The solutions obtained are non-dominated Pareto optimal solutions. • Computationally efficient GA solver through customized chromosome coding. • Modest to considerable savings are achieved depending on the consumer’s preference. - Abstract: In the wake of today’s highly dynamic and competitive energy markets, optimal dispatching of energy sources requires effective demand responsiveness. Suppliers have adopted a dynamic pricing strategy in efforts to control the downstream demand. This method however requires consumer awareness, flexibility, and timely responsiveness. While residential activities are more flexible and schedulable, larger commercial consumers remain an obstacle due to the impacts on industrial performance. This paper combines methods from quadratic, stochastic, and evolutionary programming with multi-objective optimization and continuous simulation, to propose a two-stage discrete-continuous multi-objective load optimization (DiCoMoLoOp) autonomous approach for industrial consumer demand response (DR). Stage 1 defines discrete-event load shifting targets. Accordingly, controllable loads are continuously optimized in stage 2 while considering the consumer’s utility. Utility functions, which measure the loads’ time value to the consumer, are derived and weights are assigned through an analytical hierarchy process (AHP). The method is demonstrated for an industrial building model using real data. The proposed method integrates with the building energy management system and solves in real time with autonomous and instantaneous load shifting in the hour-ahead energy price (HAP) market. The simulation shows the occasional existence of multiple load management options on the Pareto frontier. Finally, the computed savings, based on the simulation analysis with real consumption, climate, and price data, ranged from modest to considerable amounts.

  5. A two-stage approach for multi-objective decision making with applications to system reliability optimization

    International Nuclear Information System (INIS)

    Li Zhaojun; Liao Haitao; Coit, David W.

    2009-01-01

    This paper proposes a two-stage approach for solving multi-objective system reliability optimization problems. In this approach, a Pareto optimal solution set is initially identified at the first stage by applying a multiple objective evolutionary algorithm (MOEA). Quite often there are a large number of Pareto optimal solutions, and it is difficult, if not impossible, to effectively choose the representative solutions for the overall problem. To overcome this challenge, an integrated multiple objective selection optimization (MOSO) method is utilized at the second stage. Specifically, a self-organizing map (SOM), with the capability of preserving the topology of the data, is applied first to classify those Pareto optimal solutions into several clusters with similar properties. Then, within each cluster, the data envelopment analysis (DEA) is performed, by comparing the relative efficiency of those solutions, to determine the final representative solutions for the overall problem. Through this sequential solution identification and pruning process, the final recommended solutions to the multi-objective system reliability optimization problem can be easily determined in a more systematic and meaningful way.

  6. Plant specification of a generic human-error data through a two-stage Bayesian approach

    International Nuclear Information System (INIS)

    Heising, C.D.; Patterson, E.I.

    1984-01-01

    Expert judgement concerning human performance in nuclear power plants is quantitatively coupled with actuarial data on such performance in order to derive plant-specific human-error rate probability distributions. The coupling procedure consists of a two-stage application of Bayes' theorem to information which is grouped by type. The first information type contains expert judgement concerning human performance at nuclear power plants in general. Data collected on human performance at a group of similar plants forms the second information type. The third information type consists of data on human performance in a specific plant which has the same characteristics as the group members. The first and second information types are coupled in the first application of Bayes' theorem to derive a probability distribution for population performance. This distribution is then combined with the third information type in a second application of Bayes' theorem to determine a plant-specific human-error rate probability distribution. The two stage Bayesian procedure thus provides a means to quantitatively couple sparse data with expert judgement in order to obtain a human performance probability distribution based upon available information. Example calculations for a group of like reactors are also given. (author)
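
    A minimal sketch of the two-stage updating idea follows, using a conjugate Beta-Binomial model as a stand-in for the paper's distributions; the prior parameters and error counts below are assumed for illustration only.

```python
from scipy import stats

# Stage 0: expert judgement encoded as a Beta prior on the error rate (assumed values).
a, b = 2.0, 200.0

# Stage 1: update with pooled data from a group of similar plants
# (e.g., 15 errors observed in 3000 demands) to get the population distribution.
group_errors, group_demands = 15, 3000
a1, b1 = a + group_errors, b + (group_demands - group_errors)

# Stage 2: update the population distribution with plant-specific data
# (e.g., 1 error in 400 demands) to get the plant-specific distribution.
plant_errors, plant_demands = 1, 400
a2, b2 = a1 + plant_errors, b1 + (plant_demands - plant_errors)

posterior = stats.beta(a2, b2)
print(posterior.mean(), posterior.interval(0.9))  # plant-specific rate and 90% interval
```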

  7. Two-Stage Classification Approach for Human Detection in Camera Video in Bulk Ports

    Directory of Open Access Journals (Sweden)

    Mi Chao

    2015-09-01

    Full Text Available With the development of automation in ports, video surveillance systems with automated human detection have begun to be applied in open-air handling operation areas for safety and security. The accuracy of traditional camera-based human detection is not high enough to meet the requirements of operation surveillance. One key reason is that Histograms of Oriented Gradients (HOG) features of the human body differ greatly between front-and-back standing (F&B) and side standing (Side) postures. Consequently, when HOG features are extracted directly from samples of different postures, the trained classifier retains only a few features that genuinely contribute to classification, which is insufficient to support effective classification. This paper proposes a two-stage classification method to improve the accuracy of human detection. In the first stage, a preprocessing classification divides images into possible F&B human bodies and all other images; the latter are then passed to the second-stage classification, which distinguishes side-standing humans from non-humans. The experimental results in Tianjin port show that the two-stage classifier clearly improves the classification accuracy of human detection.
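
    A rough sketch of the two-stage idea using scikit-image HOG features and two linear SVMs is shown below; the image patches, labels and class names are placeholders, not the Tianjin port data or the authors' classifier.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(images):
    return np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

# Placeholder 128x64 grayscale patches and labels; real data would come from
# the surveillance cameras ('fb' = front/back person, 'side' = side person,
# 'none' = background).
rng = np.random.default_rng(0)
patches = rng.random((60, 128, 64))
labels = rng.choice(["fb", "side", "none"], size=60)

X = hog_features(patches)

# Stage 1: is the patch a possible front/back-standing person?
stage1 = LinearSVC().fit(X, labels == "fb")

# Stage 2: among the remaining patches, separate side-standing persons from background.
mask = labels != "fb"
stage2 = LinearSVC().fit(X[mask], labels[mask] == "side")

def detect(patch):
    f = hog_features([patch])
    if stage1.predict(f)[0]:
        return "front/back person"
    return "side person" if stage2.predict(f)[0] else "not a person"

print(detect(patches[0]))
```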

  8. A two-stage approach for improved prediction of residue contact maps

    Directory of Open Access Journals (Sweden)

    Pollastri Gianluca

    2006-03-01

    Full Text Available Abstract Background Protein topology representations such as residue contact maps are an important intermediate step towards ab initio prediction of protein structure. Although improvements have occurred over the last years, the problem of accurately predicting residue contact maps from primary sequences is still largely unsolved. Among the reasons for this are the unbalanced nature of the problem (with far fewer examples of contacts than non-contacts), the formidable challenge of capturing long-range interactions in the maps, and the intrinsic difficulty of mapping one-dimensional input sequences into two-dimensional output maps. In order to alleviate these problems and achieve improved contact map predictions, in this paper we split the task into two stages: the prediction of a map's principal eigenvector (PE) from the primary sequence, and the reconstruction of the contact map from the PE and primary sequence. Predicting the PE from the primary sequence consists in mapping a vector into a vector. This task is less complex than mapping vectors directly into two-dimensional matrices, since the size of the problem is drastically reduced and so is the scale length of interactions that need to be learned. Results We develop architectures composed of ensembles of two-layered bidirectional recurrent neural networks to classify the components of the PE into 2, 3 and 4 classes from protein primary sequence, predicted secondary structure, and hydrophobicity interaction scales. Our predictor, tested on a non-redundant set of 2171 proteins, achieves classification performances of up to 72.6%, 16% above a baseline statistical predictor. We design a system for the prediction of contact maps from the predicted PE. Our results show that predicting maps through the PE yields sizeable gains, especially for long-range contacts, which are particularly critical for accurate protein 3D reconstruction. The final predictor's accuracy on a non-redundant set of 327 targets is 35
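
    The first stage targets the map's principal eigenvector (PE). The short sketch below computes the PE of a toy symmetric contact map with NumPy and shows a crude rank-one reconstruction; the paper's recurrent-network predictor and reconstruction system are far more elaborate, and the toy map is assumed.

```python
import numpy as np

# Toy symmetric contact map for a 6-residue fragment (1 = contact).
C = np.array([[1, 1, 0, 0, 0, 1],
              [1, 1, 1, 0, 0, 0],
              [0, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 1, 0],
              [0, 0, 0, 1, 1, 1],
              [1, 0, 0, 0, 1, 1]], dtype=float)

# Stage 1 target: the principal eigenvector (PE) of the map.
eigvals, eigvecs = np.linalg.eigh(C)        # symmetric, so eigh is appropriate
pe = eigvecs[:, np.argmax(eigvals)]

# Stage 2 (very crude stand-in): threshold the rank-one outer product of the
# predicted PE to reconstruct candidate contacts.
reconstruction = (np.max(eigvals) * np.outer(pe, pe)) > 0.5
print(pe)
print(reconstruction.astype(int))
```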

  9. A two-stage rule-constrained seedless region growing approach for mandibular body segmentation in MRI.

    Science.gov (United States)

    Ji, Dong Xu; Foong, Kelvin Weng Chiong; Ong, Sim Heng

    2013-09-01

    Extraction of the mandible from 3D volumetric images is frequently required for surgical planning and evaluation. Image segmentation from MRI is more complex than from CT due to the lower signal-to-noise ratio of bone. An automated method to extract the human mandible body shape from magnetic resonance (MR) images of the head was developed and tested. Anonymized MR image data sets of the head from 12 subjects were subjected to a two-stage rule-constrained region growing approach to derive the shape of the body of the human mandible. An initial thresholding technique was applied followed by a 3D seedless region growing algorithm to detect a large portion of the trabecular bone (TB) regions of the mandible. This stage is followed by a rule-constrained 2D segmentation of each MR axial slice to merge the remaining portions of the TB regions with lower intensity levels. The two-stage approach was replicated to detect the cortical bone (CB) regions of the mandibular body. The TB and CB regions detected from the preceding steps were merged and subjected to a series of morphological processes for completion of the mandibular body region definition. Comparisons of the accuracy of segmentation between the two-stage approach, conventional region growing method, 3D level set method, and manual segmentation were made with Jaccard index, Dice index, and mean surface distance (MSD). The mean accuracy of the proposed method is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The mean accuracy of CRG is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The mean accuracy of the 3D level set method is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The proposed method shows improvement in accuracy over CRG and 3D level set. Accurate segmentation of the body of the human mandible from MR images is achieved with the

  10. An inexact fuzzy two-stage stochastic model for quantifying the efficiency of nonpoint source effluent trading under uncertainty

    International Nuclear Information System (INIS)

    Luo, B.; Maqsood, I.; Huang, G.H.; Yin, Y.Y.; Han, D.J.

    2005-01-01

    Reduction of nonpoint source (NPS) pollution from agricultural lands is a major concern in most countries. One method to reduce NPS pollution is through land retirement programs. This method, however, may result in enormous economic costs especially when large sums of croplands need to be retired. To reduce the cost, effluent trading can be employed to couple with land retirement programs. However, the trading efforts can also become inefficient due to various uncertainties existing in stochastic, interval, and fuzzy formats in agricultural systems. Thus, it is desired to develop improved methods to effectively quantify the efficiency of potential trading efforts by considering those uncertainties. In this respect, this paper presents an inexact fuzzy two-stage stochastic programming model to tackle such problems. The proposed model can facilitate decision-making to implement trading efforts for agricultural NPS pollution reduction through land retirement programs. The applicability of the model is demonstrated through a hypothetical effluent trading program within a subcatchment of the Lake Tai Basin in China. The study results indicate that the efficiency of the trading program is significantly influenced by precipitation amount, agricultural activities, and level of discharge limits of pollutants. The results also show that the trading program will be more effective for low precipitation years and with stricter discharge limits

  11. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    Science.gov (United States)

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
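
    The derivative-estimation step shared by these two-stage approaches can be sketched with a simple local polynomial fit over a sliding window, in the spirit of GLLA; the simulated oscillator data and window settings below are assumed, and the second-stage mixed-effects fit is only indicated in a comment.

```python
import numpy as np

def local_derivatives(y, t, half_window=4, order=2):
    """Estimate x, dx/dt and d2x/dt2 at each interior time point by fitting a
    local polynomial of the given order over a sliding window (GLLA-style)."""
    est = []
    for i in range(half_window, len(y) - half_window):
        idx = slice(i - half_window, i + half_window + 1)
        tc = t[idx] - t[i]                       # centre the window time at 0
        coeffs = np.polyfit(tc, y[idx], order)   # highest power first
        p = np.poly1d(coeffs)
        est.append((t[i], p(0.0), p.deriv(1)(0.0), p.deriv(2)(0.0)))
    return np.array(est)

# Simulated noisy damped-oscillator observations (stage-1 input, assumed).
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 201)
y = np.exp(-0.2 * t) * np.cos(2 * np.pi * 0.5 * t) + 0.01 * rng.normal(size=t.size)

derivs = local_derivatives(y, t)
# Stage 2 would regress the estimated d2x/dt2 on x and dx/dt (with random
# effects across persons) to recover the oscillator's frequency and damping.
print(derivs[:3])
```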

  12. Treatment of tophaceous pseudogout with custom-fitted temporomandibular joint: a two-staged approach

    Directory of Open Access Journals (Sweden)

    Robert Pellecchia, DDS

    2015-12-01

    Full Text Available Tophaceous pseudogout, a variant of calcium pyrophosphate dihydrate deposition, is a relatively rare juxta-articular disease. It is a metabolic condition, in which patients develop pseudo-tumoral calcifications associated with peri-articular structures secondary to calcium pyrophosphate deposition into joints with fibrocartilage rather than hyaline cartilage. These lesions are reported in the knee, wrist, pubis, shoulder, and temporomandibular joint (TMJ) and induce a histiocytic foreign body giant cell reaction. We report a case of tophaceous pseudogout affecting the left TMJ with destruction of the condyle and glenoid and middle cranial fossa that was reconstructed with a TMJ Concepts (Ventura, CA) custom-fitted prosthesis in a 2-staged surgical approach using a silicone spacer. The surgical management using a patient-specific TMJ is a viable option when the fossa or condylar component has been compromised due to breakdown of bone secondary to a pathologic process. Our case describes and identifies the lesion and its rare occurrence in the temporomandibular region. The successful management of tophaceous pseudogout of the TMJ must include a thorough patient workup including the involvement of other joints as well as the modification of bone of the glenoid fossa and condylar relationship of the TMJ.

  13. Two-stage culture procedure using thidiazuron for efficient micropropagation of Stevia rebaudiana, an anti-diabetic medicinal herb.

    Science.gov (United States)

    Singh, Pallavi; Dwivedi, Padmanabh

    2014-08-01

    Stevia rebaudiana Bertoni, a member of the Asteraceae family, has the bio-active compounds stevioside and rebaudioside, which taste about 300 times sweeter than sucrose. It regulates blood sugar, prevents hypertension and tooth decay, and is used in the treatment of skin disorders; given its high medicinal value, there is a need to propagate the plant on a large scale. We have developed an efficient micropropagation protocol on half-strength Murashige and Skoog (MS) media, using a two-stage culture procedure. Varying concentrations of cytokinins, i.e., benzylaminopurine, kinetin and thidiazuron (TDZ), were supplemented in the nutrient media to observe their effects on shoot development. All the cytokinins promoted shoot formation; however, the best response was observed with TDZ (0.5 mg/l). The shoots from the selected induction medium were sub-cultured on the multiplication media. The media containing 0.01 mg/l TDZ produced the maximum number of shoots (11.00 ± 0.40), the longest shoots (7.17 ± 0.16) and the highest number of leaves (61.00 ± 1.29). Rooting was best on one-fourth strength MS media supplemented with indole-3-butyric acid (1.0 mg/l) and activated charcoal (50 mg/l), giving 11.00 ± 0.40 roots. The plantlets thus obtained were hardened and transferred to pots with a soil and sand mixture, where the survival rate was 80 % after 2 months. Quantitative analysis of stevioside content in leaves of the in vivo mother plant and in vitro plantlets was carried out by high performance liquid chromatography. A remarkable increase in stevioside content was noticed in the in vitro-raised plants as compared to the in vivo grown plants. The protocol reported here might be useful for genetic improvement and high stevioside production.

  14. CFD modeling of two-stage ignition in a rapid compression machine: Assessment of zero-dimensional approach

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Gaurav [Department of Mechanical Engineering, The University of Akron, Akron, OH 44325 (United States); Raju, Mandhapati P. [General Motor R and D Tech Center, Warren, MI 48090 (United States); Sung, Chih-Jen [Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269 (United States)

    2010-07-15

    In modeling rapid compression machine (RCM) experiments, a zero-dimensional approach is commonly used along with an associated heat loss model. The adequacy of such an approach has not been validated for hydrocarbon fuels. The existence of multi-dimensional effects inside an RCM due to the boundary layer, roll-up vortex, non-uniform heat release, and piston crevice could result in deviation from the zero-dimensional assumption, particularly for hydrocarbons exhibiting two-stage ignition and strong thermokinetic interactions. The objective of this investigation is to assess the adequacy of the zero-dimensional approach in modeling RCM experiments under conditions of two-stage ignition and negative temperature coefficient (NTC) response. Computational fluid dynamics simulations are conducted for n-heptane ignition in an RCM and the validity of the zero-dimensional approach is assessed through comparisons over the entire NTC region. Results show that the zero-dimensional model based on the approach of 'adiabatic volume expansion' performs very well in predicting the first-stage ignition delays, although quantitative discrepancy for the prediction of the total ignition delays and pressure rise in the first-stage ignition is noted even when the roll-up vortex is suppressed and a well-defined homogeneous core is retained within an RCM. Furthermore, the discrepancy is pressure dependent and decreases as compressed pressure is increased. Also, as the ignition response becomes single-stage at higher compressed temperatures, the discrepancy from the zero-dimensional simulations decreases. Despite some quantitative discrepancy, the zero-dimensional modeling approach is deemed satisfactory from the viewpoint of the ignition delay simulation. (author)

  15. A Two-Stage Penalized Logistic Regression Approach to Case-Control Genome-Wide Association Studies

    Directory of Open Access Journals (Sweden)

    Jingyuan Zhao

    2012-01-01

    Full Text Available We propose a two-stage penalized logistic regression approach to case-control genome-wide association studies. This approach consists of a screening stage and a selection stage. In the screening stage, main-effect and interaction-effect features are screened by using L1-penalized logistic likelihoods. In the selection stage, the retained features are ranked by the logistic likelihood with the smoothly clipped absolute deviation (SCAD) penalty (Fan and Li, 2001) and Jeffrey’s Prior penalty (Firth, 1993), a sequence of nested candidate models are formed, and the models are assessed by a family of extended Bayesian information criteria (J. Chen and Z. Chen, 2008). The proposed approach is applied to the analysis of the prostate cancer data of the Cancer Genetic Markers of Susceptibility (CGEMS) project in the National Cancer Institute, USA. Simulation studies are carried out to compare the approach with the pair-wise multiple testing approach (Marchini et al. 2005) and the LASSO-patternsearch algorithm (Shi et al. 2007).
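
    A rough sketch of the screen-then-select idea follows, using scikit-learn's L1-penalized logistic regression for screening and a plain BIC as a stand-in for the SCAD-penalized ranking and extended BIC of the paper; the simulated genotype data and all settings are assumed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 300, 500
X = rng.integers(0, 3, size=(n, p)).astype(float)      # toy SNP counts 0/1/2
logit = 0.8 * X[:, 3] - 0.9 * X[:, 17] + 0.6 * X[:, 42] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Stage 1: L1-penalized screening keeps features with non-zero coefficients.
screen = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
kept = np.flatnonzero(screen.coef_[0])

# Stage 2: rank retained features, grow nested models and score them with a
# BIC-style criterion (a simple stand-in for SCAD + extended BIC).
order = kept[np.argsort(-np.abs(screen.coef_[0][kept]))]
best = None
for k in range(1, len(order) + 1):
    cols = order[:k]
    m = LogisticRegression(penalty="l2", C=1e6, max_iter=1000).fit(X[:, cols], y)
    prob = m.predict_proba(X[:, cols])[:, 1]
    loglik = np.sum(y * np.log(prob) + (1 - y) * np.log(1 - prob))
    bic = -2 * loglik + k * np.log(n)
    if best is None or bic < best[0]:
        best = (bic, cols)

print("selected SNPs:", sorted(best[1]))
```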

  16. A two-stage approach for managing actuators redundancy and its application to fault tolerant flight control

    Directory of Open Access Journals (Sweden)

    Zhong Lunlong

    2015-04-01

    Full Text Available In safety-critical systems such as transportation aircraft, redundancy of actuators is introduced to improve fault tolerance. How to make the best use of the remaining actuators to allow the system to continue achieving a desired operation in the presence of some actuator failures is the main subject of this paper. Considering that many dynamical systems, including the flight dynamics of a transportation aircraft, can be expressed as an input affine nonlinear system, a new state representation is adopted here where the output dynamics are related to virtual inputs associated with the intended operation. This representation, as well as the distribution matrix associated with the effectiveness of the remaining operational actuators, allows us to define different levels of fault tolerant governability with respect to actuator failures. Then, a two-stage control approach is developed, leading first to the inversion of the output dynamics to get nominal values for the virtual inputs and then to the solution of a linear quadratic (LQ) problem to compute the solicitation of each operational actuator. The proposed approach is applied to the control of a transportation aircraft which performs a stabilized roll maneuver while a partial failure occurs. Two fault scenarios are considered and the resulting performance of the proposed approach is displayed and discussed.
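
    The second-stage LQ step can be sketched as a weighted minimum-norm allocation: distribute the virtual inputs over the remaining actuators through the effectiveness (distribution) matrix. The matrix, weights and demanded moments below are assumed for illustration, not taken from the paper.

```python
import numpy as np

def allocate(B, v, W=None):
    """Distribute the virtual input v over the actuators: minimize u' W u
    subject to B u = v (weighted minimum-norm solution)."""
    m = B.shape[1]
    W = np.eye(m) if W is None else np.asarray(W, dtype=float)
    Winv = np.linalg.inv(W)
    return Winv @ B.T @ np.linalg.solve(B @ Winv @ B.T, v)

# Assumed effectiveness matrix of four remaining control surfaces on the
# roll/pitch/yaw virtual inputs, and a roll-manoeuvre demand.
B = np.array([[1.0, -1.0, 0.2, 0.0],
              [0.3,  0.3, 1.0, 0.0],
              [0.0,  0.0, 0.1, 1.0]])
v = np.array([0.5, 0.0, 0.0])          # desired roll moment only

u = allocate(B, v, W=np.diag([1.0, 1.0, 2.0, 2.0]))  # penalize some actuators more
print(u, B @ u)                         # B @ u reproduces the virtual input v
```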

  17. Relative Efficiencies of a Three-Stage Versus a Two-Stage Sample Design For a New NLS Cohort Study. 22U-884-38.

    Science.gov (United States)

    Folsom, R. E.; Weber, J. H.

    Two sampling designs were compared for the planned 1978 national longitudinal survey of high school seniors with respect to statistical efficiency and cost. The 1972 survey used a stratified two-stage sample of high schools and seniors within schools. In order to minimize interviewer travel costs, an alternate sampling design was proposed,…

  18. Effect of the spectral broadening of the first Stokes component on the efficiency of a two-stage Raman converter

    International Nuclear Information System (INIS)

    Egorova, O N; Kurkov, Andrei S; Medvedkov, O I; Paramonov, Vladimir M; Dianov, Evgenii M

    2005-01-01

    A two-stage Raman fibre converter (1.089/1.273/1.533 μm) based on a P2O5-doped silica fibre is fabricated and studied. The spectral broadening of the first Stokes component is investigated. The Raman converter is simulated numerically. By using the experimental data, the method of Raman converter simulation is improved by taking into account the additional power loss of the first Stokes component. The results of calculations by the improved method are in good agreement with the experiment. It is shown that the additional power loss of the first Stokes component results in a change in the region of the optimal resonator length from 300-600 m to 600-800 m. (lasers)

  19. Two-stage approach for risk estimation of fetal trisomy 21 and other aneuploidies using computational intelligence systems.

    Science.gov (United States)

    Neocleous, A C; Syngelaki, A; Nicolaides, K H; Schizas, C N

    2018-04-01

    To estimate the risk of fetal trisomy 21 (T21) and other chromosomal abnormalities (OCA) at 11-13 weeks' gestation using computational intelligence classification methods. As a first step, a training dataset consisting of 72 054 euploid pregnancies, 295 cases of T21 and 305 cases of OCA was used to train an artificial neural network. Then, a two-stage approach was used for stratification of risk and diagnosis of cases of aneuploidy in the blind set. In Stage 1, using four markers, pregnancies in the blind set were classified into no risk and risk. No-risk pregnancies were not examined further, whereas the risk pregnancies were forwarded to Stage 2 for further examination. In Stage 2, using seven markers, pregnancies were classified into three types of risk, namely no risk, moderate risk and high risk. Of the 36 328 pregnancies unknown to the system (blind set), 17 512 euploid, two T21 and 18 OCA cases were classified as no risk in Stage 1. The remaining 18 796 cases were forwarded to Stage 2, of which 7895 euploid, two T21 and two OCA cases were classified as no risk, 10 464 euploid, 83 T21 and 61 OCA as moderate risk and 187 euploid, 50 T21 and 52 OCA as high risk. The sensitivity and the specificity for T21 in Stage 2 were 97.1% and 99.5%, respectively, and the false-positive rate from Stage 1 to Stage 2 was reduced from 51.4% to ∼1%, assuming that the cell-free DNA test could identify all euploid and aneuploid cases. We propose a method for early diagnosis of chromosomal abnormalities that ensures that most T21 cases are classified as high risk at any stage. At the same time, the number of euploid cases subjected to invasive or cell-free DNA examinations was minimized through a routine procedure offered in two stages. Our method is minimally invasive and of relatively low cost, highly effective at T21 identification and it performs better than do other existing statistical methods. Copyright © 2017 ISUOG. Published by John Wiley & Sons Ltd.

  20. An Integrated Simulation, Inference and Optimization Approach for Groundwater Remediation with Two-stage Health-Risk Assessment

    Directory of Open Access Journals (Sweden)

    Aili Yang

    2018-05-01

    Full Text Available In this study, an integrated simulation, inference and optimization approach with two-stage health risk assessment (i.e., ISIO-THRA) is developed for supporting groundwater remediation for a petroleum-contaminated site in western Canada. Both environmental standards and health risk are considered as the constraints in the ISIO-THRA model. The health risk includes two parts: (1) the health risk during the remediation process and (2) the health risk in the natural attenuation period after remediation. In the ISIO-THRA framework, the relationship between contaminant concentrations and time is expressed through first-order decay models. The results demonstrate that: (1) stricter environmental standards and health risk would require larger pumping rates for the same remediation duration; (2) higher health risk may occur during the remediation process; (3) for the same environmental standard and acceptable health-risk level, the remediation techniques that take the shortest time would be chosen. ISIO-THRA can help to systematically analyze the interactions among contaminant transport, remediation duration, and environmental and health concerns, and further provide useful supportive information for decision makers.

  1. Efficiency and hardware comparison of analog control-based and digital control-based 70 W two-stage power factor corrector and DC-DC converters

    DEFF Research Database (Denmark)

    Török, Lajos; Munk-Nielsen, Stig

    2011-01-01

    A comparison of an analog and a digital controller driven 70 W two-stage power factor corrector converter is presented. Both controllers are operated in average current-mode-control for the PFC and peak current control for the DC-DC converter. Digital controller design and converter modeling...... is described. Results show that digital control can compete with the analog one in efficiency, PFC and THD....

  2. Measuring the efficiency of Palestinian public hospitals during 2010-2015: an application of a two-stage DEA method.

    Science.gov (United States)

    Sultan, Wasim I M; Crispim, José

    2018-05-29

    While health needs and expenditure in the Occupied Palestinian Territories (OPT) are growing, the international donations are declining and the economic situation is worsening. The purpose of this paper is twofold, to evaluate the productive efficiency of public hospitals in West Bank and to study contextual factors contributing to efficiency differences. This study examined technical efficiency among 11 public hospitals in West Bank from 2010 through 2015 targeting a total of 66 observations. Nationally representative data were extracted from the official annual health reports. We applied input-oriented Data Envelopment Analysis (DEA) models to estimate efficiency scores. To elaborate further on performance, we used Tobit regression to identify contextual factors whose impact on inefficient performance is statistically significant. Despite the increase in efficiency mean scores by 4% from 2010 to 2015, findings show potential savings of 14.5% of resource consumption without reducing the volume of the provided services. The significant Tobit model showed four predictors explaining the inefficient performance of a hospital (p public hospitals in the OPT. Our work identified their efficiency levels for potential improvements and the determinants of efficient performance. Based on the measurement of efficiency, the generated information may guide hospitals' managers, policymakers, and international donors improving the performance of the main national healthcare provider. The scope of this study is limited to public hospitals in West Bank. For a better understanding of the Palestinian market, further research on private hospitals and hospitals in Gaza Strip will be useful.
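
    A compact sketch of the two-stage workflow follows: stage one computes input-oriented, constant-returns-to-scale DEA scores by linear programming, and stage two regresses the scores on a contextual variable (ordinary least squares is shown as a simple stand-in for the Tobit model used in the study). The hospital inputs, outputs and covariate are synthetic, not the Palestinian data.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y):
    """CRS input-oriented DEA (envelopment form): for each unit o, minimize
    theta s.t. sum_j lam_j*x_ij <= theta*x_io and sum_j lam_j*y_rj >= y_ro."""
    n, m = X.shape          # units x inputs
    s = Y.shape[1]          # outputs
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                      # minimize theta
        A_in = np.c_[-X[o][:, None], X.T]                # X'lam - theta*x_o <= 0
        A_out = np.c_[np.zeros((s, 1)), -Y.T]            # -Y'lam <= -y_o
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)

# Synthetic hospital data: inputs (beds, staff), outputs (admissions, outpatient visits).
rng = np.random.default_rng(3)
X = rng.uniform(50, 300, size=(11, 2))
Y = X @ np.array([[8.0, 30.0], [5.0, 20.0]]) * rng.uniform(0.7, 1.0, size=(11, 1))

theta = dea_input_oriented(X, Y)

# Stage 2: regress efficiency on a contextual factor (OLS stand-in for Tobit).
occupancy = rng.uniform(0.5, 0.95, size=11)              # hypothetical covariate
Z = np.c_[np.ones(11), occupancy]
coef, *_ = np.linalg.lstsq(Z, theta, rcond=None)
print(np.round(theta, 3), coef)
```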

  3. Bank Mergers Performance and the Determinants of Singaporean Banks’ Efficiency: An Application of Two-Stage Banking Models

    Directory of Open Access Journals (Sweden)

    Fadzlan Sufian

    2007-01-01

    Full Text Available An event study window analysis of Data Envelopment Analysis (DEA is employed in this study to investigate the effect of mergers and acquisitions on Singaporean domestic banking groups’ efficiency. The results suggest that the mergers have resulted in a higher post-merger mean overall efficiency of Singaporean banking groups. However, from the scale efficiency perspective, our findings do not support further consolidation in the Singaporean banking sector. We find mixed evidence of the efficiency characteristics of the acquirers and targets banks. Hence, the findings do not fully support the hypothesis that a more (less efficient bank becomes the acquirer (target. In most cases, our results further confirm the hypothesis that the acquiring bank’s mean overall efficiency improves (deteriorates post-merger resulted from the merger with a more (less efficient bank. Tobit regression model is employed to determine factors affecting bank performance, and the results suggest that bank profitability has a significantly positive impact on bank efficiency, whereas poor loan quality has a significantly negative influence on bank performance.

  4. Strengthening power generation efficiency utilizing liquefied natural gas cold energy by a novel two-stage condensation Rankine cycle (TCRC) system

    International Nuclear Information System (INIS)

    Bao, Junjiang; Lin, Yan; Zhang, Ruixiang; Zhang, Ning; He, Gaohong

    2017-01-01

    Highlights: • A two-stage condensation Rankine cycle (TCRC) system is proposed. • Net power output and thermal efficiency increase by 45.27% and 42.91%. • The effects of the condensation temperatures are analyzed. • 14 working fluids (such as propane, butane etc.) are compared. - Abstract: To address the low efficiency of traditional power generation systems that utilize liquefied natural gas (LNG) cold energy, this paper proposes a two-stage condensation Rankine cycle (TCRC) system that improves the heat transfer characteristic between the working fluid and LNG. Using propane as the working fluid, and compared with the combined cycle in the conventional LNG cold energy power generation method, the net power output, thermal efficiency and exergy efficiency of the TCRC system are increased by 45.27%, 42.91% and 52.31%, respectively. Meanwhile, the effects of the first-stage and second-stage condensation temperatures and the LNG vaporization pressure on the performance and cost indices of the TCRC system (net power output, thermal efficiency, exergy efficiency and UA) are analyzed. Finally, using the net power output as the objective function, with 14 organic fluids (such as propane, butane etc.) as working fluids, the first-stage and second-stage condensation temperatures at different LNG vaporization pressures are optimized. The results show that there exist first-stage and second-stage condensation temperatures that make the performance of the TCRC system optimal. When the LNG vaporization pressure is supercritical, R116 has the best economy among all the investigated working fluids, while R150 and R23 are better when the vaporization pressure of LNG is subcritical.

  5. Could the clinical interpretability of subgroups detected using clustering methods be improved by using a novel two-stage approach?

    DEFF Research Database (Denmark)

    Kent, Peter; Stochkendahl, Mette Jensen; Wulff Christensen, Henrik

    2015-01-01

    participation, psychological factors, biomarkers and imaging. However, such ‘whole person’ research may result in data-driven subgroups that are complex, difficult to interpret and challenging to recognise clinically. This paper describes a novel approach to applying statistical clustering techniques that may...... potential benefits but requires broad testing, in multiple patient samples, to determine its clinical value. The usefulness of the approach is likely to be context-specific, depending on the characteristics of the available data and the research question being asked of it....

  6. A new two-stage approach for predicting the soil water characteristic from saturation to oven-dryness

    DEFF Research Database (Denmark)

    Karup Jensen, Dan; Tuller, Markus; de Jonge, Lis Wollesen

    2015-01-01

    to slow and inaccurate measurements. Hence, models applied to predict the SWC consequently exclude the dry region and are often only applicable for specific soil textural classifications. The present study proposes a new two-step approach to prediction of the continuous SWC from saturation to oven dryness...... using a limited number of measured textural data, organic matter content and dry bulk density. The proposed approach combines dry- and wet-region functions to obtain the entire SWC by means of parameterizing a previously developed continuous equation. The dry region function relates gravimetric soil...

  7. A high-gain and high-efficiency X-band triaxial klystron amplifier with two-stage cascaded bunching cavities

    Science.gov (United States)

    Zhang, Wei; Ju, Jinchuan; Zhang, Jun; Zhong, Huihuang

    2017-12-01

    To achieve GW-level amplification output radiation at the X-band, a relativistic triaxial klystron amplifier with two-stage cascaded double-gap bunching cavities is investigated. The input cavity is optimized to obtain a high absorption rate of the external injection microwave. The cascaded bunching cavities are optimized to achieve a high depth of the fundamental harmonic current. A double-gap standing wave extractor is designed to improve the beam wave conversion efficiency. Two reflectors with high reflection coefficients both to the asymmetric mode and the TEM mode are employed to suppress the asymmetric mode competition and TEM mode microwave leakage. Particle-in-cell simulation results show that a high power microwave with a power of 2.53 GW and a frequency of 8.4 GHz is generated with a 690 kV, 9.3 kA electron beam excitation and a 25 kW seed microwave injection. Particularly, the achieved power conversion efficiency is about 40%, and the gain is as high as 50 dB. Meanwhile, there is insignificant self-excitation of the parasitic mode in the proposed structure by adopting the reflectors. The relative phase difference between the injected signals and the output microwaves keeps locked after the amplifier becomes saturated.

  8. Benchmarking energy performance of residential buildings using two-stage multifactor data envelopment analysis with degree-day based simple-normalization approach

    International Nuclear Information System (INIS)

    Wang, Endong; Shen, Zhigang; Alp, Neslihan; Barry, Nate

    2015-01-01

    Highlights: • Two-stage DEA model is developed to benchmark building energy efficiency. • Degree-day based simple normalization is used to neutralize the climatic noise. • Results of a real case study validated the benefits of this new model. - Abstract: Being able to identify detailed meta factors of energy performance is essential for creating effective residential energy-retrofitting strategies. Compared to other benchmarking methods, nonparametric multifactor DEA (data envelopment analysis) is capable of discriminating scale factors from management factors to reveal more details to better guide retrofitting practices. A two-stage DEA energy benchmarking method is proposed in this paper. This method includes (1) first-stage meta DEA which integrates the common degree day metrics for neutralizing noise energy effects of exogenous climatic variables; and (2) second-stage Tobit regression for further detailed efficiency analysis. A case study involving 3-year longitudinal panel data of 189 residential buildings indicated the proposed method has advantages over existing methods in terms of its efficiency in data processing and results interpretation. The results of the case study also demonstrated high consistency with existing linear regression based DEA.
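
    The degree-day based simple normalization can be sketched in a few lines: the weather-sensitive part of consumption is divided by the site's degree days before entering the first-stage DEA, so climate differences are not mistaken for inefficiency. The figures below are assumed for illustration.

```python
import numpy as np

# Annual consumption (kWh) and heating degree days for three homes (assumed values).
space_heating = np.array([14500.0, 9800.0, 12200.0])
baseload      = np.array([ 4200.0, 3900.0, 5100.0])   # non-weather-sensitive use
hdd           = np.array([ 3100.0, 2600.0, 3400.0])   # heating degree days at each site

# Degree-day based simple normalization: express the weather-sensitive part
# per degree day so climatic noise does not masquerade as poor management.
normalized_heating = space_heating / hdd               # kWh per degree day
dea_inputs = np.c_[normalized_heating, baseload]       # inputs fed to the first-stage DEA
print(dea_inputs)
```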

  9. Control scheme towards enhancing power quality and operational efficiency of single-phase two-stage grid-connected photovoltaic systems

    Directory of Open Access Journals (Sweden)

    Mahmoud Salem

    2015-12-01

    Full Text Available Achieving highly reliable grid-connected photovoltaic (PV) systems with high power quality and high operation efficiency is essential for distributed generation units. A double grid-frequency voltage ripple is found on the dc-link voltage in single-phase photovoltaic grid-connected systems due to the unbalance of the instantaneous dc input and ac output powers. This voltage ripple has undesirable effects on the power quality and operational efficiency of the whole system. Harmonic distortion in the current injected into the grid is one of the problems caused by this double grid-frequency voltage ripple. The double grid-frequency ripple propagates to the PV voltage and current, which disturbs the extraction of maximum power from the PV array. This paper introduces intelligent solutions towards mitigating the side effects of the double grid-frequency voltage ripple on the transferred power quality and the operational efficiency of a single-phase two-stage grid-connected PV system. The proposed system has three control loops: an MPPT control loop, a dc-link voltage control loop and an inverter current control loop. Solutions are introduced for all three control loops in the system. The current controller cancels the dc-link voltage effect on the total harmonic distortion of the output current. The dc-link voltage controller is designed to generate a ripple-free reference current signal that enhances the quality of the output power. Also, a modified MPPT controller is proposed to optimize the extracted power from the PV array. Simulation results show that higher injected power quality is achieved and higher efficiency of the overall system is realized.

  10. Asymmetric Anterior Distraction for Transversely Distorted Maxilla and Midfacial Anteroposterior Deficiency in a Patient With Cleft Lip/Palate: Two-Stage Surgical Approach.

    Science.gov (United States)

    Hirata, Kae; Tanikawa, Chihiro; Aikawa, Tomonao; Ishihama, Kohji; Kogo, Mikihiko; Iida, Seiji; Yamashiro, Takashi

    2016-07-01

    The present report describes a male patient with a unilateral cleft lip and palate who presented with midfacial anteroposterior and transverse deficiency. Correction involved a two-stage surgical-orthodontic approach: asymmetric anterior distraction of the segmented maxilla followed by two-jaw surgery (LeFort I and bilateral sagittal splitting ramus osteotomies). The present case demonstrates that the asymmetric elongation of the maxilla with anterior distraction is an effective way to correct a transversely distorted alveolar form and midfacial anteroposterior deficiency. Furthermore, successful tooth movement was demonstrated in the new bone created by distraction.

  11. Solving no-wait two-stage flexible flow shop scheduling problem with unrelated parallel machines and rework time by the adjusted discrete Multi Objective Invasive Weed Optimization and fuzzy dominance approach

    Energy Technology Data Exchange (ETDEWEB)

    Jafarzadeh, Hassan; Moradinasab, Nazanin; Gerami, Ali

    2017-07-01

    An adjusted discrete Multi-Objective Invasive Weed Optimization (DMOIWO) algorithm, which uses a fuzzy dominance approach for ordering, has been proposed to solve the no-wait two-stage flexible flow shop scheduling problem. Design/methodology/approach: The no-wait two-stage flexible flow shop scheduling problem has been investigated in a multi-objective manner, considering sequence-dependent setup times and probable rework in both stations, different ready times for all jobs, rework times for both stations and unrelated parallel machines, with regard to the simultaneous minimization of the maximum job completion time and the average latency functions. In this study, the parameter setting has been carried out using the Taguchi method based on the quality indicator, for better performance of the algorithm. Findings: The results of this algorithm have been compared with those of conventional multi-objective algorithms and clearly indicate the better performance of the proposed algorithm. Originality/value: This study provides an efficient method for solving the multi-objective no-wait two-stage flexible flow shop scheduling problem by considering sequence-dependent setup times, probable rework in both stations, different ready times for all jobs, rework times for both stations and unrelated parallel machines, which are the real constraints.

  12. Solving no-wait two-stage flexible flow shop scheduling problem with unrelated parallel machines and rework time by the adjusted discrete Multi Objective Invasive Weed Optimization and fuzzy dominance approach

    International Nuclear Information System (INIS)

    Jafarzadeh, Hassan; Moradinasab, Nazanin; Gerami, Ali

    2017-01-01

    An adjusted discrete Multi-Objective Invasive Weed Optimization (DMOIWO) algorithm, which uses a fuzzy dominance approach for ordering, has been proposed to solve the no-wait two-stage flexible flow shop scheduling problem. Design/methodology/approach: The no-wait two-stage flexible flow shop scheduling problem has been investigated in a multi-objective manner, considering sequence-dependent setup times and probable rework in both stations, different ready times for all jobs, rework times for both stations and unrelated parallel machines, with regard to the simultaneous minimization of the maximum job completion time and the average latency functions. In this study, the parameter setting has been carried out using the Taguchi method based on the quality indicator, for better performance of the algorithm. Findings: The results of this algorithm have been compared with those of conventional multi-objective algorithms and clearly indicate the better performance of the proposed algorithm. Originality/value: This study provides an efficient method for solving the multi-objective no-wait two-stage flexible flow shop scheduling problem by considering sequence-dependent setup times, probable rework in both stations, different ready times for all jobs, rework times for both stations and unrelated parallel machines, which are the real constraints.

  13. Efficient Separation and Extraction of Vanadium and Chromium in High Chromium Vanadium Slag by Selective Two-Stage Roasting-Leaching

    Science.gov (United States)

    Wen, Jing; Jiang, Tao; Xu, Yingzhe; Liu, Jiayi; Xue, Xiangxin

    2018-06-01

    Vanadium and chromium are important rare metals, leading to a focus on high chromium vanadium slag (HCVS) as a potential raw material to extract vanadium and chromium in China. In this work, a novel method based on selective two-stage roasting-leaching was proposed to separate and extract vanadium and chromium efficiently in HCVS. XRD, FT-IR, and SEM were utilized to analyze the phase evolutions and microstructure during the whole process. Calcification roasting, which can calcify vanadium selectively using thermodynamics, was carried out in the first roasting stage to transfer vanadium into acid-soluble vanadate and leave chromium in the leaching residue as (Fe0.6Cr0.4)2O3 after H2SO4 leaching. When HCVS and CaO were mixed in the molar ratio CaO/V2O3 (n(CaO)/n(V2O3)) of 0.5 to 1.25, around 90 pct vanadium and less than 1 pct chromium were extracted in the first leaching liquid, thus achieving the separation of vanadium and chromium. In the second roasting stage, sodium salt, which combines with chromium easily, was added to the first leaching residue to extract chromium and 95.16 pct chromium was extracted under the optimal conditions. The total vanadium and chromium leaching rates were above 95 pct, achieving the efficient separation and extraction of vanadium and chromium. The established method provides a new technique to separate vanadium and chromium during roasting rather than in the liquid form, which is useful for the comprehensive application of HCVS.

  14. Efficient Separation and Extraction of Vanadium and Chromium in High Chromium Vanadium Slag by Selective Two-Stage Roasting-Leaching

    Science.gov (United States)

    Wen, Jing; Jiang, Tao; Xu, Yingzhe; Liu, Jiayi; Xue, Xiangxin

    2018-04-01

    Vanadium and chromium are important rare metals, leading to a focus on high chromium vanadium slag (HCVS) as a potential raw material to extract vanadium and chromium in China. In this work, a novel method based on selective two-stage roasting-leaching was proposed to separate and extract vanadium and chromium efficiently in HCVS. XRD, FT-IR, and SEM were utilized to analyze the phase evolutions and microstructure during the whole process. Calcification roasting, which can calcify vanadium selectively using thermodynamics, was carried out in the first roasting stage to transfer vanadium into acid-soluble vanadate and leave chromium in the leaching residue as (Fe0.6Cr0.4)2O3 after H2SO4 leaching. When HCVS and CaO were mixed in the molar ratio CaO/V2O3 (n(CaO)/n(V2O3)) of 0.5 to 1.25, around 90 pct vanadium and less than 1 pct chromium were extracted in the first leaching liquid, thus achieving the separation of vanadium and chromium. In the second roasting stage, sodium salt, which combines with chromium easily, was added to the first leaching residue to extract chromium and 95.16 pct chromium was extracted under the optimal conditions. The total vanadium and chromium leaching rates were above 95 pct, achieving the efficient separation and extraction of vanadium and chromium. The established method provides a new technique to separate vanadium and chromium during roasting rather than in the liquid form, which is useful for the comprehensive application of HCVS.

  15. PERIODIC REVIEW SYSTEM FOR INVENTORY REPLENISHMENT CONTROL FOR A TWO-ECHELON LOGISTICS NETWORK UNDER DEMAND UNCERTAINTY: A TWO-STAGE STOCHASTIC PROGRAMING APPROACH

    Directory of Open Access Journals (Sweden)

    P.S.A. Cunha

    Full Text Available ABSTRACT Here, we propose a novel methodology for replenishment and control systems for inventories of two-echelon logistics networks using two-stage stochastic programming, considering periodic review and uncertain demands. In addition, to achieve better customer service, we introduce a variable rationing rule to address quantities of the item in short supply. The devised models are reformulated into their deterministic equivalent, resulting in nonlinear mixed-integer programming models, which are then approximately linearized. To deal with the uncertain nature of the item demand levels, we apply a Monte Carlo simulation-based method to generate finite and discrete sets of scenarios. Moreover, the proposed approach does not require restrictive assumptions on the behavior of the probabilistic phenomena, as do several existing methods in the literature. Numerical experiments with the proposed approach for randomly generated instances of the problem show results with errors around 1%.
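
    The deterministic-equivalent formulation can be sketched for a single item: the first-stage variable is the regular order quantity and the second-stage recourse buys emergency stock in each Monte Carlo demand scenario. The costs and demand distribution below are assumed, and the integer and rationing features of the paper's model are omitted.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
scenarios = rng.normal(loc=100.0, scale=20.0, size=50).clip(min=0)   # sampled demands
prob = np.full(scenarios.size, 1.0 / scenarios.size)

order_cost, emergency_cost = 2.0, 9.0   # per-unit cost of the regular order vs. recourse

# Variables: [q, y_1, ..., y_S] where q is the first-stage order and y_s the
# emergency quantity bought in scenario s. Minimize c*q + sum_s p_s*r*y_s
# subject to q + y_s >= d_s (demand must be met in every scenario).
S = scenarios.size
c = np.r_[order_cost, emergency_cost * prob]
A_ub = np.c_[-np.ones((S, 1)), -np.eye(S)]     # -q - y_s <= -d_s
b_ub = -scenarios
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (S + 1))

print("first-stage order quantity:", round(res.x[0], 1))
print("expected total cost:", round(res.fun, 1))
```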

  16. Day-Ahead Wind Power Forecasting Using a Two-Stage Hybrid Modeling Approach Based on SCADA and Meteorological Information, and Evaluating the Impact of Input-Data Dependency on Forecasting Accuracy

    Directory of Open Access Journals (Sweden)

    Dehua Zheng

    2017-12-01

    Full Text Available The power generated by wind generators is usually associated with uncertainties, due to the intermittency of wind speed and other weather variables. This creates a big challenge for transmission system operators (TSOs) and distribution system operators (DSOs) in terms of connecting, controlling and managing power networks with high-penetration wind energy. Hence, in these power networks, accurate wind power forecasts are essential for their reliable and efficient operation. They support TSOs and DSOs in enhancing the control and management of the power network. In this paper, a novel two-stage hybrid approach based on the combination of the Hilbert-Huang transform (HHT), genetic algorithm (GA) and artificial neural network (ANN) is proposed for day-ahead wind power forecasting. The approach is composed of two stages. The first stage utilizes numerical weather prediction (NWP) meteorological information to predict wind speed at the exact site of the wind farm. The second stage maps actual wind speed vs. power characteristics recorded by SCADA. Then, the wind speed forecast in the first stage for the future day is fed to the second stage to predict the future day’s wind power. Comparative selection of input-data parameter sets for the forecasting model and impact analysis of input-data dependency on forecasting accuracy have also been studied. The proposed approach achieves significant forecasting accuracy improvement compared with three other artificial intelligence-based forecasting approaches and a benchmark model using the smart persistence method.
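
    A simplified sketch of the two stages follows: an ANN maps NWP features to site wind speed, and a SCADA-derived speed-power curve converts the predicted speed to power. The data below are synthetic, and the HHT decomposition and GA tuning used in the paper are omitted.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic history: NWP features (forecast speed, direction, temperature) vs. measured site speed.
nwp = np.c_[rng.uniform(2, 18, 500), rng.uniform(0, 360, 500), rng.uniform(-5, 30, 500)]
site_speed = 0.9 * nwp[:, 0] + 0.5 * np.sin(np.radians(nwp[:, 1])) + rng.normal(0, 0.7, 500)

# Stage 1: ANN mapping NWP meteorological information to wind speed at the farm.
stage1 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(nwp, site_speed)

# Stage 2: empirical speed-to-power curve from SCADA records (binned medians).
scada_speed = rng.uniform(0, 25, 2000)
scada_power = np.clip((scada_speed / 12.0) ** 3, 0, 1) * 2000.0   # kW, toy 2 MW turbine
bins = np.arange(0, 26, 1.0)
centres = 0.5 * (bins[:-1] + bins[1:])
curve = np.array([np.median(scada_power[(scada_speed >= lo) & (scada_speed < hi)])
                  for lo, hi in zip(bins[:-1], bins[1:])])

# Day-ahead forecast: stage-1 speed prediction fed through the stage-2 power curve.
tomorrow_nwp = np.array([[11.0, 220.0, 8.0]])
speed_hat = stage1.predict(tomorrow_nwp)[0]
power_hat = np.interp(speed_hat, centres, curve)
print(round(speed_hat, 2), "m/s ->", round(power_hat, 1), "kW")
```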

  17. A two-stage approach to estimate spatial and spatio-temporal disease risks in the presence of local discontinuities and clusters.

    Science.gov (United States)

    Adin, A; Lee, D; Goicoa, T; Ugarte, María Dolores

    2018-01-01

    Disease risk maps for areal unit data are often estimated from Poisson mixed models with local spatial smoothing, for example by incorporating random effects with a conditional autoregressive prior distribution. However, one of the limitations is that local discontinuities in the spatial pattern are not usually modelled, leading to over-smoothing of the risk maps and a masking of clusters of hot/coldspot areas. In this paper, we propose a novel two-stage approach to estimate and map disease risk in the presence of such local discontinuities and clusters. We propose approaches in both spatial and spatio-temporal domains, where for the latter the clusters can either be fixed or allowed to vary over time. In the first stage, we apply an agglomerative hierarchical clustering algorithm to training data to provide sets of potential clusters, and in the second stage, a two-level spatial or spatio-temporal model is applied to each potential cluster configuration. The superiority of the proposed approach with regard to a previous proposal is shown by simulation, and the methodology is applied to two important public health problems in Spain, namely stomach cancer mortality across Spain and brain cancer incidence in the Navarre and Basque Country regions of Spain.
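
    The cluster-then-model idea can be sketched as follows: agglomerative clustering of the training areas proposes candidate cluster configurations, and a simple per-cluster relative risk stands in for the two-level spatial model fitted to each configuration in the paper. The areal data below are simulated.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(5)

# Synthetic areal units: centroid coordinates, observed and expected case counts.
coords = rng.uniform(0, 100, size=(60, 2))
expected = rng.uniform(20, 80, size=60)
true_risk = np.where(coords[:, 0] > 60, 1.8, 1.0)      # a hidden high-risk cluster
observed = rng.poisson(expected * true_risk)

# Stage 1: agglomerative clustering of the training data (here on location and
# crude standardized ratios) proposes a set of potential clusters.
smr = observed / expected
features = np.c_[coords / 100.0, smr[:, None]]
labels = AgglomerativeClustering(n_clusters=3).fit_predict(features)

# Stage 2 (stand-in): estimate a relative risk per proposed cluster; the paper
# instead fits a two-level spatial or spatio-temporal model to each candidate
# configuration and compares configurations with a model-selection criterion.
for k in np.unique(labels):
    in_k = labels == k
    rr = observed[in_k].sum() / expected[in_k].sum()
    print(f"cluster {k}: {in_k.sum()} areas, relative risk {rr:.2f}")
```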

  18. High-efficiency removal of phytic acid in soy meal using two-stage temperature-induced Aspergillus oryzae solid-state fermentation.

    Science.gov (United States)

    Chen, Liyan; Vadlani, Praveen V; Madl, Ronald L

    2014-01-15

    Phytic acid in soy meal (SM) can impair the digestion of protein and important minerals in monogastric animals. Aspergillus oryzae (ATCC 9362) solid-state fermentation was applied to degrade phytic acid in SM. A two-stage temperature fermentation protocol was investigated to increase the degradation rate. The first stage was to maximize phytase production and the second stage was to realize the maximum enzymatic degradation. In the first stage, a combination of 41% moisture, a temperature of 37 °C and an inoculum size of 1.7 mL per 5 g substrate (dry matter basis) favored maximum phytase production, yielding a phytase activity of 58.7 U, optimized via central composite design. By the end of the second-stage fermentation, 57% of phytic acid was degraded from SM fermented at 50 °C, compared with 39% of that fermented at 37 °C. The nutritional profile of fermented SM was also studied. Oligosaccharides were totally removed after fermentation and total non-reducing polysaccharides decreased by 67%. Protein content increased by 9.5%. The two-stage temperature protocol achieved better phytic acid degradation during A. oryzae solid-state fermentation. The fermented SM has lower antinutritional factors (phytic acid, oligosaccharides and non-reducing polysaccharides) and higher nutritional value for animal feed. © 2013 Society of Chemical Industry.

  19. The analysis of the external factors influence on the efficiency of the absorption heat pumps inclusion in the scheme of a two-stage line installation of a STP

    Directory of Open Access Journals (Sweden)

    Luzhkovoy Dmitriy S.

    2017-01-01

    Full Text Available The article presents a comparative analysis of the efficiency of a two-stage line installation of a heating turbine before and after the inclusion of absorption heat pumps (AHP) into its scheme, as the outside air temperature decreases. The research shows how the efficiency of the line installation depends on its heat load when AHP are included in the scheme, as well as on the heat conversion factor of the absorption heat pumps.

  20. Optimal Land Use Management for Soil Erosion Control by Using an Interval-Parameter Fuzzy Two-Stage Stochastic Programming Approach

    Science.gov (United States)

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10^9 $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.

  1. Optimal land use management for soil erosion control by using an interval-parameter fuzzy two-stage stochastic programming approach.

    Science.gov (United States)

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10(9) $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.

  2. An efficient and accurate two-stage fourth-order gas-kinetic scheme for the Euler and Navier-Stokes equations

    Science.gov (United States)

    Pan, Liang; Xu, Kun; Li, Qibing; Li, Jiequan

    2016-12-01

    For computational fluid dynamics (CFD), the generalized Riemann problem (GRP) solver and the second-order gas-kinetic scheme (GKS) provide a time-accurate flux function starting from discontinuous piecewise-linear flow distributions around a cell interface. With the adoption of the time derivative of the flux function, a two-stage Lax-Wendroff-type (L-W for short) time stepping method has been recently proposed in the design of a fourth-order time accurate method for inviscid flow [21]. In this paper, based on the same time-stepping method and the second-order GKS flux function [42], a fourth-order gas-kinetic scheme is constructed for the Euler and Navier-Stokes (NS) equations. In comparison with the formal one-stage time-stepping third-order gas-kinetic solver [24], the current fourth-order method not only reduces the complexity of the flux function, but also improves the accuracy of the scheme. In terms of the computational cost, a two-dimensional third-order GKS flux function takes about six times the computational time of a second-order GKS flux function. However, a fifth-order WENO reconstruction may take more than ten times the computational cost of a second-order GKS flux function. Therefore, it is fully legitimate to develop a two-stage fourth-order time-accurate method (two reconstructions) instead of the standard four-stage fourth-order Runge-Kutta method (four reconstructions). Most importantly, the robustness of the fourth-order GKS is as good as the second-order one. In current computational fluid dynamics (CFD) research, it is still a difficult problem to extend a higher-order Euler solver to the NS equations due to the change of governing equations from hyperbolic to parabolic type and the initial interface discontinuity. This problem is especially pronounced for hypersonic viscous and heat-conducting flow. The GKS is based on the kinetic equation with the hyperbolic transport and the relaxation source term. The time-dependent GKS flux function
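
    The two-stage fourth-order update referred to here advances the solution with only two evaluations of the operator and its time derivative. A minimal sketch is given below, assuming a scalar ODE u' = L(u) so that the time derivative of L is available through the chain rule; the test problem and step sizes are illustrative, and the observed convergence order should approach four.

        # Two-stage fourth-order (L-W type) time stepping on a scalar ODE u' = L(u):
        #   u*     = u + (dt/2) L(u) + (dt^2/8)  dL/dt(u)
        #   u_next = u + dt     L(u) + (dt^2/6) [dL/dt(u) + 2 dL/dt(u*)]
        # Test problem u' = -u^2, u(0) = 1 (exact u(t) = 1/(1+t)) is an assumption
        # made only to check the observed order of accuracy.
        import numpy as np

        L  = lambda u: -u * u            # the "flux" operator
        Lt = lambda u: -2.0 * u * L(u)   # its time derivative via the chain rule

        def step_s2o4(u, dt):
            us = u + 0.5 * dt * L(u) + dt**2 / 8.0 * Lt(u)
            return u + dt * L(u) + dt**2 / 6.0 * (Lt(u) + 2.0 * Lt(us))

        T, exact = 1.0, 0.5
        errors = []
        for n in (20, 40, 80, 160):
            u, dt = 1.0, T / n
            for _ in range(n):
                u = step_s2o4(u, dt)
            errors.append(abs(u - exact))
        orders = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
        print("errors:", errors)
        print("observed orders:", orders)   # should approach 4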

  3. Long-term bio-H2 and bio-CH4 production from food waste in a continuous two-stage system: Energy efficiency and conversion pathways.

    Science.gov (United States)

    Algapani, Dalal E; Qiao, Wei; di Pumpo, Francesca; Bianchi, David; Wandera, Simon M; Adani, Fabrizio; Dong, Renjie

    2018-01-01

    Anaerobic digestion is a well-established technology for treating organic waste, but it remains challenging for food waste due to process stability problems. In this work, continuous H2 and CH4 production from canteen food waste (FW) in a two-stage system was successfully established by optimizing process parameters. The optimal hydraulic retention time was 5 d for H2 and 15 d for CH4. Overall, around 59% of the total COD in FW was converted into H2 (4%) and into CH4 (55%). The fluctuations of FW characteristics did not significantly affect process performance. From the energy point of view, the H2 reactor contributed much less than the methane reactor to the total energy balance, but it played a key role in maintaining the stability of anaerobic treatment of food waste. Microbial characterization indicated that methane formation proceeded through syntrophic acetate oxidation combined with the hydrogenotrophic methanogenesis pathway. Copyright © 2017. Published by Elsevier Ltd.

  4. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which uses a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies the sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.
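
    The sketch below is not the combined model or the sensitivity conditions of the paper; it only illustrates the simpler independent two-stage view, in which each stage's efficiency is computed with the standard input-oriented CCR envelopment LP and the stage-1 outputs serve as the stage-2 inputs. All DMU data are invented, and the product of the two stage scores is shown as one common (but not unique) overall measure.

        # Independent two-stage DEA sketch with the input-oriented CCR envelopment LP.
        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, k):
            """CCR efficiency of DMU k: min theta s.t. X@lam <= theta*X[:,k], Y@lam >= Y[:,k], lam >= 0."""
            m, n = X.shape
            r = Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]                     # variables: [theta, lam_1..lam_n]
            A_ub = np.vstack([
                np.hstack([-X[:, [k]], X]),                 # X lam - theta * x_k <= 0
                np.hstack([np.zeros((r, 1)), -Y]),          # -Y lam <= -y_k
            ])
            b_ub = np.r_[np.zeros(m), -Y[:, k]]
            bounds = [(None, None)] + [(0, None)] * n
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            return res.x[0]

        X = np.array([[5.0, 8.0, 6.0, 9.0]])      # stage-1 inputs (1 input, 4 DMUs), assumed data
        Z = np.array([[3.0, 4.0, 5.0, 4.0]])      # intermediate measures
        Y = np.array([[2.0, 3.0, 4.0, 2.5]])      # final outputs

        for k in range(X.shape[1]):
            e1 = ccr_efficiency(X, Z, k)
            e2 = ccr_efficiency(Z, Y, k)
            print(f"DMU {k}: stage1={e1:.3f}, stage2={e2:.3f}, overall={e1 * e2:.3f}")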

  5. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-12-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which uses a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies the sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  6. Two stages of economic development

    OpenAIRE

    Gong, Gang

    2016-01-01

    This study suggests that the development process of a less-developed country can be divided into two stages, which demonstrate significantly different properties in areas such as structural endowments, production modes, income distribution, and the forces that drive economic growth. The two stages of economic development have been indicated in the growth theory of macroeconomics and in the various "turning point" theories in development economics, including Lewis's dual economy theory, Kuznet...

  7. PERIODIC REVIEW SYSTEM FOR INVENTORY REPLENISHMENT CONTROL FOR A TWO-ECHELON LOGISTICS NETWORK UNDER DEMAND UNCERTAINTY: A TWO-STAGE STOCHASTIC PROGRAMING APPROACH

    OpenAIRE

    Cunha, P.S.A.; Oliveira, F.; Raupp, Fernanda M.P.

    2017-01-01

    ABSTRACT Here, we propose a novel methodology for replenishment and control systems for inventories of two-echelon logistics networks using a two-stage stochastic programming, considering periodic review and uncertain demands. In addition, to achieve better customer services, we introduce a variable rationing rule to address quantities of the item in short. The devised models are reformulated into their deterministic equivalent, resulting in nonlinear mixed-integer programming models, which a...

  8. A Two-Stage Approach for Improving the Convergence of Least-Mean-Square Adaptive Decision-Feedback Equalizers in the Presence of Severe Narrowband Interference

    Science.gov (United States)

    Batra, Arun; Zeidler, James R.; Beex, A. A. Louis

    2007-12-01

    It has previously been shown that a least-mean-square (LMS) decision-feedback filter can mitigate the effect of narrowband interference (L.-M. Li and L. Milstein, 1983). An adaptive implementation of the filter was shown to converge relatively quickly for mild interference. It is shown here, however, that in the case of severe narrowband interference, the LMS decision-feedback equalizer (DFE) requires a very large number of training symbols for convergence, making it unsuitable for some types of communication systems. This paper investigates the introduction of an LMS prediction-error filter (PEF) as a prefilter to the equalizer and demonstrates that it reduces the convergence time of the two-stage system by as much as two orders of magnitude. It is also shown that the steady-state bit-error rate (BER) performance of the proposed system is still approximately equal to that attained in steady-state by the LMS DFE-only. Finally, it is shown that the two-stage system can be implemented without the use of training symbols. This two-stage structure lowers the complexity of the overall system by reducing the number of filter taps that need to be adapted, while incurring a slight loss in the steady-state BER.
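
    A bare-bones sketch of the prediction-error prefiltering idea is given below: an LMS linear predictor tracks the strongly correlated narrowband tone, so the prediction error passed on to the equalizer retains the nearly white data signal. The signal model, filter order and step size are assumptions for illustration, not the paper's settings.

        # LMS prediction-error filter (PEF) sketch: whitens a severe narrowband interferer.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 20000
        symbols = rng.choice([-1.0, 1.0], size=n)                              # wideband (BPSK-like) data
        interferer = 10.0 * np.cos(2 * np.pi * 0.11 * np.arange(n) + 0.3)      # severe narrowband tone
        x = symbols + interferer + 0.1 * rng.standard_normal(n)

        order, mu = 16, 1e-4
        w = np.zeros(order)
        out = np.zeros(n)
        for k in range(order, n):
            past = x[k - order:k][::-1]      # predict x[k] from its own past
            e = x[k] - w @ past              # prediction error = PEF output fed onward
            w += mu * e * past               # LMS weight update
            out[k] = e

        print("power at PEF input :", np.var(x))            # dominated by the tone
        print("power at PEF output:", np.var(out[order:]))  # tone largely removed, data retained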

  9. A Two-Stage Approach for Improving the Convergence of Least-Mean-Square Adaptive Decision-Feedback Equalizers in the Presence of Severe Narrowband Interference

    Directory of Open Access Journals (Sweden)

    A. A. (Louis Beex

    2008-02-01

    Full Text Available It has previously been shown that a least-mean-square (LMS decision-feedback filter can mitigate the effect of narrowband interference (L.-M. Li and L. Milstein, 1983. An adaptive implementation of the filter was shown to converge relatively quickly for mild interference. It is shown here, however, that in the case of severe narrowband interference, the LMS decision-feedback equalizer (DFE requires a very large number of training symbols for convergence, making it unsuitable for some types of communication systems. This paper investigates the introduction of an LMS prediction-error filter (PEF as a prefilter to the equalizer and demonstrates that it reduces the convergence time of the two-stage system by as much as two orders of magnitude. It is also shown that the steady-state bit-error rate (BER performance of the proposed system is still approximately equal to that attained in steady-state by the LMS DFE-only. Finally, it is shown that the two-stage system can be implemented without the use of training symbols. This two-stage structure lowers the complexity of the overall system by reducing the number of filter taps that need to be adapted, while incurring a slight loss in the steady-state BER.

  10. Two-stage implant systems.

    Science.gov (United States)

    Fritz, M E

    1999-06-01

    Since the advent of osseointegration approximately 20 years ago, there has been a great deal of scientific data developed on two-stage integrated implant systems. Although these implants were originally designed primarily for fixed prostheses in the mandibular arch, they have been used in partially dentate patients, in patients needing overdentures, and in single-tooth restorations. In addition, this implant system has been placed in extraction sites, in bone-grafted areas, and in maxillary sinus elevations. Often, the documentation of these procedures has lagged. In addition, most of the reports use survival criteria to describe results, often providing overly optimistic data. It can be said that the literature describes a true adhesion of the epithelium to the implant similar to adhesion to teeth, that two-stage implants appear to have direct contact somewhere between 50% and 70% of the implant surface, that the microbial flora of the two-stage implant system closely resembles that of the natural tooth, and that the microbiology of periodontitis appears to be closely related to peri-implantitis. In evaluations of the data from implant placement in all of the above-noted situations by means of meta-analysis, it appears that there is a strong case that two-stage dental implants are successful, usually showing a confidence interval of over 90%. It also appears that the mandibular implants are more successful than maxillary implants. Studies also show that overdenture therapy is valid, and that single-tooth implants and implants placed in partially dentate mouths have a success rate that is quite good, although not quite as high as in the fully edentulous dentition. It would also appear that the potential causes of failure in the two-stage dental implant systems are peri-implantitis, placement of implants in poor-quality bone, and improper loading of implants. There are now data addressing modifications of the implant surface to alter the percentage of

  11. A Two-Stage Information-Theoretic Approach to Modeling Landscape-Level Attributes and Maximum Recruitment of Chinook Salmon in the Columbia River Basin.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L.; Lee, Danny C.

    2000-11-01

    Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type, stock recruitment model with a constant Ricker a (i.e., recruits-per-spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every increase of 33% in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every increase in 2 C mean annual air temperature.
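
    For reference, the Ricker stock-recruitment form used in this kind of first-stage modeling can be written R = S * exp(a - b*S), with exp(a) the recruits-per-spawner at low abundance and a peak (maximum recruitment) at S = 1/b. The sketch below fits it by the usual log-linearization on synthetic data, not the chinook index stocks of the study.

        # Ricker stock-recruitment fit via log-linearization: log(R/S) = a - b*S.
        import numpy as np

        rng = np.random.default_rng(1)
        a_true, b_true = 2.0, 0.002                      # assumed "true" parameters
        S = rng.uniform(50, 1500, size=60)               # spawners
        R = S * np.exp(a_true - b_true * S + 0.2 * rng.standard_normal(S.size))

        A = np.column_stack([np.ones_like(S), -S])       # design matrix for [a, b]
        a_hat, b_hat = np.linalg.lstsq(A, np.log(R / S), rcond=None)[0]

        S_peak = 1.0 / b_hat
        R_max = S_peak * np.exp(a_hat - 1.0)             # maximum recruitment of the fitted curve
        print(f"a={a_hat:.3f}, b={b_hat:.5f}, max recruitment ~ {R_max:.0f} at S ~ {S_peak:.0f}")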

  12. A novel two-stage evaluation system based on a Group-G1 approach to identify appropriate emergency treatment technology schemes in sudden water source pollution accidents.

    Science.gov (United States)

    Qu, Jianhua; Meng, Xianlin; Hu, Qi; You, Hong

    2016-02-01

    Sudden water source pollution resulting from hazardous materials has gradually become a major threat to the safety of the urban water supply. Over the past years, various treatment techniques have been proposed for the removal of the pollutants to minimize the threat of such pollutions. Given the diversity of techniques available, the current challenge is how to scientifically select the most desirable alternative for different threat degrees. Therefore, a novel two-stage evaluation system was developed based on a circulation-correction improved Group-G1 method to determine the optimal emergency treatment technology scheme, considering the areas of contaminant elimination in both drinking water sources and water treatment plants. In stage 1, the threat degree caused by the pollution was predicted using a threat evaluation index system and was subdivided into four levels. Then, a technique evaluation index system containing four sets of criteria weights was constructed in stage 2 to obtain the optimum treatment schemes corresponding to the different threat levels. The applicability of the established evaluation system was tested by a practical cadmium-contaminated accident that occurred in 2012. The results show this system capable of facilitating scientific analysis in the evaluation and selection of emergency treatment technologies for drinking water source security.

  13. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-03-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer support, and one resolution enhancing step with nonsmooth mixed regularization. The first step is strictly direct and of sampling type, and it faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer size as well as intensities. The nonsmooth model can be efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the new approach is accurate, computationally efficient, and robust with respect to data noise. © 2012 Elsevier Inc.

  14. Highly efficient pulsed power supply system with a two-stage LC generator and a step-up transformer for fast capillary discharge soft x-ray laser at shorter wavelength

    International Nuclear Information System (INIS)

    Sakai, Yusuke; Takahashi, Shnsuke; Komatsu, Takanori; Song, Inho; Watanabe, Masato; Hotta, Eiki

    2010-01-01

    Highly efficient and compact pulsed power supply system for a capillary discharge soft x-ray laser (SXRL) has been developed. The system consists of a 2.2 μF two-stage LC inversion generator, a 2:54 step-up transformer, a 3 nF water capacitor, and a discharge section with a few tens of centimeter length capillary. Adoption of the pulsed transformer in combination with the LC inversion generator enables us to use only one gap switch in the circuit for charging the water capacitor up to about 0.5 MV. Furthermore, step-up ratio of a water capacitor voltage to a LC inversion generator initial charging voltage is about 40 with energy transfer efficiency of about 50%. It also leads to good reproducibility of a capillary discharge which is necessary for lasing a SXRL stably. For the study of the possibility of lasing a SXRL at shorter wavelength in a small laboratory scale, high-density and high-temperature plasma column suitable for the laser can be generated relatively easily with this system.

  15. Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?

    Science.gov (United States)

    Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R

    2018-04-30

    Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
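
    A minimal sketch of the two-stage procedure discussed here is given below: stage 1 yields one estimate and standard error per study (here simply invented numbers), and stage 2 pools them by inverse-variance weighting, with a DerSimonian-Laird random-effects pool added for comparison. This is generic textbook machinery, not the paper's specific Gaussian comparisons.

        # Two-stage meta-analysis sketch: inverse-variance pooling of per-study estimates.
        import numpy as np

        theta = np.array([0.30, 0.10, 0.55, 0.20, 0.40])   # per-study effect estimates (assumed)
        se    = np.array([0.12, 0.20, 0.25, 0.10, 0.15])   # their standard errors (assumed)

        w = 1.0 / se**2                                    # fixed-effect weights
        fe = np.sum(w * theta) / np.sum(w)
        fe_se = np.sqrt(1.0 / np.sum(w))

        # DerSimonian-Laird estimate of between-study variance tau^2
        Q = np.sum(w * (theta - fe) ** 2)
        tau2 = max(0.0, (Q - (len(theta) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_re = 1.0 / (se**2 + tau2)
        re = np.sum(w_re * theta) / np.sum(w_re)
        re_se = np.sqrt(1.0 / np.sum(w_re))

        print(f"fixed effect : {fe:.3f} (SE {fe_se:.3f})")
        print(f"random effect: {re:.3f} (SE {re_se:.3f}), tau^2 = {tau2:.4f}")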

  16. Two stage turbine for rockets

    Science.gov (United States)

    Veres, Joseph P.

    1993-01-01

    The aerodynamic design and rig test evaluation of a small counter-rotating turbine system is described. The advanced turbine airfoils were designed and tested by Pratt & Whitney. The technology represented by this turbine is being developed for a turbopump to be used in an advanced upper stage rocket engine. The advanced engine will use a hydrogen expander cycle and achieve high performance through efficient combustion of hydrogen/oxygen propellants, high combustion pressure, and high area ratio exhaust nozzle expansion. Engine performance goals require that the turbopump drive turbines achieve high efficiency at low gas flow rates. The low mass flow rates and high operating pressures result in very small airfoil heights and diameters. The high efficiency and small size requirements present a challenging turbine design problem. The shrouded axial turbine blades are 50 percent reaction with a maximum thickness to chord ratio near 1. At 6 deg from the tangential direction, the nozzle and blade exit flow angles are well below the traditional design minimum limits. The blade turning angle of 160 deg also exceeds the maximum limits used in traditional turbine designs.

  17. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi; Jin, Bangti; Zou, Jun

    2013-01-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer

  18. Comparison between Two Different Two-Stage Transperineal Approaches to Treat Urethral Strictures or Bladder Neck Contracture Associated with Severe Urinary Incontinence that Occurred after Pelvic Surgery: Report of Our Experience

    Directory of Open Access Journals (Sweden)

    A. Simonato

    2012-01-01

    Full Text Available Introduction. The recurrence of urethral/bladder neck stricture after multiple endoscopic procedures is a rare complication that can follow prostatic surgery and its treatment is still controversial. Material and Methods. We retrospectively analyzed our data on 17 patients, operated between September 2001 and January 2010, who presented severe urinary incontinence and urethral/bladder neck stricture after prostatic surgery and failure of at least four conservative endoscopic treatments. Six patients underwent a transperineal urethrovesical anastomosis and 11 patients a combined transperineal suprapubical (endoscopic urethrovesical anastomosis. After six months the patients that presented complete incontinence and no urethral stricture underwent the implantation of an artificial urethral sphincter (AUS. Results. After six months 16 patients were completely incontinent and presented a patent, stable lumen, so that they underwent an AUS implantation. With a mean followup of 50.5 months, 14 patients are perfectly continent with no postvoid residual urine. Conclusions. Two-stage procedures are safe techniques to treat these challenging cases. In our opinion, these cases could be managed with a transperineal approach in patients who present a perfect operative field; on the contrary, in more difficult cases, it would be preferable to use the other technique, with a combined transperineal suprapubical access, to perform a pull-through procedure.

  19. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  20. Two-stage precipitation of plutonium trifluoride

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1984-04-01

    Plutonium trifluoride was precipitated using a two-stage precipitation system. A series of precipitation experiments identified the significant process variables affecting precipitate characteristics. A mathematical precipitation model was developed which was based on the formation of plutonium fluoride complexes. The precipitation model relates all process variables, in a single equation, to a single parameter that can be used to control particle characteristics

  1. Two-Stage Series-Resonant Inverter

    Science.gov (United States)

    Stuart, Thomas A.

    1994-01-01

    Two-stage inverter includes variable-frequency, voltage-regulating first stage and fixed-frequency second stage. Lightweight circuit provides regulated power and is invulnerable to output short circuits. Does not require large capacitor across ac bus, like parallel resonant designs. Particularly suitable for use in ac-power-distribution system of aircraft.

  2. Randomised study on single stage laparo-endoscopic rendezvous (intra-operative ERCP) procedure versus two stage approach (Pre-operative ERCP followed by laparoscopic cholecystectomy) for the management of cholelithiasis with choledocholithiasis.

    Science.gov (United States)

    Sahoo, Manash Ranjan; Kumar, Anil T; Patnaik, Aashish

    2014-07-01

    The 'Rendezvous' technique consists of laparoscopic cholecystectomy (LC) standards with intra-operative cholangiography followed by endoscopic sphincterotomy. The sphincterotome is driven across the papilla through a guidewire inserted by the transcystic route. In this study, we intended to compare the two methods in a prospective randomised trial. From 2005 to 2012, we enrolled 83 patients with a diagnosis of cholecysto-choledocolithiasis. They were randomised into two groups. In 'group-A', 41 patients were treated with two-stage management, first by pre-operative endoscopic retrograde cholangiopancreatography (ERCP) and common bile duct (CBD) clearance and second by LC. In 'group-B', 42 patients were treated with LC and intra-operative cholangiography; and when the diagnosis of choledocholithiasis was confirmed, patients underwent one-stage management by the laparo-endoscopic rendezvous technique. In the arm-A and arm-B groups, complete CBD clearance was achieved in 29 and 38 patients, respectively. Failure of the treatment in arm-A was 29% and in arm-B was 9.5%. In arm-A, selective CBD cannulation was achieved in 33 cases (80.5%) and in arm-B in 39 cases (93%). In the arm-A group, post-ERCP hyperamylasemia was present in nine patients (22%) and severe pancreatitis in five patients (12%), versus none of the patients (0%) in the arm-B group, respectively. Mean post-operative hospital stay in the arm-A and arm-B groups was 10.9 and 6.8 days, respectively. The one-stage laparo-endoscopic rendezvous approach increases selective cannulation of the CBD, reduces post-ERCP pancreatitis, reduces days of hospital stay, increases patient compliance and prevents unnecessary intervention to the CBD.

  3. Randomised study on single stage laparo-endoscopic rendezvous (intra-operative ERCP procedure versus two stage approach (Pre-operative ERCP followed by laparoscopic cholecystectomy for the management of cholelithiasis with choledocholithiasis

    Directory of Open Access Journals (Sweden)

    Manash Ranjan Sahoo

    2014-01-01

    Full Text Available Introduction: The 'Rendezvous' technique consists of laparoscopic cholecystectomy (LC) standards with intra-operative cholangiography followed by endoscopic sphincterotomy. The sphincterotome is driven across the papilla through a guidewire inserted by the transcystic route. In this study, we intended to compare the two methods in a prospective randomised trial. Materials and Methods: From 2005 to 2012, we enrolled 83 patients with a diagnosis of cholecysto-choledocolithiasis. They were randomised into two groups. In 'group-A', 41 patients were treated with two-stage management, first by pre-operative endoscopic retrograde cholangiopancreatography (ERCP) and common bile duct (CBD) clearance and second by LC. In 'group-B', 42 patients were treated with LC and intra-operative cholangiography; and when the diagnosis of choledocholithiasis was confirmed, patients underwent one-stage management by the laparo-endoscopic rendezvous technique. Results: In the arm-A and arm-B groups, complete CBD clearance was achieved in 29 and 38 patients, respectively. Failure of the treatment in arm-A was 29% and in arm-B was 9.5%. In arm-A, selective CBD cannulation was achieved in 33 cases (80.5%) and in arm-B in 39 cases (93%). In the arm-A group, post-ERCP hyperamylasemia was present in nine patients (22%) and severe pancreatitis in five patients (12%), versus none of the patients (0%) in the arm-B group, respectively. Mean post-operative hospital stay in the arm-A and arm-B groups was 10.9 and 6.8 days, respectively. Conclusion: The one-stage laparo-endoscopic rendezvous approach increases selective cannulation of the CBD, reduces post-ERCP pancreatitis, reduces days of hospital stay, increases patient compliance and prevents unnecessary intervention to the CBD.

  4. Condensate from a two-stage gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Henriksen, Ulrik Birk; Hindsgaul, Claus

    2000-01-01

    Condensate, produced when gas from a downdraft biomass gasifier is cooled, contains organic compounds that inhibit nitrifiers. Treatment with activated carbon removes most of the organics and makes the condensate far less inhibitory. The condensate from an optimised two-stage gasifier is so clean that the organic compounds and the inhibition effect are very low even before treatment with activated carbon. The moderate inhibition effect relates to a high content of ammonia in the condensate. The nitrifiers become tolerant to the condensate after a few weeks of exposure. The level of organic compounds and the level of inhibition are so low that condensate from the optimised two-stage gasifier can be led to the public sewer.

  5. Two-stage nonrecursive filter/decimator

    International Nuclear Information System (INIS)

    Yoder, J.R.; Richard, B.D.

    1980-08-01

    A two-stage digital filter/decimator has been designed and implemented to reduce the sampling rate associated with the long-term computer storage of certain digital waveforms. This report describes the design selection and implementation process and serves as documentation for the system actually installed. A filter design with finite-impulse response (nonrecursive) was chosen for implementation via direct convolution. A newly-developed system-test statistic validates the system under different computer-operating environments
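
    The original hardware and coefficients are not reproduced here; the sketch below only illustrates the general two-stage nonrecursive decimation idea, in which an assumed overall rate reduction of 16 is split into two cascaded factor-4 stages, each with its own short linear-phase FIR anti-aliasing filter applied by direct convolution.

        # Two-stage FIR decimation sketch: 16 kHz -> 4 kHz -> 1 kHz (illustrative factors).
        import numpy as np
        from scipy.signal import firwin, upfirdn

        def fir_decimate(x, factor, numtaps=63):
            h = firwin(numtaps, 0.8 / factor)        # cutoff safely below the new Nyquist
            return upfirdn(h, x, up=1, down=factor)  # filter, then keep every factor-th sample

        fs = 16000.0
        t = np.arange(0, 1.0, 1.0 / fs)
        x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t)  # keep 50 Hz, reject 3 kHz

        stage1 = fir_decimate(x, 4)        # 16 kHz -> 4 kHz
        stage2 = fir_decimate(stage1, 4)   # 4 kHz  -> 1 kHz
        print(len(x), "->", len(stage1), "->", len(stage2))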

  6. Two stage-type railgun accelerator

    International Nuclear Information System (INIS)

    Ogino, Mutsuo; Azuma, Kingo.

    1995-01-01

    The present invention provides a two stage-type railgun accelerator capable of spiking a flying body (ice pellet) formed by solidifying a gaseous hydrogen isotope as a fuel to a thermonuclear reactor at a higher speed into a central portion of plasmas. Namely, the two stage-type railgun accelerator accelerates the flying body spiked from a initial stage accelerator to a portion between rails by Lorentz force generated when electric current is supplied to the two rails by way of a plasma armature. In this case, two sets of solenoids are disposed for compressing the plasma armature in the longitudinal direction of the rails. The first and the second sets of solenoid coils are previously supplied with electric current. After passing of the flying body, the armature formed into plasmas by a gas laser disposed at the back of the flying body is compressed in the longitudinal direction of the rails by a magnetic force of the first and the second sets of solenoid coils to increase the plasma density. A current density is also increased simultaneously. Then, the first solenoid coil current is turned OFF to accelerate the flying body in two stages by the compressed plasma armature. (I.S.)

  7. Two-stage free electron laser research

    Science.gov (United States)

    Segall, S. B.

    1984-10-01

    KMS Fusion, Inc. began studying the feasibility of two-stage free electron lasers for the Office of Naval Research in June, 1980. At that time, the two-stage FEL was only a concept that had been proposed by Luis Elias. The range of parameters over which such a laser could be successfully operated, attainable power output, and constraints on laser operation were not known. The primary reason for supporting this research at that time was that it had the potential for producing short-wavelength radiation using a relatively low voltage electron beam. One advantage of a low-voltage two-stage FEL would be that shielding requirements would be greatly reduced compared with single-stage short-wavelength FEL's. If the electron energy were kept below about 10 MeV, X-rays, generated by electrons striking the beam line wall, would not excite neutron resonance in atomic nuclei. These resonances cause the emission of neutrons with subsequent induced radioactivity. Therefore, above about 10 MeV, a meter or more of concrete shielding is required for the system, whereas below 10 MeV, a few millimeters of lead would be adequate.

  8. Two-Stage Electricity Demand Modeling Using Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Krzysztof Gajowniczek

    2017-10-01

    Full Text Available Forecasting of electricity demand has become one of the most important areas of research in the electric power industry, as it is a critical component of cost-efficient power system management and planning. In this context, accurate and robust load forecasting is supposed to play a key role in reducing generation costs, and deals with the reliability of the power system. However, due to demand peaks in the power system, forecasts are inaccurate and prone to high numbers of errors. In this paper, our contributions comprise a proposed data-mining scheme for demand modeling through peak detection, as well as the use of this information to feed the forecasting system. For this purpose, we have taken a different approach from that of time series forecasting, representing it as a two-stage pattern recognition problem. We have developed a peak classification model followed by a forecasting model to estimate an aggregated demand volume. We have utilized a set of machine learning algorithms to benefit from both accurate detection of the peaks and precise forecasts, as applied to the Polish power system. The key finding is that the algorithms can detect 96.3% of electricity peaks (load value equal to or above the 99th percentile of the load distribution) and deliver accurate forecasts, with mean absolute percentage error (MAPE) of 3.10% and resistant mean absolute percentage error (r-MAPE) of 2.70% for the 24 h forecasting horizon.
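
    A compact sketch of the two-stage idea, with a peak classifier feeding a load regressor, is shown below; the synthetic load series and the scikit-learn models are stand-ins for the Polish data and the algorithms benchmarked in the paper, and the 99th-percentile peak definition follows the description above.

        # Two-stage demand modelling sketch: peak classification, then forecasting.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

        rng = np.random.default_rng(0)
        hours = np.arange(24 * 400)
        load = (100 + 30 * np.sin(2 * np.pi * hours / 24)        # daily cycle (synthetic)
                + 10 * np.sin(2 * np.pi * hours / (24 * 7))      # weekly cycle
                + 5 * rng.standard_normal(hours.size))
        load[rng.random(hours.size) < 0.01] += 60                # occasional demand spikes

        lags = 24
        X = np.column_stack([load[i:len(load) - lags + i] for i in range(lags)])
        y = load[lags:]
        is_peak = (y >= np.quantile(load, 0.99)).astype(int)     # 99th-percentile peak flag

        split = len(y) - 24 * 30                                 # hold out the last 30 days
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:split], is_peak[:split])
        peak_flag = clf.predict(X)                               # stage-1 output feeds stage 2

        reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(
            np.column_stack([X[:split], peak_flag[:split]]), y[:split])
        pred = reg.predict(np.column_stack([X[split:], peak_flag[split:]]))
        mape = 100 * np.mean(np.abs(pred - y[split:]) / y[split:])
        print(f"peaks flagged in test window: {peak_flag[split:].sum()}, MAPE: {mape:.2f}%")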

  9. Artificial immune system and sheep flock algorithms for two-stage fixed-charge transportation problem

    DEFF Research Database (Denmark)

    Kannan, Devika; Govindan, Kannan; Soleimani, Hamed

    2014-01-01

    In this paper, we cope with a two-stage distribution planning problem of supply chain regarding fixed charges. The focus of the paper is on developing efficient solution methodologies of the selected NP-hard problem. Based on computational limitations, common exact and approximation solution...... approaches are unable to solve real-world instances of such NP-hard problems in a reasonable time. These approaches involve cumbersome computational steps in real-size cases. In order to solve the mixed integer linear programming model, we develop an artificial immune system and a sheep flock algorithm...

  10. Hypospadias repair: Byar's two stage operation revisited.

    Science.gov (United States)

    Arshad, A R

    2005-06-01

    Hypospadias is a congenital deformity characterised by an abnormally located urethral opening, that could occur anywhere proximal to its normal location on the ventral surface of glans penis to the perineum. Many operations had been described for the management of this deformity. One hundred and fifteen patients with hypospadias were treated at the Department of Plastic Surgery, Hospital Kuala Lumpur, Malaysia between September 1987 and December 2002, of which 100 had Byar's procedure performed on them. The age of the patients ranged from neonates to 26 years old. Sixty-seven patients had penoscrotal (58%), 20 had proximal penile (18%), 13 had distal penile (11%) and 15 had subcoronal hypospadias (13%). Operations performed were Byar's two-staged (100), Bracka's two-staged (11), flip-flap (2) and MAGPI operation (2). The most common complication encountered following hypospadias surgery was urethral fistula at a rate of 18%. There is a higher incidence of proximal hypospadias in the Malaysian community. Byar's procedure is a very versatile technique and can be used for all types of hypospadias. Fistula rate is 18% in this series.

  11. A first implementation of an efficient combustion strategy in a multi cylinder two-stage turbo CI-engine producing low emissions while consuming a gasoline/EHN blend

    NARCIS (Netherlands)

    Doornbos, G.; Somhorst, J.; Boot, M.D.

    2013-01-01

    A Gasoline Compression Ignition combustion strategy was developed and showed its capabilities in a heavy-duty single-cylinder test cell, resulting in indicated efficiencies of up to 50% and low engine-out emissions complying with EU VI and US 10 legislation, while the soot remained at a controllable 1.5

  12. Two-Stage Fan I: Aerodynamic and Mechanical Design

    Science.gov (United States)

    Messenger, H. E.; Kennedy, E. E.

    1972-01-01

    A two-stage, highly-loaded fan was designed to deliver an overall pressure ratio of 2.8 with an adiabatic efficiency of 83.9 percent. At the first rotor inlet, design flow per unit annulus area is 42 lbm/sec/sq ft (205 kg/sec/sq m), hub/tip ratio is 0.4 with a tip diameter of 31 inches (0.787 m), and design tip speed is 1450 ft/sec (441.96 m/sec). Other features include use of multiple-circular-arc airfoils, resettable stators, and split casings over the rotor tip sections for casing treatment tests.

  13. TWO-STAGE HEAT PUMPS FOR ENERGY SAVING TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    A. E. Denysova

    2017-09-01

    Full Text Available The problem of energy saving has become one of the most important in power engineering. It is driven by the exhaustion of world reserves of hydrocarbon fuels such as gas, oil and coal, which are the sources of traditional heat supply. Conventional sources have essential shortcomings, namely low power, ecological and economic efficiencies, which can be overcome by alternative methods of power supply such as the one considered here: using the low-temperature natural heat of ground waters by means of heat pump installations. The heat supply system considered makes effective use of a two-stage heat pump installation operating with ground waters as the heat source during the period of lowest ambient temperatures. A calculation method for heat pump installations based on groundwater energy is proposed. The electric energy consumption of the compressor drives and the transformation coefficient µ of the heat supply system are calculated for a low-potential heat source of ground waters, allowing the high efficiency of two-stage heat pump installations to be estimated.

  14. High Performance Gasification with the Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Gøbel, Benny; Hindsgaul, Claus; Henriksen, Ulrik Birk

    2002-01-01

    ... air preheating and pyrolysis; hereby very high energy efficiencies can be achieved. Encouraging results have been obtained at a 100 kWth laboratory facility. The tar content in the raw gas is measured to be below 25 mg/Nm3, and around 5 mg/Nm3 after gas cleaning with a traditional baghouse filter. Furthermore, a cold gas efficiency exceeding 90% is obtained. In the original design of the two-stage gasification process, the pyrolysis unit consists of a screw conveyor with external heating, and the char unit is a fixed-bed gasifier. This design is well proven during more than 1000 hours of testing with various fuels, and is a suitable design for medium-size gasifiers.

  15. Two-stage Security Controls Selection

    NARCIS (Netherlands)

    Yevseyeva, I.; Basto, Fernandes V.; Moorsel, van A.; Janicke, H.; Michael, Emmerich T. M.

    2016-01-01

    To protect a system from potential cyber security breaches and attacks, one needs to select efficient security controls, taking into account technical and institutional goals and constraints, such as available budget, enterprise activity, internal and external environment. Here we model the security

  16. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.

  17. 'Lean' approach gives greater efficiency.

    Science.gov (United States)

    Call, Roger

    2014-02-01

    Adapting the 'Lean' methodologies used for many years by many manufacturers on the production line - such as in the automotive industry - and deploying them in healthcare 'spaces' can, Roger Call, an architect at Herman Miller Healthcare in the US, argues, 'easily remedy many of the inefficiencies' found within a healthcare facility. In an article that first appeared in the September 2013 issue of The Australian Hospital Engineer, he explains how 'Lean' approaches such as the 'Toyota production system', and 'Six Sigma', can be harnessed to good effect in the healthcare sphere.

  18. Determinants of the Efficiency Level of Islamic Banking in Indonesia: Two-Stage Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    Zulfikar Bagus Pambuko

    2016-12-01

    Full Text Available Efficiency is an important indicator of a bank's ability to withstand the tight rivalry in the banking industry. The study aims to evaluate the efficiency and analyze the determinants of efficiency of Islamic banks in Indonesia over 2010-2013 with a two-stage Data Envelopment Analysis approach. The objects of the study are 11 Islamic banks (BUS). The first phase of testing, using the Data Envelopment Analysis (DEA) method, showed that Islamic banks are inefficient in managing their resources and that small Islamic banks are more efficient than larger ones. The second phase of testing, using a Tobit model, showed that the Capital Adequacy Ratio (CAR), Return on Assets (ROA), Non-Performing Financing (NPF), Financing to Deposit Ratio (FDR), and Net Interest Margin (NIM) have a significant positive effect on the efficiency of Islamic banks, while Good Corporate Governance (GCG) has a significant negative effect. Moreover, macroeconomic variables such as GDP growth and inflation have no significant effect on the efficiency of Islamic banks. This suggests that the optimum level of Islamic banks' efficiency is related only to bank-specific factors, while the volatility of macroeconomic conditions contributes nothing

  19. On the prior probabilities for two-stage Bayesian estimates

    International Nuclear Information System (INIS)

    Kohut, P.

    1992-01-01

    The method of Bayesian inference is reexamined for its applicability and for the required underlying assumptions in obtaining and using prior probability estimates. Two different approaches are suggested to determine the first-stage priors in the two-stage Bayesian analysis which avoid certain assumptions required for other techniques. In the first scheme, the prior is obtained through a true frequency-based distribution generated at selected intervals utilizing actual sampling of the failure rate distributions. The population variability distribution is generated as the weighted average of the frequency distributions. The second method is based on a non-parametric Bayesian approach using the Maximum Entropy Principle. Specific features such as integral properties or selected parameters of prior distributions may be obtained with minimal assumptions. It is indicated how various quantiles may also be generated with a least-squares technique
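
    Independently of the two prior-construction schemes proposed here, the second-stage update itself can be illustrated on a discretized failure-rate grid, as in the sketch below. The lognormal population-variability prior, the error factor and the plant-specific evidence are all assumed values, standing in for whatever first-stage prior the analysis actually produces.

        # Generic two-stage Bayesian sketch: discretized first-stage prior, Poisson second-stage update.
        import numpy as np

        lam = np.logspace(-6, -2, 400)                    # failure-rate grid (per hour)
        dlam = np.gradient(lam)                           # cell widths for the discretization
        median, ef = 1e-4, 3.0                            # assumed population median / error factor
        sigma = np.log(ef) / 1.645
        pdf = np.exp(-(np.log(lam) - np.log(median))**2 / (2 * sigma**2)) / lam   # lognormal shape
        prior = pdf * dlam
        prior /= prior.sum()

        events, hours = 2, 5.0e4                          # plant-specific operating evidence (assumed)
        loglike = events * np.log(lam * hours) - lam * hours   # Poisson log-likelihood, constants dropped
        post = prior * np.exp(loglike - loglike.max())
        post /= post.sum()

        print(f"posterior mean failure rate: {np.sum(lam * post):.2e} per hour")
        print(f"posterior 95% bound        : {lam[np.searchsorted(np.cumsum(post), 0.95)]:.2e} per hour")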

  20. Two-Stage Part-Based Pedestrian Detection

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Prioletti, Antonio; Trivedi, Mohan M.

    2012-01-01

    Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusions, and high-speed vehicle motion. A lot of research has been focused on this problem in the last 10 years, and detectors based on classifiers have gained a special place among the different approaches presented. This work presents a state-of-the-art pedestrian detection system based on a two-stage classifier. Candidates are extracted with a Haar cascade classifier trained with the DaimlerDB dataset and then validated through part-based HOG... of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system lies in the combination of the part-based HOG approach, tracking based on a specific optimized feature, and porting to a real prototype.

  1. Day-Ahead Wind Power Forecasting Using a Two-Stage Hybrid Modeling Approach Based on SCADA and Meteorological Information, and Evaluating the Impact of Input-Data Dependency on Forecasting Accuracy

    OpenAIRE

    Dehua Zheng; Min Shi; Yifeng Wang; Abinet Tesfaye Eseye; Jianhua Zhang

    2017-01-01

    The power generated by wind generators is usually associated with uncertainties, due to the intermittency of wind speed and other weather variables. This creates a big challenge for transmission system operators (TSOs) and distribution system operators (DSOs) in terms of connecting, controlling and managing power networks with high-penetration wind energy. Hence, in these power networks, accurate wind power forecasts are essential for their reliable and efficient operation. They support TSOs ...

  2. A robust decision-making approach for p-hub median location problems based on two-stage stochastic programming and mean-variance theory : a real case study

    NARCIS (Netherlands)

    Ahmadi, T.; Karimi, H.; Davoudpour, H.

    2015-01-01

    The stochastic location-allocation p-hub median problems are related to long-term decisions made in risky situations. Due to the importance of this type of problems in real-world applications, the authors were motivated to propose an approach to obtain more reliable policies in stochastic

  3. Short term load forecasting: two stage modelling

    Directory of Open Access Journals (Sweden)

    SOARES, L. J.

    2009-06-01

    Full Text Available This paper studies the hourly electricity load demand in the area covered by a utility situated in Seattle, USA, called Puget Sound Power and Light Company. Our proposal is tested with the well-known dataset from this company. We propose a stochastic model which employs ANN (Artificial Neural Networks) to model short-run dynamics and the dependence among adjacent hours. The proposed model treats each hour's load separately as an individual single series. This approach avoids modeling the intricate intra-day pattern (load profile) displayed by the load, which varies throughout the days of the week and the seasons. The forecasting performance of the model is evaluated in a similar manner to the TLSAR (Two-Level Seasonal Autoregressive) model proposed by Soares (2003), using the years 1995 and 1996 as the holdout sample. Moreover, we conclude that nonlinearity is present in some series of these data. The model results are analyzed. The experiment shows that our tool can be used to produce load forecasts in tropical climate places.
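
    The hour-by-hour treatment can be illustrated as below, with 24 independent regressions on the previous day's loads plus a weekend flag; the synthetic data replace the Puget Sound series, and plain linear regression stands in for the paper's ANN.

        # "One model per hour" sketch: each hour of the day gets its own regression.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        days = 365
        hour = np.arange(24)
        base = 100 + 40 * np.sin(2 * np.pi * (hour - 6) / 24)          # assumed daily profile
        load = np.empty((days, 24))
        for d in range(days):
            weekend = d % 7 in (5, 6)
            load[d] = base * (0.85 if weekend else 1.0) + 5 * rng.standard_normal(24)

        models = []
        for h in range(24):
            X = np.column_stack([load[:-1, h],                          # same hour, previous day
                                 load[:-1].mean(axis=1),                # previous day's mean load
                                 [(d + 1) % 7 in (5, 6) for d in range(days - 1)]])  # weekend flag
            y = load[1:, h]
            models.append(LinearRegression().fit(X[:-30], y[:-30]))     # last 30 days held out

        d = days - 1                                                    # forecast the final held-out day
        x_last = lambda h: [[load[d - 1, h], load[d - 1].mean(), float(d % 7 in (5, 6))]]
        forecast = np.array([models[h].predict(x_last(h))[0] for h in range(24)])
        mape = 100 * np.mean(np.abs(forecast - load[d]) / load[d])
        print(f"24-hour forecast MAPE on the last day: {mape:.2f}%")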

  4. A Two-Stage Fuzzy Logic Control Method of Traffic Signal Based on Traffic Urgency Degree

    OpenAIRE

    Yan Ge

    2014-01-01

    City intersection traffic signal control is an important method to improve the efficiency of road network and alleviate traffic congestion. This paper researches traffic signal fuzzy control method on a single intersection. A two-stage traffic signal control method based on traffic urgency degree is proposed according to two-stage fuzzy inference on single intersection. At the first stage, calculate traffic urgency degree for all red phases using traffic urgency evaluation module and select t...

  5. Quick pace of property acquisitions requires two-stage evaluations

    International Nuclear Information System (INIS)

    Hollo, R.; Lockwood, S.

    1994-01-01

    The traditional method of evaluating oil and gas reserves may be too cumbersome for the quick pace of oil and gas property acquisition. An acquisition evaluator must decide quickly if a property meets basic purchase criteria. The current business climate requires a two-stage approach. First, the evaluator makes a quick assessment of the property and submits a bid. If the bid is accepted then the evaluator goes on with a detailed analysis, which represents the second stage. Acquisition of producing properties has become an important activity for many independent oil and gas producers, who must be able to evaluate reserves quickly enough to make effective business decisions yet accurately enough to avoid costly mistakes. Independent thus must be familiar with how transactions usually progress as well as with the basic methods of property evaluation. The paper discusses acquisition activity, the initial offer, the final offer, property evaluation, and fair market value

  6. Fueling of magnetically confined plasmas by single- and two-stage repeating pneumatic pellet injectors

    International Nuclear Information System (INIS)

    Gouge, M.J.; Combs, S.K.; Foust, C.R.; Milora, S.L.

    1990-01-01

    Advanced plasma fueling systems for magnetic fusion confinement experiments are under development at Oak Ridge National Laboratory (ORNL). The general approach is that of producing and accelerating frozen hydrogenic pellets to speeds in the kilometer-per-second range using single shot and repetitive pneumatic (light-gas gun) pellet injectors. The millimeter-to-centimeter size pellets enter the plasma and continuously ablate because of the plasma electron heat flux, depositing fuel atoms along the pellet trajectory. This fueling method allows direct fueling in the interior of the hot plasma and is more efficient than the alternative method of injecting room temperature fuel gas at the wall of the plasma vacuum chamber. Single-stage pneumatic injectors based on the light-gas gun concept have provided hydrogenic fuel pellets in the speed range of 1--2 km/s in single-shot injector designs. Repetition rates up to 5 Hz have been demonstrated in repetitive injector designs. Future fusion reactor-scale devices may need higher pellet velocities because of the larger plasma size and higher plasma temperatures. Repetitive two-stage pneumatic injectors are under development at ORNL to provide long-pulse plasma fueling in the 3--5 km/s speed range. Recently, a repeating, two-stage light-gas gun achieved repetitive operation at 1 Hz with speeds in the range of 2--3 km/s

  7. An unit cost adjusting heuristic algorithm for the integrated planning and scheduling of a two-stage supply chain

    Directory of Open Access Journals (Sweden)

    Jianhua Wang

    2014-10-01

    Full Text Available Purpose: In the current market, the stable one-supplier-one-customer relationship is gradually being replaced by a dynamic multi-supplier-multi-customer relationship, and efficient scheduling techniques are important tools for establishing such dynamic supply chain relationships. This paper studies the optimization of the integrated planning and scheduling problem of a two-stage supply chain with multiple manufacturers and multiple retailers, whose manufacturers have different production capacities, holding and production cost rates, and transportation costs to retailers, in order to obtain a minimum supply chain operating cost. Design/methodology/approach: Treating this as a complex task allocation and scheduling problem, the paper sets up an INLP model and designs a Unit Cost Adjusting (UCA) heuristic algorithm that adjusts the suppliers' supplied quantities step by step according to their unit costs to solve the model. Findings: Based on a comparative analysis between the UCA heuristic and the Lingo solver over many numerical experiments, the results show that the INLP model and the UCA algorithm can obtain a near-optimal solution of the two-stage supply chain planning and scheduling problem within a very short CPU time. Research limitations/implications: The proposed UCA heuristic can easily help managers optimize two-stage supply chain scheduling problems that do not include the delivery time and batching of orders. Since two-stage supply chains are the most common form of actual commercial relationships, modifying and further studying the UCA heuristic should make it possible to optimize the integrated planning and scheduling problems of supply chains with more realistic constraints. Originality/value: This research proposes an innovative UCA heuristic for optimizing the integrated planning and scheduling problem of two-stage supply chains with constraints on the suppliers' production capacity and the orders' delivery time, and has a great
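
    The sketch below conveys only the flavour of a unit-cost-driven heuristic: demand is filled greedily from the cheapest combined production-plus-transport unit cost subject to capacity. It is not the paper's UCA algorithm, which adjusts quantities iteratively against an INLP model, and all costs, capacities and demands are invented.

        # Greedy unit-cost allocation sketch for a two-stage (manufacturers -> retailers) chain.
        import numpy as np

        produce = np.array([4.0, 5.0, 3.5])                  # unit production cost per manufacturer
        capacity = np.array([120.0, 100.0, 80.0])
        transport = np.array([[2.0, 4.0, 3.0, 6.0],          # unit transport cost [manufacturer, retailer]
                              [3.0, 2.0, 5.0, 2.5],
                              [4.0, 3.0, 2.0, 3.0]])
        demand = np.array([60.0, 70.0, 50.0, 40.0])

        unit_cost = produce[:, None] + transport             # combined unit cost matrix
        alloc = np.zeros_like(unit_cost)
        remaining_cap, remaining_dem = capacity.copy(), demand.copy()

        # fill demand from the cheapest (manufacturer, retailer) pair first
        for m, r in sorted(np.ndindex(*unit_cost.shape), key=lambda ij: unit_cost[ij]):
            qty = min(remaining_cap[m], remaining_dem[r])
            if qty > 0:
                alloc[m, r] = qty
                remaining_cap[m] -= qty
                remaining_dem[r] -= qty

        print("allocation (rows = manufacturers):\n", alloc)
        print("total cost:", float(np.sum(alloc * unit_cost)))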

  8. Runway Operations Planning: A Two-Stage Solution Methodology

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. Thus, Runway Operations Planning (ROP) is a critical component of airport operations planning in general and surface operations planning in particular. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, may be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. Generating optimal runway operations plans was previously approached with a 'one-stage' optimization routine that considered all the desired objectives and constraints, and the characteristics of each aircraft (weight class, destination, Air Traffic Control (ATC) constraints) at the same time. Since, however, at any given point in time, there is less uncertainty in the predicted demand for departure resources in terms of weight class than in terms of specific aircraft, the ROP problem can be parsed into two stages. In the context of the Departure Planner (DP) research project, this paper introduces Runway Operations Planning (ROP) as part of the wider Surface Operations Optimization (SOO) and describes a proposed 'two-stage' heuristic algorithm for solving the Runway Operations Planning (ROP) problem. Focus is given specifically to including runway crossings in the planning process of runway operations. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the

  9. Two-stage anaerobic digestion of cheese whey

    Energy Technology Data Exchange (ETDEWEB)

    Lo, K V; Liao, P H

    1986-01-01

    A two-stage digestion of cheese whey was studied using two anaerobic rotating biological contact reactors. The second-stage reactor receiving partially treated effluent from the first-stage reactor could be operated at a hydraulic retention time of one day. The results indicated that two-stage digestion is a feasible alternative for treating whey. 6 references.

  10. Regional level approach for increasing energy efficiency

    International Nuclear Information System (INIS)

    Viholainen, Juha; Luoranen, Mika; Väisänen, Sanni; Niskanen, Antti; Horttanainen, Mika; Soukka, Risto

    2016-01-01

    Highlights: • Comprehensive snapshot of regional energy system for decision makers. • Connecting regional sustainability targets and energy planning. • Involving local players in energy planning. - Abstract: Actions for increasing the renewable share in the energy supply and improving both production and end-use energy efficiency are often built into the regional level sustainability targets. Because of this, many local stakeholders such as local governments, energy producers and distributors, industry, and public and private sector operators require information on the current state and development aspects of the regional energy efficiency. The drawback is that an overall view on the focal energy system operators, their energy interests, and future energy service needs in the region is often not available for the stakeholders. To support the local energy planning and management of the regional energy services, an approach for increasing the regional energy efficiency is being introduced. The presented approach can be seen as a solid framework for gathering the required data for energy efficiency analysis and also evaluating the energy system development, planned improvement actions, and the required energy services at the region. This study defines the theoretical structure of the energy efficiency approach and the required steps for revealing such energy system improvement actions that support the regional energy plan. To demonstrate the use of the approach, a case study of a Finnish small-town of Lohja is presented. In the case example, possible actions linked to the regional energy targets were evaluated with energy efficiency analysis. The results of the case example are system specific, but the conducted study can be seen as a justified example of generating easily attainable and transparent information on the impacts of different improvement actions on the regional energy system.

  11. Two-stage image denoising considering interscale and intrascale dependencies

    Science.gov (United States)

    Shahdoosti, Hamid Reza

    2017-11-01

    A solution to the problem of reducing the noise of grayscale images is presented. To consider the intrascale and interscale dependencies, this study makes use of a model. It is shown that the dependency between a wavelet coefficient and its predecessors can be modeled by the first-order Markov chain, which means that the parent conveys all of the information necessary for efficient estimation. Using this fact, the proposed method employs the Kalman filter in the wavelet domain for image denoising. The proposed method has two stages. The first stage employs a simple denoising algorithm to provide the noise-free image, by which the parameters of the model such as state transition matrix, variance of the process noise, the observation model, and the covariance of the observation noise are estimated. In the second stage, the Kalman filter is applied to the wavelet coefficients of the noisy image to estimate the noise-free coefficients. In fact, the Kalman filter is used to estimate the coefficients of high-frequency subbands from the coefficients of coarser scales and noisy observations of neighboring coefficients. In this way, both the interscale and intrascale dependencies are taken into account. Results are presented and discussed on a set of standard 8-bit grayscale images. The experimental results demonstrate that the proposed method achieves performances competitive with the state-of-the-art denoising methods in terms of both peak-signal-to-noise ratio and subjective visual quality.
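    The second stage described above amounts to a Kalman recursion along chains of wavelet coefficients across scales. A hedged sketch of that idea for one parent-to-child chain follows; the parameter fit and the scalar state model are illustrative stand-ins, not the paper's exact estimator:

        # Scalar Kalman recursion across scales for one chain of wavelet coefficients.
        import numpy as np

        def fit_model(pilot_chain):
            """Estimate the state-transition gain and process-noise variance from a
            pre-denoised (stage-one) coefficient chain."""
            x0, x1 = pilot_chain[:-1], pilot_chain[1:]
            a = np.dot(x0, x1) / np.dot(x0, x0)          # least-squares AR(1) gain
            q = np.var(x1 - a * x0)                      # residual (process) variance
            return a, q

        def kalman_denoise(noisy_chain, a, q, r):
            """Stage two: estimate noise-free coefficients from noisy observations."""
            x_hat, p, out = 0.0, 1.0, []
            for z in noisy_chain:
                x_pred, p_pred = a * x_hat, a * a * p + q    # predict from parent scale
                k = p_pred / (p_pred + r)                    # Kalman gain
                x_hat = x_pred + k * (z - x_pred)            # correct with observation
                p = (1.0 - k) * p_pred
                out.append(x_hat)
            return np.array(out)

        pilot = np.array([2.0, 1.1, 0.6, 0.3])               # stage-one rough estimate
        a, q = fit_model(pilot)
        print(kalman_denoise(pilot + 0.2 * np.random.randn(4), a, q, r=0.04))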

  12. Evidence of two-stage melting of Wigner solids

    Science.gov (United States)

    Knighton, Talbot; Wu, Zhe; Huang, Jian; Serafin, Alessandro; Xia, J. S.; Pfeiffer, L. N.; West, K. W.

    2018-02-01

    Ultralow carrier concentrations of two-dimensional holes down to p =1 ×109cm-2 are realized. Remarkable insulating states are found below a critical density of pc=4 ×109cm-2 or rs≈40 . Sensitive dc V-I measurement as a function of temperature and electric field reveals a two-stage phase transition supporting the melting of a Wigner solid as a two-stage first-order transition.

  13. Holistic Approach to Data Center Energy Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Steven W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-18

    This presentation discusses NREL's Energy System Integrations Facility and NREL's holistic design approach to sustainable data centers that led to the world's most energy-efficient data center. It describes Peregrine, a warm water liquid cooled supercomputer, waste heat reuse in the data center, demonstrated PUE and ERE, and lessons learned during four years of operation.
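    For reference, the two data-center metrics mentioned above are conventionally defined as follows (standard industry definitions, not taken from the presentation itself):

        \mathrm{PUE} = \frac{\text{total facility energy}}{\text{IT equipment energy}},
        \qquad
        \mathrm{ERE} = \frac{\text{total facility energy} - \text{reused energy}}{\text{IT equipment energy}}

    Reusing waste heat, as the Peregrine installation does, lowers ERE below PUE; the ideal lower bound is 1.0 for PUE but 0 for ERE.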

  14. Two-stage simplified swarm optimization for the redundancy allocation problem in a multi-state bridge system

    International Nuclear Information System (INIS)

    Lai, Chyh-Ming; Yeh, Wei-Chang

    2016-01-01

    The redundancy allocation problem involves configuring an optimal system structure with high reliability and low cost, either by replacing elements with more reliable ones and/or by configuring them redundantly. The multi-state bridge system is a special redundancy allocation problem and is commonly used in various engineering systems for load balancing and control. Traditional methods for the redundancy allocation problem cannot solve multi-state bridge systems efficiently because it is impossible to transfer and reduce a multi-state bridge system to series and parallel combinations. Hence, a swarm-based approach called two-stage simplified swarm optimization is proposed in this work to effectively and efficiently solve the redundancy allocation problem in a multi-state bridge system. For validating the proposed method, two experiments are implemented. The computational results indicate the advantages of the proposed method in terms of solution quality and computational efficiency. - Highlights: • Propose two-stage SSO (SSO_T_S) to deal with RAP in multi-state bridge system. • Dynamic upper bound enhances the efficiency of searching for near-optimal solutions. • Vector-update stages reduce the problem dimensions. • Statistical results indicate SSO_T_S is robust both in solution quality and runtime.

  15. Frequency analysis of a two-stage planetary gearbox using two different methodologies

    Science.gov (United States)

    Feki, Nabih; Karray, Maha; Khabou, Mohamed Tawfik; Chaari, Fakher; Haddar, Mohamed

    2017-12-01

    This paper is focused on the characterization of the frequency content of vibration signals issued from a two-stage planetary gearbox. To achieve this goal, two different methodologies are adopted: the lumped-parameter modeling approach and the phenomenological modeling approach. The two methodologies aim to describe the complex vibrations generated by a two-stage planetary gearbox. The phenomenological model describes directly the vibrations as measured by a sensor fixed outside the fixed ring gear with respect to an inertial reference frame, while results from a lumped-parameter model are referenced with respect to a rotating frame and then transferred into an inertial reference frame. Two different case studies of the two-stage planetary gear are adopted to describe the vibration and the corresponding spectra using both models. Each case presents a specific geometry and a specific spectral structure.

  16. Materials Approach to Fuel Efficient Tires

    Energy Technology Data Exchange (ETDEWEB)

    Votruba-Drzal, Peter [PPG Industries, Monroeville, PA (United States); Kornish, Brian [PPG Industries, Monroeville, PA (United States)

    2015-06-30

    The objective of this project was to design, develop, and demonstrate fuel efficient and safety regulation compliant tire filler and barrier coating technologies that will improve overall fuel efficiency by at least 2%. The program developed and validated two complementary approaches to improving fuel efficiency through tire improvements. The first technology was a modified silica-based product that is 15% lower in cost and/or enables a 10% improvement in tread wear while maintaining the already demonstrated minimum of 2% improvement in average fuel efficiency. The second technology was a barrier coating with reduced oxygen transmission rate compared to the state-of-the-art halobutyl rubber inner liners that will provide extended placarded tire pressure retention at significantly reduced material usage. A lower-permeance, thinner inner liner coating which retains tire pressure was expected to deliver the additional 2% reduction in fleet fuel consumption. From the 2006 Transportation Research Board Report1, a 10 percent reduction in rolling resistance can reduce consumer fuel expenditures by 1 to 2 percent for typical vehicles. This savings is equivalent to 6 to 12 gallons per year. A 1 psi drop in inflation pressure increases the tire's rolling resistance by about 1.4 percent.

  17. Cell sorting using efficient light shaping approaches

    DEFF Research Database (Denmark)

    Banas, Andrew; Palima, Darwin; Villangca, Mark Jayson

    2016-01-01

    The approach is gentler, less invasive and more economical compared to conventional FACS systems. As cells are less responsive to plastic or glass beads commonly used in the optical manipulation literature, and since laser safety would be an issue in clinical use, we develop efficient approaches to utilizing lasers and light modulation devices. The Generalized Phase Contrast (GPC) method, which can be used for efficiently illuminating spatial light modulators or creating well-defined contiguous optical traps, is supplemented by diffractive techniques capable of integrating the available light and creating 2D or 3D beam distributions aimed at the positions of the detected cells. Furthermore, the beam shaping freedom provided by GPC can allow optimizations in the beam’s propagation and its interaction with the catapulted cells.

  18. Multiscale approaches to high efficiency photovoltaics

    Directory of Open Access Journals (Sweden)

    Connolly James Patrick

    2016-01-01

    Full Text Available While renewable energies are achieving parity around the globe, efforts to reach higher solar cell efficiencies become ever more difficult as they approach the limiting efficiency. The so-called third generation concepts attempt to break this limit through a combination of novel physical processes and new materials and concepts in organic and inorganic systems. Some examples of semi-empirical modelling in the field are reviewed, in particular for multispectral solar cells on silicon (French ANR project MultiSolSi). Their achievements are outlined, and the limits of these approaches shown. This introduces the main topic of this contribution, which is the use of multiscale experimental and theoretical techniques to go beyond the semi-empirical understanding of these systems. This approach has already led to great advances in modelling, which in turn have produced widely used modelling software. Yet, a survey of the topic reveals a fragmentation of efforts across disciplines, firstly between the organic and inorganic fields, but also between high efficiency concepts such as hot carrier cells and intermediate band concepts. We show how this fragmentation, which stands in the way of resolving practical research obstacles, may be lifted by inter-disciplinary cooperation across length scales, across experimental and theoretical fields, and finally across materials systems. We present a European COST Action “MultiscaleSolar” kicking off in early 2015, which brings together experimental and theoretical partners in order to develop multiscale research in organic and inorganic materials. The goal of this defragmentation and interdisciplinary collaboration is to develop understanding across length scales, which will enable the full potential of third generation concepts to be evaluated in practice, for societal and industrial applications.

  19. Optimisation of Refrigeration System with Two-Stage and Intercooler Using Fuzzy Logic and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Bayram Kılıç

    2017-04-01

    Full Text Available Two-stage compression prevents excessive compressor outlet pressure and temperature and provides a more efficient working condition in low-temperature refrigeration applications. A vapor compression refrigeration system with two stages and an intercooler is therefore a very good solution for low-temperature refrigeration applications. In this study, a refrigeration system with two stages and an intercooler was optimized using fuzzy logic and a genetic algorithm. The necessary thermodynamic characteristics for the optimization were estimated with fuzzy logic, and the liquid-phase enthalpy, vapour-phase enthalpy, liquid-phase entropy and vapour-phase entropy values were compared with actual values. As a result, the optimum working condition of the system was estimated by the genetic algorithm as -6.0449 °C for the evaporator temperature, 25.0115 °C for the condenser temperature and 5.9666 for the COP. Moreover, the irreversibility values of the refrigeration system are calculated.

  20. Two-stage electrolysis to enrich tritium in environmental water

    International Nuclear Information System (INIS)

    Shima, Nagayoshi; Muranaka, Takeshi

    2007-01-01

    We present a two-stage electrolysis procedure to enrich tritium in environmental waters. Tritium is first enriched rapidly in a commercially available electrolyser with a large 50 A current, and then in a newly designed electrolyser that avoids the memory effect, with a 6 A current. The tritium recovery factor obtained by such two-stage electrolysis was greater than that obtained using the commercially available device alone. Water samples collected in 2006 in lakes and along the Pacific coast of Aomori prefecture, Japan, were electrolyzed using the two-stage method. Tritium concentrations in these samples ranged from 0.2 to 0.9 Bq/L and were half or less of those in samples collected at the same sites in 1992. (author)

  1. Two-stage thermal/nonthermal waste treatment process

    International Nuclear Information System (INIS)

    Rosocha, L.A.; Anderson, G.K.; Coogan, J.J.; Kang, M.; Tennant, R.A.; Wantuck, P.J.

    1993-01-01

    An innovative waste treatment technology is being developed in Los Alamos to address the destruction of hazardous organic wastes. The technology described in this report uses two stages: a packed bed reactor (PBR) in the first stage to volatilize and/or combust liquid organics and a silent discharge plasma (SDP) reactor to remove entrained hazardous compounds in the off-gas to even lower levels. We have constructed pre-pilot-scale PBR-SDP apparatus and tested the two stages separately and in combined modes. These tests are described in the report

  2. Exergy analysis of vapor compression refrigeration cycle with two-stage and intercooler

    Energy Technology Data Exchange (ETDEWEB)

    Kilic, Bayram [Mehmet Akif Ersoy University, Bucak Emin Guelmez Vocational School, Bucak, Burdur (Turkey)

    2012-07-15

    In this study, exergy analyses of vapor compression refrigeration cycle with two-stage and intercooler using refrigerants R507, R407c, R404a were carried out. The necessary thermodynamic values for analyses were calculated by Solkane program. The coefficient of performance, exergetic efficiency and total irreversibility rate of the system in the different operating conditions for these refrigerants were investigated. The coefficient of performance, exergetic efficiency and total irreversibility rate for alternative refrigerants were compared. (orig.)
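    For context, the quantities compared in such an analysis are conventionally defined as follows (textbook relations, with the evaporator load Q̇_L delivered at T_L, compressor work Ẇ, and dead-state temperature T_0; not reproduced from the paper itself):

        \mathrm{COP} = \frac{\dot{Q}_L}{\dot{W}},
        \qquad
        \eta_{ex} = \frac{\dot{Q}_L \left( \tfrac{T_0}{T_L} - 1 \right)}{\dot{W}},
        \qquad
        \dot{I}_{tot} = \dot{W} - \dot{Q}_L \left( \tfrac{T_0}{T_L} - 1 \right)

    That is, the exergetic efficiency is the exergy of the delivered cooling divided by the compressor work, and the total irreversibility rate is the balance between the two.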

  3. Development of Explosive Ripper with Two-Stage Combustion

    Science.gov (United States)

    1974-10-01

    The width of the pipe ductwork proved detrimental in marginally rippable material; the duct, instead of the penetrator tip, was ... marginally rippable rock. The two-stage combustion device is designed to operate using the same diesel fuel.

  4. Engineering analysis of the two-stage trifluoride precipitation process

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1984-06-01

    An engineering analysis of two-stage trifluoride precipitation processes is developed. Precipitation kinetics are modeled using consecutive reactions to represent fluoride complexation. Material balances across the precipitators are used to model the time dependent concentration profiles of the main chemical species. The results of the engineering analysis are correlated with previous experimental work on plutonium trifluoride and cerium trifluoride

  5. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2012-01-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general

  6. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    Directory of Open Access Journals (Sweden)

    Yanju Chen

    2015-01-01

    Full Text Available This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of fuzzy return, the optimal value expression of the second-stage programming problem is derived. As a result, the proposed two-stage model is equivalent to a single-stage model, and the analytical optimal solution of the two-stage model is obtained, which helps us to discuss the properties of the optimal solution. Finally, some numerical experiments are performed to demonstrate the new modeling idea and the effectiveness. The computational results provided by the proposed model show that the more risk-averse investor will invest more wealth in the risk-free security. They also show that the optimal invested amount in risky security increases as the risk-free return decreases and the optimal utility increases as the risk-free return increases, whereas the optimal utility increases as the transaction costs decrease. In most instances the utilities provided by the proposed two-stage model are larger than those provided by the single-stage model.
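    The model described above has the generic two-stage (recourse) structure; schematically (a generic form, not the paper's exact possibilistic utility model):

        \max_{x \in X} \; U_1(x) + \mathbb{E}_{\xi}\left[ Q(x, \xi) \right],
        \qquad
        Q(x, \xi) = \max_{y \in Y(x, \xi)} U_2(x, y, \xi)

    Here x is the first-stage investment vector, \xi the realized (fuzzy) return, y the second-stage rebalancing decision that incurs transaction costs, and Q the optimal second-stage value, whose closed-form expression is what allows the authors to reduce the model to a single-stage problem.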

  7. Bias due to two-stage residual-outcome regression analysis in genetic association studies.

    Science.gov (United States)

    Demissie, Serkalem; Cupples, L Adrienne

    2011-11-01

    Association studies of risk factors and complex diseases require careful assessment of potential confounding factors. Two-stage regression analysis, sometimes referred to as residual- or adjusted-outcome analysis, has been increasingly used in association studies of single nucleotide polymorphisms (SNPs) and quantitative traits. In this analysis, first, a residual-outcome is calculated from a regression of the outcome variable on covariates and then the relationship between the adjusted-outcome and the SNP is evaluated by a simple linear regression of the adjusted-outcome on the SNP. In this article, we examine the performance of this two-stage analysis as compared with multiple linear regression (MLR) analysis. Our findings show that when a SNP and a covariate are correlated, the two-stage approach results in a biased genotypic effect and loss of power. Bias is always toward the null and increases with the squared correlation between the SNP and the covariate (r²). For example, for r² = 0, 0.1, and 0.5, two-stage analysis results in, respectively, 0, 10, and 50% attenuation in the SNP effect. As expected, MLR was always unbiased. Since individual SNPs often show little or no correlation with covariates, a two-stage analysis is expected to perform as well as MLR in many genetic studies; however, it produces considerably different results from MLR and may lead to incorrect conclusions when independent variables are highly correlated. While a useful alternative to MLR when r² ≈ 0, the two-stage approach has serious limitations. Its use as a simple substitute for MLR should be avoided. © 2011 Wiley Periodicals, Inc.
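    The attenuation factor quoted above (1 - r²) is easy to reproduce with a small simulation; the sketch below is purely illustrative and uses arbitrary effect sizes:

        # Illustrative simulation: two-stage residual-outcome vs. multiple regression.
        import numpy as np

        rng = np.random.default_rng(0)
        n, beta_g, beta_z, r = 100_000, 0.5, 1.0, np.sqrt(0.5)   # r^2 = 0.5

        z = rng.standard_normal(n)
        g = r * z + np.sqrt(1 - r**2) * rng.standard_normal(n)    # corr(g, z) = r
        y = beta_g * g + beta_z * z + rng.standard_normal(n)

        # Two-stage: residual of y on z, then regress the residual on g.
        resid = y - np.polyfit(z, y, 1)[0] * z
        b_two_stage = np.polyfit(g, resid, 1)[0]

        # Multiple linear regression on [g, z] jointly.
        X = np.column_stack([g, z, np.ones(n)])
        b_mlr = np.linalg.lstsq(X, y, rcond=None)[0][0]

        print(f"two-stage: {b_two_stage:.3f}  (expect ~{beta_g * (1 - r**2):.3f})")
        print(f"MLR:       {b_mlr:.3f}  (expect ~{beta_g:.3f})")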

  8. High-power free-electron laser amplifier using a scalloped electron beam and a two-stage wiggler

    Directory of Open Access Journals (Sweden)

    D. C. Nguyen

    2006-05-01

    Full Text Available High-power free-electron laser (FEL amplifiers present many practical design and construction problems. One such problem is possible damage to any optical beam control elements beyond the wiggler. The ability to increase the optical beam’s divergence angle after the wiggler, thereby reducing the intensity on the first optical element, is important to minimize such damage. One proposal to accomplish this optical beam spreading is to pinch the electron beam thereby focusing the radiation as well. In this paper, we analyze an approach that relies on the natural betatron motion to pinch the electron beam near the end of the wiggler. We also consider a step-tapered, two-stage wiggler to enhance the efficiency. The combination of a pinched electron beam and step-taper wiggler leads to additional optical guiding of the optical beam. This novel configuration is studied in simulation using the MEDUSA code. For a representative set of beam and wiggler parameters, we discuss (i the effect of the scalloped beam on the interaction in the FEL and on the focusing and propagation of the radiation, and (ii the efficiency enhancement in the two-stage wiggler.

  9. Two-Stage Variable Sample-Rate Conversion System

    Science.gov (United States)

    Tkacenko, Andre

    2009-01-01

    A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
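    A rough sketch of the same two-stage idea, with a fixed integer polyphase stage followed by an arbitrary fractional stage; this is a generic implementation (the decimation factor, sample rates and the use of linear interpolation are assumptions, not the proposed flight design):

        # Generic two-stage sample-rate conversion sketch.
        import numpy as np
        from scipy.signal import resample_poly

        def two_stage_src(x, fs_in, fs_out, coarse_factor=4):
            """Stage 1: efficient polyphase decimation by a fixed integer factor.
               Stage 2: arbitrary (possibly irrational) ratio via linear interpolation."""
            y1 = resample_poly(x, up=1, down=coarse_factor)       # anti-aliased decimation
            fs_mid = fs_in / coarse_factor
            t_out = np.arange(0, len(y1) / fs_mid, 1.0 / fs_out)  # output sample times
            t_mid = np.arange(len(y1)) / fs_mid
            return np.interp(t_out, t_mid, y1)                    # fractional resampling

        fs_in, fs_out = 80e6, 3.84e6 * 1.7                        # output need not divide input
        x = np.cos(2 * np.pi * 1e6 * np.arange(4000) / fs_in)
        print(two_stage_src(x, fs_in, fs_out).shape)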

  10. Energy demand in Portuguese manufacturing: a two-stage model

    International Nuclear Information System (INIS)

    Borges, A.M.; Pereira, A.M.

    1992-01-01

    We use a two-stage model of factor demand to estimate the parameters determining energy demand in Portuguese manufacturing. In the first stage, a capital-labor-energy-materials framework is used to analyze the substitutability between energy as a whole and other factors of production. In the second stage, total energy demand is decomposed into oil, coal and electricity demands. The two stages are fully integrated since the energy composite used in the first stage and its price are obtained from the second stage energy sub-model. The estimates obtained indicate that energy demand in manufacturing responds significantly to price changes. In addition, estimation results suggest that there are important substitution possibilities among energy forms and between energy and other factors of production. The role of price changes in energy-demand forecasting, as well as in energy policy in general, is clearly established. (author)

  11. Two-step two-stage fission gas release model

    International Nuclear Information System (INIS)

    Kim, Yong-soo; Lee, Chan-bock

    2006-01-01

    Based on a recent theoretical model, a two-step two-stage model is developed which incorporates two-stage diffusion processes, grain-lattice and grain-boundary diffusion, coupled with a two-step burn-up factor for the low and high burn-up regimes. The FRAPCON-3 code and its in-pile data sets have been used for the benchmarking and validation of this model. Results reveal that its predictions are in better agreement with the experimental measurements than those of any model contained in the FRAPCON-3 code, such as ANS 5.4, modified ANS 5.4, and the Forsberg-Massih model, over the whole burn-up range up to 70,000 MWd/MTU. (author)

  12. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    OpenAIRE

    Chen, Yanju; Wang, Ye

    2015-01-01

    This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of fuzzy return, the optimal value expression of the s...

  13. Two-stage precipitation of neptunium (IV) oxalate

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1983-07-01

    Neptunium (IV) oxalate was precipitated using a two-stage precipitation system. A series of precipitation experiments was used to identify the significant process variables affecting precipitate characteristics. Process variables tested were input concentrations, solubility conditions in the first stage precipitator, precipitation temperatures, and residence time in the first stage precipitator. A procedure has been demonstrated that produces neptunium (IV) oxalate particles that filter well and readily calcine to the oxide

  14. Gas pollutants removal in a single- and two-stage ejector-venturi scrubber.

    Science.gov (United States)

    Gamisans, Xavier; Sarrà, Montserrrat; Lafuente, F Javier

    2002-03-29

    The absorption of SO2 and NH3 from the flue gas into NaOH and H2SO4 solutions, respectively, has been studied using an industrial scale ejector-venturi scrubber. A statistical methodology is presented to characterise the performance of the scrubber by varying several factors such as gas pollutant concentration, air flowrate and absorbing solution flowrate. Some types of venturi tube constructions were assessed, including the use of a two-stage venturi tube. The results showed a strong influence of the liquid scrubbing flowrate on pollutant removal efficiency. The initial pollutant concentration and the gas flowrate had a slight influence. The use of a two-stage venturi tube considerably improved the absorption efficiency, although it increased energy consumption. The results of this study will be applicable to the optimal design of venturi-based absorbers for gaseous pollution control or chemical reactors.

  15. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden of extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved segmentation accuracy comparable to the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance, with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit
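    A schematic of the two-stage selection logic follows; the correlation-based similarity functions stand in for the preliminary and full-fledged registration metrics and are illustrative only, not the authors' implementation or derived subset size:

        # Schematic two-stage atlas selection with stand-in similarity metrics.
        import numpy as np

        def cheap_relevance(target, atlas):
            """Stage-1 metric: correlation on heavily downsampled images (cheap)."""
            t, a = target[::4, ::4].ravel(), atlas[::4, ::4].ravel()
            return np.corrcoef(t, a)[0, 1]

        def full_relevance(target, atlas):
            """Stage-2 metric: full-resolution correlation, standing in for the
            metric computed after full-fledged deformable registration."""
            return np.corrcoef(target.ravel(), atlas.ravel())[0, 1]

        def select_fusion_set(target, atlases, fusion_size, augmented_size):
            prelim = sorted(atlases, key=lambda a: cheap_relevance(target, a), reverse=True)
            augmented = prelim[:augmented_size]        # sized so relevant atlases survive
            refined = sorted(augmented, key=lambda a: full_relevance(target, a), reverse=True)
            return refined[:fusion_size]

        rng = np.random.default_rng(1)
        target = rng.random((64, 64))
        atlases = [target + 0.1 * k * rng.random((64, 64)) for k in range(1, 31)]
        print(len(select_fusion_set(target, atlases, fusion_size=5, augmented_size=12)))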

  16. The construction and use of bacterial DNA microarrays based on an optimized two-stage PCR strategy

    Directory of Open Access Journals (Sweden)

    Pesta David

    2003-06-01

    Full Text Available Abstract Background DNA microarrays are a powerful tool with important applications such as global gene expression profiling. Construction of bacterial DNA microarrays from genomic sequence data using a two-stage PCR amplification approach for the production of arrayed DNA is attractive because it allows, in principle, the continued re-amplification of DNA fragments and facilitates further utilization of the DNA fragments for additional uses (e.g. over-expression of protein). We describe the successful construction and use of DNA microarrays by the two-stage amplification approach and discuss the technical challenges that were met and resolved during the project. Results Chimeric primers that contained both gene-specific and shared, universal sequence allowed the two-stage amplification of the 3,168 genes identified on the genome of Synechocystis sp. PCC6803, an important prokaryotic model organism for the study of oxygenic photosynthesis. The gene-specific component of the primer was of variable length to maintain uniform annealing temperatures during the 1st round of PCR synthesis, and situated to preserve full-length ORFs. Genes were truncated at 2 kb for efficient amplification, so that about 92% of the PCR fragments were full-length genes. The two-stage amplification had the additional advantage of normalizing the yield of PCR products, and this improved the uniformity of DNA features robotically deposited onto the microarray surface. We also describe the techniques utilized to optimize hybridization conditions and the signal-to-noise ratio of the transcription profile. The inter-lab transportability was demonstrated by the virtually error-free amplification of the entire genome complement of 3,168 genes using the universal primers in partner labs. The printed slides have been successfully used to identify differentially expressed genes in response to a number of environmental conditions, including salt stress. Conclusions The technique detailed

  17. CFD simulations of compressed air two stage rotary Wankel expander – Parametric analysis

    International Nuclear Information System (INIS)

    Sadiq, Ghada A.; Tozer, Gavin; Al-Dadah, Raya; Mahmoud, Saad

    2017-01-01

    Highlights: • CFD ANSYS-Fluent 3D simulation of Wankel expander is developed. • Single and two-stage expander’s performance is compared. • Inlet and outlet ports shape and configurations are investigated. • Isentropic efficiency of two stage Wankel expander of 91% is achieved. - Abstract: A small scale volumetric Wankel expander is a powerful device for small-scale power generation in compressed air energy storage (CAES) systems and Organic Rankine cycles powered by different heat sources such as, biomass, low temperature geothermal, solar and waste heat leading to significant reduction in CO_2 emissions. Wankel expanders outperform other types of expander due to their ability to produce two power pulses per revolution per chamber additional to higher compactness, lower noise and vibration and lower cost. In this paper, a computational fluid dynamics (CFD) model was developed using ANSYS 16.2 to simulate the flow dynamics for a single and two stage Wankel expanders and to investigate the effect of port configurations, including size and spacing, on the expander’s power output and isentropic efficiency. Also, single-stage and two-stage expanders were analysed with different operating conditions. Single-stage 3D CFD results were compared to published work showing close agreement. The CFD modelling was used to investigate the performance of the rotary device using air as an ideal gas with various port diameters ranging from 15 mm to 50 mm; port spacing varying from 28 mm to 66 mm; different Wankel expander sizes (r = 48, e = 6.6, b = 32) mm and (r = 58, e = 8, b = 40) mm both as single-stage and as two-stage expanders with different configurations and various operating conditions. Results showed that the best Wankel expander design for a single-stage was (r = 48, e = 6.6, b = 32) mm, with the port diameters 20 mm and port spacing equal to 50 mm. Moreover, combining two Wankel expanders horizontally, with a larger one at front, produced 8.52 kW compared

  18. Two stage treatment of dairy effluent using immobilized Chlorella pyrenoidosa

    Science.gov (United States)

    2013-01-01

    Background Dairy effluents contain a high organic load, and the unscrupulous discharge of these effluents into aquatic bodies is a matter of serious concern as it deteriorates their water quality. Whilst physico-chemical treatment is the common mode of treatment, immobilized microalgae can potentially be employed to treat the high organic content, which offers numerous benefits along with waste water treatment. Methods A novel low-cost two-stage treatment was employed for the complete treatment of dairy effluent. The first stage consists of treating the dairy effluent in a photobioreactor (1 L) using immobilized Chlorella pyrenoidosa, while the second stage involves a two-column sand bed filtration technique. Results Whilst NH4+-N was completely removed, a 98% removal of PO43--P was achieved within 96 h of the two-stage purification process. The filtrate was tested for toxicity and no mortality was observed in the zebra fish used as a model at the end of the 96 h bioassay. Moreover, a significant decrease in biological oxygen demand and chemical oxygen demand was achieved by this novel method. Also, the biomass separated was tested as a biofertilizer on rice seeds, and a 30% increase in the length of root and shoot was observed after the addition of biomass to the rice plants. Conclusions We conclude that the two-stage treatment of dairy effluent is highly effective in the removal of BOD and COD as well as nutrients like nitrates and phosphates. The treatment also helps in discharging treated waste water safely into receiving water bodies since it is non-toxic to aquatic life. Further, the algal biomass separated after the first stage of treatment was highly capable of increasing the growth of rice plants because of the nitrogen fixation ability of the green alga and offers great potential as a biofertilizer. PMID:24355316

  19. Experimental studies of two-stage centrifugal dust concentrator

    Science.gov (United States)

    Vechkanova, M. V.; Fadin, Yu M.; Ovsyannikov, Yu G.

    2018-03-01

    The article presents experimental results for a two-stage centrifugal dust concentrator, describes its design, and outlines the development of an engineering calculation method and the laboratory investigations. For the experiments, the authors used quartz dust, ceramic dust and slag. Experimental dispersion analysis of the dust particles was obtained by the sedimentation method. To build a mathematical model of the dust collection process, a central composite rotatable design of a four-factor experiment was used. The sequence of experiments was conducted in accordance with a table of random numbers. Conclusions were drawn.

  20. Evaluating damping elements for two-stage suspension vehicles

    Directory of Open Access Journals (Sweden)

    Ronald M. Martinod R.

    2012-01-01

    Full Text Available The technical state of the damping elements for a vehicle having two-stage suspension was evaluated by using numerical models based on the multi-body system theory; a set of virtual tests used the eigenproblem mathematical method. A test was developed based on experimental modal analysis (EMA applied to a physical system as the basis for validating the numerical models. The study focused on evaluating vehicle dynamics to determine the influence of the dampers’ technical state in each suspension state.

  1. High-Speed 3D Printing of High-Performance Thermosetting Polymers via Two-Stage Curing.

    Science.gov (United States)

    Kuang, Xiao; Zhao, Zeang; Chen, Kaijuan; Fang, Daining; Kang, Guozheng; Qi, Hang Jerry

    2018-04-01

    Design and direct fabrication of high-performance thermosets and composites via 3D printing are highly desirable in engineering applications. Most 3D printed thermosetting polymers to date suffer from poor mechanical properties and low printing speed. Here, a novel ink for high-speed 3D printing of high-performance epoxy thermosets via a two-stage curing approach is presented. The ink containing photocurable resin and thermally curable epoxy resin is used for the digital light processing (DLP) 3D printing. After printing, the part is thermally cured at elevated temperature to yield an interpenetrating polymer network epoxy composite, whose mechanical properties are comparable to engineering epoxy. The printing speed is accelerated by the continuous liquid interface production assisted DLP 3D printing method, achieving a printing speed as high as 216 mm h -1 . It is also demonstrated that 3D printing structural electronics can be achieved by combining the 3D printed epoxy composites with infilled silver ink in the hollow channels. The new 3D printing method via two-stage curing combines the attributes of outstanding printing speed, high resolution, low volume shrinkage, and excellent mechanical properties, and provides a new avenue to fabricate 3D thermosetting composites with excellent mechanical properties and high efficiency toward high-performance and functional applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Alternative approaches to evaluation of cow efficiency

    African Journals Online (AJOL)

    anonymous

    2017-01-26

    Indexes that are consistent with the econometric definition of efficiency and seek to ... defined as ratios, such as the biological efficiency metric calf weight/cow weight. Dinkel & Brown ... Biometrics 15, 469-485. Scholtz, M.M. & ...

  3. Two-stage, high power X-band amplifier experiment

    International Nuclear Information System (INIS)

    Kuang, E.; Davis, T.J.; Ivers, J.D.; Kerslick, G.S.; Nation, J.A.; Schaechter, L.

    1993-01-01

    At output powers in excess of 100 MW the authors have noted the development of sidebands in many TWT structures. To address this problem an experiment using a narrow-bandwidth, two-stage TWT is in progress. The TWT amplifier consists of a dielectric (ε = 5) slow-wave structure, a 30 dB sever section and an 8.8-9.0 GHz passband periodic, metallic structure. The electron beam used in this experiment is a 950 kV, 1 kA, 50 ns pencil beam propagating along an applied axial field of 9 kG. The dielectric first stage has a maximum gain of 30 dB measured at 8.87 GHz, with output powers of up to 50 MW in the TM01 mode. In these experiments the dielectric amplifier output power is about 3-5 MW and the output power of the complete two-stage device is ∼160 MW at the input frequency. The sidebands detected in earlier experiments have been eliminated. The authors also report measurements of the energy spread of the electron beam resulting from the amplification process. These experimental results are compared with MAGIC code simulations and analytic work they have carried out on such devices.

  4. Two-stage liquefaction of a Spanish subbituminous coal

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, M.T.; Fernandez, I.; Benito, A.M.; Cebolla, V.; Miranda, J.L.; Oelert, H.H. (Instituto de Carboquimica, Zaragoza (Spain))

    1993-05-01

    A Spanish subbituminous coal has been processed in two-stage liquefaction in a non-integrated process. The first-stage coal liquefaction has been carried out in a continuous pilot plant in Germany at Clausthal Technical University at 400°C, 20 MPa hydrogen pressure and anthracene oil as solvent. The second-stage coal liquefaction has been performed in continuous operation in a hydroprocessing unit at the Instituto de Carboquimica at 450°C and 10 MPa hydrogen pressure, with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al2O3) and HT-500E (Ni-Mo/Al2O3). The total conversion for the first-stage coal liquefaction was 75.41 wt% (coal d.a.f.), being 3.79 wt% gases, 2.58 wt% primary condensate and 69.04 wt% heavy liquids. The heteroatoms removal for the second-stage liquefaction was 97-99 wt% of S, 85-87 wt% of N and 93-100 wt% of O. The hydroprocessed liquids have about 70% of compounds with boiling point below 350°C, and meet the sulphur and nitrogen specifications for refinery feedstocks. Liquids from two-stage coal liquefaction have been distilled, and the naphtha, kerosene and diesel fractions obtained have been characterized. 39 refs., 3 figs., 8 tabs.

  5. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  6. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two stage heuristic algorithm for solving the Runway Operations Planning (ROP) problem. In the first stage, sequences of departure class slots and runway crossings slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.

  7. Repetitive, small-bore two-stage light gas gun

    International Nuclear Information System (INIS)

    Combs, S.K.; Foust, C.R.; Fehling, D.T.; Gouge, M.J.; Milora, S.L.

    1991-01-01

    A repetitive two-stage light gas gun for high-speed pellet injection has been developed at Oak Ridge National Laboratory. In general, applications of the two-stage light gas gun have been limited to only single shots, with a finite time (at least minutes) needed for recovery and preparation for the next shot. The new device overcomes problems associated with repetitive operation, including rapidly evacuating the propellant gases, reloading the gun breech with a new projectile, returning the piston to its initial position, and refilling the first- and second-stage gas volumes to the appropriate pressure levels. In addition, some components are subjected to and must survive severe operating conditions, which include rapid cycling to high pressures and temperatures (up to thousands of bars and thousands of kelvins) and significant mechanical shocks. Small plastic projectiles (4-mm nominal size) and helium gas have been used in the prototype device, which was equipped with a 1-m-long pump tube and a 1-m-long gun barrel, to demonstrate repetitive operation (up to 1 Hz) at relatively high pellet velocities (up to 3000 m/s). The equipment is described, and experimental results are presented. 124 refs., 6 figs., 5 tabs

  8. Two-stage energy storage equalization system for lithium-ion battery pack

    Science.gov (United States)

    Chen, W.; Yang, Z. X.; Dong, G. Q.; Li, Y. B.; He, Q. Y.

    2017-11-01

    How to raise the efficiency of energy storage and maximize storage capacity is a core problem in current energy storage management. To address it, a two-stage energy storage equalization system, which contains a two-stage equalization topology and a control strategy based on a symmetric multi-winding transformer and a DC-DC (direct current-direct current) converter, is proposed using bidirectional active equalization theory, in order to achieve consistent voltages across lithium-ion battery packs and across the cells inside each pack using the range method. Modeling analysis demonstrates that the voltage dispersion of lithium-ion battery packs and of cells inside packs can be kept within 2 percent during charging and discharging. The equalization time was 0.5 ms, 33.3 percent shorter than with a DC-DC converter alone. Therefore, the proposed two-stage lithium-ion battery equalization system can achieve maximum storage capacity across battery packs and the cells inside them, while the efficiency of energy storage is significantly improved.
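    A simple illustration of the range-based consistency check described above; the 2 percent threshold matches the abstract, but the transfer rule and voltages are illustrative assumptions:

        # Illustrative range-based equalization trigger for cell voltages in a pack.
        def needs_equalization(voltages, max_dispersion=0.02):
            """Return True if the relative range exceeds the target (2 percent here)."""
            v_range = max(voltages) - min(voltages)
            return v_range / (sum(voltages) / len(voltages)) > max_dispersion

        def equalization_pair(voltages):
            """Transfer energy from the highest cell to the lowest (bidirectional active)."""
            return voltages.index(max(voltages)), voltages.index(min(voltages))

        cells = [3.30, 3.35, 3.42, 3.28]
        if needs_equalization(cells):
            src, dst = equalization_pair(cells)
            print(f"transfer from cell {src} to cell {dst}")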

  9. Combining evidence from multiple electronic health care databases: performances of one-stage and two-stage meta-analysis in matched case-control studies.

    Science.gov (United States)

    La Gamba, Fabiola; Corrao, Giovanni; Romio, Silvana; Sturkenboom, Miriam; Trifirò, Gianluca; Schink, Tania; de Ridder, Maria

    2017-10-01

    Clustering of patients in databases is usually ignored in one-stage meta-analysis of multi-database studies using matched case-control data. The aim of this study was to compare bias and efficiency of such a one-stage meta-analysis with a two-stage meta-analysis. First, we compared the approaches by generating matched case-control data under 5 simulated scenarios, built by varying: (1) the exposure-outcome association; (2) its variability among databases; (3) the confounding strength of one covariate on this association; (4) its variability; and (5) the (heterogeneous) confounding strength of two covariates. Second, we made the same comparison using empirical data from the ARITMO project, a multiple database study investigating the risk of ventricular arrhythmia following the use of medications with arrhythmogenic potential. In our study, we specifically investigated the effect of current use of promethazine. Bias increased for one-stage meta-analysis with increasing (1) between-database variance of exposure effect and (2) heterogeneous confounding generated by two covariates. The efficiency of one-stage meta-analysis was slightly lower than that of two-stage meta-analysis for the majority of investigated scenarios. Based on ARITMO data, there were no evident differences between one-stage (OR = 1.50, CI = [1.08; 2.08]) and two-stage (OR = 1.55, CI = [1.12; 2.16]) approaches. When the effect of interest is heterogeneous, a one-stage meta-analysis ignoring clustering gives biased estimates. Two-stage meta-analysis generates estimates at least as accurate and precise as one-stage meta-analysis. However, in a study using small databases and rare exposures and/or outcomes, a correct one-stage meta-analysis becomes essential. Copyright © 2017 John Wiley & Sons, Ltd.
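    For reference, the second step of the two-stage approach (pooling database-specific estimates) can be sketched as a fixed-effect inverse-variance combination; this generic sketch, with made-up stage-one outputs, is not the ARITMO analysis code:

        # Generic second-stage pooling of per-database log odds ratios (fixed effect).
        import numpy as np

        def pool_fixed_effect(log_or, se):
            """Inverse-variance weighted combination of database-specific estimates."""
            w = 1.0 / np.asarray(se) ** 2
            est = np.sum(w * np.asarray(log_or)) / np.sum(w)
            return est, np.sqrt(1.0 / np.sum(w))

        # Hypothetical stage-one outputs from three databases (log OR and standard error).
        log_or = [np.log(1.4), np.log(1.7), np.log(1.5)]
        se = [0.20, 0.25, 0.15]
        est, se_pooled = pool_fixed_effect(log_or, se)
        print(f"pooled OR = {np.exp(est):.2f}, 95% CI = "
              f"[{np.exp(est - 1.96 * se_pooled):.2f}, {np.exp(est + 1.96 * se_pooled):.2f}]")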

  10. Technical and scale efficiency in public and private Irish nursing homes - a bootstrap DEA approach.

    Science.gov (United States)

    Ni Luasa, Shiovan; Dineen, Declan; Zieba, Marta

    2016-10-27

    This article provides methodological and empirical insights into the estimation of technical efficiency in the nursing home sector. Focusing on long-stay care and using primary data, we examine technical and scale efficiency in 39 public and 73 private Irish nursing homes by applying an input-oriented data envelopment analysis (DEA). We employ robust bootstrap methods to validate our nonparametric DEA scores and to integrate the effects of potential determinants in estimating the efficiencies. Both the homogenous and two-stage double bootstrap procedures are used to obtain confidence intervals for the bias-corrected DEA scores. Importantly, the application of the double bootstrap approach affords true DEA technical efficiency scores after adjusting for the effects of ownership, size, case-mix, and other determinants such as location, and quality. Based on our DEA results for variable returns to scale technology, the average technical efficiency score is 62 %, and the mean scale efficiency is 88 %, with nearly all units operating on the increasing returns to scale part of the production frontier. Moreover, based on the double bootstrap results, Irish nursing homes are less technically efficient, and more scale efficient than the conventional DEA estimates suggest. Regarding the efficiency determinants, in terms of ownership, we find that private facilities are less efficient than the public units. Furthermore, the size of the nursing home has a positive effect, and this reinforces our finding that Irish homes produce at increasing returns to scale. Also, notably, we find that a tendency towards quality improvements can lead to poorer technical efficiency performance.
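    A minimal input-oriented DEA sketch using scipy.optimize.linprog is shown below; it computes conventional efficiency scores under variable returns to scale, without the bootstrap correction, and the inputs and outputs are toy values rather than the Irish nursing-home dataset:

        # Input-oriented DEA efficiency for each unit, VRS technology (no bootstrap).
        import numpy as np
        from scipy.optimize import linprog

        def dea_efficiency(X, Y, j0, vrs=True):
            """X: inputs (m x n), Y: outputs (s x n), j0: index of the evaluated unit."""
            m, n = X.shape
            s = Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]                       # minimise theta
            A_in = np.hstack([-X[:, [j0]], X])                # X @ lam <= theta * x_j0
            A_out = np.hstack([np.zeros((s, 1)), -Y])         # Y @ lam >= y_j0
            A_ub = np.vstack([A_in, A_out])
            b_ub = np.r_[np.zeros(m), -Y[:, j0]]
            A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1) if vrs else None   # sum(lam) = 1
            b_eq = [1.0] if vrs else None
            bounds = [(0, None)] * (n + 1)
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
            return res.x[0]                                   # technical efficiency score

        X = np.array([[20., 30., 40., 25.],                   # e.g. staff hours
                      [5.,  8.,  9.,  6.]])                   # e.g. beds
        Y = np.array([[100., 120., 150., 110.]])              # e.g. resident-days
        print([round(dea_efficiency(X, Y, j), 3) for j in range(X.shape[1])])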

  11. A Two Stage Solution Procedure for Production Planning System with Advance Demand Information

    Science.gov (United States)

    Ueno, Nobuyuki; Kadomoto, Kiyotaka; Hasuike, Takashi; Okuhara, Koji

    We model the ‘Naiji’ system, a unique cooperation technique between a manufacturer and suppliers in Japan. We propose a two-stage solution procedure for a production planning problem with advance demand information, which is called ‘Naiji’. Under demand uncertainty, this model is formulated as a nonlinear stochastic programming problem which minimizes the sum of production cost and inventory holding cost subject to a probabilistic constraint and some linear production constraints. Exploiting the convexity and the special structure of the correlation matrix in the problem, where inventories in different periods are not independent, we propose a two-stage solution procedure comprising the Mass Customization Production Planning & Management System (MCPS) and a meta-heuristic Variable Mesh Neighborhood Search (VMNS). It is shown that the proposed solution procedure obtains a near-optimal solution efficiently and is practical for making a good master production schedule at the suppliers.

  12. A comprehensive study of task coalescing for selecting parallelism granularity in a two-stage bidiagonal reduction

    KAUST Repository

    Haidar, Azzam

    2012-05-01

    We present new high performance numerical kernels combined with advanced optimization techniques that significantly increase the performance of parallel bidiagonal reduction. Our approach is based on developing efficient fine-grained computational tasks as well as reducing overheads associated with their high-level scheduling during the so-called bulge chasing procedure that is an essential phase of a scalable bidiagonalization procedure. In essence, we coalesce multiple tasks in a way that reduces the time needed to switch execution context between the scheduler and useful computational tasks. At the same time, we maintain the crucial information about the tasks and their data dependencies between the coalescing groups. This is the necessary condition to preserve numerical correctness of the computation. We show our annihilation strategy based on multiple applications of single orthogonal reflectors. Despite non-trivial characteristics in computational complexity and memory access patterns, our optimization approach smoothly applies to the annihilation scenario. The coalescing positively influences another equally important aspect of the bulge chasing stage: the memory reuse. For the tasks within the coalescing groups, the data is retained in high levels of the cache hierarchy and, as a consequence, operations that are normally memory-bound increase their ratio of computation to off-chip communication and become compute-bound which renders them amenable to efficient execution on multicore architectures. The performance for the new two-stage bidiagonal reduction is staggering. Our implementation results in up to 50-fold and 12-fold improvement (∼130 Gflop/s) compared to the equivalent routines from LAPACK V3.2 and Intel MKL V10.3, respectively, on an eight socket hexa-core AMD Opteron multicore shared-memory system with a matrix size of 24000 x 24000. Last but not least, we provide a comprehensive study on the impact of the coalescing group size in terms of cache

  13. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    Science.gov (United States)

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.

  14. Two-stage hydroprocessing of synthetic crude gas oil

    Energy Technology Data Exchange (ETDEWEB)

    Mahay, A.; Chmielowiec, J.; Fisher, I.P.; Monnier, J. (Petro-Canada Products, Missisauga, ON (Canada). Research and Development Centre)

    1992-02-01

    The hydrocracking of synthetic crude gas oils (SGO), which are commercially produced from Canadian oil sands, is strongly inhibited by nitrogen-containing species. To alleviate the pronounced effect of these nitrogenous compounds, SGO was hydrotreated at severe conditions prior to hydrocracking to reduce its N content from 1665 to about 390 ppm (by weight). Hydrocracking was then performed using a commercial nickel-tungsten catalyst supported on silica-alumina. Two-stage hydroprocessing of SGO was assessed in terms of product yields and quality. As expected, higher gas oil conversions were achieved, mostly from an increase in naphtha yield. The middle distillate product quality was also clearly improved, as the diesel fuel cetane number increased by 13%. Diesel engine tests indicated that particulate emissions in exhaust gases were lowered by 20%. Finally, pseudo-first-order kinetic equations were derived for the overall conversion of the major gas oil components. 17 refs., 2 figs., 8 tabs.

  15. Hybrid biogas upgrading in a two-stage thermophilic reactor

    DEFF Research Database (Denmark)

    Corbellini, Viola; Kougias, Panagiotis; Treu, Laura

    2018-01-01

    The aim of this study is to propose a hybrid biogas upgrading configuration composed of two-stage thermophilic reactors. Hydrogen is directly injected in the first stage reactor. The output gas from the first reactor (in-situ biogas upgrade) is subsequently transferred to a second upflow reactor...... (ex-situ upgrade), in which enriched hydrogenotrophic culture is responsible for the hydrogenation of carbon dioxide to methane. The overall objective of the work was to perform an initial methane enrichment in the in-situ reactor, avoiding deterioration of the process due to elevated pH levels......, and subsequently, to complete the biogas upgrading process in the ex-situ chamber. The methane content in the first stage reactor reached on average 87% and the corresponding value in the second stage was 91%, with a maximum of 95%. A remarkable accumulation of volatile fatty acids was observed in the first...

  16. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model based estimation for finite population totals assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend to two stage sampling in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model, in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model assisted local polynomial regression model.

  17. Device for two-stage cementing of casing

    Energy Technology Data Exchange (ETDEWEB)

    Kudimov, D A; Goncharevskiy, Ye N; Luneva, L G; Shchelochkov, S N; Shil' nikova, L N; Tereshchenko, V G; Vasiliev, V A; Volkova, V V; Zhdokov, K I

    1981-01-01

    A device is claimed for two-stage cementing of casing. It consists of a body with lateral plugging vents, upper and lower movable sleeves, a check valve with axial channels situated in the lower sleeve, and a displacement limiter for the lower sleeve. To improve the cementing of the casing by preventing overflow of cementing fluids from the annular space into the first-stage casing, the limiter is equipped with a spring rod capable of covering the axial channels of the check valve while the device is in operating mode. In addition, the upper part of the rod is equipped with a reinforced area under the axial channels of the check valve.

  18. The hybrid two stage anticlockwise cycle for ecological energy conversion

    Directory of Open Access Journals (Sweden)

    Cyklis Piotr

    2016-01-01

    Full Text Available The anticlockwise cycle is commonly used for refrigeration, air conditioning and heat pump applications. The application of a refrigerant in the compression cycle is limited to temperatures between the triple point and the critical point. New refrigerants such as 1234yf or 1234ze have many disadvantages; therefore, the application of natural refrigerants is favourable. Carbon dioxide and water can be applied only in a hybrid two-stage cycle. The possibilities of this solution are shown for refrigerating applications, and some experimental results of the adsorption-compression two-stage cycle powered by solar collectors are presented. The adsorption system is applied as the high temperature cycle. The low temperature cycle is the compression stage with carbon dioxide as the working fluid. This allows a relatively high COP to be achieved for the low temperature cycle and for the whole system.

  19. An adaptive two-stage dose-response design method for establishing proof of concept.

    Science.gov (United States)

    Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R

    2013-01-01

    We propose an adaptive two-stage dose-response design where a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power for our design method compared with conventional, fixed designs.
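    As a rough illustration of how stage-wise evidence can be pooled, the sketch below implements a generic weighted inverse-normal two-stage combination test together with its associated conditional error function; it is not the paper's MCP-Mod-based procedure, and the significance level and equal stage weights are assumptions.

    ```python
    from scipy.stats import norm

    # Minimal sketch of a generic two-stage combination test.  The weighted
    # inverse-normal rule is algebraically equivalent to testing the stage-2
    # p-value against a conditional error function A(p1).
    alpha = 0.025
    w1 = w2 = 0.5 ** 0.5            # prespecified stage weights, w1^2 + w2^2 = 1

    def global_poc(p1: float, p2: float) -> bool:
        """Declare 'global' proof of concept if the combined statistic exceeds
        the one-sided critical value."""
        z = w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2)
        return z >= norm.ppf(1 - alpha)

    def conditional_error(p1: float) -> float:
        """Level that the stage-2 p-value must beat, given the stage-1 result."""
        return 1 - norm.cdf((norm.ppf(1 - alpha) - w1 * norm.ppf(1 - p1)) / w2)

    print(global_poc(0.04, 0.03))               # True: moderate evidence in both stages
    print(round(conditional_error(0.04), 4))    # stage-2 threshold implied by p1 = 0.04
    ```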

  20. Approaches of Improving University Assets Management Efficiency

    Science.gov (United States)

    Wang, Jingliang

    2015-01-01

    University assets management, as an important part of modern university management, is generally confronted with the issue of low efficiency. Currently, addressing the problems exposed in university assets management and taking appropriate corrective measures is an urgent issue facing Chinese university assets management sectors. In this…

  1. Application of two-stage biofilter system for the removal of odorous compounds.

    Science.gov (United States)

    Jeong, Gwi-Taek; Park, Don-Hee; Lee, Gwang-Yeon; Cha, Jin-Myeong

    2006-01-01

    Biofiltration is a biological process considered to be one of the more successful examples of biotechnological applications to environmental engineering, and it is most commonly used in the removal of odoriferous compounds. In this study, we have attempted to assess the efficiency with which both single and complex odoriferous compounds could be removed, using one- or two-stage biofiltration systems. The tested single odor gases, limonene, alpha-pinene, and iso-butyl alcohol, were separately evaluated in the biofilters. Both limonene and alpha-pinene were removed with efficiencies of 90% or more, corresponding to elimination capacities (EC) of 364 g/m3/h and 321 g/m3/h, respectively, at an input concentration of 50 ppm and a retention time of 30 s. The iso-butyl alcohol was maintained at an effective removal yield of more than 90% (EC 375 g/m3/h) at an input concentration of 100 ppm. The complex gas removal scheme was applied with inlet concentrations of 200 ppm ethanol, 70 ppm acetaldehyde, and 70 ppm toluene, with a residence time of 45 s, in a one- or two-stage biofiltration system. The removal yield of toluene was lower than that of the other gases in the one-stage biofilter. By contrast, the complex gases were sufficiently eliminated by the two-stage biofiltration system.

  2. Study on the Control Algorithm of Two-Stage DC-DC Converter for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Changhao Piao

    2014-01-01

    Full Text Available Fast response, high efficiency, and good reliability are very important characteristics of electric vehicle (EV) dc/dc converters. The two-stage dc-dc converter is a topology that can offer these characteristics to EVs. At present, nonlinear control is an active area of research in the field of dc-dc converter control algorithms. However, very few papers study two-stage converters for EVs. In this paper, a fixed switching frequency sliding mode (FSFSM) controller and a double-integral sliding mode (DISM) controller for the two-stage dc-dc converter are proposed, with a conventional linear (lag) controller chosen as the comparison. The performance of the proposed FSFSM controller is compared with that obtained by the lag controller. The satisfactory simulation and experimental results show that the FSFSM controller is capable of offering good large-signal operation with fast dynamic response of the converter. Finally, further simulation results show that the DISM controller is a promising method for eliminating the steady-state error of the converter.
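    As a rough illustration of the sliding-mode idea only (not the paper's fixed-switching-frequency or double-integral laws, and a single buck stage rather than a two-stage converter), a hysteresis sliding-mode controller can be simulated as follows; all component values and gains are assumptions chosen for the demonstration.

    ```python
    import numpy as np

    # Hysteresis sliding-mode control of a single buck stage (assumed parameters).
    L, C, R = 220e-6, 470e-6, 5.0      # inductance [H], capacitance [F], load [ohm]
    Vin, Vref = 48.0, 24.0             # input and reference output voltage [V]
    lam = 2000.0                       # sliding-surface slope
    h = 0.05                           # hysteresis band
    dt, T = 1e-7, 0.02                 # integration step and simulated time [s]

    iL, vo, u = 0.0, 0.0, 0
    for _ in range(int(T / dt)):
        # Sliding surface s = lam*(Vref - vo) + d/dt(Vref - vo), with
        # dvo/dt = (iL - vo/R)/C for the buck output filter.
        dvo = (iL - vo / R) / C
        s = lam * (Vref - vo) - dvo
        if s > h:
            u = 1                      # switch on: drive the state towards s = 0
        elif s < -h:
            u = 0                      # switch off
        iL += (u * Vin - vo) / L * dt  # inductor current dynamics
        vo += dvo * dt                 # output voltage dynamics
    print(f"steady-state output ~ {vo:.2f} V (target {Vref} V)")
    ```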

  3. Two-stage commercial evaluation of engineering systems production projects for high-rise buildings

    Science.gov (United States)

    Bril, Aleksander; Kalinina, Olga; Levina, Anastasia

    2018-03-01

    The paper is devoted to the topical and much-debated problem of how to select effective innovative enterprises for venture financing. A two-stage system of commercial innovation evaluation based on the UNIDO methodology is proposed. Engineering systems account for 25 to 40% of the cost of high-rise residential buildings, and this proportion increases with the use of new construction technologies. Analysis of the construction market in Russia showed that the production of internal engineering system elements based on innovative technologies has a growth trend. The production of simple elements is organized in small enterprises on the basis of new technologies, and the most attractive development route is venture financing of small innovative businesses. To improve the efficiency of these operations, the paper proposes a methodology for a two-stage evaluation of small business development projects. A two-stage system of commercial evaluation of innovative projects allows creating an information base for informed and coordinated decision-making on venture financing of enterprises that produce engineering system elements for the construction business.

  4. Two-stage commercial evaluation of engineering systems production projects for high-rise buildings

    Directory of Open Access Journals (Sweden)

    Bril Aleksander

    2018-01-01

    Full Text Available The paper is devoted to the topical and much-debated problem of how to select effective innovative enterprises for venture financing. A two-stage system of commercial innovation evaluation based on the UNIDO methodology is proposed. Engineering systems account for 25 to 40% of the cost of high-rise residential buildings, and this proportion increases with the use of new construction technologies. Analysis of the construction market in Russia showed that the production of internal engineering system elements based on innovative technologies has a growth trend. The production of simple elements is organized in small enterprises on the basis of new technologies, and the most attractive development route is venture financing of small innovative businesses. To improve the efficiency of these operations, the paper proposes a methodology for a two-stage evaluation of small business development projects. A two-stage system of commercial evaluation of innovative projects allows creating an information base for informed and coordinated decision-making on venture financing of enterprises that produce engineering system elements for the construction business.

  5. Effect of Silica Fume on two-stage Concrete Strength

    Science.gov (United States)

    Abdelgader, H. S.; El-Baden, A. S.

    2015-11-01

    Two-stage concrete (TSC) is an innovative concrete that does not require vibration for placing and compaction. TSC is a simple concept; it is made using the same basic constituents as traditional concrete: cement, coarse aggregate, sand and water, as well as mineral and chemical admixtures. As its name suggests, it is produced through a two-stage process. First, washed coarse aggregate is placed into the formwork in-situ. A specifically designed self-compacting grout is then introduced into the form from the lowest point under gravity pressure to fill the voids, cementing the aggregate into a monolith. The hardened concrete is dense, homogeneous and in general has improved engineering properties and durability. This paper presents the results of a study of the effect of silica fume (SF) and superplasticizer admixtures (SP) on the compressive and tensile strength of TSC using various combinations of water-to-cement ratio (w/c) and cement-to-sand ratio (c/s). Thirty-six concrete mixes with different grout constituents were tested. From each mix, twenty-four standard cylinder samples of size (150 mm × 300 mm) of concrete containing crushed aggregate were produced. The tested samples were made from combinations of w/c equal to 0.45, 0.55 and 0.85, and three c/s values: 0.5, 1 and 1.5. Silica fume was added at a dosage of 6% of the weight of cement, while superplasticizer was added at a dosage of 2% of cement weight. Results indicated that both tensile and compressive strength of TSC can be statistically derived as functions of w/c and c/s with good correlation coefficients. The basic principle of traditional concrete, that an increase in water/cement ratio leads to a reduction in compressive strength, was shown to hold true for the TSC specimens tested. Using a combination of both silica fume and superplasticizers caused a significant increase in strength relative to the control mixes.

  6. The experimental study of a two-stage photovoltaic thermal system based on solar trough concentration

    International Nuclear Information System (INIS)

    Tan, Lijun; Ji, Xu; Li, Ming; Leng, Congbin; Luo, Xi; Li, Haili

    2014-01-01

    Highlights: • A two-stage photovoltaic thermal system based on solar trough concentration. • Maximum cell efficiency of 5.21% with a mirror opening width of 57 cm. • With a single cycle, the maximum temperature rise in the heating stage is 12.06 °C. • With 30 min multiple cycles, the working medium temperature reached 62.8 °C, an increase of 28.7 °C. - Abstract: A two-stage photovoltaic thermal system based on solar trough concentration is proposed, in which a metal cavity heating stage is added after the PV/T stage, so that higher-temperature thermal energy is output together with electric energy. With the 1.8 m² mirror PV/T system, the characteristic parameters of the space solar cell under non-concentrating and concentrating solar radiation were tested experimentally, and the solar cell output characteristics at different opening widths of the concentrating mirror of the PV/T stage under concentration were also tested. When the mirror opening width was 57 cm, the solar cell efficiency reached a maximum value of 5.21%. The experimental platform of the two-stage photovoltaic thermal system was established, with a 1.8 m² mirror PV/T stage and a 15 m² mirror heating stage, or a 1.8 m² mirror PV/T stage and a 30 m² mirror heating stage. The results showed that with a single cycle, the long metal cavity heating stage gave lower thermal efficiency, but the temperature rise of the working medium was higher, up to 12.06 °C in a single cycle. With 30 min of closed multiple cycles, the temperature of the working medium in the water tank reached 62.8 °C, an increase of 28.7 °C, and higher-temperature thermal energy could be output

  7. Two-stage solar concentrators based on parabolic troughs: asymmetric versus symmetric designs.

    Science.gov (United States)

    Schmitz, Max; Cooper, Thomas; Ambrosetti, Gianluca; Steinfeld, Aldo

    2015-11-20

    While nonimaging concentrators can approach the thermodynamic limit of concentration, they generally suffer from poor compactness when designed for small acceptance angles, e.g., to capture direct solar irradiation. Symmetric two-stage systems utilizing an image-forming primary parabolic concentrator in tandem with a nonimaging secondary concentrator partially overcome this compactness problem, but their achievable concentration ratio is ultimately limited by the central obstruction caused by the secondary. Significant improvements can be realized by two-stage systems having asymmetric cross-sections, particularly for 2D line-focus trough designs. We therefore present a detailed analysis of two-stage line-focus asymmetric concentrators for flat receiver geometries and compare them to their symmetric counterparts. Exemplary designs are examined in terms of the key optical performance metrics, namely, geometric concentration ratio, acceptance angle, concentration-acceptance product, aspect ratio, active area fraction, and average number of reflections. Notably, we show that asymmetric designs can achieve significantly higher overall concentrations and are always more compact than symmetric systems designed for the same concentration ratio. Using this analysis as a basis, we develop novel asymmetric designs, including two-wing and nested configurations, which surpass the optical performance of two-mirror aplanats and are comparable with the best reported 2D simultaneous multiple surface designs for both hollow and dielectric-filled secondaries.

  8. Thermodynamics analysis of a modified dual-evaporator CO2 transcritical refrigeration cycle with two-stage ejector

    International Nuclear Information System (INIS)

    Bai, Tao; Yan, Gang; Yu, Jianlin

    2015-01-01

    In this paper, a modified dual-evaporator CO2 transcritical refrigeration cycle with a two-stage ejector (MDRC) is proposed. In the MDRC, the two-stage ejector is employed to recover expansion work from the cycle throttling processes, enhance the system performance and obtain dual-temperature refrigeration simultaneously. The effects of some key parameters on the thermodynamic performance of the modified cycle are theoretically investigated based on energetic and exergetic analyses. The simulation results for the modified cycle show that the two-stage ejector improves system performance more effectively than a single ejector in a CO2 dual-temperature refrigeration cycle, and the improvements in the maximum system COP (coefficient of performance) and system exergy efficiency could reach 37.61% and 31.9% over those of the conventional dual-evaporator cycle under the given operating conditions. The exergetic analysis for each component at the optimum discharge pressure indicates that the gas cooler, compressor, two-stage ejector and expansion valves contribute the main portion of the total system exergy destruction, and the exergy destruction caused by the two-stage ejector could amount to 16.91% of the exergy input. The performance characteristics of the proposed cycle show its promise in dual-evaporator refrigeration systems. - Highlights: • A two-stage ejector is used in a dual-evaporator CO2 transcritical refrigeration cycle. • Energetic and exergetic methods are used to analyze the system performance. • The modified cycle can provide dual-temperature refrigeration simultaneously. • The two-stage ejector effectively improves system COP and exergy efficiency

  9. Eliminating Survivor Bias in Two-stage Instrumental Variable Estimators.

    Science.gov (United States)

    Vansteelandt, Stijn; Walter, Stefan; Tchetgen Tchetgen, Eric

    2018-07-01

    Mendelian randomization studies commonly focus on elderly populations. This makes the instrumental variables analysis of such studies sensitive to survivor bias, a type of selection bias. A particular concern is that the instrumental variable conditions, even when valid for the source population, may be violated for the selective population of individuals who survive the onset of the study. This is potentially very damaging because Mendelian randomization studies are known to be sensitive to bias due to even minor violations of the instrumental variable conditions. Interestingly, the instrumental variable conditions continue to hold within certain risk sets of individuals who are still alive at a given age when the instrument and unmeasured confounders exert additive effects on the exposure, and moreover, the exposure and unmeasured confounders exert additive effects on the hazard of death. In this article, we will exploit this property to derive a two-stage instrumental variable estimator for the effect of exposure on mortality, which is insulated against the above described selection bias under these additivity assumptions.
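    For readers unfamiliar with the terminology, the sketch below shows the standard two-stage least-squares (2SLS) template that the phrase "two-stage instrumental variable estimator" refers to; it is not the survivor-bias-corrected estimator of the paper, and the simulated data-generating process (instrument Z, unmeasured confounder U, exposure X, outcome Y) is invented purely for illustration.

    ```python
    import numpy as np

    # Standard 2SLS template on synthetic data; the true causal effect is 0.3.
    rng = np.random.default_rng(1)
    n = 20_000
    Z = rng.binomial(2, 0.3, n).astype(float)     # instrument (e.g. allele count)
    U = rng.normal(size=n)                        # unmeasured confounder
    X = 0.5 * Z + U + rng.normal(size=n)          # exposure
    Y = 0.3 * X + U + rng.normal(size=n)          # outcome

    def ols(design: np.ndarray, y: np.ndarray) -> np.ndarray:
        """Ordinary least squares coefficients."""
        return np.linalg.lstsq(design, y, rcond=None)[0]

    ones = np.ones(n)
    # Stage 1: regress the exposure on the instrument, keep the fitted values.
    X_hat = np.column_stack([ones, Z]) @ ols(np.column_stack([ones, Z]), X)
    # Stage 2: regress the outcome on the fitted exposure.
    beta_iv = ols(np.column_stack([ones, X_hat]), Y)[1]
    beta_naive = ols(np.column_stack([ones, X]), Y)[1]
    print(f"naive OLS estimate {beta_naive:.3f}  (confounded)")
    print(f"2SLS estimate      {beta_iv:.3f}  (close to the true 0.3)")
    ```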

  10. FIRST DIRECT EVIDENCE OF TWO STAGES IN FREE RECALL

    Directory of Open Access Journals (Sweden)

    Eugen Tarnow

    2015-12-01

    Full Text Available I find that exactly two stages can be seen directly in sequential free recall distributions. These distributions show that the first three recalls come from the emptying of working memory, recalls 6 and above come from a second stage, and the 4th and 5th recalls are mixtures of the two. A discontinuity, a rounded step function, is shown to exist in the fitted linear slope of the recall distributions as the recall shifts from the emptying of working memory (positive slope) to the second stage (negative slope). The discontinuity leads to a first estimate of the capacity of working memory at 4-4.5 items. The total recall is shown to be a linear combination of the content of working memory and items recalled in the second stage, with 3.0-3.9 items coming from working memory, a second estimate of the capacity of working memory. A third, separate upper limit on the capacity of working memory is found (3.06 items), corresponding to the requirement that the content of working memory cannot exceed the total recall, item by item. This third limit is presumably the best limit on the average capacity of unchunked working memory. The second stage of recall is shown to be reactivation: the average times to retrieve additional items in free recall obey a linear relationship as a function of the recall probability, which mimics recognition and cued recall, both mechanisms using reactivation (Tarnow, 2008).

  11. Two-stage nuclear refrigeration with enhanced nuclear moments

    International Nuclear Information System (INIS)

    Hunik, R.

    1979-01-01

    Experiments are described in which an enhanced nuclear system is used as a precoolant for a nuclear demagnetisation stage. The results show the promising advantages of such a system in circumstances where a large cooling power is required at extremely low temperatures. A theoretical review of nuclear enhancement at the microscopic level and its macroscopic thermodynamic consequences is given. The experimental equipment for the implementation of the nuclear enhanced refrigeration method is described and the experiments on two-stage nuclear demagnetisation are discussed. With the nuclear enhanced system PrCu6 the author could precool a nuclear stage of indium in a magnetic field of 6 T down to temperatures below 10 mK; this resulted in temperatures below 1 mK after demagnetisation of the indium. It is demonstrated that the interaction energy between the nuclear moments in an enhanced nuclear system can exceed the nuclear dipolar interaction. Several experiments are described on pulsed nuclear magnetic resonance, as utilised for thermometry purposes. It is shown that platinum NMR thermometry gives very satisfactory results around 1 mK. The results of experiments on nuclear orientation of radioactive nuclei, e.g. the brute force polarisation of 95NbPt and 60CoCu, are presented, some of which are of major importance for thermometry in the milli-Kelvin region. (Auth.)

  12. A comprehensive review on two-stage integrative schemes for the valorization of dark fermentative effluents.

    Science.gov (United States)

    Sivagurunathan, Periyasamy; Kuppam, Chandrasekhar; Mudhoo, Ackmez; Saratale, Ganesh D; Kadier, Abudukeremu; Zhen, Guangyin; Chatellard, Lucile; Trably, Eric; Kumar, Gopalakrishnan

    2017-12-21

    This review provides alternative routes towards the valorization of dark H2 fermentation effluents that are mainly rich in volatile fatty acids such as acetate and butyrate. Various enhancement and alternative routes, such as photo fermentation, anaerobic digestion, utilization of microbial electrochemical systems, and algal systems, towards the generation of bioenergy and electricity and for efficient organic matter utilization are highlighted. In addition, various integration schemes and two-stage fermentation options for possible scale-up are reviewed. Moreover, recent progress towards enhanced waste stabilization and the overall conversion of the COD present in the organic source into value-added products is extensively discussed.

  13. A Novel Two-Stage Dynamic Spectrum Sharing Scheme in Cognitive Radio Networks

    Institute of Scientific and Technical Information of China (English)

    Guodong Zhang; Wei Heng; Tian Liang; Chao Meng; Jinming Hu

    2016-01-01

    In order to enhance the efficiency of spectrum utilization and reduce communication overhead in the spectrum sharing process, we propose a two-stage dynamic spectrum sharing scheme in which cooperative and noncooperative modes are analyzed in both stages. In particular, the existence and the uniqueness of Nash Equilibrium (NE) strategies for the noncooperative mode are proved. In addition, a distributed iterative algorithm is proposed to obtain the optimal solutions of the scheme. Simulation studies are carried out to show the performance comparison between the two modes as well as the system revenue improvement of the proposed scheme compared with a conventional scheme without a virtual price control factor.

  14. Preliminary cleaning of brewery waste water in a two-stage anaerobic plant: influence of COD in the inflow on cleaning efficiency and biogas formation; Vorreinigung von Brauereiabwasser in zweistufigen Anaerob-Anlagen: Einfluss des CSB im Zulauf auf die Reinigungsleistung und Biogasbildung

    Energy Technology Data Exchange (ETDEWEB)

    Herrmann, A.P. [Universitaet des Saarlandes, Saarbruecken (Germany). Lehrstuhl fuer Prozesstechnik; Janke, H.D. [Gesellschaft fuer Umweltkompatible Prozesstechnik mbH (upt), Saarbruecken (Germany); Chmiel, H. [Gesellschaft fuer Umweltkompatible Prozesstechnik mbH (upt), Saarbruecken (Germany); Universitaet des Saarlandes, Saarbruecken (Germany). Lehrstuhl fuer Prozesstechnik

    1999-07-01

    Using a continuously operated, two-stage laboratory system (acidification reactor and packed-bed methane reactor) and with brewery waste water as a substrate, systematic studies concerning the influence of COD{sup inflow} on fatty acid formation, COD reduction and biogas formation were carried out. Overall, the pilot tests performed permit the conclusion that treatment of a partial stream (COD{sup inflow} {>=} 5000 mg/l), though not advantageous in terms of space/time yield, may be more economical on the whole under certain boundary conditions than treatment of the entire stream (COD{sup inflow} 1800-3000 mg/l). (orig.) [German original: Using a continuously operated, two-stage laboratory plant (acidification reactor and fixed-bed methane reactor) with brewery waste water as substrate, systematic investigations were carried out on the influence of COD{sup inflow} on fatty acid formation, COD reduction and biogas formation. In summary, the model experiments show that partial-stream treatment (COD{sup inflow} {>=} 5,000 mg/l) offers no advantages with respect to the space/time yield but, under certain boundary conditions, can be more economical overall than full-stream treatment (COD{sup inflow} 1,800-3,000 mg/l). (orig.)]

  15. Remote Sensing Based Two-Stage Sampling for Accuracy Assessment and Area Estimation of Land Cover Changes

    Directory of Open Access Journals (Sweden)

    Heinz Gallaun

    2015-09-01

    Full Text Available Land cover change processes are accelerating at the regional to global level. The remote sensing community has developed reliable and robust methods for wall-to-wall mapping of land cover changes; however, land cover changes often occur at rates below the mapping errors. In the current publication, we propose a cost-effective approach to complement wall-to-wall land cover change maps with a sampling approach, which is used for accuracy assessment and accurate estimation of areas undergoing land cover changes, including provision of confidence intervals. We propose a two-stage sampling approach in order to keep accuracy, efficiency, and effort of the estimations in balance. Stratification is applied in both stages in order to gain control over the sample size allocated to rare land cover change classes on the one hand and the cost constraints for very high resolution reference imagery on the other. Bootstrapping is used to complement the accuracy measures and the area estimates with confidence intervals. The area estimates and verification estimations rely on a high quality visual interpretation of the sampling units based on time series of satellite imagery. To demonstrate the cost-effective operational applicability of the approach we applied it for assessment of deforestation in an area characterized by frequent cloud cover and very low change rate in the Republic of Congo, which makes accurate deforestation monitoring particularly challenging.
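    A stripped-down sketch of the estimation idea (two-stage sampling with bootstrap confidence intervals) is given below; it deliberately omits the stratification used in the paper, and the synthetic population, sample sizes and change rate are assumptions for illustration only.

    ```python
    import numpy as np

    # Two-stage sample of a synthetic change map: stage 1 draws primary sampling
    # units (map blocks), stage 2 draws pixels within each selected block.
    rng = np.random.default_rng(42)

    # Synthetic "population": 200 PSUs, each with 400 pixels labelled 1 (change)
    # or 0 (no change); true change rate ~1%.
    population = [rng.binomial(1, 0.01, 400) for _ in range(200)]

    # Stage 1: 20 PSUs; stage 2: 50 pixels within each selected PSU.
    psu_idx = rng.choice(len(population), size=20, replace=False)
    sample = [rng.choice(population[i], size=50, replace=False) for i in psu_idx]
    psu_means = np.array([s.mean() for s in sample])
    print(f"estimated change proportion: {psu_means.mean():.4f}")

    # Bootstrap over PSUs (the first-stage units) to obtain a confidence interval.
    boot = [rng.choice(psu_means, size=psu_means.size, replace=True).mean()
            for _ in range(2000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"95% bootstrap CI: [{lo:.4f}, {hi:.4f}]")
    ```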

  16. Two-stage Catalytic Reduction of NOx with Hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Umit S. Ozkan; Erik M. Holmgreen; Matthew M. Yung; Jonathan Halter; Joel Hiltner

    2005-12-21

    A two-stage system for the catalytic reduction of NO from lean-burn natural gas reciprocating engine exhaust is investigated. Each of the two stages uses a distinct catalyst. The first stage is oxidation of NO to NO{sub 2} and the second stage is reduction of NO{sub 2} to N{sub 2} with a hydrocarbon. The central idea is that since NO{sub 2} is a more easily reduced species than NO, it should be better able to compete with oxygen for the combustion reaction of the hydrocarbon, which is a challenge in lean conditions. Early work focused on demonstrating that the N{sub 2} yield obtained when NO{sub 2} was reduced was greater than when NO was reduced. NO{sub 2} reduction catalysts were designed, and silver supported on alumina (Ag/Al{sub 2}O{sub 3}) was found to be quite active, able to achieve 95% N{sub 2} yield in 10% O{sub 2} using propane as the reducing agent. The design of a catalyst for NO oxidation was also investigated, and a Co/TiO{sub 2} catalyst prepared by sol-gel was shown to have high activity for the reaction, able to reach the equilibrium conversion of 80% at 300 C at a GHSV of 50,000 h{sup -1}. After it was shown that NO{sub 2} could be more easily reduced to N{sub 2} than NO, the focus shifted to developing a catalyst that could use methane as the reducing agent. The Ag/Al{sub 2}O{sub 3} catalyst was tested and found to be inactive for NOx reduction with methane. Through iterative catalyst design, a palladium-based catalyst on a sulfated-zirconia support (Pd/SZ) was synthesized and shown to be able to selectively reduce NO{sub 2} in lean conditions using methane. Development of catalysts for the oxidation reaction also continued, and higher activity, as well as stability in 10% water, was observed on a Co/ZrO{sub 2} catalyst, which reached the equilibrium conversion of 94% at 250 C at the same GHSV. The Co/ZrO{sub 2} catalyst was also found to be extremely active for oxidation of CO, ethane, and propane, which could potentially eliminate the need for any separate oxidation catalyst.

  17. Causes for the two stages of the disruption energy quench

    Energy Technology Data Exchange (ETDEWEB)

    Schueller, F.C.; Donne, A.J.H.; Heijnen, S.H.; Rommers, J.R.; Tanzi, C.P. [FOM-Instituut voor Plasmafysica, Rijnhuizen (Netherlands); Vries, P.C. de; Waidmann, G. [Forschungszentrum Juelich GmbH (Germany). Inst. fuer Plasmaphysik

    1994-12-31

    It is a well-established fact that the energy quench of tokamak disruptions takes place in two stages separated by a plateau period. The total quench duration of typically a few hundred {mu}s is thought to be a combination of Alfven and magnetic diffusion times: Phase 1: a large cold m=1 bubble eats out the hot core within the q=1 surface. Since the normal thermal isolation of the outer layers is still intact this phase means an adiabatic flattening of the inner temperature distribution. Phase 2: after a plateau period the second quench occurs when the edge thermal barrier collapses and a major part of the plasma energy is lost in conjunction with a negative surface voltage spike and a positive spike of the plasma current. In the experimental and theoretical literature on this subject not much attention is given to the evolution of the density distribution during these two phases. This may be caused by the great difficulties one has to keep the fringe counters of multichannel interferometers on track during the very fast changing evolution. The interferometer at TEXTOR can follow this evolution. The spatial resolution after inversion is limited because of the modest number of interferometer channels. In RTP an 18-channel fast interferometer is available next to a 4-channel pulse radar reflectometer which makes it possible to investigate the density profile evolution with both good time (2 {mu}s)- and spatial (0.1a)-resolution. A fast 20-channel ECE-heterodyne radiometer and a 5-camera SXR system allows to follow the temperature profile evolution as well. In this paper theoretical models will be revisited and compared to the new experimental evidence. (author) 9 refs., 3 figs.

  18. Causes for the two stages of the disruption energy quench

    International Nuclear Information System (INIS)

    Schueller, F.C.; Donne, A.J.H.; Heijnen, S.H.; Rommers, J.R.; Tanzi, C.P.; Vries, P.C. de; Waidmann, G.

    1994-01-01

    It is a well-established fact that the energy quench of tokamak disruptions takes place in two stages separated by a plateau period. The total quench duration of typically a few hundred μs is thought to be a combination of Alfven and magnetic diffusion times: Phase 1: a large cold m=1 bubble eats out the hot core within the q=1 surface. Since the normal thermal isolation of the outer layers is still intact this phase means an adiabatic flattening of the inner temperature distribution. Phase 2: after a plateau period the second quench occurs when the edge thermal barrier collapses and a major part of the plasma energy is lost in conjunction with a negative surface voltage spike and a positive spike of the plasma current. In the experimental and theoretical literature on this subject not much attention is given to the evolution of the density distribution during these two phases. This may be caused by the great difficulties one has to keep the fringe counters of multichannel interferometers on track during the very fast changing evolution. The interferometer at TEXTOR can follow this evolution. The spatial resolution after inversion is limited because of the modest number of interferometer channels. In RTP an 18-channel fast interferometer is available next to a 4-channel pulse radar reflectometer which makes it possible to investigate the density profile evolution with both good time (2 μs)- and spatial (0.1a)-resolution. A fast 20-channel ECE-heterodyne radiometer and a 5-camera SXR system allows to follow the temperature profile evolution as well. In this paper theoretical models will be revisited and compared to the new experimental evidence. (author) 9 refs., 3 figs

  19. Transport fuels from two-stage coal liquefaction

    Energy Technology Data Exchange (ETDEWEB)

    Benito, A.; Cebolla, V.; Fernandez, I.; Martinez, M.T.; Miranda, J.L.; Oelert, H.; Prado, J.G. (Instituto de Carboquimica CSIC, Zaragoza (Spain))

    1994-03-01

    Four Spanish lignites and their vitrinite concentrates were evaluated for coal liquefaction. Correlations between vitrinite content and conversion in direct liquefaction were observed for the lignites but not for the vitrinite concentrates. The most reactive of the four coals was processed in two-stage liquefaction at a larger scale. First-stage coal liquefaction was carried out in a continuous unit at Clausthal University at a temperature of 400[degree]C, at 20 MPa hydrogen pressure and with anthracene oil as a solvent. The coal conversion obtained was 75.41%, comprising 3.79% gases, 2.58% primary condensate and 69.04% heavy liquids. A hydroprocessing unit was built at the Instituto de Carboquimica for the second-stage coal liquefaction. Whole and deasphalted liquids from the first-stage liquefaction were processed at 450[degree]C and 10 MPa hydrogen pressure, with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al[sub 2]O[sub 3]) and HT-500E (Ni-Mo/Al[sub 2]O[sub 3]). The effects of liquid hourly space velocity (LHSV), temperature, gas/liquid ratio and catalyst on the heteroatom content of the liquids were studied, and levels of 5 ppm of nitrogen and 52 ppm of sulphur were reached at 450[degree]C, 10 MPa hydrogen pressure, 0.08 kg H[sub 2]/kg feedstock and with the Harshaw HT-500E catalyst. The liquids obtained were hydroprocessed again at 420[degree]C, 10 MPa hydrogen pressure and 0.06 kg H[sub 2]/kg feedstock to hydrogenate the aromatic structures. Under these conditions, the aromaticity was reduced considerably, and 39% of naphtha and 35% of kerosene fractions were obtained. 18 refs., 4 figs., 4 tabs.

  20. Two-Stage Performance Engineering of Container-based Virtualization

    Directory of Open Access Journals (Sweden)

    Zheng Li

    2018-02-01

    Full Text Available Cloud computing has become a compelling paradigm built on compute and storage virtualization technologies. The current virtualization solution in the Cloud relies widely on hypervisor-based technologies. Given the recent booming of the container ecosystem, container-based virtualization is starting to receive more attention as a promising alternative. Although container technologies are generally considered to be lightweight, no virtualization solution is ideally resource-free, and the corresponding performance overheads lead to negative impacts on the quality of Cloud services. To facilitate understanding container technologies from the performance engineering perspective, we conducted a two-stage performance investigation into Docker containers as a concrete example. At the first stage, we used a physical machine with “just-enough” resource as a baseline to investigate the performance overhead of a standalone Docker container against a standalone virtual machine (VM). With findings contrary to the related work, our evaluation results show that the virtualization’s performance overhead can vary not only on a feature-by-feature basis but also on a job-to-job basis. Moreover, the hypervisor-based technology does not come with higher performance overhead in every case; for example, Docker containers particularly exhibit lower QoS in terms of storage transaction speed. At the ongoing second stage, we employed a physical machine with “fair-enough” resource to implement a container-based MapReduce application and tried to optimize its performance. In fact, this machine could not afford VM-based MapReduce clusters of the same scale. The performance tuning results show that the effects of different optimization strategies can be largely related to the data characteristics; for example, LZO compression brings the most significant performance improvement when dealing with text data in our case.

  1. Relative efficiency of hydrogen technologies for the hydrogen economy : a fuzzy AHP/DEA hybrid model approach

    International Nuclear Information System (INIS)

    Lee, S.

    2009-01-01

    As a provider of national energy security, the Korean Institute of Energy Research is seeking to establish a long term strategic technology roadmap for a hydrogen-based economy. This paper addressed 5 criteria regarding the strategy, notably economic impact, commercial potential, inner capacity, technical spinoff, and development cost. The fuzzy AHP and DEA hybrid model were used in a two-stage multi-criteria decision making approach to evaluate the relative efficiency of hydrogen technologies for the hydrogen economy. The fuzzy analytic hierarchy process reflects the uncertainty of human thoughts with interval values instead of clear-cut numbers. It therefore allocates the relative importance of 4 criteria, notably economic impact, commercial potential, inner capacity and technical spin-off. The relative efficiency of hydrogen technologies for the hydrogen economy can be measured via data envelopment analysis. It was concluded that the scientific decision making approach can be used effectively to allocate research and development resources and activities

  2. Relative efficiency of hydrogen technologies for the hydrogen economy : a fuzzy AHP/DEA hybrid model approach

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S. [Korea Inst. of Energy Research, Daejeon (Korea, Republic of). Energy Policy Research Division; Mogi, G. [Tokyo Univ., (Japan). Dept. of Technology Management for Innovation, Graduate School of Engineering; Kim, J. [Korea Inst. of Energy Research, Daejeon (Korea, Republic of)

    2009-07-01

    As a provider of national energy security, the Korean Institute of Energy Research is seeking to establish a long term strategic technology roadmap for a hydrogen-based economy. This paper addressed 5 criteria regarding the strategy, notably economic impact, commercial potential, inner capacity, technical spinoff, and development cost. The fuzzy AHP and DEA hybrid model were used in a two-stage multi-criteria decision making approach to evaluate the relative efficiency of hydrogen technologies for the hydrogen economy. The fuzzy analytic hierarchy process reflects the uncertainty of human thoughts with interval values instead of clear-cut numbers. It therefore allocates the relative importance of 4 criteria, notably economic impact, commercial potential, inner capacity and technical spin-off. The relative efficiency of hydrogen technologies for the hydrogen economy can be measured via data envelopment analysis. It was concluded that the scientific decision making approach can be used effectively to allocate research and development resources and activities.

  3. Research on Two-channel Interleaved Two-stage Paralleled Buck DC-DC Converter for Plasma Cutting Power Supply

    DEFF Research Database (Denmark)

    Yang, Xi-jun; Qu, Hao; Yao, Chen

    2014-01-01

    As for high-power plasma power supplies, the multi-channel interleaved, multi-stage paralleled buck DC-DC converter becomes the first choice due to its high efficiency and flexibility. In this paper, a two-channel interleaved two-stage paralleled buck DC-DC converter powered by a three-phase AC power supply...

  4. Impact of two-stage turbocharging architectures on pumping losses of automotive engines based on an analytical model

    International Nuclear Information System (INIS)

    Galindo, J.; Serrano, J.R.; Climent, H.; Varnier, O.

    2010-01-01

    The present work presents an analytical study of the performance of two-stage turbocharging configurations. The aim of this work is to understand the influence of different two-stage-architecture parameters, to optimize the use of exhaust manifold gas energy and to aid the decision-making process. An analytical model giving the relationship between global compression ratio and global expansion ratio is developed as a function of basic engine and turbocharging system parameters. With this analytical solution, the influence of different variables, such as the expansion ratio split between the HP and LP turbines, intercooler efficiency, turbocharger efficiencies, cooling fluid temperature and exhaust temperature, is studied independently. Engine simulations with the proposed analytical model have been performed to analyze the influence of these parameters on brake thermal efficiency and pumping mean effective pressure. The results obtained show the overall performance of the two-stage system for the whole operating range and characterize the optimum control of the elements for each operating condition. The model was also used to compare single-stage and two-stage architecture performance under the same engine operating conditions. The benefits and limits of each type of system in terms of breathing capability and brake thermal efficiency are presented and analyzed.
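    As background for why splitting the compression between two stages helps, the following textbook-style sketch (not the authors' engine model; the gas properties and pressures are arbitrary assumptions) shows that, with perfect intercooling, the compression work is minimised when the overall pressure ratio is shared equally between the stages.

    ```python
    import numpy as np

    # Ideal-gas, isentropic two-stage compression with a perfect intercooler back
    # to the inlet temperature (assumed illustrative parameters).
    k, R, T1 = 1.4, 287.0, 300.0          # gas properties and inlet temperature [K]
    p1, p2 = 1.0e5, 4.0e5                 # overall pressure ratio of 4

    def stage_work(pr):
        """Isentropic compression work per kg for one stage with pressure ratio pr."""
        return k / (k - 1) * R * T1 * (pr ** ((k - 1) / k) - 1)

    single = stage_work(p2 / p1)
    ratios = np.linspace(1.05, p2 / p1 / 1.05, 500)       # candidate LP-stage ratios
    two_stage = stage_work(ratios) + stage_work((p2 / p1) / ratios)
    best = ratios[np.argmin(two_stage)]

    print(f"single-stage work    {single / 1e3:.1f} kJ/kg")
    print(f"best two-stage work  {two_stage.min() / 1e3:.1f} kJ/kg")
    print(f"optimum LP ratio     {best:.2f} (sqrt of overall ratio = {np.sqrt(p2 / p1):.2f})")
    ```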

  5. Probabilistic Forecasting of Photovoltaic Generation: An Efficient Statistical Approach

    DEFF Research Database (Denmark)

    Wan, Can; Lin, Jin; Song, Yonghua

    2017-01-01

    This letter proposes a novel efficient probabilistic forecasting approach to accurately quantify the variability and uncertainty of the power production from photovoltaic (PV) systems. Distinguished from most existing models, a linear programming based prediction interval construction model for PV power generation is proposed based on extreme learning machine and quantile regression, featuring high reliability and computational efficiency. The proposed approach is validated through numerical studies on PV data from Denmark.

  6. Study on a high capacity two-stage free piston Stirling cryocooler working around 30 K

    Science.gov (United States)

    Wang, Xiaotao; Zhu, Jian; Chen, Shuai; Dai, Wei; Li, Ke; Pang, Xiaomin; Yu, Guoyao; Luo, Ercang

    2016-12-01

    This paper presents a two-stage high-capacity free-piston Stirling cryocooler driven by a linear compressor to meet the requirement of the high temperature superconductor (HTS) motor applications. The cryocooler system comprises a single piston linear compressor, a two-stage free piston Stirling cryocooler and a passive oscillator. A single stepped displacer configuration was adopted. A numerical model based on the thermoacoustic theory was used to optimize the system operating and structure parameters. Distributions of pressure wave, phase differences between the pressure wave and the volume flow rate and different energy flows are presented for a better understanding of the system. Some characterizing experimental results are presented. Thus far, the cryocooler has reached a lowest cold-head temperature of 27.6 K and achieved a cooling power of 78 W at 40 K with an input electric power of 3.2 kW, which indicates a relative Carnot efficiency of 14.8%. When the cold-head temperature increased to 77 K, the cooling power reached 284 W with a relative Carnot efficiency of 25.9%. The influences of different parameters such as mean pressure, input electric power and cold-head temperature are also investigated.

  7. A two-stage inexact joint-probabilistic programming method for air quality management under uncertainty.

    Science.gov (United States)

    Lv, Y; Huang, G H; Li, Y P; Yang, Z F; Sun, W

    2011-03-01

    A two-stage inexact joint-probabilistic programming (TIJP) method is developed for planning a regional air quality management system with multiple pollutants and multiple sources. The TIJP method incorporates the techniques of two-stage stochastic programming, joint-probabilistic constraint programming and interval mathematical programming, where uncertainties expressed as probability distributions and interval values can be addressed. Moreover, it can not only examine the risk of violating joint-probability constraints, but also account for economic penalties as corrective measures against any infeasibility. The developed TIJP method is applied to a case study of a regional air pollution control problem, where the air quality index (AQI) is introduced for evaluation of the integrated air quality management system associated with multiple pollutants. The joint-probability exists in the environmental constraints for AQI, such that individual probabilistic constraints for each pollutant can be efficiently incorporated within the TIJP model. The results indicate that useful solutions for air quality management practices have been generated; they can help decision makers to identify desired pollution abatement strategies with minimized system cost and maximized environmental efficiency. Copyright © 2010 Elsevier Ltd. All rights reserved.
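    The two-stage stochastic programming ingredient of the TIJP method can be illustrated with a minimal recourse problem; the sketch below omits the joint-probabilistic and interval parts, and the costs, load scenarios and probabilities are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # First stage: choose an abatement capacity x before the pollutant load is
    # known.  Second stage: pay a higher penalty c2 for every unit of load d_s
    # that exceeds x in scenario s (all numbers are assumed).
    c1, c2 = 10.0, 35.0                       # unit cost of capacity vs. penalty
    loads = np.array([80.0, 100.0, 140.0])    # pollutant-load scenarios
    probs = np.array([0.3, 0.5, 0.2])

    # Deterministic equivalent LP over variables [x, y_1, y_2, y_3]:
    #   minimise c1*x + sum_s probs[s]*c2*y_s   s.t.  x + y_s >= loads[s], x, y >= 0
    c = np.concatenate([[c1], probs * c2])
    A_ub = np.hstack([-np.ones((loads.size, 1)), -np.eye(loads.size)])
    b_ub = -loads
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + loads.size))

    print(f"first-stage capacity: {res.x[0]:.1f}")
    print(f"expected total cost:  {res.fun:.1f}")
    ```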

  8. An efficient and extensible approach for compressing phylogenetic trees

    KAUST Repository

    Matthews, Suzanne J; Williams, Tiffani L

    2011-01-01

    Background: Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend

  9. Performance analysis of a potassium-steam two stage vapour cycle

    International Nuclear Information System (INIS)

    Mitachi, Kohshi; Saito, Takeshi

    1983-01-01

    Raising the thermal efficiency of thermal power plants is an important subject. In present thermal power plants using the steam cycle, the plant thermal efficiency has already reached 41 to 42%, with a steam temperature of 839 K and a steam pressure of 24.2 MPa; that is, the thermal efficiency of the steam cycle is approaching its limit. In this study, the performance of a metal vapour/steam two-stage Rankine cycle, obtained by combining a metal vapour cycle with a conventional steam cycle, was analysed. Three different combinations, using a high temperature potassium regenerative cycle with a low temperature steam regenerative cycle, a potassium regenerative cycle with a steam reheat and regenerative cycle, and a potassium bleed cycle with a steam reheat and regenerative cycle, were systematically analyzed for overall thermal efficiency, output ratio and flow rate ratio as the inlet temperature of the potassium turbine, the temperature of the potassium condenser and other parameters were varied. Although the overall thermal efficiency is improved by lowering the condensing temperature of the potassium vapour, this is limited by construction constraints because the specific volume of potassium in the low pressure section increases greatly. In the combination of a potassium vapour regenerative cycle with a steam regenerative cycle, the overall thermal efficiency can be 58.5%, and 60.2% if a steam reheat and regenerative cycle is employed. If a cycle in which steam is heated with bled vapour from the potassium vapour cycle is adopted, an overall thermal efficiency of 63.3% is expected. (Wakatsuki, Y.)
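    The efficiency levels quoted above are consistent with the standard series combination rule for a topping (potassium) and bottoming (steam) cycle, in which the bottoming cycle is driven by the heat rejected from the topping cycle; this is a generic textbook relation, and the numbers are illustrative assumptions rather than values from the paper:

    ```latex
    \eta_{\mathrm{overall}} = \eta_{K} + \left(1 - \eta_{K}\right)\eta_{\mathrm{steam}},
    \qquad \text{e.g.}\quad
    \eta_{K} = 0.28,\ \eta_{\mathrm{steam}} = 0.42
    \;\Rightarrow\;
    \eta_{\mathrm{overall}} \approx 0.58 .
    ```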

  10. Design of a Two-stage High-capacity Stirling Cryocooler Operating below 30K

    Science.gov (United States)

    Wang, Xiaotao; Dai, Wei; Zhu, Jian; Chen, Shuai; Li, Haibing; Luo, Ercang

    The high capacity cryocooler working below 30 K can find many applications such as superconducting motors, superconducting cables and cryopumps. Compared to the GM cryocooler, the Stirling cryocooler can achieve higher efficiency and a more compact structure. Because of these obvious advantages, we have designed a two-stage free-piston Stirling cryocooler system, which is driven by a moving-magnet linear compressor with an operating frequency of 40 Hz and a maximum input electric power of 5 kW. The first stage of the cryocooler is designed to operate at liquid nitrogen temperature and provide a cooling power of 100 W, and the second stage is expected to simultaneously provide a cooling power of 50 W below a temperature of 30 K. In order to achieve the best system efficiency, a numerical model based on the thermoacoustic model was developed to optimize the system operating and structure parameters.

  11. Development of advanced air-blown entrained-flow two-stage bituminous coal IGCC gasifier

    Directory of Open Access Journals (Sweden)

    Abaimov Nikolay A.

    2017-01-01

    Full Text Available Integrated gasification combined cycle (IGCC) technology has two main advantages: high efficiency and low levels of harmful emissions. The key element of IGCC is the gasifier, which converts solid fuel into a combustible synthesis gas. One of the most promising gasifiers is the air-blown entrained-flow two-stage bituminous coal gasifier developed by Mitsubishi Heavy Industries (MHI). The most obvious way to develop an advanced gasifier is to improve the commercial-scale 1700 t/d MHI gasifier using the computational fluid dynamics (CFD) method. Modernization of the commercial-scale 1700 t/d MHI gasifier is made by changing the regime parameters in order to improve its cold gas efficiency (CGE) and environmental performance, namely the H2/CO ratio. The first change is the supply of high temperature (900°C) steam in the gasifier second stage, and the second change is additional heating of the blast air to 900°C.

  12. Validation of Continuous CHP Operation of a Two-Stage Biomass Gasifier

    DEFF Research Database (Denmark)

    Ahrenfeldt, Jesper; Henriksen, Ulrik Birk; Jensen, Torben Kvist

    2006-01-01

    The Viking gasification plant at the Technical University of Denmark was built to demonstrate continuous combined heat and power operation of a two-stage gasifier fueled with wood chips. The nominal input of the gasifier is 75 kW thermal. To validate the continuous operation of the plant, a 9-day measurement campaign was performed. The campaign verified stable operation of the plant, and the energy balance resulted in an overall fuel-to-gas efficiency of 93% and a wood-to-electricity efficiency of 25%. Very low tar content in the producer gas was observed: only 0.1 mg/Nm3 naphthalene could be measured in the raw gas. Stable engine operation on the producer gas was observed, and very low emissions of aldehydes, N2O, and polycyclic aromatic hydrocarbons were measured.

  13. The Two-stage Constrained Equal Awards and Losses Rules for Multi-Issue Allocation Situation

    NARCIS (Netherlands)

    Lorenzo-Freire, S.; Casas-Mendez, B.; Hendrickx, R.L.P.

    2005-01-01

    This paper considers two-stage solutions for multi-issue allocation situations. Characterisations are provided for the two-stage constrained equal awards and constrained equal losses rules, based on the properties of composition and path independence.

  14. Two-Stage Design Method for Enhanced Inductive Energy Transmission with Q-Constrained Planar Square Loops.

    Directory of Open Access Journals (Sweden)

    Akaa Agbaeze Eteng

    Full Text Available Q-factor constraints are usually imposed on conductor loops employed as proximity range High Frequency Radio Frequency Identification (HF-RFID reader antennas to ensure adequate data bandwidth. However, pairing such low Q-factor loops in inductive energy transmission links restricts the link transmission performance. The contribution of this paper is to assess the improvement that is reached with a two-stage design method, concerning the transmission performance of a planar square loop relative to an initial design, without compromise to a Q-factor constraint. The first stage of the synthesis flow is analytical in approach, and determines the number and spacing of turns by which coupling between similar paired square loops can be enhanced with low deviation from the Q-factor limit presented by an initial design. The second stage applies full-wave electromagnetic simulations to determine more appropriate turn spacing and widths to match the Q-factor constraint, and achieve improved coupling relative to the initial design. Evaluating the design method in a test scenario yielded a more than 5% increase in link transmission efficiency, as well as an improvement in the link fractional bandwidth by more than 3%, without violating the loop Q-factor limit. These transmission performance enhancements are indicative of a potential for modifying proximity HF-RFID reader antennas for efficient inductive energy transfer and data telemetry links.

  15. A two stage data envelopment analysis model with undesirable output

    Science.gov (United States)

    Shariff Adli Aminuddin, Adam; Izzati Jaini, Nur; Mat Kasim, Maznah; Nawawi, Mohd Kamal Mohd

    2017-09-01

    The dependent relationship among decision making units (DMUs) is usually assumed to be non-existent in the development of Data Envelopment Analysis (DEA) models. The dependency can be represented by the multi-stage DEA model, where the outputs from the preceding stage become the inputs for the subsequent stage. The multi-stage DEA model evaluates both the efficiency score for each stage and the overall efficiency of the whole process. The existing multi-stage DEA models do not focus on the integration of undesirable outputs, for which a higher input generates a lower output, unlike normal desirable outputs. This research attempts to address the inclusion of such undesirable outputs and to investigate the theoretical implications and potential applications in the development of a multi-stage DEA model.
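    For orientation, the basic single-stage, input-oriented CCR building block of DEA can be written as one small linear program per decision making unit; the sketch below does not model the two-stage structure or the undesirable outputs discussed above, and the input/output data are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Input-oriented CCR (envelopment form): for each DMU j0, minimise theta
    # subject to  sum_j lambda_j * x_j <= theta * x_{j0}  and  sum_j lambda_j * y_j >= y_{j0}.
    X = np.array([[2.0, 3.0, 4.0, 5.0],      # inputs: rows = input types, cols = DMUs
                  [3.0, 2.0, 5.0, 4.0]])
    Y = np.array([[1.0, 1.5, 2.0, 2.5]])     # outputs: rows = output types

    n_dmu = X.shape[1]
    for j0 in range(n_dmu):
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.zeros(1 + n_dmu); c[0] = 1.0                     # minimise theta
        rows_in = np.hstack([-X[:, [j0]], X])                   # sum(lam*x) - theta*x0 <= 0
        rows_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])   # y0 - sum(lam*y) <= 0
        A_ub = np.vstack([rows_in, rows_out])
        b_ub = np.concatenate([np.zeros(X.shape[0]), -Y[:, j0]])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n_dmu))
        print(f"DMU {j0 + 1}: efficiency = {res.x[0]:.3f}")
    ```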

  16. A TWO-STAGE MODEL OF RADIOLOGICAL INSPECTION: SPENDING TIME

    International Nuclear Information System (INIS)

    BROWN, W.S.

    2000-01-01

    The paper describes a model that visually portrays radiological survey performance as basic parameters (surveyor efficiency and criteria, duration of pause, and probe speed) are varied; field and laboratory tests provided typical parameter values. The model is used to illustrate how practical constraints on the time allotted to the task can affect radiological inspection performance. Similar analyses are applicable to a variety of other tasks (airport baggage inspection, and certain types of non-destructive testing) with similar characteristics and constraints

  17. Development of an innovative two-stage process, a combination of acidogenic hydrogenesis and methanogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Han, S.K.; Shin, H.S. [Korea Advanced Inst. of Science and Technology, Daejeon (Korea, Republic of). Dept. of Civil and Enviromental Engineering

    2004-07-01

    Hydrogen produced from waste by means of fermentative bacteria is an attractive way to produce this fuel as an alternative to fossil fuels. It also helps treat the associated waste. The authors have undertaken to optimize acidogenic hydrogenesis and methanogenesis. Building on this, they then developed a two-stage process that produces both hydrogen and methane. Acidogenic hydrogenesis of food waste was investigated using a leaching bed reactor. The dilution rate was varied in order to maximize efficiency, which reached as high as 70.8 per cent. Further to this, an upflow anaerobic sludge blanket reactor converted the wastewater from acidogenic hydrogenesis into methane. Chemical oxygen demand (COD) removal rates exceeded 96 per cent up to a COD loading of 12.9 g COD/l/d. After this, the authors devised a new two-stage process based on a combination of acidogenic hydrogenesis and methanogenesis. The authors report on results for this process using food waste as feedstock. 5 refs., 5 figs.

  18. Two stage bioethanol refining with multi litre stacked microbial fuel cell and microbial electrolysis cell.

    Science.gov (United States)

    Sugnaux, Marc; Happe, Manuel; Cachelin, Christian Pierre; Gloriod, Olivier; Huguenin, Gérald; Blatter, Maxime; Fischer, Fabian

    2016-12-01

    Ethanol, electricity, hydrogen and methane were produced in a two-stage bioethanol refinery setup based on a 10 L microbial fuel cell (MFC) and a 33 L microbial electrolysis cell (MEC). The MFC was a triple stack for ethanol and electricity co-generation. The higher the stack potential, the more ethanol the stack configuration produced and the faster glucose was consumed. Under electrolytic conditions, ethanol productivity outperformed standard conditions and reached 96.3% of the theoretical best case. At lower external loads, currents and working potentials oscillated in a self-synchronized manner over all three MFC units in the stack. In the second refining stage, fermentation waste was converted into methane using the scaled-up MEC stack. The bioelectric methanisation reached 91% efficiency at room temperature with an applied voltage of 1.5 V using nickel cathodes. The two-stage bioethanol refining process employing bioelectrochemical reactors produces more energy vectors than is possible with today's ethanol distilleries. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Two-stage heterotrophic and phototrophic culture strategy for algal biomass and lipid production.

    Science.gov (United States)

    Zheng, Yubin; Chi, Zhanyou; Lucker, Ben; Chen, Shulin

    2012-01-01

    A two-stage heterotrophic and phototrophic culture strategy for algal biomass and lipid production was studied, wherein high-density heterotrophic cultures of Chlorella sorokiniana serve as seed for subsequent phototrophic growth. The data showed that the growth rate, cell density and productivity of heterotrophic C. sorokiniana were 3.0, 3.3 and 7.4 times higher than those of the phototrophic counterpart, respectively. Heterotrophic and phototrophic algal seeds had similar biomass/lipid production and fatty acid profiles when inoculated into the phototrophic culture system. To expand the application, food waste and wastewater were tested as feedstocks for heterotrophic growth and successfully supported cell growth. These results demonstrate the advantages of using heterotrophic algae cells as seeds for open algae culture systems. Additionally, a high inoculation rate of heterotrophic algal seed can be utilized as an effective method for contamination control. This two-stage heterotrophic-phototrophic process promises to provide a more efficient way for large-scale production of algal biomass and biofuels. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Two stage, low temperature, catalyzed fluidized bed incineration with in situ neutralization for radioactive mixed wastes

    International Nuclear Information System (INIS)

    Wade, J.F.; Williams, P.M.

    1995-01-01

    A two-stage, low-temperature, catalyzed fluidized bed incineration process is proving successful at incinerating hazardous wastes containing nuclear material. The process operates at 550 degrees C and 650 degrees C in its two stages. Acid gas neutralization takes place in situ using sodium carbonate as a sorbent in the first-stage bed. The feed material to the incinerator is hazardous waste, as defined by the Resource Conservation and Recovery Act, mixed with radioactive materials. The radioactive materials are plutonium, uranium, and americium that are byproducts of nuclear weapons production. Despite its low-temperature operation, this system successfully destroyed polychlorinated biphenyls at a 99.99992% destruction and removal efficiency. Radionuclides and volatile heavy metals leave the fluidized beds and enter the air pollution control system in minimal amounts. Recently collected modeling and experimental data show the process minimizes dioxin and furan production. The report also discusses air pollution, ash solidification, and other data collected from pilot- and demonstration-scale testing. The testing took place at the Rocky Flats Environmental Technology Site, a US Department of Energy facility, in the 1970s, 1980s, and 1990s.

  1. On bi-criteria two-stage transportation problem: a case study

    Directory of Open Access Journals (Sweden)

    Ahmad MURAD

    2010-01-01

    Full Text Available The study of the optimum distribution of goods between sources and destinations is one of the important topics in project economics. This importance comes from the need to minimize transportation cost, deterioration, time, etc. The classical transportation problem constitutes one of the major areas of application for linear programming. The aim of this problem is to obtain the optimum distribution of goods from different sources to different destinations which minimizes the total transportation cost. From the practical point of view, transportation problems may differ from the classical form: they may contain more than one objective function, more than one stage of transport, or more than one type of commodity with more than one means of transport. The aim of this paper is to construct an optimization model for the transportation problem of one of the mill-stones companies. The model is formulated as a bi-criteria two-stage transportation problem with a special structure depending on the capacities of the suppliers and warehouses and the requirements of the destinations. A solution algorithm is introduced to solve this class of bi-criteria two-stage transportation problems, obtaining the set of non-dominated extreme points and the efficient solutions associated with each one, which enables the decision maker to choose the best one. The solution algorithm is mainly based on the fruitful application of methods for treating transportation problems, the theory of duality in linear programming, and methods for solving bi-criteria linear programming problems.
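
    As a minimal sketch of the model class only (not the paper's algorithm for enumerating non-dominated extreme points), the code below sets up a two-stage (supplier to warehouse to destination) transportation LP with two criteria and traces an approximate efficient frontier with a weighted-sum scalarisation. All data, dimensions and names are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 2 suppliers, 2 warehouses, 3 destinations.
supply   = np.array([40.0, 60.0])
capacity = np.array([70.0, 50.0])                 # warehouse throughput limits
demand   = np.array([30.0, 30.0, 40.0])
c1 = np.array([[4.0, 6.0], [5.0, 3.0]])           # supplier->warehouse, criterion 1 (cost)
t1 = np.array([[2.0, 1.0], [3.0, 2.0]])           # supplier->warehouse, criterion 2 (time)
c2 = np.array([[3.0, 5.0, 4.0], [6.0, 2.0, 3.0]]) # warehouse->destination, criterion 1
t2 = np.array([[1.0, 2.0, 2.0], [2.0, 1.0, 3.0]]) # warehouse->destination, criterion 2

m, k, n = 2, 2, 3
nvar = m * k + k * n                              # x[i,w] flows, then y[w,j] flows
def idx_x(i, w): return i * k + w
def idx_y(w, j): return m * k + w * n + j

def solve(weight):
    """Weighted-sum scalarisation of the two criteria (weight on criterion 1)."""
    cost = np.concatenate([(weight * c1 + (1 - weight) * t1).ravel(),
                           (weight * c2 + (1 - weight) * t2).ravel()])
    A_ub, b_ub, A_eq, b_eq = [], [], [], []
    for i in range(m):                            # supply limits
        row = np.zeros(nvar)
        for w in range(k):
            row[idx_x(i, w)] = 1.0
        A_ub.append(row); b_ub.append(supply[i])
    for w in range(k):                            # warehouse capacity and flow balance
        inflow, outflow = np.zeros(nvar), np.zeros(nvar)
        for i in range(m):
            inflow[idx_x(i, w)] = 1.0
        for j in range(n):
            outflow[idx_y(w, j)] = 1.0
        A_ub.append(inflow); b_ub.append(capacity[w])
        A_eq.append(outflow - inflow); b_eq.append(0.0)
    for j in range(n):                            # demand satisfaction
        row = np.zeros(nvar)
        for w in range(k):
            row[idx_y(w, j)] = 1.0
        A_eq.append(row); b_eq.append(demand[j])
    res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq), method="highs")
    flows = res.x
    crit1 = np.concatenate([c1.ravel(), c2.ravel()]) @ flows
    crit2 = np.concatenate([t1.ravel(), t2.ravel()]) @ flows
    return crit1, crit2

# Sweep the weight to trace (approximately) the non-dominated frontier.
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(w, solve(w))
```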

  2. Two Stages repair of proximal hypospadias: Review of 33 cases

    African Journals Online (AJOL)

    Hussam Hassan

    Background/Purpose: Proximal hypospadias with chordee is the most challenging variant of hypospadias to reconstruct. During the last 10 years, the approach to severe hypospadias has been controversial. Materials & Methods: During the period from June 2002 to December 2009, I performed 33 cases with proximal.

  3. Compact high-flux two-stage solar collectors based on tailored edge-ray concentrators

    Science.gov (United States)

    Friedman, Robert P.; Gordon, Jeffrey M.; Ries, Harald

    1995-08-01

    Using the recently invented tailored edge-ray concentrator (TERC) approach for the design of compact two-stage high-flux solar collectors (a focusing primary reflector and a nonimaging TERC secondary reflector), we present: 1) a new primary reflector shape based on the TERC approach, and a secondary TERC tailored to its particular flux map, such that more compact concentrators emerge at flux concentration levels in excess of 90% of the thermodynamic limit; and 2) calculations and ray-trace simulation results which demonstrate that V-cone approximations to a wide variety of TERCs attain the concentration of the TERC to within a few percent, and hence represent practical secondary concentrators that may be superior to corresponding compound parabolic concentrator or trumpet secondaries.

  4. Two-Stage Surgery for a Large Cervical Dumbbell Tumour in Neurofibromatosis 1: A Case Report

    Directory of Open Access Journals (Sweden)

    Mohd Ariff S

    2011-11-01

    Full Text Available Spinal neurofibromas occur sporadically and typically occur in association with neurofibromatosis 1. Patients afflicted with neurofibromatosis 1 usually present with involvement of several nerve roots. This report describes the case of a 14-year-old child with a large intraspinal but extradural dumbbell neurofibroma with paraspinal extension in the cervical region, extending from the C2 to C4 vertebrae. The lesions were readily detected by MR imaging and were successfully resected in a two-stage surgery. The time interval between the first and second surgery was one month. We provide a brief review of the literature regarding various surgical approaches, emphasising the utility of anterior and posterior approaches.

  5. Evaluation of biological hydrogen sulfide oxidation coupled with two-stage upflow filtration for groundwater treatment.

    Science.gov (United States)

    Levine, Audrey D; Raymer, Blake J; Jahn, Johna

    2004-01-01

    Hydrogen sulfide in groundwater can be oxidized by aerobic bacteria to form elemental sulfur and biomass. While this treatment approach is effective for conversion of hydrogen sulfide, it is important to have adequate control of the biomass exiting the biological treatment system to prevent release of elemental sulfur into the distribution system. Pilot scale tests were conducted on a Florida groundwater to evaluate the use of two-stage upflow filtration downstream of biological sulfur oxidation. The combined biological and filtration process was capable of excellent removal of hydrogen sulfide and associated turbidity. Additional benefits of this treatment approach include elimination of odor generation, reduction of chlorine demand, and improved stability of the finished water.

  6. A two-stage stochastic programming model for the optimal design of distributed energy systems

    International Nuclear Information System (INIS)

    Zhou, Zhe; Zhang, Jianyun; Liu, Pei; Li, Zheng; Georgiadis, Michael C.; Pistikopoulos, Efstratios N.

    2013-01-01

    Highlights: ► The optimal design of distributed energy systems under uncertainty is studied. ► A stochastic model is developed using a genetic algorithm and a Monte Carlo method. ► The proposed system possesses inherent robustness under uncertainty. ► The inherent robustness is due to energy storage facilities and grid connection. -- Abstract: A distributed energy system is a multi-input and multi-output energy system with substantial energy, economic and environmental benefits. The optimal design of such a complex system under energy demand and supply uncertainty poses significant challenges in terms of both modelling and corresponding solution strategies. This paper proposes a two-stage stochastic programming model for the optimal design of distributed energy systems. A two-stage decomposition based solution strategy is used to solve the optimization problem, with a genetic algorithm performing the search on the first-stage variables and a Monte Carlo method dealing with uncertainty in the second stage. The model is applied to the planning of a distributed energy system in a hotel. Detailed computational results are presented and compared with those generated by a deterministic model. The impacts of demand and supply uncertainty on the optimal design of distributed energy systems are systematically investigated using the proposed modelling framework and solution approach.
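
    To make the two-stage structure concrete, the toy below pairs a single first-stage sizing decision with Monte Carlo second-stage operation under demand scenarios, using a plain grid search in place of the paper's genetic algorithm. The model (one on-site generator with grid back-up) and every number in it are assumptions for illustration, not the paper's hotel case study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cost data (all values are assumptions of this sketch).
capital_cost  = 120.0    # annualised $/kW of installed on-site capacity
onsite_cost   = 0.06     # $/kWh generated on site
grid_price    = 0.15     # $/kWh bought from the grid

# Second-stage uncertainty: Monte Carlo demand scenarios (kW).
scenarios = rng.normal(loc=800.0, scale=150.0, size=5000).clip(min=0.0)

def expected_annual_cost(cap_kw, hours=8760.0):
    """First-stage decision cap_kw; second stage dispatches against each scenario."""
    on_site = np.minimum(cap_kw, scenarios)       # cheapest source first
    from_grid = scenarios - on_site
    op_cost = (onsite_cost * on_site + grid_price * from_grid).mean() * hours
    return capital_cost * cap_kw + op_cost

# "Outer loop" over first-stage designs (grid search stands in for the GA).
candidates = np.arange(0.0, 1500.0, 25.0)
costs = np.array([expected_annual_cost(c) for c in candidates])
best = candidates[costs.argmin()]
print(f"best capacity ~ {best:.0f} kW, expected cost ~ {costs.min():,.0f} $/yr")
```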

  7. Two-stage stochastic programming model for the regional-scale electricity planning under demand uncertainty

    International Nuclear Information System (INIS)

    Huang, Yun-Hsun; Wu, Jung-Hua; Hsu, Yu-Ju

    2016-01-01

    Traditional electricity supply planning models regard the electricity demand as a deterministic parameter and require the total power output to satisfy the aggregate electricity demand. But in today's world, electric system planners are facing tremendously complex environments full of uncertainties, where electricity demand is a key source of uncertainty. In addition, electricity demand patterns are considerably different for different regions. This paper developed a multi-region optimization model based on a two-stage stochastic programming framework to incorporate the demand uncertainty. Furthermore, the decision tree method and Monte Carlo simulation approach are integrated into the model to simplify electricity demands in the form of nodes and determine the values and probabilities. The proposed model was successfully applied to a real case study (i.e. Taiwan's electricity sector) to show its applicability. Detailed simulation results are presented and compared with those generated by a deterministic model. Finally, the long-term electricity development roadmap at a regional level could be provided on the basis of our simulation results. - Highlights: • A multi-region, two-stage stochastic programming model has been developed. • The decision tree and Monte Carlo simulation are integrated into the framework. • Taiwan's electricity sector is used to illustrate the applicability of the model. • The results under deterministic and stochastic cases are shown for comparison. • Optimal portfolios of regional generation technologies can be identified.

  8. Two-stage effects of awareness cascade on epidemic spreading in multiplex networks

    Science.gov (United States)

    Guo, Quantong; Jiang, Xin; Lei, Yanjun; Li, Meng; Ma, Yifang; Zheng, Zhiming

    2015-01-01

    Human awareness plays an important role in the spread of infectious diseases and the control of propagation patterns. The dynamic process with human awareness is called an awareness cascade, during which individuals exhibit herd-like behavior because they are making decisions based on the actions of other individuals [Borge-Holthoefer et al., J. Complex Networks 1, 3 (2013), 10.1093/comnet/cnt006]. In this paper, to investigate epidemic spreading with an awareness cascade, we propose a local awareness controlled contagion spreading model on multiplex networks. By theoretical analysis using a microscopic Markov chain approach and numerical simulations, we find the emergence of an abrupt transition of the epidemic threshold βc as the local awareness ratio α approaches 0.5, which induces two-stage effects on the epidemic threshold and the final epidemic size. These findings indicate that the increase of α can accelerate the outbreak of epidemics. Furthermore, a simple 1D lattice model is investigated to illustrate the two-stage-like sharp transition at αc≈0.5. The results can give us a better understanding of why some epidemics cannot break out in reality and also provide a potential means of suppressing and controlling awareness cascading systems.
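
    A much-simplified Monte Carlo sketch of the ingredients named above is given below: a threshold-type local awareness rule with ratio alpha on one network layer, coupled to SIS-like contagion that is damped by awareness on the other layer. This is not the paper's microscopic Markov chain analysis; the network model, the update rules and every parameter are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500

def er_layer(p):
    """Symmetric Erdos-Renyi adjacency matrix as a float array."""
    A = rng.random((N, N)) < p
    A = np.triu(A, 1)
    return (A | A.T).astype(float)

info, contact = er_layer(0.02), er_layer(0.01)   # information layer, contact layer
deg_info = info.sum(1).clip(min=1)

alpha, beta, gamma, mu, delta = 0.5, 0.15, 0.4, 0.2, 0.05
aware = np.zeros(N, bool)
infected = np.zeros(N, bool)
infected[rng.choice(N, 5, replace=False)] = True

for t in range(200):
    # Awareness: herd-like local rule - become aware if the fraction of aware
    # info-layer neighbours reaches alpha, or if infected; forget with prob delta.
    frac_aware = (info @ aware) / deg_info
    aware = (frac_aware >= alpha) | infected | (aware & (rng.random(N) > delta))
    # Contagion: per-contact infection prob beta, reduced by factor gamma if aware.
    p_inf = np.where(aware, gamma * beta, beta)
    n_inf_neigh = contact @ infected
    newly = (~infected) & (rng.random(N) < 1 - (1 - p_inf) ** n_inf_neigh)
    recovered = infected & (rng.random(N) < mu)
    infected = (infected & ~recovered) | newly

print("final fraction infected:", infected.mean())
```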

  9. A Two-Stage Diagnosis Framework for Wind Turbine Gearbox Condition Monitoring

    Directory of Open Access Journals (Sweden)

    Janet M. Twomey

    2013-01-01

    Full Text Available Advances in high performance sensing technologies enable the development of wind turbine condition monitoring systems to diagnose and predict the system-wide effects of failure events. This paper presents a vibration-based two-stage fault detection framework for failure diagnosis of rotating components in wind turbines. The proposed framework integrates an analytical defect detection method and a graphical verification method to ensure diagnosis efficiency and accuracy. The efficacy of the proposed methodology is demonstrated with a case study using the gearbox condition monitoring Round Robin study dataset provided by the National Renewable Energy Laboratory (NREL). The developed methodology successfully identified five faults out of seven in total, with accurate severity levels, without producing any false alarms in the blind analysis. The case study results indicated that the developed fault detection framework is effective for analyzing gear and bearing faults in the wind turbine drive train system based upon its vibration characteristics.

  10. Shaft Position Influence on Technical Characteristics of Universal Two-Stages Helical Speed Reducers

    Directory of Open Access Journals (Sweden)

    Milan Rackov

    2005-10-01

    Full Text Available Purchasers of speed reducers decide to buy those reducers that most closely satisfy their demands at much lower cost. The amount of material used, i.e. the mass and dimensions of the gear unit, influences the price of the gear unit. Mass and dimensions, besides output torque, gear ratio and efficiency, are the most important technical characteristics of gear units and determine their quality. Centre distance and shaft position significantly influence output torque, gear ratio and gear unit mass through the overall dimensions of the gear unit housing; these characteristics are therefore mutually dependent. This paper analyzes the influence of centre distance and shaft position on the output torque and ratio of universal two-stage gear units.

  11. Stepwise encapsulation and controlled two-stage release system for cis-Diamminediiodoplatinum.

    Science.gov (United States)

    Chen, Yun; Li, Qian; Wu, Qingsheng

    2014-01-01

    cis-Diamminediiodoplatinum (cis-DIDP) is a cisplatin-like anticancer drug with higher anticancer activity, but lower stability and price than cisplatin. In this study, a cis-DIDP carrier system based on micro-sized stearic acid was prepared by an emulsion solvent evaporation method. The maximum drug loading capacity of cis-DIDP-loaded solid lipid nanoparticles was 22.03%, and their encapsulation efficiency was 97.24%. In vitro drug release in phosphate-buffered saline (pH =7.4) at 37.5°C exhibited a unique two-stage process, which could prove beneficial for patients with tumors and malignancies. MTT (3-[4,5-dimethylthiazol-2-yl]-2, 5-diphenyltetrazolium bromide) assay results showed that cis-DIDP released from cis-DIDP-loaded solid lipid nanoparticles had better inhibition activity than cis-DIDP that had not been loaded.

  12. Measuring energy efficiency in economics: Shadow value approach

    Science.gov (United States)

    Khademvatani, Asgar

    For decades, academic scholars and policy makers have commonly applied a simple average measure, energy intensity, for studying energy efficiency. In contrast, we introduce a distinctive marginal measure called the energy shadow value (SV) for modeling energy efficiency, drawing on economic theory. This thesis demonstrates the advantages of energy SV, conceptually and empirically, over the average measure: it recognizes marginal technical energy efficiency and unveils allocative energy efficiency (the ratio of energy SV to energy price). Using a dual profit function, the study illustrates how treating energy as a quasi-fixed factor (the quasi-fixed approach) offers modeling advantages and is appropriate for developing an explicit model of energy efficiency. We address fallacies and misleading results arising from the average measure and demonstrate the advantage of energy SV in inter- and intra-country energy efficiency comparisons. Energy efficiency dynamics and the determination of an efficient allocation of energy use are shown through the factors impacting energy SV: capital, technology, and environmental obligations. To validate the energy SV, we applied a dual restricted cost model using a KLEM dataset for 35 US sectors covering 1958 to 2000, from which a sample of four sectors was selected. The empirical results show that predicted wedges between the energy price and SV growth indicate a misallocation of energy use in the stone, clay and glass (SCG) and communications (Com) sectors, with more evidence in the SCG than in the Com sector, showing overshoot in energy use relative to optimal paths and cost increases from sub-optimal energy use. The results show that energy productivity is a measure of technical efficiency and is void of information on the economic efficiency of energy use. Decomposing energy SV reveals that energy, capital and technology played key roles in energy SV increases, helping to consider and analyze the policy implications of energy efficiency improvement. Applying the marginal measure, we also
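
    As a sketch of the definitions involved (standard restricted cost-function notation, not taken from the thesis, with C_v assumed as the variable cost function), the shadow value of quasi-fixed energy and the allocative comparison against the energy price can be written as:

```latex
% C_v(y, p, E, t): restricted (variable) cost function with output y,
% variable-input prices p, quasi-fixed energy E and technology index t.
\[
  SV_{E} \;=\; -\,\frac{\partial C_{v}(y, p, E, t)}{\partial E}
  \qquad \text{(marginal value of one more unit of energy)}
\]
\[
  \frac{SV_{E}}{p_{E}} = 1 \;\;\text{(allocatively efficient energy use)}, \qquad
  \frac{SV_{E}}{p_{E}} < 1 \;\;\text{(over-use of energy relative to the optimal path).}
\]
```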

  13. Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Xiao [ORNL; Dong, Jin [ORNL; Djouadi, Seddik M [ORNL; Nutaro, James J [ORNL; Kuruganti, Teja [ORNL

    2015-01-01

    The key goal in energy efficient buildings is to reduce energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ constrained Stochastic Linear Quadratic Control (cSLQC) by minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency of the proposed control approach.
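
    The scalar toy below illustrates the core step in chance-constrained linear-quadratic control: with a Gaussian disturbance, the probabilistic constraint tightens into a deterministic bound, after which a quadratic expected cost is minimized. It is not the paper's SDP formulation, and the one-step model and every number in it are assumptions.

```python
import numpy as np
from scipy.stats import norm

# Scalar "thermal" model: x1 = a*x0 + b*u + w,  w ~ N(0, sigma^2),  b > 0 assumed.
a, b, sigma = 0.9, 0.5, 0.3
x0, x_ref, x_max = 22.0, 21.0, 23.0
r, eps = 0.1, 0.05            # control penalty, allowed violation probability

# Chance constraint P(x1 <= x_max) >= 1 - eps tightens to a deterministic bound:
#   a*x0 + b*u + sigma * z_{1-eps} <= x_max
z = norm.ppf(1.0 - eps)
u_upper = (x_max - a * x0 - sigma * z) / b

# Expected quadratic cost E[(x1 - x_ref)^2] + r*u^2
#   = (a*x0 + b*u - x_ref)^2 + sigma^2 + r*u^2  -> minimise over u, then clip.
u_unconstrained = b * (x_ref - a * x0) / (b ** 2 + r)
u_opt = min(u_unconstrained, u_upper)
x1_mean = a * x0 + b * u_opt
print(f"u* = {u_opt:.3f}, E[x1] = {x1_mean:.3f}, upper bound on u = {u_upper:.3f}")
```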

  14. Discovering the Network Topology: An Efficient Approach for SDN

    Directory of Open Access Journals (Sweden)

    Leonardo OCHOA-ADAY

    2016-11-01

    Full Text Available Network topology is a physical description of the overall resources in the network. Collecting this information using efficient mechanisms becomes a critical task for important network functions such as routing, network management and quality of service (QoS), among many others. Recent technologies like Software-Defined Networks (SDN) have emerged as promising approaches for managing next-generation networks. In order to ensure a proficient topology discovery service in SDN, we propose a simple agent-based mechanism. This mechanism improves the overall efficiency of the topology discovery process. In this paper, an algorithm for a novel Topology Discovery Protocol (SD-TDP) is described. This protocol will be implemented in each switch through a software agent. Thus, this approach will provide a distributed solution to the problem of network topology discovery in a simpler and more efficient way.

  15. An Efficient PageRank Approach for Urban Traffic Optimization

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2012-01-01

    to determine optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al., 1999). Our approach is similar to the work presented by Sheng-Chung et al. (2009) and Yousef et al. (2010). We consider that the traffic lights are controlled by servers, and a score for each road is computed based on an efficient PageRank approach and used in a cost function to determine optimal decisions. We demonstrate that the cumulative contribution of each car in the traffic respects the main constraint of the PageRank approach, preserving all the properties of the matrix considered in our model.
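
    For reference, a minimal power-iteration PageRank over a small road-adjacency matrix is sketched below. The graph, the damping factor and the interpretation of the score as a road weight for the cost function are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10, max_iter=200):
    """Power-iteration PageRank. adj[i, j] = 1 if road i feeds into road j."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Column-stochastic transition matrix; dangling nodes spread uniformly.
    M = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg[:, None], 1), 1.0 / n).T
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = d * M @ r + (1 - d) / n
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r

# Hypothetical 5-road network: an edge means traffic can flow from one road to another.
adj = np.array([[0, 1, 1, 0, 0],
                [0, 0, 1, 0, 0],
                [1, 0, 0, 1, 0],
                [0, 0, 0, 0, 1],
                [0, 0, 1, 0, 0]], dtype=float)
scores = pagerank(adj)
print(np.round(scores, 3))   # higher score = more "important" road in the cost function
```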

  16. Anti-kindling induced by two-stage coordinated reset stimulation with weak onset intensity

    Directory of Open Access Journals (Sweden)

    Magteld Zeitler

    2016-05-01

    Full Text Available Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike timing-dependent plasticity, CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e. unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof-of-concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol, where two qualitatively different types of CR stimulation are delivered one after another, and the first stage comes at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony where a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first-in-man and proof-of-concept studies.

  17. Two-Stage Latissimus Dorsi Flap with Implant for Unilateral Breast Reconstruction: Getting the Size Right

    Directory of Open Access Journals (Sweden)

    Jiajun Feng

    2016-03-01

    Full Text Available Background: The aim of unilateral breast reconstruction after mastectomy is to craft a natural-looking breast with symmetry. The latissimus dorsi (LD) flap with implant is an established technique for this purpose. However, it is challenging to obtain adequate volume and satisfactory aesthetic results using a one-stage operation when considering factors such as muscle atrophy, wound dehiscence and excessive scarring. The two-stage reconstruction addresses these difficulties by using a tissue expander to gradually enlarge the skin pocket which eventually holds an appropriately sized implant. Methods: We analyzed nine patients who underwent unilateral two-stage LD reconstruction. In the first stage, an expander was placed along with the LD flap to reconstruct the mastectomy defect, followed by gradual tissue expansion to achieve overexpansion of the skin pocket. The final implant volume was determined by measuring the residual expander volume after aspirating the excess saline. Finally, the expander was replaced with the chosen implant. Results: The average volume of tissue expansion was 460 mL. The resultant expansion allowed an implant ranging in volume from 255 to 420 mL to be placed alongside the LD muscle. Seven patients scored less than six on the relative breast retraction assessment formula for breast symmetry, indicating excellent breast symmetry. The remaining two patients scored between six and eight, indicating good symmetry. Conclusions: This approach allows the size of the eventual implant to be estimated after the skin pocket has healed completely and the LD muscle has undergone natural atrophy. Optimal reconstruction results were achieved using this approach.

  18. A high-power two stage traveling-wave tube amplifier

    International Nuclear Information System (INIS)

    Shiffler, D.; Nation, J.A.; Schachter, L.; Ivers, J.D.; Kerslick, G.S.

    1991-01-01

    Results are presented on the development of a two stage high-efficiency, high-power 8.76-GHz traveling-wave tube amplifier. The work presented augments previously reported data on a single stage amplifier and presents new data on the operational characteristics of two identical amplifiers operated in series and separated from each other by a sever. Peak powers of 410 MW have been obtained over the complete pulse duration of the device, with a conversion efficiency from the electron beam to microwave energy of 45%. In all operating conditions the severed amplifier showed a ''sideband''-like structure in the frequency spectrum of the microwave radiation. A similar structure was apparent at output powers in excess of 70 MW in the single stage device. The frequencies of the ''sidebands'' are not symmetric with respect to the center frequency. The maximum, single frequency, average output power was 210 MW corresponding to an amplifier efficiency of 24%. Simulation data is also presented that indicates that the short amplifiers used in this work exhibit significant differences in behavior from conventional low-power amplifiers. These include finite length effects on the gain characteristics, which may account for the observed narrow bandwidth of the amplifiers and for the appearance of the sidebands. It is also found that the bunching length for the beam may be a significant fraction of the total amplifier length

  19. NEW APPROACHES TO EFFICIENCY OF MASSIVE ONLINE COURSE

    Directory of Open Access Journals (Sweden)

    Liubov S. Lysitsina

    2014-09-01

    Full Text Available This paper is focused on the efficiency of e-learning, in general, and of a massive online course in programming and information technology, in particular. Several innovative approaches and scenarios have been proposed, developed, implemented and verified by the authors, including (1) a new approach to organize and use automatic immediate feedback that significantly helps a learner to verify developed code and increases the efficiency of learning, (2) a new approach to construct learning interfaces, based on a "develop a code - get a result - validate a code" technique, (3) three scenarios of visualization and verification of developed code, (4) a new multi-stage approach to solve complex programming assignments, and (5) a new implementation of "perfectionism" game mechanics in a massive online course. Overall, due to the implementation of the proposed and developed approaches, the efficiency of the massive online course has been considerably increased; in particular, (1) an additional 27.9% of students were able to successfully complete the "Web design and development using HTML5 and CSS3" massive online course at ITMO University, and (2) based on feedback from 5588 students, the "perfectionism" game mechanics noticeably improves students' involvement in course activities and the retention factor.

  20. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level.

    Science.gov (United States)

    Savalei, Victoria; Rhemtulla, Mijke

    2017-08-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data-that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.

  1. Efficiency of supply chain management. Strategic and operational approach

    Directory of Open Access Journals (Sweden)

    Grzegorz Lichocik

    2013-06-01

    Full Text Available Background: One of the most important issues subject to theoretical considerations and empirical studies is the measurement of the efficiency of activities in logistics and supply chain management. At the same time, efficiency is a term interpreted in an ambiguous and multi-aspect manner, depending on the subject of a study. The multitude of analytical dimensions of this term means that, apart from economic efficiency being the basic study area, other dimensions perceived as an added value by different groups of supply chain participants become more and more important. Methods: The objective of this paper is to attempt to explain the problem of supply chain management efficiency in the context of general theoretical considerations relating to supply chain management. The authors have also highlighted determinants and practical implications of supply chain management efficiency in strategic and operational contexts. The study employs critical analyses of the logistics literature and a free-form interview with top management representatives of a company operating in the TSL sector. Results: We must find a comprehensive approach to supply chain efficiency including all analytical dimensions connected with the real flow of goods and services. An effective supply chain must be cost-effective (ensuring the economic efficiency of the chain), functional (reducing processes, lean, minimising the number of links in the chain to the necessary ones, adapting supply chain participants' internal processes to a common objective based on its efficiency), and must ensure a high quality of services (customer-oriented logistics systems). Conclusions: The efficiency of supply chains is not only a task for which the logistics department is responsible; it is a strategic decision taken by management regarding the company's future method of operation. Correctly planned and fulfilled logistics tasks may result in improving the performance of a company as well as the whole

  2. ALTERNATIVE APPROACHES TO EFFICIENCY EVALUATION OF HIGHER EDUCATION INSTITUTIONS

    Directory of Open Access Journals (Sweden)

    Furková, Andrea

    2013-09-01

    Full Text Available The evaluation of efficiency and the ranking of higher education institutions is a very popular and important topic of public policy. The assessment of the quality of higher education institutions can stimulate positive changes in higher education. In this study we focus on the assessment and ranking of Slovak economic faculties. We apply two different quantitative approaches to evaluating Slovak economic faculties - Stochastic Frontier Analysis (SFA) as an econometric approach and PROMETHEE II as a multicriteria decision making method. Via SFA we examine the faculties' success from a scientific point of view, i.e. their success in the area of publications and citations. The next part of the analysis assesses the Slovak economic faculties from an overall point of view through the multicriteria decision making method. In the analysis we employ panel data covering 11 economic faculties observed over a period of 5 years. Our main aim is to point out other quantitative approaches to the efficiency estimation of higher education institutions.

  3. Optics of two-stage photovoltaic concentrators with dielectric second stages

    Science.gov (United States)

    Ning, Xiaohui; O'Gallagher, Joseph; Winston, Roland

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.

  4. Two-stage model of development of heterogeneous uranium-lead systems in zircon

    International Nuclear Information System (INIS)

    Mel'nikov, N.N.; Zevchenkov, O.A.

    1985-01-01

    The behaviour of the isotope systems of multiphase zircons under two-stage disturbance is considered. The results of the calculations indicate that linear correlations on the concordia diagram can be explained by two-stage opening of the U-Pb systems of cogenetic zircons, if zircon is considered physically heterogeneous, with its different parts losing different fractions of the accumulated radiogenic lead. ''Metamorphism ages'' obtained from such two-stage opened zircons are intermediate and have no geochronological significance, while ''crystallization ages'' remain rather close to the real ones. In some cases, two-stage opened zircons can be diagnosed by the discordance of their crystal component.

  5. Optics of two-stage photovoltaic concentrators with dielectric second stages.

    Science.gov (United States)

    Ning, X; O'Gallagher, J; Winston, R

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.

  6. Operation of a two-stage continuous fermentation process producing hydrogen and methane from artificial food wastes

    Energy Technology Data Exchange (ETDEWEB)

    Nagai, Kohki; Mizuno, Shiho; Umeda, Yoshito; Sakka, Makiko [Toho Gas Co., Ltd. (Japan); Osaka, Noriko [Tokyo Gas Co. Ltd. (Japan); Sakka, Kazuo [Mie Univ. (Japan)

    2010-07-01

    An anaerobic two-stage continuous fermentation process with combined thermophilic hydrogenogenic and methanogenic stages (two-stage fermentation process) was applied to artificial food wastes on a laboratory scale. In this report, organic loading rate (OLR) conditions for hydrogen fermentation were optimized before operating the two-stage fermentation process. The OLR was set at 11.2, 24.3, 35.2, 45.6, 56.1, and 67.3 g-COD{sub cr} L{sup -1} day{sup -1} with a temperature of 60 C, pH 5.5 and 5.0% total solids. As a result, approximately 1.8-2.0 mol-H{sub 2} mol-hexose{sup -1} was obtained at OLRs of 11.2-56.1 g-COD{sub cr} L{sup -1} day{sup -1}. In contrast, it was inferred that the hydrogen yield at the OLR of 67.3 g-COD{sub cr} L{sup -1} day{sup -1} decreased because of an increase in lactate concentration in the culture medium. The performance of the two-stage fermentation process was also evaluated over three months. The hydraulic retention time (HRT) of methane fermentation could be shortened to 5.0 days (under OLR 12.4 g-COD{sub cr} L{sup -1} day{sup -1} conditions) when the OLR of hydrogen fermentation was 44.0 g-COD{sub cr} L{sup -1} day{sup -1}, and the average gasification efficiency of the two-stage fermentation process was 81% at the time. (orig.)

  7. Combined two-stage xanthate processes for the treatment of copper-containing wastewater

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Y.K. [Department of Safety Health and Environmental Engineering, Central Taiwan University of Sciences and Technology, Taichung (Taiwan); Leu, M.H. [Department of Environmental Engineering, Kun Shan University of Technology, Yung-Kang City (Taiwan); Chang, J.E.; Lin, T.F.; Chen, T.C. [Department of Environmental Engineering, National Cheng Kung University, Tainan City (Taiwan); Chiang, L.C.; Shih, P.H. [Department of Environmental Engineering and Science, Fooyin University, Kaohsiung County (Taiwan)

    2007-02-15

    Heavy metal removal is mainly conducted by adjusting the wastewater pH to form metal hydroxide precipitates. However, in recent years, the xanthate process, with its high metal removal efficiency, has attracted attention due to its use of sorption/desorption of heavy metals from aqueous solutions. In this study, two kinds of agricultural xanthates, insoluble peanut-shell xanthate (IPX) and insoluble starch xanthate (ISX), were used as sorbents to treat copper-containing wastewater (Cu concentration from 50 to 1,000 mg/L). The experimental results showed that the maximum Cu removal efficiency by IPX was 93.5 % in the case of high Cu concentrations, whereby 81.1 % of the copper could rapidly be removed within one minute. Moreover, copper-containing wastewater could also be treated by ISX over a wide range (50 to 1,000 mg/L) to a level that meets the Taiwan EPA's effluent regulations (3 mg/L) within 20 minutes. Whereas IPX had a maximum binding capacity for copper of 185 mg/g IPX, the capacity for ISX was 120 mg/g ISX. IPX is cheaper than ISX and has the benefits of a rapid reaction and a high copper binding capacity; however, it exhibits a lower copper removal efficiency. A sequential IPX and ISX treatment (i.e., two-stage xanthate processes) could therefore be an excellent alternative. The results obtained using the two-stage xanthate process revealed an effective copper treatment. The effluent (C{sub e}) was below 0.6 mg/L, compared to the influent (C{sub 0}) of 1,001 mg/L at pH = 4 and a dilution rate of 0.6 h{sup -1}. Furthermore, the Cu-ISX complex formed could meet the Taiwan TCLP regulations and be classified as non-hazardous waste. The xanthatilization of agricultural wastes offers a comprehensive strategy for solving both agricultural waste disposal and metal-containing wastewater treatment problems. (Abstract Copyright [2007], Wiley Periodicals, Inc.)

  8. A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin

    Science.gov (United States)

    Blaschek, Michael; Duttmann, Rainer

    2015-04-01

    The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km²-sized river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design has been applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters - topographic wetness index and potential incoming solar radiation - derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. The selection of sample point locations has been done using
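
    A minimal sketch of the two-stage selection logic (quantile-based stratification, random polygons per stratum, then random points inside the selected polygons) is shown below. The tabular polygon data, the stratum definition and the placeholder coordinates are assumptions for illustration, not the study's actual GIS workflow.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical polygon table: one row per candidate polygon.
polys = pd.DataFrame({
    "poly_id": np.arange(4000),
    "twi":     rng.gamma(4.0, 1.5, 4000),        # topographic wetness index
    "solar":   rng.normal(1200.0, 200.0, 4000),  # potential incoming solar radiation
    "geology": rng.integers(0, 4, 4000),         # 4 main geological units
    "area_ha": rng.uniform(0.2, 8.0, 4000),
})

# Stratification: land-surface quantile classes crossed with geology.
polys["twi_cls"]   = pd.qcut(polys["twi"],   3, labels=False)
polys["solar_cls"] = pd.qcut(polys["solar"], 3, labels=False)
polys["stratum"] = (polys["geology"].astype(str) + "-" +
                    polys["twi_cls"].astype(str) + "-" +
                    polys["solar_cls"].astype(str))

# First stage: up to six polygons per stratum, excluding polygons below 1 ha.
eligible = polys[polys["area_ha"] >= 1.0]
stage1 = (eligible.groupby("stratum", group_keys=False)
                  .apply(lambda g: g.sample(min(6, len(g)), random_state=0)))

# Second stage: one random point inside each selected polygon
# (placeholder coordinates; real work would sample within the polygon geometry).
stage1 = stage1.assign(x=rng.uniform(0, 4000, len(stage1)),
                       y=rng.uniform(0, 4000, len(stage1)))
print(len(stage1), "sample locations across", stage1["stratum"].nunique(), "strata")
```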

  9. Comparison of Microalgae Cultivation in Photobioreactor, Open Raceway Pond, and a Two-Stage Hybrid System

    Energy Technology Data Exchange (ETDEWEB)

    Narala, Rakesh R.; Garg, Sourabh; Sharma, Kalpesh K.; Thomas-Hall, Skye R.; Deme, Miklos; Li, Yan; Schenk, Peer M., E-mail: p.schenk@uq.edu.au [Algae Biotechnology Laboratory, School of Agriculture and Food Sciences, The University of Queensland, Brisbane, QLD (Australia)

    2016-08-02

    In the wake of intensive fossil fuel usage and CO{sub 2} accumulation in the environment, research is targeted toward sustainable alternate bioenergy that can suffice the growing need for fuel and also that leaves a minimal carbon footprint. Oil production from microalgae can potentially be carried out more efficiently, leaving a smaller footprint and without competing for arable land or biodiverse landscapes. However, current algae cultivation systems and lipid induction processes must be significantly improved and are threatened by contamination with other algae or algal grazers. To address this issue, we have developed an efficient two-stage cultivation system using the marine microalga Tetraselmis sp. M8. This hybrid system combines exponential biomass production in positive pressure air lift-driven bioreactors with a separate synchronized high-lipid induction phase in nutrient deplete open raceway ponds. A comparison to either bioreactor or open raceway pond cultivation system suggests that this process potentially leads to significantly higher productivity of algal lipids. Nutrients are only added to the closed bioreactors, while open raceway ponds have turnovers of only a few days, thus reducing the issue of microalgal grazers.

  10. [Comparison research on two-stage sequencing batch MBR and one-stage MBR].

    Science.gov (United States)

    Yuan, Xin-Yan; Shen, Heng-Gen; Sun, Lei; Wang, Lin; Li, Shi-Feng

    2011-01-01

    Aiming at resolving problems in MBR operation, such as low nitrogen and phosphorus removal efficiency and severe membrane fouling, a comparative study of a two-stage sequencing batch MBR (TSBMBR) and a one-stage aerobic MBR has been carried out in this paper. The results indicated that the TSBMBR retained the advantages of the SBR in removing nitrogen and phosphorus, which could make up for the deficiency of the traditional one-stage aerobic MBR in nitrogen and phosphorus removal. During the steady operation period, the average effluent NH4(+)-N, TN and TP concentrations were 2.83, 12.20 and 0.42 mg/L, respectively, which meet the requirements for domestic scenic environment reuse. From the membrane fouling control point of view, the TSBMBR showed lower SMP in the supernatant, a lower specific trans-membrane flux decline rate, and lower membrane fouling resistance than the one-stage aerobic MBR. The sedimentation and gel layer resistances of the TSBMBR were only 6.5% and 33.12% of those of the one-stage aerobic MBR. Besides its high efficiency in removing nitrogen and phosphorus, the TSBMBR could effectively reduce sedimentation and gel layer pollution on the membrane surface. Compared with the one-stage MBR, the TSBMBR could operate with a higher trans-membrane flux, a lower membrane fouling rate and better pollutant removal.

  11. Chromium (Ⅵ) removal from aqueous solutions through powdered activated carbon countercurrent two-stage adsorption.

    Science.gov (United States)

    Wang, Wenqiang

    2018-01-01

    To exploit the adsorption capacity of commercial powdered activated carbon (PAC) and to improve the efficiency of Cr(VI) removal from aqueous solutions, the adsorption of Cr(VI) by commercial PAC and the countercurrent two-stage adsorption (CTA) process was investigated. Different adsorption kinetics models and isotherms were compared, and the pseudo-second-order model and the Langmuir and Freundlich models fit the experimental data well. The Cr(VI) removal efficiency was >80% and was improved by 37% through the CTA process compared with the conventional single-stage adsorption process when the initial Cr(VI) concentration was 50 mg/L with a PAC dose of 1.250 g/L and a pH of 3. A method for calculating the effluent Cr(VI) concentration and the PAC dose was developed for the CTA process, and its validity was confirmed by a deviation of <5%. Copyright © 2017. Published by Elsevier Ltd.
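
    A mass-balance sketch of a countercurrent two-stage batch adsorption with a Freundlich isotherm is given below: water flows stage 1 to stage 2, fresh adsorbent enters stage 2 and the partially loaded adsorbent is reused in stage 1. The isotherm constants, dose and influent concentration are invented, not the paper's fitted values, and this is not the paper's published calculation method.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical Freundlich isotherm and operating data (not the paper's values).
Kf, n_inv = 12.0, 1.0 / 2.2      # q = Kf * C**(1/n)  [mg Cr(VI) per g PAC]
C0 = 50.0                        # influent Cr(VI), mg/L
dose = 1.25                      # PAC dose, g/L, travelling countercurrent to the water

def balances(vars_):
    C1, C2 = vars_
    q1 = Kf * max(C1, 1e-9) ** n_inv   # carbon leaving stage 1 (equilibrium with C1)
    q2 = Kf * max(C2, 1e-9) ** n_inv   # carbon leaving stage 2 (fresh carbon enters here)
    eq_stage1 = (C0 - C1) - dose * (q1 - q2)   # water C0 -> C1; carbon picks up q1 - q2
    eq_stage2 = (C1 - C2) - dose * (q2 - 0.0)  # water C1 -> C2; fresh carbon loads to q2
    return [eq_stage1, eq_stage2]

C1, C2 = fsolve(balances, x0=[10.0, 1.0])
removal = 1 - C2 / C0
print(f"intermediate C1 = {C1:.2f} mg/L, effluent C2 = {C2:.2f} mg/L, "
      f"overall removal = {removal:.1%}")
```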

  12. Comparison of microalgae cultivation in photobioreactor, open raceway pond, and a two-stage hybrid system

    Directory of Open Access Journals (Sweden)

    Rakesh R Narala

    2016-08-01

    Full Text Available In the wake of intensive fossil fuel usage and CO2 accumulation in the environment, research is targeted towards sustainable alternate bioenergy that can suffice the growing need for fuel and also that leaves a minimal carbon footprint. Oil production from microalgae can potentially be carried out more efficiently, leaving a smaller footprint and without competing for arable land or biodiverse landscapes. However, current algae cultivation systems and lipid induction processes must be significantly improved and are threatened by contamination with other algae or algal grazers. To address this issue, we have developed an efficient two-stage cultivation system using the marine microalga Tetraselmis sp. M8. This hybrid system combines exponential biomass production in positive pressure air lift-driven bioreactors with a separate synchronized high-lipid induction phase in nutrient deplete open raceway ponds. A comparison to either bioreactor or open raceway pond cultivation system suggests that this process potentially leads to significantly higher productivity of algal lipids. Nutrients are only added to the closed bioreactors while open raceway ponds have turnovers of only a few days, thus reducing the issue of microalgal grazers.

  13. Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.

    Science.gov (United States)

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-05-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on a Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Comparison of Microalgae Cultivation in Photobioreactor, Open Raceway Pond, and a Two-Stage Hybrid System

    International Nuclear Information System (INIS)

    Narala, Rakesh R.; Garg, Sourabh; Sharma, Kalpesh K.; Thomas-Hall, Skye R.; Deme, Miklos; Li, Yan; Schenk, Peer M.

    2016-01-01

    In the wake of intensive fossil fuel usage and CO 2 accumulation in the environment, research is targeted toward sustainable alternate bioenergy that can suffice the growing need for fuel and also that leaves a minimal carbon footprint. Oil production from microalgae can potentially be carried out more efficiently, leaving a smaller footprint and without competing for arable land or biodiverse landscapes. However, current algae cultivation systems and lipid induction processes must be significantly improved and are threatened by contamination with other algae or algal grazers. To address this issue, we have developed an efficient two-stage cultivation system using the marine microalga Tetraselmis sp. M8. This hybrid system combines exponential biomass production in positive pressure air lift-driven bioreactors with a separate synchronized high-lipid induction phase in nutrient deplete open raceway ponds. A comparison to either bioreactor or open raceway pond cultivation system suggests that this process potentially leads to significantly higher productivity of algal lipids. Nutrients are only added to the closed bioreactors, while open raceway ponds have turnovers of only a few days, thus reducing the issue of microalgal grazers.

  15. Continuous production of biohythane from hydrothermal liquefied cornstalk biomass via two-stage high-rate anaerobic reactors.

    Science.gov (United States)

    Si, Bu-Chun; Li, Jia-Ming; Zhu, Zhang-Bing; Zhang, Yuan-Hui; Lu, Jian-Wen; Shen, Rui-Xia; Zhang, Chong; Xing, Xin-Hui; Liu, Zhidan

    2016-01-01

    Biohythane production via two-stage fermentation is a promising direction for sustainable energy recovery from lignocellulosic biomass. However, the utilization of lignocellulosic biomass suffers from specific natural recalcitrance. Hydrothermal liquefaction (HTL) is an emerging technology for the liquefaction of biomass, but there are still several challenges for the coupling of HTL and two-stage fermentation. One particular challenge is the limited efficiency of fermentation reactors at a high solid content of the treated feedstock. Another is the conversion of potential inhibitors during fermentation. Here, we report a novel strategy for the continuous production of biohythane from cornstalk through the integration of HTL and two-stage fermentation. Cornstalk was converted to solid and liquid via HTL, and the resulting liquid could be subsequently fed into the two-stage fermentation systems. The systems consisted of two typical high-rate reactors: an upflow anaerobic sludge blanket (UASB) and a packed bed reactor (PBR). The liquid could be efficiently converted into biohythane via the UASB and PBR with a high density of microbes at a high organic loading rate. Biohydrogen production decreased from 2.34 L/L/day in UASB (1.01 L/L/day in PBR) to 0 L/L/day as the organic loading rate (OLR) of the HTL liquid products increased to 16 g/L/day. The methane production rate achieved a value of 2.53 (UASB) and 2.54 L/L/day (PBR), respectively. The energy and carbon recovery of the integrated HTL and biohythane fermentation system reached up to 79.0 and 67.7%, respectively. The fermentation inhibitors, i.e., 5-hydroxymethyl furfural (41.4-41.9% of the initial quantity detected) and furfural (74.7-85.0% of the initial quantity detected), were degraded during hydrogen fermentation. Compared with single-stage fermentation, the methane process during two-stage fermentation had a more efficient methane production rate, acetogenesis, and COD removal. The microbial distribution

  16. Area Determination of Diabetic Foot Ulcer Images Using a Cascaded Two-Stage SVM-Based Classification.

    Science.gov (United States)

    Wang, Lei; Pedersen, Peder C; Agu, Emmanuel; Strong, Diane M; Tulu, Bengisu

    2017-09-01

    The standard chronic wound assessment method based on visual examination is potentially inaccurate and also represents a significant clinical workload. Hence, computer-based systems providing quantitative wound assessment may be valuable for accurately monitoring wound healing status, with the wound area the best suited for automated analysis. Here, we present a novel approach, using support vector machines (SVMs), to determine the wound boundaries on foot ulcer images captured with an image capture box, which provides controlled lighting and range. After superpixel segmentation, a cascaded two-stage classifier operates as follows: in the first stage, a set of k binary SVM classifiers are trained and applied to different subsets of the entire training image dataset, and incorrectly classified instances are collected. In the second stage, another binary SVM classifier is trained on the incorrectly classified set. We extracted various color and texture descriptors from superpixels that are used as input for each stage in the classifier training. Specifically, color and bag-of-words representations of local dense scale-invariant feature transform (SIFT) features are descriptors for ruling out irrelevant regions, and color and wavelet-based features are descriptors for distinguishing healthy tissue from wound regions. Finally, the detected wound boundary is refined by applying the conditional random field method. We have implemented the wound classification on a Nexus 5 smartphone platform, except for training which was done offline. Results are compared with other classifiers and show that our approach provides high global performance rates (average sensitivity = 73.3%, specificity = 94.6%) and is sufficiently efficient for smartphone-based image analysis.
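
    A minimal scikit-learn sketch of the cascade idea (stage-1 SVMs trained on subsets, stage-2 SVM trained on the instances they misclassify) is shown below. The superpixel descriptors, the SIFT/wavelet features and the CRF refinement are not reproduced, the synthetic data stands in for the real features, and the rule for combining the two stages at prediction time is a simplification of mine rather than the paper's procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for superpixel descriptors (color/texture features) and wound labels.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=10,
                           weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: k binary SVMs, each trained on a different subset of the training data.
k = 4
subsets = np.array_split(np.random.default_rng(0).permutation(len(X_tr)), k)
stage1 = [SVC(kernel="rbf", gamma="scale").fit(X_tr[idx], y_tr[idx]) for idx in subsets]

# Collect training instances misclassified by the stage-1 SVM responsible for them.
wrong_idx = np.concatenate([idx[stage1[i].predict(X_tr[idx]) != y_tr[idx]]
                            for i, idx in enumerate(subsets)])
stage2 = SVC(kernel="rbf", gamma="scale").fit(X_tr[wrong_idx], y_tr[wrong_idx])

def cascade_predict(X_new):
    """Majority vote of stage-1 SVMs; non-unanimous cases deferred to stage 2."""
    votes = np.stack([clf.predict(X_new) for clf in stage1])   # (k, n_samples)
    mean_vote = votes.mean(axis=0)
    pred = (mean_vote > 0.5).astype(int)
    unsure = ~np.isin(mean_vote, (0.0, 1.0))                   # stage 1 not unanimous
    pred[unsure] = stage2.predict(X_new[unsure])
    return pred

acc = (cascade_predict(X_te) == y_te).mean()
print(f"cascade accuracy: {acc:.3f}")
```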

  17. A Novel Energy-Efficient Approach for Human Activity Recognition.

    Science.gov (United States)

    Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Peng, Ao; Tang, Biyu; Lu, Hai; Shi, Haibin; Zheng, Huiru

    2017-09-08

    In this paper, we propose a novel energy-efficient approach for a mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve a high accuracy of activity recognition when the sampling rate is less than the activity frequency, i.e., the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with the data collected from 20 volunteers (14 males and 6 females) and an average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz, respectively. The proposed low sampling rate approach can greatly reduce the power consumption while maintaining high activity recognition accuracy. The composition of power consumption in online ARS is also investigated in this paper.

  18. A Two-Stage Framework for 3D Face Reconstruction from RGBD Images.

    Science.gov (United States)

    Wang, Kangkan; Wang, Xianwang; Pan, Zhigeng; Liu, Kai

    2014-08-01

    This paper proposes a new approach for 3D face reconstruction with RGBD images from an inexpensive commodity sensor. The challenges we face are: 1) substantial random noise and corruption are present in low-resolution depth maps; and 2) there is a high degree of variability in pose and face expression. We develop a novel two-stage algorithm that effectively maps low-quality depth maps to realistic face models. Each stage is targeted toward a certain type of noise. The first stage extracts sparse errors from depth patches through data-driven local sparse coding, while the second stage smooths noise on the boundaries between patches and reconstructs the global shape by combining local shapes using our template-based surface refinement. Our approach does not require any markers or user interaction. We perform quantitative and qualitative evaluations on both synthetic and real test sets. Experimental results show that the proposed approach is able to produce high-resolution 3D face models with high accuracy, even if inputs are of low quality and have large variations in viewpoint and face expression.

  19. New Grapheme Generation Rules for Two-Stage Modelbased Grapheme-to-Phoneme Conversion

    Directory of Open Access Journals (Sweden)

    Seng Kheang

    2015-01-01

    Full Text Available The precise conversion of arbitrary text into its corresponding phoneme sequence (grapheme-to-phoneme or G2P conversion) is implemented in speech synthesis and recognition, pronunciation learning software, spoken term detection and spoken document retrieval systems. Because the quality of this module plays an important role in the performance of such systems and many problems regarding G2P conversion have been reported, we propose a novel two-stage model-based approach, which is implemented using an existing weighted finite-state transducer-based G2P conversion framework, to improve the performance of the G2P conversion model. The first-stage model is built for automatic conversion of words to phonemes, while the second-stage model utilizes the input graphemes and output phonemes obtained from the first stage to determine the best final output phoneme sequence. Additionally, we designed new grapheme generation rules, which enable extra detail for the vowel and consonant graphemes appearing within a word. When compared with previous approaches, the evaluation results indicate that our approach using rules focusing on the vowel graphemes slightly improved the accuracy of the out-of-vocabulary dataset and consistently increased the accuracy of the in-vocabulary dataset.

  20. Fast and efficient indexing approach for object recognition

    Science.gov (United States)

    Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi

    1999-08-01

    This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme builds on a unified image feature detection approach based on Zernike moments. A set of low-level features, e.g. high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated from the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.

  1. Efficient approach to compute melting properties fully from ab initio with application to Cu

    Science.gov (United States)

    Zhu, Li-Fang; Grabowski, Blazej; Neugebauer, Jörg

    2017-12-01

    Applying thermodynamic integration within an ab initio-based free-energy approach is a state-of-the-art method to calculate melting points of materials. However, the high computational cost and the reliance on a good reference system for calculating the liquid free energy have so far hindered a general application. To overcome these challenges, we propose the two-optimized references thermodynamic integration using Langevin dynamics (TOR-TILD) method in this work by extending the two-stage upsampled thermodynamic integration using Langevin dynamics (TU-TILD) method, which was originally developed to obtain anharmonic free energies of solids, to the calculation of liquid free energies. The core idea of TOR-TILD is to fit two empirical potentials to the energies from density functional theory based molecular dynamics runs for the solid and the liquid phase and to use these potentials as reference systems for thermodynamic integration. Because the empirical potentials closely reproduce the ab initio system in the relevant part of the phase space, the convergence of the thermodynamic integration is very rapid. Therefore, the proposed approach significantly improves the computational efficiency while preserving the required accuracy. As a test case, we apply TOR-TILD to fcc Cu, computing not only the melting point but also various other melting properties, such as the entropy and enthalpy of fusion and the volume change upon melting. The generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional and the local-density approximation (LDA) are used. Using both functionals gives a reliable ab initio confidence interval for the melting point, the enthalpy of fusion, and the entropy of fusion.
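
    For context, the core thermodynamic-integration relation underlying such reference-potential schemes can be written as below. This is a generic sketch of the standard relation, not the exact TOR-TILD working equations; E_DFT and E_ref denote the ab initio and fitted-potential energies.

        \[
          F_{\mathrm{DFT}}(V,T) \;=\; F_{\mathrm{ref}}(V,T)
          \;+\; \int_{0}^{1} \big\langle E_{\mathrm{DFT}} - E_{\mathrm{ref}} \big\rangle_{\lambda}\, d\lambda ,
          \qquad
          E_{\lambda} \;=\; \lambda\, E_{\mathrm{DFT}} + (1-\lambda)\, E_{\mathrm{ref}} ,
        \]

    applied separately to the solid and liquid phases; the melting point then follows from the crossing of the two Gibbs free energies, G_solid(T_m, P) = G_liquid(T_m, P).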

  2. Energy efficiency and the law: A multidisciplinary approach

    Directory of Open Access Journals (Sweden)

    Willemien du Plessis

    2015-01-01

    Full Text Available South Africa is an energy-intensive country. The inefficient use of, mostly, coal-generated energy is the cause of South Africa's per capita contribution to greenhouse gas emissions, pollution and environmental degradation and negative health impacts. The inefficient use of the country's energy also amounts to the injudicious use of natural resources. Improvements in energy efficiency are an important strategy to stabilise the country's energy crisis. Government responded to this challenge by introducing measures such as policies and legislation to change energy consumption patterns by, amongst others, incentivising the transition to improved energy efficiencies. A central tenet underpinning this review is that the law and energy nexus requires a multidisciplinary approach as well as a multi-pronged adoption of diverse policy instruments to effectively transform the country's energy use patterns. Numerous, innovative instruments are introduced by relevant legislation to encourage the transformation of energy generation and consumption patterns of South Africans. One such innovative instrument is the ISO 50001 energy management standard. It is a voluntary instrument, to plan for, measure and verify energy-efficiency improvements. These improvements may also trigger tax concessions. In this paper, the nature and extent of the various policy instruments and legislation that relate to energy efficiency are explored, while the interactions between the law and the voluntary ISO 50001 standard and between the law and the other academic disciplines are highlighted. The introduction of energy-efficiency measures into law requires a multidisciplinary approach, as lawyers may be challenged to address the scientific and technical elements that characterise these legal measures and instruments. Inputs by several other disciplines such as engineering, mathematics or statistics, accounting, environmental management and auditing may be needed. Law is often

  3. Hourly cooling load forecasting using time-indexed ARX models with two-stage weighted least squares regression

    International Nuclear Information System (INIS)

    Guo, Yin; Nazarian, Ehsan; Ko, Jeonghan; Rajurkar, Kamlakar

    2014-01-01

    Highlights: • Developed hourly-indexed ARX models for robust cooling-load forecasting. • Proposed a two-stage weighted least-squares regression approach. • Considered the effect of outliers as well as trend of cooling load and weather patterns. • Included higher order terms and day type patterns in the forecasting models. • Demonstrated better accuracy compared with some ARX and ANN models. - Abstract: This paper presents a robust hourly cooling-load forecasting method based on time-indexed autoregressive with exogenous inputs (ARX) models, in which the coefficients are estimated through a two-stage weighted least squares regression. The prediction method includes a combination of two separate time-indexed ARX models to improve prediction accuracy of the cooling load over different forecasting periods. The two-stage weighted least-squares regression approach in this study is robust to outliers and suitable for fast and adaptive coefficient estimation. The proposed method is tested on a large-scale central cooling system in an academic institution. The numerical case studies show the proposed prediction method performs better than some ANN and ARX forecasting models for the given test data set
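
    As a rough illustration of the model class involved, a time-indexed ARX model and the weighted least-squares estimate of its hour-specific coefficients can be written as below. The regressor set, weighting scheme and higher-order terms used in the paper are more elaborate; this is only a generic sketch.

        \[
          y_{t} \;=\; \sum_{i=1}^{n_a} a_{i,h}\, y_{t-i} \;+\; \sum_{j=0}^{n_b} b_{j,h}\, u_{t-j} \;+\; e_{t},
          \qquad
          \hat{\theta}_{h} \;=\; \arg\min_{\theta} \sum_{t \in \mathcal{T}_h} w_{t}\,\big(y_{t} - \varphi_{t}^{\top}\theta\big)^{2},
        \]

    where y_t is the hourly cooling load, u_t collects exogenous inputs such as weather variables, h indexes the hour of day, and the weights w_t are set in a first regression pass so that outlying observations carry less weight in the second pass.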

  4. A two-stage optimal planning and design method for combined cooling, heat and power microgrid system

    International Nuclear Information System (INIS)

    Guo, Li; Liu, Wenjian; Cai, Jiejin; Hong, Bowen; Wang, Chengshan

    2013-01-01

    Highlights: • A two-stage optimal method is presented for CCHP microgrid system. • Economic and environmental performance are considered as assessment indicators. • Application case demonstrates its good economic and environmental performance. - Abstract: In this paper, a two-stage optimal planning and design method for combined cooling, heat and power (CCHP) microgrid system was presented. The optimal objective was to simultaneously minimize the total net present cost and carbon dioxide emission over the life cycle. In the first stage, a multi-objective genetic algorithm based on the non-dominated sorting genetic algorithm-II (NSGA-II) was applied to solve the optimal design problem, including the optimization of equipment type and capacity. In the second stage, a mixed-integer linear programming (MILP) algorithm was used to solve the optimal dispatch problem. The approach was applied to a typical CCHP microgrid system in a hospital as a case study, and the effectiveness of the proposed method was verified

  5. [Study on supply and demand relation based on two stages division of market of Chinese materia medica].

    Science.gov (United States)

    Yang, Guang; Guo, Lan-Ping; Wang, Nuo; Zeng, Yan; Huang, Lu-Qi

    2014-01-01

    The complex production processes and long industrial chain in the traditional Chinese medicine (TCM) market make research on the Chinese market microstructure difficult. Based on defining the logical relationships among different concepts, this paper divides the TCM market into two stages: the Chinese materia medica resource market and the traditional Chinese patent medicines market. On this foundation, we investigated the supply capacity, approaching rules and motivation system of suppliers in the TCM market, analyzed the demand situation from the perspective of the demand side, and evaluated purchasing power in terms of population profile, income, and insurance. Furthermore, we also analyzed the price formation mechanism in the two stages of the TCM market. We hope this study can have a positive and promoting effect on TCM market related research.

  6. Two-stage Framework for a Topology-Based Projection and Visualization of Classified Document Collections

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick; Scheuermann, Gerik; Teresniak, Sven; Heyer, Gerhard; Koch, Steffen; Ertl, Thomas; Weber, Gunther H.

    2010-07-19

    During the last decades, electronic textual information has become the world's largest and most important information source available. People have added a variety of daily newspapers, books, scientific and governmental publications, blogs and private messages to this wellspring of endless information and knowledge. Since neither the existing nor the new information can be read in its entirety, computers are used to extract and visualize meaningful or interesting topics and documents from this huge information clutter. In this paper, we extend, improve and combine existing individual approaches into an overall framework that supports topological analysis of high dimensional document point clouds given by the well-known tf-idf document-term weighting method. We show that traditional distance-based approaches fail in very high dimensional spaces, and we describe an improved two-stage method for topology-based projections from the original high dimensional information space to both two dimensional (2-D) and three dimensional (3-D) visualizations. To show the accuracy and usability of this framework, we compare it to methods introduced recently and apply it to complex document and patent collections.
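
    The tf-idf weighting mentioned above assigns, in one common form, the following document-term weights (a standard definition, given here only for reference; variants exist):

        \[
          w_{t,d} \;=\; \mathrm{tf}_{t,d} \cdot \log\frac{N}{\mathrm{df}_{t}},
        \]

    where tf_{t,d} is the frequency of term t in document d, df_t is the number of documents containing t, and N is the total number of documents; each document then becomes one point of the high-dimensional point cloud, with one dimension per term.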

  7. SUCCESS FACTORS IN GROWING SMBs: A STUDY OF TWO INDUSTRIES AT TWO STAGES OF DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Tor Jarl Trondsen

    2002-01-01

    Full Text Available The study attempts to identify factors for growing SMBs. An evolutionary phase approach has been used. The study also aims to find out if there are common and different denominators for newer and older firms that can affect their profitability. The study selects a sampling frame that isolates two groups of firms in two industries at two stages of development. A variety of organizational and structural data was collected and analyzed. Amongst the conclusions that may be drawn from the study are that it is not easy to find a common definition of success, it is important to stratify SMBs when studying them, an evolutionary stage approach helps to compare firms with roughly the same external and internal dynamics and each industry has its own set of success variables. The study has identified three success variables for older firms that reflect contemporary strategic thinking such as crafting a good strategy and changing it only incrementally, building core competencies and outsourcing the rest, and keeping up with innovation and honing competitive skills.

  8. Two-stage exchange knee arthroplasty: does resistance of the infecting organism influence the outcome?

    Science.gov (United States)

    Kurd, Mark F; Ghanem, Elie; Steinbrecher, Jill; Parvizi, Javad

    2010-08-01

    Periprosthetic joint infection after TKA is a challenging complication. Two-stage exchange arthroplasty is the accepted standard of care, but reported failure rates are increasing. It has been suggested this is due to the increased prevalence of methicillin-resistant infections. We asked the following questions: (1) What is the reinfection rate after two-stage exchange arthroplasty? (2) Which risk factors predict failure? (3) Which variables are associated with acquiring a resistant organism periprosthetic joint infection? This was a case-control study of 102 patients with infected TKA who underwent a two-stage exchange arthroplasty. Ninety-six patients were followed for a minimum of 2 years (mean, 34.5 months; range, 24-90.1 months). Cases were defined as failures of two-stage exchange arthroplasty. Two-stage exchange arthroplasty was successful in controlling the infection in 70 patients (73%). Patients who failed two-stage exchange arthroplasty were 3.37 times more likely to have been originally infected with a methicillin-resistant organism. Older age, higher body mass index, and history of thyroid disease were predisposing factors to infection with a methicillin-resistant organism. Innovative interventions are needed to improve the effectiveness of two-stage exchange arthroplasty for TKA infection with a methicillin-resistant organism as current treatment protocols may not be adequate for control of these virulent pathogens. Level IV, prognostic study. See Guidelines for Authors for a complete description of levels of evidence.

  9. Constellation modulation - an approach to increase spectral efficiency.

    Science.gov (United States)

    Dash, Soumya Sunder; Pythoud, Frederic; Hillerkuss, David; Baeuerle, Benedikt; Josten, Arne; Leuchtmann, Pascal; Leuthold, Juerg

    2017-07-10

    Constellation modulation (CM) is introduced as a new degree of freedom to increase the spectral efficiency and to further approach the Shannon limit. Constellation modulation is the art of encoding information not only in the symbols within a constellation but also by selecting a constellation from a set of constellations that are switched from time to time. The set of constellations is not limited to sets of partitions of a given constellation but can, e.g., be obtained from an existing constellation by applying geometrical transformations such as rotations, translations, scaling, or even more abstract transformations. The architecture of the transmitter and the receiver allows constellation modulation to be used on top of existing modulations with little penalty on the bit-error ratio (BER) or on the required signal-to-noise ratio (SNR). The spectral bandwidth used by this modulation scheme is identical to the original modulation. Simulations demonstrate a particular advantage of the scheme in low-SNR situations. For instance, simulations show that spectral efficiency increases of up to 33% and 20% can be obtained at BERs of 10^-3 and 2×10^-2, respectively, for a regular BPSK modulation format. Applying constellation modulation, we derive a most power-efficient 4D-CM-BPSK modulation format that provides a spectral efficiency of 0.7 bit/s/Hz for an SNR of 0.2 dB at a BER of 2×10^-2.
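
    To first order, the extra information carried by the constellation choice itself can be counted as below. This is a schematic upper bound, not a result from the paper; the realized gain depends on how reliably the receiver detects the constellation switch.

        \[
          \Delta\eta \;\approx\; \frac{\log_2 M}{L} \ \text{bit/symbol},
        \]

    where M is the number of constellations in the set and L is the number of symbols sent before the constellation is switched.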

  10. Evaluating Efficiencies of Dual AAV Approaches for Retinal Targeting

    Directory of Open Access Journals (Sweden)

    Livia S. Carvalho

    2017-09-01

    Full Text Available Retinal gene therapy has come a long way in the last few decades, and the development and improvement of new gene delivery technologies has been exponential. The recent promising results from the first clinical trials for inherited retinal degeneration due to mutations in RPE65 have provided a major breakthrough in the field and have helped cement the use of recombinant adeno-associated viruses (AAV) as the major tool for retinal gene supplementation. One of the key problems of AAV, however, is its limited capacity for packaging genomic information, with a maximum of around 4.8 kb. Previous studies have demonstrated that homologous recombination and/or inverted terminal repeat (ITR) mediated concatemerization of two overlapping AAV vectors can partially overcome the size limitation and help deliver larger transgenes. The aim of this study was to systematically investigate and compare different AAV dual-vector approaches in the mouse retina, comparing efficiencies in vitro and in vivo using a unique oversized reporter construct. We show that the hybrid approach, relying on vector genome concatemerization by highly recombinogenic sequences and ITR sequence overlap, offers the best levels of reconstitution both in vitro and in vivo compared to trans-splicing and overlap strategies. Our data also demonstrate that dose and vector serotype do not affect reconstitution efficiency, but a discrepancy between mRNA and protein expression data suggests a bottleneck affecting translation.

  11. Energy production from agricultural residues: High methane yields in pilot-scale two-stage anaerobic digestion

    International Nuclear Information System (INIS)

    Parawira, W.; Read, J.S.; Mattiasson, B.; Bjoernsson, L.

    2008-01-01

    There is a large, unutilised energy potential in agricultural waste fractions. In this pilot-scale study, the efficiency of a simple two-stage anaerobic digestion process was investigated for stabilisation and biomethanation of solid potato waste and sugar beet leaves, both separately and in co-digestion. A good phase separation between hydrolysis/acidification and methanogenesis was achieved, as indicated by the high carbon dioxide production, high volatile fatty acid concentration and low pH in the acidogenic reactors. Digestion of the individual substrates gave gross energy yields of 2.1-3.4 kWh/kg VS in the form of methane. Co-digestion, however, gave up to 60% higher methane yield, indicating that co-digestion resulted in improved methane production due to the positive synergism established in the digestion liquor. The integrity of the methane filters (MFs) was maintained throughout the period of operation, producing biogas with 60-78% methane content. A stable effluent pH showed that the methanogenic reactors had a good ability to withstand the variations in load and volatile fatty acid concentrations that occurred in the two-stage process. The results of this pilot-scale study show that the two-stage anaerobic digestion system is suitable for effective conversion of semi-solid agricultural residues such as potato waste and sugar beet leaves

  12. Enhanced nitrogen removal from electroplating tail wastewater through two-staged anoxic-oxic (A/O) process.

    Science.gov (United States)

    Yan, Xinmei; Zhu, Chunyan; Huang, Bin; Yan, Qun; Zhang, Guangsheng

    2018-01-01

    Consisting of anaerobic (ANA), anoxic-1 (AN1), aerobic-1 (AE1), anoxic-2 (AN2), aerobic-2 (AE2) reactors and a sediment tank, the two-staged A/O process was applied for advanced treatment of electroplating tail wastewater with high electrical conductivity and large amounts of ammonia nitrogen. It was found that the NH4+-N and COD removal efficiencies reached 97.11% and 83.00%, respectively. Besides, the short-term salinity shock of the control, AE1 and AE2 indicated that AE1 and AE2 had better resistance to high salinity when the concentration of NaCl ranged from 1 to 10 g/L. Meanwhile, it was found through high-throughput sequencing that the bacterial genera Nitrosomonas, Nitrospira and Thauera, which are capable of nitrogen removal, were enriched in the two-staged A/O process. Moreover, both salt-tolerant bacteria and halophilic bacteria were found in the combined process. Therefore, the microbial community within the two-staged A/O process could be acclimated to high electrical conductivity and adapted for electroplating tail wastewater treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. The energy efficiency paradox revisited through a partial observability approach

    International Nuclear Information System (INIS)

    Kounetas, Kostas; Tsekouras, Kostas

    2008-01-01

    The present paper examines the energy efficiency paradox demonstrated in Greek manufacturing firms through a partial observability approach. The data set used resulted from a survey carried out among 161 adopters of energy-saving technologies. Maximum likelihood estimates that arise from an incidental truncation model reveal that the adoption of energy-saving technologies is indeed strongly correlated with the returns of the assets that are required in order to undertake the corresponding investments. The source of the energy efficiency paradox lies within a wide range of factors. Policy schemes that aim to increase the adoption rate of energy-saving technologies within the field of manufacturing are significantly affected by differences in the size of firms. Finally, mixed policies seem to be more effective than policies that are only capital-subsidy or regulation oriented

  14. Biogas production of Chicken Manure by Two-stage fermentation process

    Science.gov (United States)

    Liu, Xin Yuan; Wang, Jing Jing; Nie, Jia Min; Wu, Nan; Yang, Fang; Yang, Ren Jie

    2018-06-01

    This paper reports a batch experiment on pre-acidification treatment and methane production from chicken manure by a two-stage anaerobic fermentation process. Results show that acetate was the main component of the volatile fatty acids produced at the end of the pre-acidification stage, accounting for 68% of the total amount. The daily biogas production went through three peak periods in the methane production stage, and the methane content reached 60% in the second period and then slowly decreased to 44.5% in the third period. The cumulative methane production was fitted by the modified Gompertz equation, and the kinetic parameters of the methane production potential, the maximum methane production rate and the lag phase time were 345.2 ml, 0.948 ml/h and 343.5 h, respectively. A methane yield of 183 ml-CH4 per g VS removed during the methane production stage and a VS removal efficiency of 52.7% for the whole fermentation process were achieved.
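
    The modified Gompertz equation commonly used for such fits relates cumulative methane production to the three reported kinetic parameters. The standard form is given below for reference; the authors' exact notation may differ.

        \[
          M(t) \;=\; P \cdot \exp\!\left\{ -\exp\!\left[ \frac{R_{m}\, e}{P}\,(\lambda - t) + 1 \right] \right\},
        \]

    where M(t) is the cumulative methane production at time t, P the methane production potential (345.2 ml here), R_m the maximum methane production rate (0.948 ml/h), lambda the lag phase time (343.5 h), and e Euler's number.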

  15. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2017-01-01

    Full Text Available Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems, and uses a two-stage single-counter queue method M/M/1→M/D/1 to model the urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are the total cost of construction and operation and the overall sojourn time of stormwater. An improved genetic algorithm is designed to solve this complex nondeterministic problem, incorporating the stochastic and fuzzy characteristics of the whole drainage process. A numerical example in Shanghai illustrates how to implement the model, and comparisons with alternative algorithms show its performance in computational flexibility and efficiency. Discussions on the sensitivity of four main parameters, that is, the quantity of pump stations, drainage pipe diameter, rainstorm precipitation intensity, and confidence levels, are also presented to provide guidance for designing urban drainage systems.
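
    For reference, the mean sojourn times of the two queue stages named above follow standard queueing results. The expressions below assume arrival rate lambda, service rate mu and utilization rho = lambda/mu < 1 for each stage; they are textbook formulas, not taken from the paper.

        \[
          W_{M/M/1} \;=\; \frac{1}{\mu - \lambda},
          \qquad
          W_{M/D/1} \;=\; \frac{1}{\mu} + \frac{\rho}{2\mu\,(1-\rho)},
        \]

    so, to a first approximation, the overall sojourn time of stormwater through the two-stage system is the sum of the two stage delays.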

  16. Optimization of Boiling Water Reactor Loading Pattern Using Two-Stage Genetic Algorithm

    International Nuclear Information System (INIS)

    Kobayashi, Yoko; Aiyoshi, Eitaro

    2002-01-01

    A new two-stage optimization method based on genetic algorithms (GAs) using an if-then heuristic rule was developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). In the first stage, the LP is optimized using an improved GA operator. In the second stage, an exposure-dependent control rod pattern (CRP) is sought using GA with an if-then heuristic rule. The procedure of the improved GA is based on deterministic operators that consist of crossover, mutation, and selection. The handling of the encoding technique and constraint conditions by that GA reflects the peculiar characteristics of the BWR. In addition, strategies such as elitism and self-reproduction are effectively used in order to improve the search speed. The LP evaluations were performed with a three-dimensional diffusion code that coupled neutronic and thermal-hydraulic models. Strong axial heterogeneities and constraints dependent on three dimensions have always necessitated the use of three-dimensional core simulators for BWRs, so that optimization of computational efficiency is required. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant in two phases. One phase is only LP optimization applying the Haling technique. The other phase is an LP optimization that considers the CRP during reactor operation. In test calculations, candidates that shuffled fresh and burned fuel assemblies within a reasonable computation time were obtained

  17. Stepwise encapsulation and controlled two-stage release system for cis-Diamminediiodoplatinum

    Directory of Open Access Journals (Sweden)

    Chen Y

    2014-06-01

    Full Text Available Yun Chen,1,* Qian Li,1,2,* Qingsheng Wu1 1Department of Chemistry, Key Laboratory of Yangtze River Water Environment, Ministry of Education, Tongji University, Shanghai; 2Shanghai Institute of Quality Inspection and Technical Research, Shanghai, People’s Republic of China *These authors contributed equally to this work Abstract: cis-Diamminediiodoplatinum (cis-DIDP) is a cisplatin-like anticancer drug with higher anticancer activity, but lower stability and price than cisplatin. In this study, a cis-DIDP carrier system based on micro-sized stearic acid was prepared by an emulsion solvent evaporation method. The maximum drug loading capacity of cis-DIDP-loaded solid lipid nanoparticles was 22.03%, and their encapsulation efficiency was 97.24%. In vitro drug release in phosphate-buffered saline (pH = 7.4) at 37.5°C exhibited a unique two-stage process, which could prove beneficial for patients with tumors and malignancies. MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) assay results showed that cis-DIDP released from cis-DIDP-loaded solid lipid nanoparticles had better inhibition activity than cis-DIDP that had not been loaded. Keywords: stearic acid, emulsion solvent evaporation method, drug delivery, cis-DIDP, in vitro

  18. A two-stage biological gas to liquid transfer process to convert carbon dioxide into bioplastic

    KAUST Repository

    Al Rowaihi, Israa

    2018-03-06

    The fermentation of carbon dioxide (CO2) with hydrogen (H2) uses available low-cost gases to synthesize acetic acid. Here, we present a two-stage biological process that allows the gas-to-liquid transfer (Bio-GTL) of CO2 into the biopolymer polyhydroxybutyrate (PHB). Using the same medium in both stages, first, acetic acid is produced (3.2 g L−1) by Acetobacterium woodii from a 5.2 L gas mixture of CO2:H2 (15:85 v/v) under elevated pressure (≥2.0 bar) to increase H2 solubility in water. Second, acetic acid is converted to PHB (3 g L−1 acetate into 0.5 g L−1 PHB) by Ralstonia eutropha H16. The efficiencies and space-time yields were evaluated, and our data show the conversion of CO2 into PHB with a 33.3% microbial cell content (percentage of the ratio of PHB concentration to cell concentration) after 217 h. Collectively, our results provide a resourceful platform for future optimization and commercialization of a Bio-GTL process for PHB production.

  19. TWO-STAGE REVISION HIP REPLACEMENT IN PATIENTS WITH SEVERE ACETABULUM DEFECT (CASE REPORT)

    Directory of Open Access Journals (Sweden)

    V. V. Pavlov

    2017-01-01

    Full Text Available Favorable short-term results of arthroplasty are observed in 80–90% of cases; however, over a longer follow-up period the percentage of positive outcomes gradually decreases. The need for revision of the prosthesis or its components increases in proportion to the time elapsed since the surgery. In addition, such revision is accompanied by a need to substitute the bone defect of the acetabulum. As a solution, the authors propose to replace pelvic defects in two stages. During the first stage the defect was filled with bone allograft with platelet-rich fibrin (allografting with the use of PRF technology). After remodeling of the allograft, during the second stage the revision surgery is performed by implanting standard prostheses. The authors present a clinical case of a female patient with aseptic loosening of the acetabular component of the prosthesis in the right hip joint, with failed hip function of stage 2 and right limb shortening of 2 cm. The treatment results confirm the efficiency and rationality of the proposed bone grafting option. The authors conclude that bone allograft in combination with the PRF technology proves to be an alternative to the implantation of massive metal implants in the acetabulum, while reducing the risk of implant-associated infection and of metallosis in the surrounding tissues and expanding further revision options.

  20. Two-Stage Chaos Optimization Search Application in Maximum Power Point Tracking of PV Array

    Directory of Open Access Journals (Sweden)

    Lihua Wang

    2014-01-01

    Full Text Available In order to deliver the maximum available power to the load under conditions of varying solar irradiation and environment temperature, maximum power point tracking (MPPT) technologies have been used widely in PV systems. Among all the MPPT schemes, the chaos method is one of the hot topics in recent years. In this paper, a novel two-stage chaos optimization method is presented which makes the search faster and more effective. In the proposed chaos search, an improved logistic mapping with better ergodicity is used as the first carrier process. After the current optimal solution is found with a certain guarantee, a power-function carrier is used as the secondary carrier process to reduce the search space of the optimized variables and eventually find the maximum power point. Compared with the traditional chaos search method, the proposed method can track changes quickly and accurately and also gives better optimization results. The proposed method provides a new efficient way to track the maximum power point of a PV array.
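
    A minimal sketch of a first-stage chaotic search driven by the logistic map is shown below. It is illustrative only: the paper's improved mapping, the second-stage power-function carrier, and the PV electrical model are not reproduced, and power(d) is a hypothetical placeholder for the measured PV output at duty cycle d.

        # Sketch of a logistic-map chaos search over a duty-cycle range [d_min, d_max].
        def chaos_search(power, d_min, d_max, n_iter=50, x0=0.3):
            x, best_d, best_p = x0, None, float("-inf")
            for _ in range(n_iter):
                x = 4.0 * x * (1.0 - x)           # logistic map, fully chaotic at r = 4
                d = d_min + x * (d_max - d_min)   # map the chaotic variable onto the search space
                p = power(d)                      # measure PV output at this operating point
                if p > best_p:
                    best_d, best_p = d, p
            return best_d, best_p                 # a narrowed second-stage search would follow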

  1. Design and Analysis of a Split Deswirl Vane in a Two-Stage Refrigeration Centrifugal Compressor

    Directory of Open Access Journals (Sweden)

    Jeng-Min Huang

    2014-09-01

    Full Text Available This study numerically investigated the influence of using the second row of a double-row deswirl vane as the inlet guide vane of the second stage on the performance of the first stage in a two-stage refrigeration centrifugal compressor. The working fluid was R134a, and the turbulence model was the Spalart-Allmaras model. The parameters discussed included the cutting position of the deswirl vane, the staggered angle of the two rows of vanes, and the rotation angle of the second row. The results showed that the performance at a staggered angle of 7.5° was better than that at 15° or 22.5°. When the staggered angle was 7.5°, the performance of cutting at 1/3 and 1/2 of the original deswirl vane length was slightly different from that of the original vane but clearly better than that of cutting at 2/3. When the staggered angle was 15°, the cutting position influenced the performance only slightly. At a low flow rate prone to surge, when the second row with a staggered angle of 7.5° and cut at half the vane length was rotated by 10°, the efficiency was reduced by only about 0.6%, and 10% of the swirl remained as preswirl for the second stage, which is generally better than the other designs.

  2. A cooperation model based on CVaR measure for a two-stage supply chain

    Science.gov (United States)

    Xu, Xinsheng; Meng, Zhiqing; Shen, Rui

    2015-07-01

    In this paper, we introduce a cooperation model (CM) for the two-stage supply chain consisting of a manufacturer and a retailer. In this model, it is supposed that the objective of the manufacturer is to maximise his/her profit while the objective of the retailer is to minimise his/her CVaR while controlling the risk originating from fluctuations in market demand. In reality, the manufacturer and the retailer each choose their own decisions as to wholesale price and order quantity to optimise their own objectives, with the result that the expected decision of the manufacturer and that of the retailer may conflict with each other. Then, to achieve cooperation, the manufacturer and the retailer both need to make some concessions. The proposed model aims to coordinate the decisions of the manufacturer and the retailer and to balance the concessions the two make in their cooperation. We introduce an s*-optimal equilibrium solution in this model, which can determine the minimum concession that the manufacturer and the retailer need to make for their cooperation, and prove that the s*-optimal equilibrium solution can be obtained by solving a goal programming problem. Further, the case of different concessions made by the manufacturer and the retailer is also discussed. Numerical results show that the CM is efficient in dealing with the cooperation between the supplier and the retailer.
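
    The retailer's risk measure referred to above is usually written in the Rockafellar–Uryasev form shown below. This is the standard definition of CVaR at confidence level alpha, given here for reference; it is not necessarily the exact formulation used in the paper.

        \[
          \mathrm{CVaR}_{\alpha}(L) \;=\; \min_{\eta \in \mathbb{R}}
          \left\{ \eta + \frac{1}{1-\alpha}\, \mathbb{E}\big[(L-\eta)^{+}\big] \right\},
        \]

    i.e., roughly the expected loss L over the worst (1 − alpha) fraction of demand scenarios; the retailer picks the order quantity that minimises this quantity while the manufacturer maximises expected profit.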

  3. Two-Stage Multiobjective Optimization for Emergency Supplies Allocation Problem under Integrated Uncertainty

    Directory of Open Access Journals (Sweden)

    Xuejie Bai

    2016-01-01

    Full Text Available This paper proposes a new two-stage optimization method for the emergency supplies allocation problem involving multiple suppliers, affected areas, relief supplies, and vehicles. The triplet of supply, demand, and path availability is unknown prior to the extraordinary event and is described by fuzzy random variables. Considering fairness, timeliness, and economic efficiency, a multiobjective expected value model is built for facility location, vehicle routing, and supply allocation decisions. The goals of the proposed model are to minimize the proportion of unsatisfied demand, the response time of emergency relief, and the total cost of the whole process. When the demand and the path availability are discrete, the expected values in the objective functions are converted into their equivalent forms. When the supply amount is continuous, the equilibrium chance in the constraint is transformed into its equivalent form. To overcome the computational difficulty caused by multiple objectives, a goal programming model is formulated to obtain a compromise solution. Finally, an example is presented to illustrate the validity of the proposed model and the effectiveness of the solution method.

  4. Design considerations for single-stage and two-stage pneumatic pellet injectors

    International Nuclear Information System (INIS)

    Gouge, M.J.; Combs, S.K.; Fisher, P.W.; Milora, S.L.

    1988-09-01

    Performance of single-stage pneumatic pellet injectors is compared with several models for one-dimensional, compressible fluid flow. Agreement is quite good for models that reflect actual breech chamber geometry and incorporate nonideal effects such as gas friction. Several methods of improving the performance of single-stage pneumatic pellet injectors in the near term are outlined. The design and performance of two-stage pneumatic pellet injectors are discussed, and initial data from the two-stage pneumatic pellet injector test facility at Oak Ridge National Laboratory are presented. Finally, a concept for a repeating two-stage pneumatic pellet injector is described. 27 refs., 8 figs., 3 tabs

  5. Hydrogen and methane production from condensed molasses fermentation soluble by a two-stage anaerobic process

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Chiu-Yue; Liang, You-Chyuan; Lay, Chyi-How [Feng Chia Univ., Taichung, Taiwan (China). Dept. of Environmental Engineering and Science; Chen, Chin-Chao [Chungchou Institute of Technology, Taiwan (China). Environmental Resources Lab.; Chang, Feng-Yuan [Feng Chia Univ., Taichung, Taiwan (China). Research Center for Energy and Resources

    2010-07-01

    The treatment of condensed molasses fermentation soluble (CMS) is a troublesome problem for glutamate manufacturing factories. However, CMS has high carbohydrate and nutrient contents and is an attractive and commercially promising feedstock for bioenergy production. The aim of this paper is to produce hydrogen and methane from CMS by a two-stage anaerobic fermentation process. The fermentative hydrogen production from CMS was conducted in a continuously-stirred tank bioreactor (working volume 4 L) which was operated at a hydraulic retention time (HRT) of 8 h, an organic loading rate (OLR) of 120 kg COD/m{sup 3}-d, a temperature of 35 C and pH 5.5, with sewage sludge as seed. The anaerobic methane production was conducted in an up-flow bioreactor (working volume 11 L) which was operated at a HRT of 24-60 h, an OLR of 4.0-10 kg COD/m{sup 3}-d, a temperature of 35 C and pH 7.0, using anaerobic granular sludge from a fructose manufacturing factory as the seed and the effluent from the hydrogen production process as the substrate. These two reactors have been operated successfully for more than 400 days. The steady-state hydrogen content, hydrogen production rate and hydrogen production yield in the hydrogen fermentation system were 37%, 169 mmol-H{sub 2}/L-d and 93 mmol-H{sub 2}/g carbohydrate{sub removed}, respectively. In the methane fermentation system, the peak methane content and methane production rate were 66.5% and 86.8 mmol-CH{sub 4}/L-d, with a methane production yield of 189.3 mmol-CH{sub 4}/g COD{sub removed} at an OLR of 10 kg/m{sup 3}-d. The energy production rate was used to elucidate the energy efficiency of this two-stage process. A total energy production rate of 133.3 kJ/L/d was obtained, with 5.5 kJ/L/d from hydrogen fermentation and 127.8 kJ/L/d from methane fermentation. (orig.)

  6. Hydrogen production from cellulose in a two-stage process combining fermentation and electrohydrogenesis

    KAUST Repository

    Lalaurette, Elodie

    2009-08-01

    A two-stage dark-fermentation and electrohydrogenesis process was used to convert recalcitrant lignocellulosic materials into hydrogen gas at high yields and rates. Fermentation using Clostridium thermocellum produced 1.67 mol H2/mol-glucose at a rate of 0.25 L H2/L-d with a corn stover lignocellulose feed, and 1.64 mol H2/mol-glucose and 1.65 L H2/L-d with a cellobiose feed. The lignocellulose and cellobiose fermentation effluent consisted primarily of: acetic, lactic, succinic, and formic acids and ethanol. An additional 800 ± 290 mL H2/g-COD was produced from a synthetic effluent with a wastewater inoculum (fermentation effluent inoculum; FEI) by electrohydrogenesis using microbial electrolysis cells (MECs). Hydrogen yields were increased to 980 ± 110 mL H2/g-COD with the synthetic effluent by combining in the inoculum samples from multiple microbial fuel cells (MFCs) each pre-acclimated to a single substrate (single substrate inocula; SSI). Hydrogen yields and production rates with SSI and the actual fermentation effluents were 980 ± 110 mL/g-COD and 1.11 ± 0.13 L/L-d (synthetic); 900 ± 140 mL/g-COD and 0.96 ± 0.16 L/L-d (cellobiose); and 750 ± 180 mL/g-COD and 1.00 ± 0.19 L/L-d (lignocellulose). A maximum hydrogen production rate of 1.11 ± 0.13 L H2/L reactor/d was produced with the synthetic effluent. Energy efficiencies based on the electricity needed for the MEC using SSI were 270 ± 20% for the synthetic effluent, 230 ± 50% for the lignocellulose effluent and 220 ± 30% for the cellobiose effluent. COD removals were ∼90% for the synthetic effluents, and 70-85% based on VFA removal (65% COD removal) with the cellobiose and lignocellulose effluents. The overall hydrogen yield was 9.95 mol-H2/mol-glucose for the cellobiose. These results show that pre-acclimation of MFCs to single substrates improves performance with a complex mixture of substrates, and that high hydrogen yields and gas production rates can be achieved using a two-stage fermentation and MEC

  7. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, which is referred to as the ‘failure probability function (FPF)’. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology
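
    The weighted construction of the failure probability function can be sketched as follows. This is a schematic form consistent with the description above (importance-sampling reweighting of a fixed sample set), not the paper's exact estimator.

        \[
          \hat{p}_{F}(\mathbf{d}) \;=\; \sum_{i=1}^{N} w_{i}(\mathbf{d})\, I_{F}(\mathbf{x}_{i}),
          \qquad
          w_{i}(\mathbf{d}) \;\propto\; \frac{f(\mathbf{x}_{i}\mid\mathbf{d})}{h(\mathbf{x}_{i})},
        \]

    where the x_i are samples drawn once from a sampling density h, I_F is the failure indicator, and f(x|d) is the joint density of the random variables for design d. Once such an explicit approximation of p_F(d) is available, the probabilistic constraint can be treated deterministically in the outer optimization.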

  8. [A motivational approach of cognitive efficiency in nursing home residents].

    Science.gov (United States)

    Clément, Evelyne; Vivicorsi, Bruno; Altintas, Emin; Guerrien, Alain

    2014-06-01

    Despite a widespread concern with self-determined motivation (behavior is engaged in "out of pleasure" or "out of choice and valued as being important") and psychological adjustment in later life (well-being, satisfaction in life, meaning of life, or self-esteem), very little is known about the existence and nature of the links between self-determined motivation and cognitive efficiency. The aim of the present study was to investigate these links in nursing home residents in the framework of Self-determination theory (SDT) (Deci & Ryan, 2002), in which the motivational profile of a person is determined by the combination of different kinds of motivation. We hypothesized that self-determined motivation would lead to higher cognitive efficiency. Participants. 39 (32 women and 7 men) elderly nursing home residents (mean age 83.6 ± 9.3 years) without any neurological or psychiatric disorders (DSM IV) or depression or anxiety (Hamilton depression rating scales) were included in the study. Methods. Cognitive efficiency was evaluated by two brief neuropsychological tests, the Mini mental state examination (MMSE) and the Frontal assessment battery (FAB). The motivational profile was assessed by the Elderly motivation scale (Vallerand & O'Connor, 1991), which includes four subscales assessing self- and non-self-determined motivation to engage oneself in different domains of daily life activity. Results. The neuropsychological scores were positively and significantly correlated with self-determined extrinsic motivation (behavior is engaged in "out of choice" and valued as being important), and the global self-determination index (self-determined motivational profile) was the best predictor of cognitive efficiency. Conclusion. The results support the interest of SDT for a qualitative assessment of the motivation of elderly people and suggest that a motivational approach to cognitive efficiency could help to interpret cognitive performances exhibited during neuropsychological

  9. An efficient algebraic approach to observability analysis in state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Pruneda, R.E.; Solares, C.; Conejo, A.J. [University of Castilla-La Mancha, 13071 Ciudad Real (Spain); Castillo, E. [University of Cantabria, 39005 Santander (Spain)

    2010-03-15

    An efficient and compact algebraic approach to state estimation observability is proposed. It is based on transferring rows to columns and vice versa in the Jacobian measurement matrix. The proposed methodology provides a unified approach to observability checking, critical measurement identification, determination of observable islands, and selection of pseudo-measurements to restore observability. Additionally, the observability information obtained from a given set of measurements directly provides the observability of any subset of that set. Several examples are used to illustrate the capabilities of the proposed methodology, and results from a large case study are presented to demonstrate the appropriate computational behavior of the proposed algorithms. Finally, some conclusions are drawn. (author)
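
    As background, the elementary test that such algebraic methods refine can be sketched as below. This is a generic numerical rank check for linearized (DC-model) observability, not the row/column transfer algorithm of the paper; H is assumed to be the measurement Jacobian with the reference-angle column removed.

        # Generic observability check: the network is observable if the measurement
        # Jacobian H (m measurements x n states) has full column rank.
        import numpy as np

        def is_observable(H, tol=1e-8):
            return np.linalg.matrix_rank(H, tol=tol) == H.shape[1]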

  10. Hydrogen production from cellulose in a two-stage process combining fermentation and electrohydrogenesis

    KAUST Repository

    Lalaurette, Elodie; Thammannagowda, Shivegowda; Mohagheghi, Ali; Maness, Pin-Ching; Logan, Bruce E.

    2009-01-01

    A two-stage dark-fermentation and electrohydrogenesis process was used to convert the recalcitrant lignocellulosic materials into hydrogen gas at high yields and rates. Fermentation using Clostridium thermocellum produced 1.67 mol H2/mol

  11. Lingual mucosal graft two-stage Bracka technique for redo hypospadias repair

    Directory of Open Access Journals (Sweden)

    Ahmed Sakr

    2017-09-01

    Conclusion: Lingual mucosa is a reliable and versatile graft material in the armamentarium of two-stage Bracka hypospadias repair with the merits of easy harvesting and minor donor-site complications.

  12. Comparative effectiveness of one-stage versus two-stage basilic vein transposition arteriovenous fistulas.

    Science.gov (United States)

    Ghaffarian, Amir A; Griffin, Claire L; Kraiss, Larry W; Sarfati, Mark R; Brooke, Benjamin S

    2018-02-01

    Basilic vein transposition (BVT) fistulas may be performed as either a one-stage or two-stage operation, although there is debate as to which technique is superior. This study was designed to evaluate the comparative clinical efficacy and cost-effectiveness of one-stage vs two-stage BVT. We identified all patients at a single large academic hospital who had undergone creation of either a one-stage or two-stage BVT between January 2007 and January 2015. Data evaluated included patient demographics, comorbidities, medication use, reasons for abandonment, and interventions performed to maintain patency. Costs were derived from the literature, and effectiveness was expressed in quality-adjusted life-years (QALYs). We analyzed primary and secondary functional patency outcomes as well as survival during follow-up between one-stage and two-stage BVT procedures using multivariate Cox proportional hazards models and Kaplan-Meier analysis with log-rank tests. The incremental cost-effectiveness ratio was used to determine cost savings. We identified 131 patients in whom 57 (44%) one-stage BVT and 74 (56%) two-stage BVT fistulas were created among 8 different vascular surgeons during the study period that each performed both procedures. There was no significant difference in the mean age, male gender, white race, diabetes, coronary disease, or medication profile among patients undergoing one- vs two-stage BVT. After fistula transposition, the median follow-up time was 8.3 months (interquartile range, 3-21 months). Primary patency rates of one-stage BVT were 56% at 12-month follow-up, whereas primary patency rates of two-stage BVT were 72% at 12-month follow-up. Patients undergoing two-stage BVT also had significantly higher rates of secondary functional patency at 12 months (57% for one-stage BVT vs 80% for two-stage BVT) and 24 months (44% for one-stage BVT vs 73% for two-stage BVT) of follow-up (P < .001 using log-rank test). However, there was no significant difference

  13. Cost-effectiveness Analysis of a Two-stage Screening Intervention for Hepatocellular Carcinoma in Taiwan

    Directory of Open Access Journals (Sweden)

    Sophy Ting-Fang Shih

    2010-01-01

    Conclusion: Screening the population of high-risk individuals for HCC with the two-stage screening intervention in Taiwan is considered potentially cost-effective compared with opportunistic screening in the target population of an HCC endemic area.

  14. Noncausal two-stage image filtration at presence of observations with anomalous errors

    OpenAIRE

    S. V. Vishnevyy; S. Ya. Zhuk; A. N. Pavliuchenkova

    2013-01-01

    Introduction. For the filtration of images that contain regions with anomalous errors, it is necessary to develop adaptive algorithms that can detect such regions and apply a filter with appropriate parameters to suppress the anomalous noise. Development of an adaptive algorithm for noncausal two-stage image filtration in the presence of observations with anomalous errors. The adaptive algorithm for noncausal two-stage filtration is developed. On the first stage the adaptiv...

  15. Energy Efficiency - Spectral Efficiency Trade-off: A Multiobjective Optimization Approach

    KAUST Repository

    Amin, Osama

    2015-04-23

    In this paper, we consider the resource allocation problem for energy efficiency (EE) - spectral efficiency (SE) trade-off. Unlike traditional research that uses the EE as an objective function and imposes constraints either on the SE or achievable rate, we propound a multiobjective optimization approach that can flexibly switch between the EE and SE functions or change the priority level of each function using a trade-off parameter. Our dynamic approach is more tractable than the conventional approaches and more convenient to realistic communication applications and scenarios. We prove that the multiobjective optimization of the EE and SE is equivalent to a simple problem that maximizes the achievable rate/SE and minimizes the total power consumption. Then we apply the generalized framework of the resource allocation for the EE-SE trade-off to optimally allocate the subcarriers’ power for orthogonal frequency division multiplexing (OFDM) with imperfect channel estimation. Finally, we use numerical results to discuss the choice of the trade-off parameter and study the effect of the estimation error, transmission power budget and channel-to-noise ratio on the multiobjective optimization.
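
    A common way to scalarize the two objectives described above is the weighted formulation sketched below. This is an illustrative form under the usual definitions of EE and SE; the paper's exact trade-off parameterization and constraints may differ.

        \[
          \max_{\mathbf{p}} \;\; \beta\,\eta_{\mathrm{EE}}(\mathbf{p}) + (1-\beta)\,\eta_{\mathrm{SE}}(\mathbf{p}),
          \qquad
          \eta_{\mathrm{EE}} = \frac{R(\mathbf{p})}{P_{c} + \sum_{k} p_{k}},
          \quad
          \eta_{\mathrm{SE}} = \frac{R(\mathbf{p})}{B},
        \]

    where p is the subcarrier power allocation, R the achievable rate, P_c the circuit power, B the bandwidth, and beta in [0,1] the trade-off parameter that shifts priority between the EE and SE objectives (after suitable normalization of the two terms).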

  16. Alternative approach for Article 5. Energie Efficiency Directive; Alternatieve aanpak artikel 5. Energy Efficiency Directive

    Energy Technology Data Exchange (ETDEWEB)

    Menkveld, M.; Jablonska, B. [ECN Beleidsstudies, Petten (Netherlands)

    2013-05-15

    Article 5 of the Energy Efficiency Directive (EED) imposes an annual obligation to renovate 3% of the building stock of the central government. After renovation the buildings must meet the minimum energy performance requirements laid down in Article 4 of the EPBD. The Directive leaves room for an alternative approach that achieves the same savings. The Ministry of Interior Affairs asked ECN to assist with this alternative approach. ECN calculated what savings are achieved with the 3% renovation obligation under the directive. ECN then explored the possibilities for an alternative approach to achieve the same savings. [Dutch] Article 5 of the Energy Efficiency Directive (EED) contains an obligation to renovate 3% of the central government's building stock annually. After renovation, that 3% of the building stock must meet the minimum energy performance requirements laid down by the Member State concerned pursuant to Article 4 of the EPBD. The obligation concerns buildings owned and used by the central government with a usable floor area larger than 500 m{sup 2}, and from July 2015 larger than 250 m{sup 2}. The buildings owned by the Rijksgebouwendienst comprise offices of central government services, courthouses, customs and police buildings, and prisons. Of the Defence buildings, only offices and accommodation buildings need to comply with the obligation.

  17. Energy Efficiency - Spectral Efficiency Trade-off: A Multiobjective Optimization Approach

    KAUST Repository

    Amin, Osama; Bedeer, Ebrahim; Ahmed, Mohamed; Dobre, Octavia

    2015-01-01

    In this paper, we consider the resource allocation problem for energy efficiency (EE) - spectral efficiency (SE) trade-off. Unlike traditional research that uses the EE as an objective function and imposes constraints either on the SE or achievable rate, we propound a multiobjective optimization approach that can flexibly switch between the EE and SE functions or change the priority level of each function using a trade-off parameter. Our dynamic approach is more tractable than the conventional approaches and more convenient to realistic communication applications and scenarios. We prove that the multiobjective optimization of the EE and SE is equivalent to a simple problem that maximizes the achievable rate/SE and minimizes the total power consumption. Then we apply the generalized framework of the resource allocation for the EE-SE trade-off to optimally allocate the subcarriers’ power for orthogonal frequency division multiplexing (OFDM) with imperfect channel estimation. Finally, we use numerical results to discuss the choice of the trade-off parameter and study the effect of the estimation error, transmission power budget and channel-to-noise ratio on the multiobjective optimization.

  18. Adaptive Urban Stormwater Management Using a Two-stage Stochastic Optimization Model

    Science.gov (United States)

    Hung, F.; Hobbs, B. F.; McGarity, A. E.

    2014-12-01

    In many older cities, stormwater results in combined sewer overflows (CSOs) and consequent water quality impairments. Because of the expense of traditional approaches for controlling CSOs, cities are considering the use of green infrastructure (GI) to reduce runoff and pollutants. Examples of GI include tree trenches, rain gardens, green roofs, and rain barrels. However, the cost and effectiveness of GI are uncertain, especially at the watershed scale. We present a two-stage stochastic extension of the Stormwater Investment Strategy Evaluation (StormWISE) model (A. McGarity, JWRPM, 2012, 111-24) to explicitly model and optimize these uncertainties in an adaptive management framework. A two-stage model represents the immediate commitment of resources ("here & now") followed by later investment and adaptation decisions ("wait & see"). A case study is presented for Philadelphia, which intends to extensively deploy GI over the next two decades (PWD, "Green City, Clean Water - Implementation and Adaptive Management Plan," 2011). After first-stage decisions are made, the model updates the stochastic objective and constraints (learning). We model two types of "learning" about GI cost and performance. One assumes that learning occurs over time, is automatic, and does not depend on what has been done in stage one (basic model). The other considers learning resulting from active experimentation and learning-by-doing (advanced model). Both require expert probability elicitations, and learning from research and monitoring is modelled by Bayesian updating (as in S. Jacobi et al., JWRPM, 2013, 534-43). The model allocates limited financial resources to GI investments over time to achieve multiple objectives with a given reliability. Objectives include minimizing construction and O&M costs; achieving nutrient, sediment, and runoff volume targets; and community concerns, such as aesthetics, CO2 emissions, heat islands, and recreational values. CVaR (Conditional Value at Risk) and
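
    A minimal sketch of the "here & now / wait & see" structure described above, assuming invented scenario data: the first-stage GI investment is committed before the uncertain effectiveness is known, and per-scenario recourse investment fills any remaining gap to a runoff-reduction target. The Bayesian learning, CVaR and multi-objective elements of the actual StormWISE extension are not reproduced.

        import pulp

        # Scenarios for uncertain GI effectiveness (runoff reduction per unit invested)
        scenarios = {"optimistic": (0.5, 1.2), "expected": (0.5, 0.9), "pessimistic": (0.3, 0.7)}
        target = 100.0          # runoff-reduction target (illustrative units)
        c1, c2 = 1.0, 1.6       # first-stage and (more expensive) second-stage unit costs

        prob = pulp.LpProblem("two_stage_GI", pulp.LpMinimize)
        x = pulp.LpVariable("invest_now", lowBound=0)                                  # here & now
        y = {s: pulp.LpVariable(f"invest_later_{s}", lowBound=0) for s in scenarios}   # wait & see

        # Expected total cost: first-stage cost plus probability-weighted recourse cost
        prob += c1 * x + pulp.lpSum((1.0 / len(scenarios)) * c2 * y[s] for s in scenarios)

        # Meet the reduction target in every scenario, using that scenario's effectiveness
        for s, (eff_now, eff_later) in scenarios.items():
            prob += eff_now * x + eff_later * y[s] >= target

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print("invest now:", x.value(), {s: y[s].value() for s in scenarios})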

  19. A farm-scale pilot plant for biohydrogen and biomethane production by two-stage fermentation

    Directory of Open Access Journals (Sweden)

    R. Oberti

    2013-09-01

    Full Text Available Hydrogen is considered one of the possible main energy carriers for the future, thanks to its unique environmental properties. Indeed, its energy content (120 MJ/kg) can be exploited virtually without emitting any exhaust into the atmosphere except water. Hydrogen can be produced renewably through the same common biological processes on which anaerobic digestion relies, a well-established technology in use at farm scale for treating different biomasses and residues. Although two-stage hydrogen- and methane-producing fermentation is a simple variant of traditional anaerobic digestion, it is a relatively new approach studied mainly at laboratory scale. It is based on biomass fermentation in two separate, sequential stages, each maintaining conditions optimized to promote specific bacterial consortia: in the first, acidophilic reactor hydrogen is produced, while the volatile fatty acid-rich effluent is sent to the second reactor, where traditional methane-rich biogas production is accomplished. A two-stage pilot-scale plant was designed, manufactured and installed at the experimental farm of the University of Milano and operated using a biomass mixture of livestock effluents mixed with sugar/starch-rich residues (rotten fruits and potatoes and expired fruit juices), a feedstock mixture based on waste biomasses directly available in the rural area where the plant is installed. The hydrogenic and the methanogenic reactors, both of CSTR type, had total volumes of 0.7 m3 and 3.8 m3 respectively, were operated under thermophilic conditions (55 ± 2 °C) without any external pH control, and were fully automated. After a brief description of the requirements of the system, this contribution gives a detailed description of its components and of the engineering solutions to the problems encountered during plant realization and start-up. The paper also discusses the results obtained in a first experimental run, which led to production in the range of previous

  20. Energy Efficiency in Logistics: An Interactive Approach to Capacity Utilisation

    Directory of Open Access Journals (Sweden)

    Jessica Wehner

    2018-05-01

    Full Text Available Logistics operations are energy-consuming and impact the environment negatively. Improving energy efficiency in logistics is crucial for environmental sustainability and can be achieved by increasing the utilisation of capacity. This paper takes an interactive approach to capacity utilisation, to contribute to sustainable freight transport and logistics, by identifying its causes and mitigations. From literature, a conceptual framework was developed to highlight different system levels in the logistics system, in which the energy efficiency improvement potential can be found and that are summarised in the categories activities, actors, and areas. Through semi-structured interviews with representatives of nine companies, empirical data was collected to validate the framework of the causes of the unutilised capacity and proposed mitigations. The results suggest that activities, such as inflexibilities and limited information sharing as well as actors’ over-delivery of logistics services, incorrect price setting, and sales campaigns can cause unutilised capacity, and that problem areas include i.a. poor integration of reversed logistics and the last mile. The paper contributes by categorising causes of unutilised capacity and linking them to mitigations in a framework, providing a critical view towards fill rates, highlighting the need for a standardised approach to measure environmental impact that enables comparison between companies and underlining that costs are not an appropriate indicator for measuring environmental impact.

  1. Demonstration of an efficient cooling approach for SBIRS-Low

    Science.gov (United States)

    Nieczkoski, S. J.; Myers, E. A.

    2002-05-01

    The Space Based Infrared System-Low (SBIRS-Low) segment is a near-term Air Force program for developing and deploying a constellation of low-earth orbiting observation satellites with gimbaled optics cooled to cryogenic temperatures. The optical system design and requirements present unique challenges that make conventional cooling approaches both complicated and risky. The Cryocooler Interface System (CIS) provides a remote, efficient, and interference-free means of cooling the SBIRS-Low optics. Technology Applications Inc. (TAI), through a two-phase Small Business Innovative Research (SBIR) program with Air Force Research Laboratory (AFRL), has taken the CIS from initial concept feasibility through the design, build, and test of a prototype system. This paper presents the development and demonstration testing of the prototype CIS. Prototype system testing has demonstrated the high efficiency of this cooling approach, making it an attractive option for SBIRS-Low and other sensitive optical and detector systems that require low-impact cryogenic cooling.

  2. An Efficient Context-Aware Privacy Preserving Approach for Smartphones

    Directory of Open Access Journals (Sweden)

    Lichen Zhang

    2017-01-01

    Full Text Available With the proliferation of smartphones and smartphone apps, privacy preservation has become an important issue. Existing privacy preservation approaches for smartphones are usually inefficient because they do not consider active defense policies or the temporal correlations between contexts related to users. In this paper, by modeling the temporal correlations among contexts, we formalize privacy preservation as an optimization problem and prove its correctness and optimality through theoretical analysis. To further speed up the running time, we transform the original optimization problem into an approximate problem, namely a linear programming problem. By solving the linear programming problem, an efficient context-aware privacy preserving algorithm (CAPP) is designed, which adopts an active defense policy and decides how to release the current context of a user to maximize the quality of service (QoS) of context-aware apps while preserving privacy. Extensive simulations on a real dataset demonstrate the improved performance of CAPP over other traditional approaches.

  3. A two-stage flow-based intrusion detection model for next-generation networks.

    Science.gov (United States)

    Umer, Muhammad Fahad; Sher, Muhammad; Bi, Yaxin

    2018-01-01

    The next-generation network provides state-of-the-art access-independent services over converged mobile and fixed networks. Security in the converged network environment is a major challenge. Traditional packet and protocol-based intrusion detection techniques cannot be used in next-generation networks due to slow throughput, low accuracy and their inability to inspect encrypted payload. An alternative solution for protection of next-generation networks is to use network flow records for detection of malicious activity in the network traffic. The network flow records are independent of access networks and user applications. In this paper, we propose a two-stage flow-based intrusion detection system for next-generation networks. The first stage uses an enhanced unsupervised one-class support vector machine which separates malicious flows from normal network traffic. The second stage uses a self-organizing map which automatically groups malicious flows into different alert clusters. We validated the proposed approach on two flow-based datasets and obtained promising results.
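
    A toy sketch of the two-stage idea on synthetic flow features: stage one trains a one-class SVM on normal traffic and flags outlying flows; stage two groups the flagged flows into alert clusters. The paper's second stage uses a self-organizing map; plain k-means is used here only as a simpler stand-in, and all feature values and parameters are made up.

        import numpy as np
        from sklearn.svm import OneClassSVM
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        normal = rng.normal(0.0, 1.0, size=(2000, 4))      # synthetic "normal" flow features
        attack = rng.normal(4.0, 1.5, size=(100, 4))       # synthetic malicious flows
        traffic = np.vstack([rng.normal(0.0, 1.0, size=(500, 4)), attack])

        scaler = StandardScaler().fit(normal)

        # Stage 1: one-class SVM trained on normal traffic separates malicious flows
        stage1 = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(scaler.transform(normal))
        labels = stage1.predict(scaler.transform(traffic))  # +1 = normal, -1 = malicious
        malicious = traffic[labels == -1]

        # Stage 2: group the flagged flows into alert clusters
        # (the paper uses a self-organizing map; k-means is a simpler stand-in here)
        if len(malicious) >= 3:
            clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
                scaler.transform(malicious))
            print("flagged flows:", len(malicious), "alert cluster sizes:", np.bincount(clusters))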

  4. A Concept of Two-Stage-To-Orbit Reusable Launch Vehicle

    Science.gov (United States)

    Yang, Yong; Wang, Xiaojun; Tang, Yihua

    2002-01-01

    A Reusable Launch Vehicle (RLV) is capable of delivering a wide range of payloads to Earth orbit with greater reliability, lower cost, and more flexibility and operability than any of today's launch vehicles. It is the goal of future space transportation systems. Past experience with single-stage-to-orbit (SSTO) RLVs, such as NASA's NASP project, which aimed at developing a rocket-based combined-cycle (RBCC) airplane, and X-33, which aimed at developing a rocket RLV, indicates that an SSTO RLV cannot be realized in the next few years with state-of-the-art technologies. This paper presents a concept for an all-rocket two-stage-to-orbit (TSTO) reusable launch vehicle. The TSTO RLV comprises an orbiter and a booster stage. The orbiter is mounted on top of the booster stage. The TSTO RLV takes off vertically. At an altitude of about 50 km the booster stage separates from the orbiter, returns, and lands by parachutes and airbags, or lands horizontally by means of its own propulsion system. The orbiter continues its ascent flight and delivers the payload into LEO. After completing its orbital mission, the orbiter reenters the atmosphere, automatically flies to the ground base and finally lands horizontally on the runway. A TSTO RLV involves fewer technological difficulties and risks than an SSTO vehicle, and may be the practical approach to an RLV in the near future.

  5. Fleet Planning Decision-Making: Two-Stage Optimization with Slot Purchase

    Directory of Open Access Journals (Sweden)

    Lay Eng Teoh

    2016-01-01

    Full Text Available Essentially, strategic fleet planning is vital for airlines to yield a higher profit margin while providing a desired service frequency to meet stochastic demand. In contrast to most studies that did not consider slot purchase which would affect the service frequency determination of airlines, this paper proposes a novel approach to solve the fleet planning problem subject to various operational constraints. A two-stage fleet planning model is formulated in which the first stage selects the individual operating route that requires slot purchase for network expansions while the second stage, in the form of probabilistic dynamic programming model, determines the quantity and type of aircraft (with the corresponding service frequency to meet the demand profitably. By analyzing an illustrative case study (with 38 international routes, the results show that the incorporation of slot purchase in fleet planning is beneficial to airlines in achieving economic and social sustainability. The developed model is practically viable for airlines not only to provide a better service quality (via a higher service frequency to meet more demand but also to obtain a higher revenue and profit margin, by making an optimal slot purchase and fleet planning decision throughout the long-term planning horizon.

  6. Armature formation in a railgun using a two-stage light-gas gun injector

    International Nuclear Information System (INIS)

    Hawke, R.S.; Susoeff, A.R.; Asay, J.R.; Hall, C.A.; Konrad, C.H.; Hickman, R.J.; Sauve, J.L.

    1989-01-01

    During the past decade several research groups have tried to achieve reliable acceleration of projectiles to velocities in excess of 8 km/s by using a railgun. All attempts have met with difficulties. However, in the past four years researchers have come to agree on the nature and causes of the difficulties. The consensus is that the hot plasma armature - used to commutate across the rails and to accelerate the projectile - causes ablation of the barrel wall; this ablation ultimately results in parasitic secondary arc formation through armature separation and/or restrike. The subsequent deprivation of current to the propulsion armature limits the achievable projectile velocity. Methods of mitigating the process are under study. One method uses a two-stage light-gas gun as a preaccelerator/injector to the railgun. The gas gun serves a double purpose: it quickly accelerates the projectile to a high velocity, and it fills the barrel behind the propulsive armature with insulating gas. While this approach is expected to improve railgun performance, it also requires development of techniques to form the propulsive armature behind the projectile in the high-velocity, high-pressure gas stream. This paper briefly summarizes the problems encountered in attempts to achieve hypervelocities with a railgun. Included is a description of the phenomenology and details of joint Sandia National Laboratories, Albuquerque/Lawrence Livermore National Laboratory (SNLA/LLNL) work at SNLA on a method for forming the needed plasma armature

  7. Computational Modelling of Large Scale Phage Production Using a Two-Stage Batch Process

    Directory of Open Access Journals (Sweden)

    Konrad Krysiak-Baltyn

    2018-04-01

    Full Text Available Cost effective and scalable methods for phage production are required to meet an increasing demand for phage, as an alternative to antibiotics. Computational models can assist the optimization of such production processes. A model is developed here that can simulate the dynamics of phage population growth and production in a two-stage, self-cycling process. The model incorporates variable infection parameters as a function of bacterial growth rate and employs ordinary differential equations, allowing application to a setup with multiple reactors. The model provides simple cost estimates as a function of key operational parameters including substrate concentration, feed volume and cycling times. For the phage and bacteria pairing examined, costs and productivity varied by three orders of magnitude, with the lowest cost found to be most sensitive to the influent substrate concentration and low level setting in the first vessel. An example case study of phage production is also presented, showing how parameter values affect the production costs and estimating production times. The approach presented is flexible and can be used to optimize phage production at laboratory or factory scale by minimizing costs or maximizing productivity.
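
    The kind of population dynamics such a model builds on can be sketched with a single-vessel batch system of ordinary differential equations for substrate, susceptible hosts, infected hosts and free phage. All parameter values below are dimensionless illustrations, not the calibrated two-stage, self-cycling model of the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        mu_max, Ks, Y = 0.8, 0.5, 0.5        # host growth parameters (illustrative, normalized)
        k_ads, tau, burst = 2.0, 0.5, 100    # adsorption rate, latent period, burst size

        def rhs(t, state):
            S, X, I, P = state               # substrate, susceptible hosts, infected hosts, phage
            mu = mu_max * S / (Ks + S)       # Monod growth
            infection = k_ads * X * P
            return [-mu * X / Y,                        # substrate consumption
                    mu * X - infection,                 # host growth minus infection
                    infection - I / tau,                # infected cells lyse after latent period
                    burst * I / tau - infection]        # phage release minus adsorption

        sol = solve_ivp(rhs, (0.0, 12.0), [5.0, 0.1, 0.0, 0.001], max_step=0.01)
        print("final phage level (normalized):", sol.y[3, -1])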

  8. Assessing vanadium and arsenic exposure of people living near a petrochemical complex with two-stage dispersion models

    International Nuclear Information System (INIS)

    Chio, Chia-Pin; Yuan, Tzu-Hsuen; Shie, Ruei-Hao; Chan, Chang-Chuan

    2014-01-01

    Highlights: • Two-stage dispersion models can estimate exposures to hazardous air pollutants. • Spatial distribution of V levels is derived for sources without known emission rates. • A distance-to-source gradient is found for V levels from a petrochemical complex. • Two-stage dispersion is useful for modeling air pollution in resource-limited areas. - Abstract: The goal of this study is to demonstrate that it is possible to construct a two-stage dispersion model empirically for the purpose of estimating air pollution levels in the vicinity of petrochemical plants. We studied oil refineries and coal-fired power plants in the No. 6 Naphtha Cracking Complex, an area of 2,603 ha situated on the central west coast of Taiwan. The pollutants targeted were vanadium (V) from oil refineries and arsenic (As) from coal-fired power plants. We applied a backward fitting method to determine emission rates of V and As, with 192 PM10 filters originally collected between 2009 and 2012. Our first-stage model estimated emission rates of V and As (median and 95% confidence intervals) at 0.0202 (0.0040–0.1063) and 0.1368 (0.0398–0.4782) g/s, respectively. In our second-stage model, the predicted zone-average concentrations showed a strong correlation with V, but a poor correlation with As. Our findings show that two-stage dispersion models are relatively precise for estimating V levels at residents’ addresses near the petrochemical complex, but they did not work as well for As levels. In conclusion, our model-based approach can be widely used for modeling exposure to air pollution from industrial areas in countries with limited resources
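
    Because the predicted concentration is linear in the emission rate, "backward fitting" can be illustrated with a least-squares estimate against a simple ground-level Gaussian plume kernel. The plume coefficients, wind speed, stack height, receptor locations and measurements below are all invented; the study's actual dispersion modelling is considerably more detailed.

        import numpy as np

        def plume_kernel(x, y, u=3.0, H=40.0):
            # Ground-level concentration per unit emission rate (very simplified
            # Gaussian plume; the sigma_y / sigma_z power laws are illustrative only).
            sigma_y = 0.22 * x / np.sqrt(1.0 + 0.0001 * x)
            sigma_z = 0.20 * x
            return (np.exp(-y**2 / (2 * sigma_y**2)) * np.exp(-H**2 / (2 * sigma_z**2))
                    / (np.pi * u * sigma_y * sigma_z))

        # Receptor locations (m downwind, m crosswind) and "measured" concentrations
        xs = np.array([500.0, 1000.0, 2000.0, 3000.0])
        ys = np.array([0.0, 50.0, 100.0, 0.0])
        k = plume_kernel(xs, ys)                       # kernel value at each receptor
        measured = 0.02 * k + np.random.default_rng(1).normal(0, 1e-9, k.size)

        # Backward fitting: C = Q * k is linear in Q, so least squares recovers Q directly
        Q_hat = np.sum(k * measured) / np.sum(k * k)
        print("estimated emission rate (g/s):", Q_hat)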

  9. Two-stage plasma gun based on a gas discharge with a self-heating hollow emitter.

    Science.gov (United States)

    Vizir, A V; Tyunkov, A V; Shandrikov, M V; Oks, E M

    2010-02-01

    The paper presents the results of tests of a new compact two-stage bulk gas plasma gun. The plasma gun is based on a nonself-sustained gas discharge with an electron emitter based on a discharge with a self-heating hollow cathode. The operating characteristics of the plasma gun are investigated. The discharge system makes it possible to produce uniform and stable gas plasma in the dc mode with a plasma density up to 3x10^9 cm^-3 at an operating gas pressure in the vacuum chamber of less than 2x10^-2 Pa. The device features high power efficiency, design simplicity, and compactness.

  10. Two-stage plasma gun based on a gas discharge with a self-heating hollow emitter

    International Nuclear Information System (INIS)

    Vizir, A. V.; Tyunkov, A. V.; Shandrikov, M. V.; Oks, E. M.

    2010-01-01

    The paper presents the results of tests of a new compact two-stage bulk gas plasma gun. The plasma gun is based on a nonself-sustained gas discharge with an electron emitter based on a discharge with a self-heating hollow cathode. The operating characteristics of the plasma gun are investigated. The discharge system makes it possible to produce uniform and stable gas plasma in the dc mode with a plasma density up to 3x10^9 cm^-3 at an operating gas pressure in the vacuum chamber of less than 2x10^-2 Pa. The device features high power efficiency, design simplicity, and compactness.

  11. Bridge approach slabs for Missouri DOT field evaluation of alternative and cost efficient bridge approach slabs.

    Science.gov (United States)

    2013-05-01

    Based on a recent study on cost-efficient alternative bridge approach slab (BAS) designs, Thiagarajan et al. (2010) recommended three new BAS designs for possible implementation by MoDOT, namely (a) a 20-foot cast-in-place slab with sleeper slab (C...

  12. An efficient numerical approach to electrostatic microelectromechanical system simulation

    International Nuclear Information System (INIS)

    Pu, Li

    2009-01-01

    Computational analysis of electrostatic microelectromechanical systems (MEMS) requires an electrostatic analysis to compute the electrostatic forces acting on micromechanical structures and a mechanical analysis to compute the deformation of micromechanical structures. Typically, the mechanical analysis is performed on an undeformed geometry. However, the electrostatic analysis is performed on the deformed position of microstructures. In this paper, a new efficient approach to self-consistent analysis of electrostatic MEMS in the small deformation case is presented. In this approach, when the microstructures undergo small deformations, the surface charge densities on the deformed geometry can be computed without updating the geometry of the microstructures. This algorithm is based on the linear mode shapes of a microstructure as basis functions. A boundary integral equation for the electrostatic problem is expanded into a Taylor series around the undeformed configuration, and a new coupled-field equation is presented. This approach is validated by comparing its results with the results available in the literature and ANSYS solutions, and shows attractive features comparable to ANSYS. (general)

  13. Separating environmental efficiency into production and abatement efficiency. A nonparametric model with application to U.S. power plants

    Energy Technology Data Exchange (ETDEWEB)

    Hampf, Benjamin

    2011-08-15

    In this paper we present a new approach to evaluate the environmental efficiency of decision making units. We propose a model that describes a two-stage process consisting of a production and an end-of-pipe abatement stage with the environmental efficiency being determined by the efficiency of both stages. Taking the dependencies between the two stages into account, we show how nonparametric methods can be used to measure environmental efficiency and to decompose it into production and abatement efficiency. For an empirical illustration we apply our model to an analysis of U.S. power plants.
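
    As a point of reference for such nonparametric efficiency models, the sketch below computes the standard input-oriented CCR efficiency score for each decision making unit by linear programming, using made-up input/output data. It does not implement the paper's two-stage production/abatement decomposition or handle undesirable outputs.

        import numpy as np
        from scipy.optimize import linprog

        # Columns = DMUs (e.g. power plants); rows = inputs / outputs (illustrative data)
        X = np.array([[2.0, 4.0, 3.0, 5.0],     # e.g. fuel
                      [1.0, 2.0, 3.0, 2.0]])    # e.g. labour
        Y = np.array([[1.0, 2.0, 2.5, 3.0]])    # e.g. electricity

        def ccr_efficiency(o):
            # Input-oriented CCR envelopment LP for DMU o: min theta
            # s.t. X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0
            m, n = X.shape
            c = np.r_[1.0, np.zeros(n)]                       # decision variables: [theta, lam]
            A_in = np.c_[-X[:, [o]], X]                       # X lam - theta x_o <= 0
            A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y]      # -Y lam <= -y_o
            A_ub = np.vstack([A_in, A_out])
            b_ub = np.r_[np.zeros(m), -Y[:, o]]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
            return res.fun

        print([round(ccr_efficiency(o), 3) for o in range(X.shape[1])])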

  14. Auditing energy use -a systematic approach for enhancing energy efficiency

    International Nuclear Information System (INIS)

    Ardhapnrkar, P.M.; Mahalle, A.M.

    2005-01-01

    Energy management is a critical activity in developing as well as developed countries owing to constraints on the availability of primary energy resources and the increasing demand for energy from industrial and non-industrial users. Energy consumption is a vital parameter that determines the economic growth of any country. An energy management system (EMS) can save money by allowing greater control over energy-consuming equipment. The foundation of an energy program is the energy audit, which is the systematic study of a factory or building to determine where and how well energy is being used. It is the nucleus of any successful energy-saving program - it is a tool, not a solution. Conventional energy conservation methods are mostly sporadic and lack a coordinated plan of action; consequently, only the apparent systems are treated, without analysis of system interactions. The energy audit, on the other hand, involves a total-system approach and aims at optimizing energy use for the entire plant. The present paper discusses a new approach to pursuing energy conservation techniques. The focus is mainly on the methodology of the energy audit, energy use analysis, relating energy to production, and reducing energy losses. It is observed that this systematic approach, which consists of three essential segments, namely capacity utilization, fine-tuning of the equipment, and technology upgrading, can, if adopted, result in substantial energy savings, building a competitive edge for the industry. This approach, along with commitment, can provide the right impetus to reap the benefits of energy conservation on a sustained basis. (author)

  15. From properties to materials: An efficient and simple approach.

    Science.gov (United States)

    Huwig, Kai; Fan, Chencheng; Springborg, Michael

    2017-12-21

    We present an inverse-design method, the poor man's materials optimization, that is designed to identify materials within a very large class with optimized values for a pre-chosen property. The method combines an efficient genetic-algorithm-based optimization, an automatic approach for generating modified molecules, a simple approach for calculating the property of interest, and a mathematical formulation of the quantity whose value shall be optimized. In order to illustrate the performance of our approach, we study the properties of organic molecules related to those used in dye-sensitized solar cells, whereby we, for the sake of proof of principle, consider benzene as a simple test system. Using a genetic algorithm, the substituents attached to the organic backbone are varied and the best performing molecules are identified. We consider several properties to describe the performance of organic molecules, including the HOMO-LUMO gap, the sunlight absorption, the spatial distance of the orbitals, and the reorganisation energy. The results show that our method is able to identify a large number of good candidate structures within a short time. In some cases, chemical/physical intuition can be used to rationalize the substitution pattern of the best structures, although this is not always possible. The present investigations provide a solid foundation for dealing with more complex and technically relevant systems such as porphyrins. Furthermore, our "properties first, materials second" approach is not limited to solar-energy harvesting but can be applied to many other fields, as briefly is discussed in the paper.
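
    The genetic-algorithm core of such an inverse-design loop can be illustrated with a toy example in which candidate "molecules" are bit strings of substituents on a backbone and the fitness is a made-up surrogate property pushed towards a target value; the real method evaluates quantum-chemically derived properties instead.

        import numpy as np

        rng = np.random.default_rng(0)
        n_sites, pop_size, n_gen = 12, 40, 60
        weights = rng.uniform(-1.0, 1.0, n_sites)   # surrogate effect of a substituent per site
        target = 1.5                                # desired value of the surrogate "gap"

        def fitness(pattern):
            # Higher is better: negative distance of the surrogate property from the target
            return -abs(pattern @ weights - target)

        pop = rng.integers(0, 2, (pop_size, n_sites))
        for _ in range(n_gen):
            scores = np.array([fitness(p) for p in pop])
            # Tournament selection of parents
            idx = rng.integers(0, pop_size, (pop_size, 2))
            parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
            # One-point crossover and bit-flip mutation
            cut = rng.integers(1, n_sites, pop_size)
            children = parents.copy()
            for i in range(0, pop_size - 1, 2):
                children[i, cut[i]:], children[i + 1, cut[i]:] = (
                    parents[i + 1, cut[i]:].copy(), parents[i, cut[i]:].copy())
            mutate = rng.random(children.shape) < 0.02
            pop = np.where(mutate, 1 - children, children)

        best = pop[np.argmax([fitness(p) for p in pop])]
        print("best substitution pattern:", best, "surrogate property:", best @ weights)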

  16. From properties to materials: An efficient and simple approach

    Science.gov (United States)

    Huwig, Kai; Fan, Chencheng; Springborg, Michael

    2017-12-01

    We present an inverse-design method, the poor man's materials optimization, that is designed to identify materials within a very large class with optimized values for a pre-chosen property. The method combines an efficient genetic-algorithm-based optimization, an automatic approach for generating modified molecules, a simple approach for calculating the property of interest, and a mathematical formulation of the quantity whose value shall be optimized. In order to illustrate the performance of our approach, we study the properties of organic molecules related to those used in dye-sensitized solar cells, whereby we, for the sake of proof of principle, consider benzene as a simple test system. Using a genetic algorithm, the substituents attached to the organic backbone are varied and the best performing molecules are identified. We consider several properties to describe the performance of organic molecules, including the HOMO-LUMO gap, the sunlight absorption, the spatial distance of the orbitals, and the reorganisation energy. The results show that our method is able to identify a large number of good candidate structures within a short time. In some cases, chemical/physical intuition can be used to rationalize the substitution pattern of the best structures, although this is not always possible. The present investigations provide a solid foundation for dealing with more complex and technically relevant systems such as porphyrins. Furthermore, our "properties first, materials second" approach is not limited to solar-energy harvesting but can be applied to many other fields, as briefly is discussed in the paper.

  17. An Efficient Approach for Identifying Stable Lobes with Discretization Method

    Directory of Open Access Journals (Sweden)

    Baohai Wu

    2013-01-01

    Full Text Available This paper presents a new approach for quick identification of chatter stability lobes with the discretization method. Firstly, three different kinds of stability regions are defined: the absolutely stable region, the valid region, and the invalid region. Secondly, while identifying the chatter stability lobes, the three different regions within the lobes are identified with relatively large time intervals. Thirdly, the stability boundary within the valid regions is finely calculated to obtain exact chatter stability lobes. The proposed method only needs to test a small portion of the spindle speed and cutting depth set; about 89% of the computation time is saved compared with the full discretization method, and only about 10 minutes are needed to obtain exact chatter stability lobes. Since it is based on the discretization method, the proposed method can be used for different immersion conditions, including low-immersion cutting, and can therefore be directly implemented in the workshop to improve the efficiency of machining parameter selection.

  18. An efficient and extensible approach for compressing phylogenetic trees

    KAUST Repository

    Matthews, Suzanne J

    2011-01-01

    Background: Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. Results: On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those performed on Newick or 7zip compressed Newick files, and without loss of space savings. Conclusions: TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community. © 2011 Matthews and Williams; licensee BioMed Central Ltd.

  19. An efficient and extensible approach for compressing phylogenetic trees.

    Science.gov (United States)

    Matthews, Suzanne J; Williams, Tiffani L

    2011-10-18

    Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those performed on Newick or 7zip compressed Newick files, and without loss of space savings. TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community.

  20. Simulation-based power calculations for planning a two-stage individual participant data meta-analysis.

    Science.gov (United States)

    Ensor, Joie; Burke, Danielle L; Snell, Kym I E; Hemming, Karla; Riley, Richard D

    2018-05-18

    Researchers and funders should consider the statistical power of planned Individual Participant Data (IPD) meta-analysis projects, as they are often time-consuming and costly. We propose simulation-based power calculations utilising a two-stage framework, and illustrate the approach for a planned IPD meta-analysis of randomised trials with continuous outcomes where the aim is to identify treatment-covariate interactions. The simulation approach has four steps: (i) specify an underlying (data generating) statistical model for trials in the IPD meta-analysis; (ii) use readily available information (e.g. from publications) and prior knowledge (e.g. number of studies promising IPD) to specify model parameter values (e.g. control group mean, intervention effect, treatment-covariate interaction); (iii) simulate an IPD meta-analysis dataset of a particular size from the model, and apply a two-stage IPD meta-analysis to obtain the summary estimate of interest (e.g. interaction effect) and its associated p-value; (iv) repeat the previous step (e.g. thousands of times), then estimate the power to detect a genuine effect by the proportion of summary estimates with a significant p-value. In a planned IPD meta-analysis of lifestyle interventions to reduce weight gain in pregnancy, 14 trials (1183 patients) promised their IPD to examine a treatment-BMI interaction (i.e. whether baseline BMI modifies the intervention effect on weight gain). Using our simulation-based approach, the power of a two-stage IPD meta-analysis to detect this interaction was estimated, informing whether the planned meta-analysis was appropriate. Pre-specified adjustment for prognostic factors would increase power further. Incorrect dichotomisation of BMI would reduce power by over 20%, similar to immediately throwing away IPD from ten trials. Simulation-based power calculations could inform the planning and funding of IPD projects, and should be used routinely.
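
    Steps (i)-(iv) can be condensed into a short simulation: generate per-trial data with a treatment-covariate interaction, estimate the interaction by OLS in each trial (stage one), pool the estimates with fixed-effect inverse-variance weights (stage two), and take the proportion of significant pooled estimates as the power. Trial sizes, effect sizes and variances below are invented, not those of the pregnancy example.

        import numpy as np

        rng = np.random.default_rng(42)
        n_trials, n_per_trial, n_sims = 14, 85, 1000
        interaction, sigma = 0.1, 1.0          # true treatment-covariate interaction, residual SD

        def one_simulation():
            betas, variances = [], []
            for _ in range(n_trials):
                z = rng.binomial(1, 0.5, n_per_trial)            # treatment allocation
                bmi = rng.normal(0.0, 4.0, n_per_trial)          # centred covariate (e.g. BMI)
                y = -1.0 * z + interaction * z * bmi + rng.normal(0.0, sigma, n_per_trial)
                X = np.column_stack([np.ones(n_per_trial), z, bmi, z * bmi])
                beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
                s2 = res[0] / (n_per_trial - X.shape[1])          # residual variance
                cov = s2 * np.linalg.inv(X.T @ X)
                betas.append(beta[3]); variances.append(cov[3, 3])  # interaction term (stage one)
            w = 1.0 / np.array(variances)                           # stage two: fixed-effect pooling
            pooled = np.sum(w * betas) / np.sum(w)
            return abs(pooled) / np.sqrt(1.0 / np.sum(w)) > 1.96    # significant at the 5% level?

        power = np.mean([one_simulation() for _ in range(n_sims)])
        print("estimated power:", power)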

  1. METHODOLOGY AND RESULTS OF MOBILE OBJECT PURSUIT PROBLEM SOLUTION WITH TWO-STAGE DYNAMIC SYSTEM

    Directory of Open Access Journals (Sweden)

    A. Kiselev Mikhail

    2017-01-01

    Full Text Available The experience of developing unmanned fighting vehicles indicates that the main challenge in this field reduces to creating systems that can replace the pilot both as a sensor and as the operator of the flight. This problem can be partially solved by introducing remote control, but there are certain flight segments that can only be executed under fully independent control and data support for various reasons, such as tight timing, short duration, lack of robust communication, etc. Such stages include close-range air combat maneuvering (CRACM), a key flight segment as far as the fighter's purpose is concerned, which also places the highest demands on the fighter's design. Until recently the creation of an unmanned fighter airplane has been a fundamentally impossible task due to the absence of sensors able to provide the necessary data support to control the fighter during CRACM. However, the development prospects of aircraft hardware (passive-type flush antennae, optico-locating panoramic view stations) indicate that solutions to this problem may appear in the near future. Therefore, presently the only fundamental impediment on the way to developing an unmanned fighting aircraft is the problem of creating algorithms for automatic trajectory control during CRACM. This paper presents a strategy for automatic trajectory control synthesis by a two-stage dynamic system aiming to reach conditions specified with respect to an object in pursuit. It contains results of an assessment of the impact of control algorithm parameters on the effectiveness of the pursuit mission. Based on the obtained results, a conclusion is drawn concerning the efficiency of the offered method and its possible use in automated control of an unmanned fighting aerial vehicle as well as in organizing group interaction during CRACM.

  2. Single-stage Acetabular Revision During Two-stage THA Revision for Infection is Effective in Selected Patients.

    Science.gov (United States)

    Fink, Bernd; Schlumberger, Michael; Oremek, Damian

    2017-08-01

    The treatment of periprosthetic infections of hip arthroplasties typically involves use of either a single- or two-stage (with implantation of a temporary spacer) revision surgery. In patients with severe acetabular bone deficiencies, either already present or after component removal, spacers cannot be safely implanted. In such hips where it is impossible to use spacers and yet a two-stage revision of the prosthetic stem is recommended, we have combined a two-stage revision of the stem with a single revision of the cup. To our knowledge, this approach has not been reported before. (1) What proportion of patients treated with single-stage acetabular reconstruction as part of a two-stage revision for an infected THA remain free from infection at 2 or more years? (2) What are the Harris hip scores after the first stage and at 2 years or more after the definitive reimplantation? Between June 2009 and June 2014, we treated all patients undergoing surgical treatment for an infected THA using a single-stage acetabular revision as part of a two-stage THA exchange if the acetabular defect classification was Paprosky Types 2B, 2C, 3A, 3B, or pelvic discontinuity and a two-stage procedure was preferred for the femur. The procedure included removal of all components, joint débridement, definitive acetabular reconstruction (with a cage to bridge the defect, and a cemented socket), and a temporary cemented femoral component at the first stage; the second stage consisted of repeat joint and femoral débridement and exchange of the femoral component to a cementless device. During the period noted, 35 patients met those definitions and were treated with this approach. No patients were lost to followup before 2 years; mean followup was 42 months (range, 24-84 months). The clinical evaluation was performed with the Harris hip scores and resolution of infection was assessed by the absence of clinical signs of infection and a C-reactive protein level less than 10 mg/L. All

  3. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis.

    Science.gov (United States)

    Jamshidy, Ladan; Mozaffari, Hamid Reza; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by a standard method for full crowns with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. The impression was made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was vertically determined in the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean value of the marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid-buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique.

  4. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Ladan Jamshidy

    2016-01-01

    Full Text Available Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by a standard method for full crowns with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. The impression was made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was vertically determined in the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean value of the marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid-buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique.

  5. A Compact Two-Stage 120 W GaN High Power Amplifier for SweepSAR Radar Systems

    Science.gov (United States)

    Thrivikraman, Tushar; Horst, Stephen; Price, Douglas; Hoffman, James; Veilleux, Louise

    2014-01-01

    This work presents the design and measured results of a fully integrated switched-power two-stage GaN HEMT high-power amplifier (HPA) achieving 60% power-added efficiency at over 120 W output power. This high-efficiency GaN HEMT HPA is an enabling technology for L-band SweepSAR interferometric instruments that enable frequent repeat intervals and high-resolution imagery. The L-band HPA was designed using space-qualified state-of-the-art GaN HEMT technology. The amplifier exhibits over 34 dB of power gain at 51 dBm of output power across an 80 MHz bandwidth. The HPA is divided into two stages, an 8 W driver stage and a 120 W output stage. The amplifier is designed for pulsed operation, with a high-speed DC drain switch that operates at the pulse-repetition interval and settles within 200 ns. In addition to the electrical design, a thermally optimized package was designed that allows for direct thermal radiation to maintain low junction temperatures for the GaN parts, maximizing long-term reliability. Lastly, real radar waveforms are characterized, and analysis of amplitude and phase stability over temperature demonstrates ultra-stable operation using integrated bias compensation circuitry, with less than 0.2 dB amplitude variation and 2 deg phase variation over a 70 °C range.

  6. An Efficient Approach to Screening Epigenome-Wide Data

    Directory of Open Access Journals (Sweden)

    Meredith A. Ray

    2016-01-01

    Full Text Available Screening cytosine-phosphate-guanine dinucleotide (CpG) DNA methylation sites in association with some covariate(s) is desired due to high dimensionality. We incorporate surrogate variable analyses (SVAs) into (ordinary or robust) linear regressions and utilize training and testing samples for nested validation to screen CpG sites. SVA accounts for variations in the methylation not explained by the specified covariate(s) and adjusts for confounding effects. To make it easier for users, this screening method is built into a user-friendly R package, ttScreening, with efficient algorithms implemented. Various simulations were implemented to examine the robustness and sensitivity of the method compared to the classical approaches controlling for multiple testing: the false discovery rate-based (FDR-based) and the Bonferroni-based methods. The proposed approach in general performs better and has the potential to control both type I and type II errors. We applied ttScreening to 383,998 CpG sites in association with maternal smoking, one of the leading factors for cancer risk.

  7. A Two-Stage Approach to the Orienteering Problem with Stochastic Weights

    NARCIS (Netherlands)

    Evers, L.; Glorie, K.; Ster, S. van der; Barros, A.I.; Monsuur, H.

    2014-01-01

    The Orienteering Problem (OP) is a routing problem which has many interesting applications in logistics, tourism and defense. The aim of the OP is to find a maximum profit path or tour, which is feasible with respect to a capacity constraint on the total weight of the selected arcs. In this paper we

  8. A two-stage approach to the orienteering problem with stochastic weights

    NARCIS (Netherlands)

    Evers, L.; Glorie, K.M.; van der Ster, S.L.; Barros, A.I.; Monsuur, H.

    2014-01-01

    The Orienteering Problem (OP) is a routing problem which has many interesting applications in logistics, tourism and defense. The aim of the OP is to find a maximum profit path or tour, which is feasible with respect to a capacity constraint on the total weight of the selected arcs. In this paper we

  9. A primal-dual decomposition based interior point approach to two-stage stochastic linear programming

    NARCIS (Netherlands)

    A.B. Berkelaar (Arjan); C.L. Dert (Cees); K.P.B. Oldenkamp; S. Zhang (Shuzhong)

    1999-01-01

    textabstractDecision making under uncertainty is a challenge faced by many decision makers. Stochastic programming is a major tool developed to deal with optimization with uncertainties that has found applications in, e.g. finance, such as asset-liability and bond-portfolio management.

  10. Two-Stage Approach to Image Classification by Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Ososkov Gennady

    2018-01-01

    Full Text Available The paper demonstrates the advantages of deep learning networks over ordinary neural networks through their comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder for prior extraction of the most informative features of the input data for the neural networks that are subsequently compared as classifiers. The main effort in working with deep learning networks goes into the painstaking work of optimizing the structures of those networks and their components, such as activation functions and weights, as well as the procedures for minimizing their loss function, in order to improve their performance and speed up their training. It is also shown that deep autoencoders develop a remarkable ability to denoise images after being specially trained. Convolutional neural networks are also used to solve a topical problem of protein genetics, exemplified by durum wheat classification. Results of our comparative study demonstrate the undoubted advantage of the deep networks, as well as the denoising power of the autoencoders. In our work we use both GPU and cloud services to speed up the calculations.
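
    A minimal sketch of the two-stage idea, assuming random stand-in data and arbitrary layer sizes: a dense autoencoder is trained first, and a small classifier is then trained on the encoder's compressed features. The paper's convolutional and denoising experiments are not reproduced.

        import numpy as np
        from tensorflow.keras import layers, models

        x = np.random.rand(1000, 64).astype("float32")        # stand-in flattened images
        y = np.random.randint(0, 3, 1000)                      # stand-in class labels

        # Stage 1: an autoencoder learns a compact feature representation
        inp = layers.Input(shape=(64,))
        code = layers.Dense(16, activation="relu")(inp)
        out = layers.Dense(64, activation="sigmoid")(code)
        autoencoder = models.Model(inp, out)
        autoencoder.compile(optimizer="adam", loss="mse")
        autoencoder.fit(x, x, epochs=5, batch_size=32, verbose=0)

        # Stage 2: a classifier is trained on the encoder's features
        encoder = models.Model(inp, code)
        clf = models.Sequential([layers.Input(shape=(16,)),
                                 layers.Dense(32, activation="relu"),
                                 layers.Dense(3, activation="softmax")])
        clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        clf.fit(encoder.predict(x, verbose=0), y, epochs=5, batch_size=32, verbose=0)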

  11. A Two-Stage Stochastic Mixed-Integer Programming Approach to the Smart House Scheduling Problem

    Science.gov (United States)

    Ozoe, Shunsuke; Tanaka, Yoichi; Fukushima, Masao

    A “Smart House” is a highly energy-optimized house equipped with photovoltaic systems (PV systems), electric battery systems, fuel cell cogeneration systems (FC systems), electric vehicles (EVs) and so on. Smart houses are attracting much attention recently thanks to their enhanced ability to save energy by making full use of renewable energy and by achieving power grid stability despite an increased power draw for installed PV systems. Yet running a smart house's power system, with its multiple power sources and power storages, is no simple task. In this paper, we consider the problem of power scheduling for a smart house with a PV system, an FC system and an EV. We formulate the problem as a mixed integer programming problem, and then extend it to a stochastic programming problem involving recourse costs to cope with uncertain electricity demand, heat demand and PV power generation. Using our method, we seek to achieve the optimal power schedule running at the minimum expected operation cost. We present some results of numerical experiments with data on real-life demands and PV power generation to show the effectiveness of our method.

  12. Two-Stage Approach to Image Classification by Deep Neural Networks

    Science.gov (United States)

    Ososkov, Gennady; Goncharov, Pavel

    2018-02-01

    The paper demonstrates the advantages of deep learning networks over ordinary neural networks through their comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder for prior extraction of the most informative features of the input data for the neural networks that are subsequently compared as classifiers. The main effort in working with deep learning networks goes into the painstaking work of optimizing the structures of those networks and their components, such as activation functions and weights, as well as the procedures for minimizing their loss function, in order to improve their performance and speed up their training. It is also shown that deep autoencoders develop a remarkable ability to denoise images after being specially trained. Convolutional neural networks are also used to solve a topical problem of protein genetics, exemplified by durum wheat classification. Results of our comparative study demonstrate the undoubted advantage of the deep networks, as well as the denoising power of the autoencoders. In our work we use both GPU and cloud services to speed up the calculations.

  13. A Two-Stage Approach to Synthesizing Covariance Matrices in Meta-Analytic Structural Equation Modeling

    Science.gov (United States)

    Cheung, Mike W. L.; Chan, Wai

    2009-01-01

    Structural equation modeling (SEM) is widely used as a statistical framework to test complex models in behavioral and social sciences. When the number of publications increases, there is a need to systematically synthesize them. Methodology of synthesizing findings in the context of SEM is known as meta-analytic SEM (MASEM). Although correlation…

  14. Stochastic Real-World Drive Cycle Generation Based on a Two Stage Markov Chain Approach

    NARCIS (Netherlands)

    Balau, A.E.; Kooijman, D.; Vazquez Rodarte, I.; Ligterink, N.

    2015-01-01

    This paper presents a methodology and tool that stochastically generates drive cycles based on measured data, with the purpose of testing and benchmarking light duty vehicles in a simulation environment or on a test-bench. The WLTP database, containing real world driving measurements, was used as
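
    A one-stage simplification of the idea can be sketched as follows: discretize a measured speed trace into speed classes, estimate a first-order transition matrix, and sample a synthetic cycle by walking the chain. The stand-in trace and bin width are invented; the actual tool uses a two-stage chain fitted to the WLTP database.

        import numpy as np

        rng = np.random.default_rng(3)
        measured = np.abs(np.cumsum(rng.normal(0, 1.5, 2000)))    # stand-in measured speed trace (km/h)

        bins = np.arange(0.0, measured.max() + 5.0, 5.0)           # 5 km/h speed classes
        states = np.digitize(measured, bins) - 1
        n = bins.size

        # Estimate the first-order transition matrix from the measured trace
        T = np.zeros((n, n))
        for a, b in zip(states[:-1], states[1:]):
            T[a, b] += 1
        T = T / np.maximum(T.sum(axis=1, keepdims=True), 1)        # row-normalise, avoid 0/0

        # Sample a synthetic 600 s cycle by walking the chain
        cycle, s = [], states[0]
        for _ in range(600):
            cycle.append(bins[s] + 2.5)                            # bin centre as the speed value
            probs = T[s] if T[s].sum() > 0 else np.ones(n) / n
            s = rng.choice(n, p=probs)
        print("synthetic cycle mean speed:", np.mean(cycle))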

  15. Design and construction of the X-2 two-stage free piston driven expansion tube

    Science.gov (United States)

    Doolan, Con

    1995-01-01

    This report outlines the design and construction of the X-2 two-stage free piston driven expansion tube. The project has completed its construction phase and the facility has been installed in the new impulsive research laboratory where commissioning is about to take place. The X-2 uses a unique, two-stage driver design which allows a more compact and lower overall cost free piston compressor. The new facility has been constructed in order to examine the performance envelope of the two-stage driver and how well it couple to sub-orbital and super-orbital expansion tubes. Data obtained from these experiments will be used for the design of a much larger facility, X-3, utilizing the same free piston driver concept.

  16. Two-Stage orders sequencing system for mixed-model assembly

    Science.gov (United States)

    Zemczak, M.; Skolud, B.; Krenczyk, D.

    2015-11-01

    In the paper, the authors focus on the NP-hard problem of order sequencing, formulated similarly to the Car Sequencing Problem (CSP). The object of the research is the assembly line of an automotive industry company, on which a few different models of products, each in a certain number of versions, are assembled on shared resources set in a line. Such a production type is usually described as mixed-model production, and arose from the necessity of manufacturing customized products on the basis of very specific orders from individual clients. Producers are nowadays obliged to give each client the possibility to specify a large number of the features of the product they are willing to buy, as competition in the automotive market is strong. Due to the previously mentioned nature of the problem (NP-hard), only satisfactory solutions are sought in the given time period, as an optimal solution method has not yet been found. Most of the researchers who applied inexact methods (e.g. evolutionary algorithms) to sequencing problems dropped the research after the testing phase, as they were not able to obtain reproducible results and met problems while determining the quality of the received solutions. Therefore a new approach to solving the problem, presented in this paper as a sequencing system, is being developed. The sequencing system consists of a set of determined rules implemented in a computer environment. The system itself works in two stages. The first is connected with determining the place in the storage buffer to which certain production orders should be sent. In the second stage, precise sets of sequences are determined and evaluated for certain parts of the storage buffer under certain criteria.

  17. Kinetics analysis of two-stage austenitization in supermartensitic stainless steel

    DEFF Research Database (Denmark)

    Nießen, Frank; Villa, Matteo; Hald, John

    2017-01-01

    The martensite-to-austenite transformation in X4CrNiMo16-5-1 supermartensitic stainless steel was followed in-situ during isochronal heating at 2, 6 and 18 K min−1 applying energy-dispersive synchrotron X-ray diffraction at the BESSY II facility. Austenitization occurred in two stages, separated...... that the austenitization kinetics is governed by Ni-diffusion and that slow transformation kinetics separating the two stages is caused by soft impingement in the martensite phase. Increasing the lath width in the kinetics model had a similar effect on the austenitization kinetics as increasing the heating-rate....

  18. One-stage and two-stage penile buccal mucosa urethroplasty

    Directory of Open Access Journals (Sweden)

    G. Barbagli

    2016-03-01

    Full Text Available The paper provides the reader with a detailed description of current techniques of one-stage and two-stage penile buccal mucosa urethroplasty. It also covers preoperative patient evaluation, paying attention to the use of diagnostic tools. One-stage penile urethroplasty using a buccal mucosa graft with the application of glue is first presented and discussed. Two-stage penile urethroplasty is then reported: a detailed description of first-stage urethroplasty according to the Johanson technique is given, followed by second-stage urethroplasty using a buccal mucosa graft and glue. Finally, the postoperative course and follow-up are addressed.

  19. Prediction of Protein Thermostability by an Efficient Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Jalal Rezaeenour

    2016-10-01

    significantly improves the accuracy of ELM in the prediction of thermostable enzymes. ELM tends to require more neurons in the hidden layer than conventional tuning-based learning algorithms. To overcome this, the proposed approach uses a GA that optimizes the structure and the parameters of the ELM. In summary, optimizing the ELM with a GA results in an efficient prediction method; numerical experiments showed that the approach yields excellent results.

  20. Numerical simulations for the coal/oxidant distribution effects between two-stages for multi opposite burners (MOB) gasifier

    International Nuclear Information System (INIS)

    Unar, Imran Nazir; Wang, Lijun; Pathan, Abdul Ghani; Mahar, Rasool Bux; Li, Rundong; Uqaili, M. Aslam

    2014-01-01

    Highlights: • We simulated a two-stage 3D entrained flow coal gasifier with multi-opposite burners. • Various reaction mechanisms were evaluated against experimental results. • The effects of coal and oxygen distribution between the two stages on gasifier performance were investigated. • The local coal-to-oxygen ratio affects the overall efficiency of the gasifier. - Abstract: A 3D CFD model of a two-stage entrained flow dry-feed coal gasifier with multi opposite burners (MOB) has been developed in this paper. At each stage two opposite nozzles are impinging, whereas the two other opposite nozzles are slightly tangential. Various numerical simulations were carried out in standard CFD software to investigate the impact of coal and oxidant distribution between the two stages of the gasifier. The chemical process was described by the Finite Rate/Eddy Dissipation model. Heterogeneous and homogeneous reactions were defined using published kinetic data, and the realizable k–ε turbulence model was used to solve the turbulence equations. Gas–solid interaction was defined in an Euler–Lagrangian framework. Different reaction mechanisms were first investigated to validate the model against published experimental results. Further investigations were then made with the validated model for important parameters such as species concentrations in the syngas, char conversion, maximum inside temperature and syngas exit temperature. The analysis of the results from the various simulated cases shows that the coal/oxidant distribution between the stages has a great influence on the overall performance of the gasifier. The maximum char conversion of 99.79% was found with 60% of the coal and 50% of the oxygen injected at the upper level. The minimum char conversion of 95.45% was observed with 30% coal and 40% oxygen at the same level. In general, injecting 50% or more of the total oxygen and coal at the upper injection level gave an optimized performance

  1. A simulation-based interval two-stage stochastic model for agricultural nonpoint source pollution control through land retirement

    International Nuclear Information System (INIS)

    Luo, B.; Li, J.B.; Huang, G.H.; Li, H.L.

    2006-01-01

    This study presents a simulation-based interval two-stage stochastic programming (SITSP) model for agricultural nonpoint source (NPS) pollution control through land retirement under uncertain conditions. The modeling framework was established by the development of an interval two-stage stochastic program, with its random parameters being provided by the statistical analysis of the simulation outcomes of a distributed water quality approach. The developed model can deal with the tradeoff between agricultural revenue and 'off-site' water quality concern under random effluent discharge for a land retirement scheme through minimizing the expected value of long-term total economic and environmental cost. In addition, the uncertainties presented as interval numbers in the agriculture-water system can be effectively quantified with the interval programming. By subdividing the whole agricultural watershed into different zones, the most pollution-sensitive cropland can be identified and an optimal land retirement scheme can be obtained through the modeling approach. The developed method was applied to the Swift Current Creek watershed in Canada for soil erosion control through land retirement. The Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate the sediment information for this case study. The obtained results indicate that the total economic and environmental cost of the entire agriculture-water system can be limited within an interval value for the optimal land retirement schemes. Meanwhile, the best and worst land retirement schemes were obtained for the study watershed under the various uncertainties

  2. Measuring highway efficiency: A DEA approach and the Malmquist index

    NARCIS (Netherlands)

    Sarmento, Joaquim Miranda; Renneboog, Luc; Verga-Matos, Pedro

    A growing concern exists regarding the efficiency of public resources spent in transport infrastructures. In this paper, we measure the efficiency of seven highway projects in Portugal over the past decade by means of a data envelopment analysis and the Malmquist productivity and efficiency indices.

  3. A two-stage biological gas to liquid transfer process to convert carbon dioxide into bioplastic

    KAUST Repository

    Al Rowaihi, Israa; Kick, Benjamin; Grö tzinger, Stefan W.; Burger, Christian; Karan, Ram; Weuster-Botz, Dirk; Eppinger, Jö rg; Arold, Stefan T.

    2018-01-01

    The fermentation of carbon dioxide (CO2) with hydrogen (H2) uses available low-cost gases to synthesize acetic acid. Here, we present a two-stage biological process that allows the gas-to-liquid transfer (Bio-GTL) of CO2 into the biopolymer

  4. Treatment of corn ethanol distillery wastewater using two-stage anaerobic digestion.

    Science.gov (United States)

    Ráduly, B; Gyenge, L; Szilveszter, Sz; Kedves, A; Crognale, S

    In this study the mesophilic two-stage anaerobic digestion (AD) of corn bioethanol distillery wastewater is investigated in laboratory-scale reactors. Two-stage AD technology separates the different sub-processes of the AD in two distinct reactors, enabling the use of optimal conditions for the different microbial consortia involved in the different process phases, and thus allowing for higher applicable organic loading rates (OLRs), shorter hydraulic retention times (HRTs) and better conversion rates of the organic matter, as well as higher methane content of the produced biogas. In our experiments the reactors have been operated in semi-continuous phase-separated mode. A specific methane production of 1,092 mL/(L·d) has been reached at an OLR of 6.5 g TCOD/(L·d) (TCOD: total chemical oxygen demand) and a total HRT of 21 days (5.7 days in the first-stage, and 15.3 days in the second-stage reactor). Although the methane concentration in the second-stage reactor was very high (78.9%), the two-stage AD outperformed the reference single-stage AD (conducted at the same reactor loading rate and retention time) by only a small margin in terms of volumetric methane production rate. This makes it questionable whether the higher methane content of the biogas counterbalances the added complexity of the two-stage digestion.

  5. On response time and cycle time distributions in a two-stage cyclic queue

    NARCIS (Netherlands)

    Boxma, O.J.; Donk, P.

    1982-01-01

    We consider a two-stage closed cyclic queueing model. For the case of an exponential server at each queue we derive the joint distribution of the successive response times of a customer at both queues, using a reversibility argument. This joint distribution turns out to have a product form. The

  6. Simultaneous versus sequential pharmacokinetic-pharmacodynamic population analysis using an iterative two-stage Bayesian technique

    NARCIS (Netherlands)

    Proost, Johannes H.; Schiere, Sjouke; Eleveld, Douglas J.; Wierda, J. Mark K. H.

    A method for simultaneous pharmacokinetic-pharmacodynamic (PK-PD) population analysis using an Iterative Two-Stage Bayesian (ITSB) algorithm was developed. The method was evaluated using clinical data and Monte Carlo simulations. Data from a clinical study with rocuronium in nine anesthetized

  7. One-stage and two-stage penile buccal mucosa urethroplasty

    African Journals Online (AJOL)

    G. Barbagli

    2015-12-02

    Dec 2, 2015 ... there also seems to be a trend of decreasing urethritis and an increase of instrumentation and catheter related strictures in these countries as well [4–6]. The repair of penile urethral strictures may require one- or two- stage urethroplasty [7–10]. Certainly, sexual function can be placed at risk by any surgery ...

  8. Numerical simulation of brain tumor growth model using two-stage ...

    African Journals Online (AJOL)

    In recent years, the study of glioma growth has been an active field of research. Mathematical models that describe the proliferation and diffusion properties of the growth have been developed by many researchers. In this work, the performance analysis of the two-stage Gauss-Seidel (TSGS) method to solve the glioma growth ...

  9. Two-stage bargaining with coverage extension in a dual labour market

    DEFF Research Database (Denmark)

    Roberts, Mark A.; Stæhr, Karsten; Tranæs, Torben

    2000-01-01

    This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining whereas the firms set wages in the non-union sector. In this model firms and unions of the union sector have a commonality of interest...

  10. Design and construction of a two-stage centrifugal pump | Nordiana ...

    African Journals Online (AJOL)

    Centrifugal pumps are widely used in moving liquids from one location to another in homes, offices and industries. Due to the ever-increasing demand for centrifugal pumps, it became necessary to design and construct a two-stage centrifugal pump. The pump consisted of an electric motor, a shaft, two rotating impellers ...

  11. Some design aspects of a two-stage rail-to-rail CMOS op amp

    NARCIS (Netherlands)

    Gierkink, Sander L.J.; Holzmann, Peter J.; Wiegerink, Remco J.; Wassenaar, R.F.

    1999-01-01

    A two-stage low-voltage CMOS op amp with rail-to-rail input and output voltage ranges is presented. The circuit uses complementary differential input pairs to achieve the rail-to-rail common-mode input voltage range. The differential pairs operate in strong inversion, and the constant

  12. Insufficient sensitivity of joint aspiration during the two-stage exchange of the hip with spacers.

    Science.gov (United States)

    Boelch, Sebastian Philipp; Weissenberger, Manuel; Spohn, Frederik; Rudert, Maximilian; Luedemann, Martin

    2018-01-10

    Evaluation of infection persistence during the two-stage exchange of the hip is challenging. Joint aspiration before reconstruction is supposed to rule out infection persistence. The sensitivity and specificity of synovial fluid culture and of the synovial leucocyte count for detecting infection persistence during the two-stage exchange of the hip were evaluated. Ninety-two aspirations before planned joint reconstruction during the two-stage exchange of the hip with spacers were retrospectively analyzed. The sensitivity and specificity of synovial fluid culture were 4.6% and 94.3%. The sensitivity and specificity of the synovial leucocyte count at a cut-off value of 2000 cells/μl were 25.0% and 96.9%. C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR) values were significantly higher before prosthesis removal and before reconstruction or spacer exchange (p = 0.00; p = 0.013 and p = 0.039; p = 0.002) in the infection persistence group. Receiver operating characteristic area under the curve values before prosthesis removal and before reconstruction or spacer exchange were lower for ESR (0.516 and 0.635) than for CRP (0.720 and 0.671). Synovial fluid culture and leucocyte count cannot rule out infection persistence during the two-stage exchange of the hip.

  13. The RTD measurement of two stage anaerobic digester using radiotracer in WWTP

    International Nuclear Information System (INIS)

    Jin-Seop, Kim; Jong-Bum, Kim; Sung-Hee, Jung

    2006-01-01

    The aims of this study are to assess the existence and location of stagnant zones by estimating the MRT (mean residence time) on the two-stage anaerobic digester, with the results to be used as an informative clue for its better operation

  14. A two-stage meta-analysis identifies several new loci for Parkinson's disease.

    NARCIS (Netherlands)

    Plagnol, V.; Nalls, M.A.; Bras, J.M.; Hernandez, D.; Sharma, M.; Sheerin, U.M.; Saad, M.; Simon-Sanchez, J.; Schulte, C.; Lesage, S.; Sveinbjornsdottir, S.; Amouyel, P.; Arepalli, S.; Band, G.; Barker, R.A.; Bellinguez, C.; Ben-Shlomo, Y.; Berendse, H.W.; Berg, D; Bhatia, K.P.; Bie, R.M. de; Biffi, A.; Bloem, B.R.; Bochdanovits, Z.; Bonin, M.; Brockmann, K.; Brooks, J.; Burn, D.J.; Charlesworth, G.; Chen, H.; Chinnery, P.F.; Chong, S.; Clarke, C.E.; Cookson, M.R.; Cooper, J.M.; Corvol, J.C.; Counsell, J.; Damier, P.; Dartigues, J.F.; Deloukas, P.; Deuschl, G.; Dexter, D.T.; Dijk, K.D. van; Dillman, A.; Durif, F.; Durr, A.; Edkins, S.; Evans, J.R.; Foltynie, T.; Freeman, C.; Gao, J.; Gardner, M.; Gibbs, J.R.; Goate, A.; Gray, E.; Guerreiro, R.; Gustafsson, O.; Harris, C.; Hellenthal, G.; Hilten, J.J. van; Hofman, A.; Hollenbeck, A.; Holton, J.L.; Hu, M.; Huang, X.; Huber, H; Hudson, G.; Hunt, S.E.; Huttenlocher, J.; Illig, T.; Jonsson, P.V.; Langford, C.; Lees, A.J.; Lichtner, P.; Limousin, P.; Lopez, G.; McNeill, A.; Moorby, C.; Moore, M.; Morris, H.A.; Morrison, K.E.; Mudanohwo, E.; O'Sullivan, S.S; Pearson, J.; Pearson, R.; Perlmutter, J.; Petursson, H.; Pirinen, M.; Polnak, P.; Post, B.; Potter, S.C.; Ravina, B.; Revesz, T.; Riess, O.; Rivadeneira, F.; Rizzu, P.; Ryten, M.; Sawcer, S.J.; Schapira, A.; Scheffer, H.; Shaw, K.; Shoulson, I.; Sidransky, E.; Silva, R. de; Smith, C.; Spencer, C.C.; Stefansson, H.; Steinberg, S.; Stockton, J.D.; Strange, A.; Su, Z.; Talbot, K.; Tanner, C.M.; Tashakkori-Ghanbaria, A.; Tison, F.; Trabzuni, D.; Traynor, B.J.; Uitterlinden, A.G.; Vandrovcova, J.; Velseboer, D.; Vidailhet, M.; Vukcevic, D.; Walker, R.; Warrenburg, B.P.C. van de; Weale, M.E.; Wickremaratchi, M.; Williams, N.; Williams-Gray, C.H.; Winder-Rhodes, S.; Stefansson, K.; Martinez, M.; Donnelly, P.; Singleton, A.B.; Hardy, J.; Heutink, P.; Brice, A.; Gasser, T.; Wood, N.W.

    2011-01-01

    A previous genome-wide association (GWA) meta-analysis of 12,386 PD cases and 21,026 controls conducted by the International Parkinson's Disease Genomics Consortium (IPDGC) discovered or confirmed 11 Parkinson's disease (PD) loci. This first analysis of the two-stage IPDGC study

  15. Two-Stage MAS Technique for Analysis of DRA Elements and Arrays on Finite Ground Planes

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2007-01-01

    A two-stage Method of Auxiliary Sources (MAS) technique is proposed for analysis of dielectric resonator antenna (DRA) elements and arrays on finite ground planes (FGPs). The problem is solved by first analysing the DRA on an infinite ground plane (IGP) and then using this solution to model the FGP...

  16. Wide-bandwidth bilateral control using two-stage actuator system

    International Nuclear Information System (INIS)

    Kokuryu, Saori; Izutsu, Masaki; Kamamichi, Norihiro; Ishikawa, Jun

    2015-01-01

    This paper proposes a two-stage actuator system that consists of a coarse actuator driven by a ball screw with an AC motor (the first stage) and a fine actuator driven by a voice coil motor (the second stage). The proposed two-stage actuator system is applied to make a wide-bandwidth bilateral control system without needing expensive high-performance actuators. In the proposed system, the first stage has a wide moving range with a narrow control bandwidth, and the second stage has a narrow moving range with a wide control bandwidth. By consolidating these two inexpensive actuators with different control bandwidths in a complementary manner, a wide-bandwidth bilateral control system can be constructed based on mechanical impedance control. To show the validity of the proposed method, a prototype of the two-stage actuator system was developed and its basic performance was evaluated by experiment. The experimental results showed that a light mechanical impedance, with a mass of 10 g and a damping coefficient of 2.5 N/(m/s), which is an important factor in establishing good transparency in bilateral control, was successfully achieved, and also showed that better force and position responses between the master and the slave are achieved by using the proposed two-stage actuator system compared with a narrow-bandwidth case using a single ball screw system. (author)

  17. A Two-Stage Meta-Analysis Identifies Several New Loci for Parkinson's Disease

    NARCIS (Netherlands)

    Plagnol, Vincent; Nalls, Michael A.; Bras, Jose M.; Hernandez, Dena G.; Sharma, Manu; Sheerin, Una-Marie; Saad, Mohamad; Simon-Sanchez, Javier; Schulte, Claudia; Lesage, Suzanne; Sveinbjornsdottir, Sigurlaug; Amouyel, Philippe; Arepalli, Sampath; Band, Gavin; Barker, Roger A.; Bellinguez, Celine; Ben-Shlomo, Yoav; Berendse, Henk W.; Berg, Daniela; Bhatia, Kailash; de Bie, Rob M. A.; Biffi, Alessandro; Bloem, Bas; Bochdanovits, Zoltan; Bonin, Michael; Brockmann, Kathrin; Brooks, Janet; Burn, David J.; Charlesworth, Gavin; Chen, Honglei; Chinnery, Patrick F.; Chong, Sean; Clarke, Carl E.; Cookson, Mark R.; Cooper, J. Mark; Corvol, Jean Christophe; Counsell, Carl; Damier, Philippe; Dartigues, Jean-Francois; Deloukas, Panos; Deuschl, Guenther; Dexter, David T.; van Dijk, Karin D.; Dillman, Allissa; Durif, Frank; Duerr, Alexandra; Edkins, Sarah; Evans, Jonathan R.; Foltynie, Thomas; Freeman, Colin; Gao, Jianjun; Gardner, Michelle; Gibbs, J. Raphael; Goate, Alison; Gray, Emma; Guerreiro, Rita; Gustafsson, Omar; Harris, Clare; Hellenthal, Garrett; van Hilten, Jacobus J.; Hofman, Albert; Hollenbeck, Albert; Holton, Janice; Hu, Michele; Huang, Xuemei; Huber, Heiko; Hudson, Gavin; Hunt, Sarah E.; Huttenlocher, Johanna; Illig, Thomas; Jonsson, Palmi V.; Langford, Cordelia; Lees, Andrew; Lichtner, Peter; Limousin, Patricia; Lopez, Grisel; Lorenz, Delia; McNeill, Alisdair; Moorby, Catriona; Moore, Matthew; Morris, Huw; Morrison, Karen E.; Mudanohwo, Ese; O'Sullivan, Sean S.; Pearson, Justin; Pearson, Richard; Perlmutter, Joel S.; Petursson, Hjoervar; Pirinen, Matti; Pollak, Pierre; Post, Bart; Potter, Simon; Ravina, Bernard; Revesz, Tamas; Riess, Olaf; Rivadeneira, Fernando; Rizzu, Patrizia; Ryten, Mina; Sawcer, Stephen; Schapira, Anthony; Scheffer, Hans; Shaw, Karen; Shoulson, Ira; Sidransky, Ellen; de Silva, Rohan; Smith, Colin; Spencer, Chris C. A.; Stefansson, Hreinn; Steinberg, Stacy; Stockton, Joanna D.; Strange, Amy; Su, Zhan; Talbot, Kevin; Tanner, Carlie M.; Tashakkori-Ghanbaria, Avazeh; Tison, Francois; Trabzuni, Daniah; Traynor, Bryan J.; Uitterlinden, Andre G.; Vandrovcova, Jana; Velseboer, Daan; Vidailhet, Marie; Vukcevic, Damjan; Walker, Robert; van de Warrenburg, Bart; Weale, Michael E.; Wickremaratchi, Mirdhu; Williams, Nigel; Williams-Gray, Caroline H.; Winder-Rhodes, Sophie; Stefansson, Kari; Martinez, Maria; Donnelly, Peter; Singleton, Andrew B.; Hardy, John; Heutink, Peter; Brice, Alexis; Gasser, Thomas; Wood, Nicholas W.

    2011-01-01

    A previous genome-wide association (GWA) meta-analysis of 12,386 PD cases and 21,026 controls conducted by the International Parkinson's Disease Genomics Consortium (IPDGC) discovered or confirmed 11 Parkinson's disease (PD) loci. This first analysis of the two-stage IPDGC study focused on the set

  18. On A Two-Stage Supply Chain Model In The Manufacturing Industry ...

    African Journals Online (AJOL)

    We model a two-stage supply chain where the upstream stage (stage 2) always meets demand from the downstream stage (stage 1). Demand is stochastic, hence shortages will occasionally occur at stage 2. Stage 2 must fill these shortages by expediting using overtime production and/or backordering. We derive optimal ...

  19. Research on the Power Recovery of Diesel Engines with Regulated Two-Stage Turbocharging System at Different Altitudes

    Directory of Open Access Journals (Sweden)

    Hualei Li

    2014-01-01

    Full Text Available Recovering the boost pressure is very important in improving the dynamic performance of diesel engines at high altitudes. A regulated two-stage turbocharging system is an adequate solution for power recovery of diesel engines. In the present study, the change of boost pressure and engine power at different altitudes was investigated, and a regulated two-stage turbocharging system was constructed with an original turbocharger and a matched low pressure turbocharger. The valve control strategies for boost pressure recovery, which formed the basis of the power recovery method, are presented here. The simulation results showed that this system was effective in recovering the boost pressure at different speeds and various altitudes. The turbine bypass valve and compressor bypass valve had different modes to adapt to changes in operating conditions. The boost pressure recovery could not ensure power recovery over the entire operating range of the diesel engine, because of variation in overall turbocharger efficiency. The fuel-injection compensation method along with the valve control strategies for boost pressure recovery was able to reach the power recovery target.

  20. Palm oil mill effluent treatment using a two-stage microbial fuel cells system integrated with immobilized biological aerated filters.

    Science.gov (United States)

    Cheng, Jia; Zhu, Xiuping; Ni, Jinren; Borthwick, Alistair

    2010-04-01

    An integrated system of two-stage microbial fuel cells (MFCs) and immobilized biological aerated filters (I-BAFs) was used to treat palm oil mill effluent (POME) at laboratory scale. By replacing the conventional two-stage up-flow anaerobic sludge blanket (UASB) with a newly proposed upflow membrane-less microbial fuel cell (UML-MFC) in the integrated system, significant improvements in NH3-N removal were observed and direct electricity generation was implemented in both MFC1 and MFC2. Moreover, the coupled iron-carbon micro-electrolysis in the cathode of MFC2 further enhanced the treatment efficiency for organic compounds. The I-BAFs played a major role in the further removal of NH3-N and COD. For influent COD and NH3-N of 10,000 and 125 mg/L, respectively, the final effluent COD and NH3-N were below 350 and 8 mg/L, with removal rates higher than 96.5% and 93.6%. The GC-MS analysis indicated that most of the contaminants were satisfactorily biodegraded by the integrated system. Copyright 2009 Elsevier Ltd. All rights reserved.

  1. Two Stage Anaerobic Reactor Design and Treatment To Produce Biogas From Mixed Liquor of Vegetable Waste

    Science.gov (United States)

    Budiastuti, H.; Ghozali, M.; Wicaksono, H. K.; Hadiansyah, R.

    2018-01-01

    Municipal solid waste has become a common challenge for developing countries, including Indonesia. The generation of municipal solid waste always exceeds its treatment, increasing the impact of environmental pollution. This research contributes an alternative solution for treating municipal solid waste to produce biogas. Vegetable waste was obtained from Gedebage Market, Bandung, and the starter used as a source of anaerobic microorganisms was cow dung obtained from a cow farm in Lembang. A two-stage anaerobic reactor was designed and built to treat the vegetable waste in a batch run. The capacity of each reactor is 20 liters, with an active volume of 15 liters. Reactor 1 (R1) was fed with a mixture of filtered blended vegetable waste and water at a ratio of 1:1, whereas Reactor 2 (R2) was filled with a filtered mixed liquor of cow dung and water at a ratio of 1:1. Both mixtures were left overnight before use. EM-4 was added to R1 at a concentration of 10%. The pH in R1 was maintained at 5 - 6.5, whereas the pH in R2 was maintained at 6.5 - 7.5. The reactor temperature was not controlled, in order to imitate the real environmental temperature. Parameters taken during the experiment were pH, temperature, COD, MLVSS, and biogas composition. The performance of the reactors was reflected in COD reduction efficiencies of about 60% in both R1 and R2, an average pH of 4.5 ± 1 in R1 and 7 ± 0.6 in R2, and an average temperature in both reactors of 25 ± 2°C. About 1 L of gas was obtained during the last 6 days of the experiment, in which the CH4 obtained was 8.951 ppm and the CO2 was 1.087 ppm. The maximum increase of MLVSS reached 156% in R1 and 89% in R2.

  2. Removal Natural Organic Matter (NOM in Peat Water from Wetland Area by Coagulation-Ultrafiltration Hybrid Process with Pretreatment Two-Stage Coagulation

    Directory of Open Access Journals (Sweden)

    Mahmud Mahmud

    2013-11-01

    Full Text Available The primary problem encountered in the application of membrane technology is membrane fouling. Until now, the coagulation-ultrafiltration hybrid process in drinking water treatment has been studied by several researchers using one-stage coagulation. The goal of this research was to investigate the effect of two-stage coagulation as a pretreatment on the performance of the coagulation-ultrafiltration hybrid process for the removal of NOM from peat water. The coagulation process, with either one-stage or two-stage coagulation, was very good at removing the charged hydrophilic fraction, i.e. more than 98%. The NOM fractions of the peat water, from the most to the least easily removed by the two-stage and one-stage coagulation processes, were charged hydrophilic > strongly hydrophobic > weakly hydrophobic > neutral hydrophilic. The two-stage coagulation process removed UV254 and color slightly better than one-stage coagulation at the optimum coagulant dose. The neutral hydrophilic fraction of the peat water NOM was the fraction with the greatest influence on UF membrane fouling. The two-stage coagulation process was better at removing the neutral hydrophilic fraction, while removal of the charged hydrophilic, strongly hydrophobic and weakly hydrophobic fractions was similar to one-stage coagulation. The hybrid process with two-stage coagulation pretreatment, besides increasing the removal efficiency of UV254 and color, also reduced the fouling rate of the ultrafiltration membrane.

  3. Removal Natural Organic Matter (NOM in Peat Water from Wetland Area by Coagulation-Ultrafiltration Hybrid Process with Pretreatment Two-Stage Coagulation

    Directory of Open Access Journals (Sweden)

    Mahmud Mahmud

    2016-06-01

    Full Text Available The primary problem encountered in the application of membrane technology is membrane fouling. Until now, the coagulation-ultrafiltration hybrid process in drinking water treatment has been studied by several researchers using one-stage coagulation. The goal of this research was to investigate the effect of two-stage coagulation as a pretreatment on the performance of the coagulation-ultrafiltration hybrid process for the removal of NOM from peat water. The coagulation process, with either one-stage or two-stage coagulation, was very good at removing the charged hydrophilic fraction, i.e. more than 98%. The NOM fractions of the peat water, from the most to the least easily removed by the two-stage and one-stage coagulation processes, were charged hydrophilic > strongly hydrophobic > weakly hydrophobic > neutral hydrophilic. The two-stage coagulation process removed UV254 and color slightly better than one-stage coagulation at the optimum coagulant dose. The neutral hydrophilic fraction of the peat water NOM was the fraction with the greatest influence on UF membrane fouling. The two-stage coagulation process was better at removing the neutral hydrophilic fraction, while removal of the charged hydrophilic, strongly hydrophobic and weakly hydrophobic fractions was similar to one-stage coagulation. The hybrid process with two-stage coagulation pretreatment, besides increasing the removal efficiency of UV254 and color, also reduced the fouling rate of the ultrafiltration membrane.

  4. Hospital efficiency and transaction costs: a stochastic frontier approach.

    Science.gov (United States)

    Ludwig, Martijn; Groot, Wim; Van Merode, Frits

    2009-07-01

    The make-or-buy decision of organizations is an important issue in the transaction cost theory, but is usually not analyzed from an efficiency perspective. Hospitals frequently have to decide whether to outsource or not. The main question we address is: Is the make-or-buy decision affected by the efficiency of hospitals? A one-stage stochastic cost frontier equation is estimated for Dutch hospitals. The make-or-buy decisions of ten different hospital services are used as explanatory variables to explain efficiency of hospitals. It is found that for most services the make-or-buy decision is not related to efficiency. Kitchen services are an important exception to this. Large hospitals tend to outsource less, which is supported by efficiency reasons. For most hospital services, outsourcing does not significantly affect the efficiency of hospitals. The focus on the make-or-buy decision may therefore be less important than often assumed.

  5. Efficiency analysis of Chinese industry: A directional distance function approach

    International Nuclear Information System (INIS)

    Watanabe, Michio; Tanaka, Katsuya

    2007-01-01

    Two efficiency measures of Chinese industry were estimated at the provincial level from 1994 to 2002, using a directional output distance function. One is a traditional efficiency measure that considers only desirable output, while the other considers both desirable and undesirable outputs simultaneously. A comparison of the two measures revealed that efficiency levels are biased only if desirable output is considered. Five coastal provinces/municipalities that have attracted a large amount of foreign direct investment are found to be the most efficient when only desirable output is considered, and also when both desirable and undesirable outputs are considered. However, omitting undesirable output tends to lead to an overestimate of industrial efficiency levels in Shandong, Sichuan, and Hebei provinces. We also found that a province's industrial structure has significant effects on its efficiency levels

  6. New approach for calibration the efficiency of HPGe detectors

    International Nuclear Information System (INIS)

    Alnour, I.A.; Wagiran, H.; Suhaimi Hamzah; Siong, W.B.; Mohd Suhaimi Elias

    2013-01-01

    Full-text: This work evaluates the efficiency calibration of HPGe detectors, a Canberra GC3018 coupled with Genie 2000 software and an Ortec GEM25-76-XLB-C coupled with Gamma Vision software, available at the Neutron Activation Analysis Laboratory of the Malaysian Nuclear Agency (NM). The efficiency calibration curve was constructed from measurements of an IAEA standard gamma point source set composed of 241Am, 57Co, 133Ba, 152Eu, 137Cs and 60Co. The efficiency calibrations were performed for three different geometries: 5, 10 and 15 cm distances from the detector end cap. The polynomial parameter functions were fitted with a computer program, MATLAB, in order to find an accurate fit to the experimental data points. The efficiency equation was established from the fitted parameters, which allows the efficiency to be evaluated at any particular energy of interest. The study shows significant deviations in the efficiency, depending on the source-detector distance and photon energy. (author)
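
    The abstract does not give the fitted polynomial or the calibration data, but a common way to build such an efficiency curve is to fit a low-order polynomial to ln(efficiency) versus ln(energy). The sketch below (in Python rather than MATLAB) uses hypothetical efficiency points and an assumed third-order fit purely for illustration.

    ```python
    import numpy as np

    # Hypothetical calibration points (energy in keV, full-energy-peak efficiency)
    # measured at one source-detector distance; real values would come from the
    # standard point sources (241Am, 57Co, 133Ba, 152Eu, 137Cs, 60Co).
    energy = np.array([59.5, 122.1, 356.0, 661.7, 1173.2, 1332.5])
    eff    = np.array([0.055, 0.060, 0.032, 0.019, 0.012, 0.011])

    # Fit ln(eff) as a polynomial in ln(E) -- a widely used parametrisation.
    order = 3
    coeffs = np.polyfit(np.log(energy), np.log(eff), order)

    def efficiency(e_kev):
        """Interpolated full-energy-peak efficiency at energy e_kev."""
        return np.exp(np.polyval(coeffs, np.log(e_kev)))

    print(f"efficiency at 1000 keV ~ {efficiency(1000.0):.4f}")
    ```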

  7. Two-Stage Method Based on Local Polynomial Fitting for a Linear Heteroscedastic Regression Model and Its Application in Economics

    Directory of Open Access Journals (Sweden)

    Liyun Su

    2012-01-01

    Full Text Available We introduce an extension of local polynomial fitting to the linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the nonparametric technique of local polynomial estimation, we do not need to know the heteroscedastic function, and we can therefore improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we focus on the comparison of parameters and reach an optimal fit. Besides, we verify the asymptotic normality of the parameters based on numerical simulations. Finally, the approach is applied to a case in economics, which indicates that our method is effective in finite-sample situations.
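
    A minimal sketch of the two-stage idea, under stated assumptions: an ordinary least squares fit is followed by a nonparametric estimate of the variance function from the squared residuals, which then feeds a weighted (generalized) least squares fit. A simple kernel smoother stands in for the paper's local polynomial estimator, and the simulated data and bandwidth are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated heteroscedastic data: y = 1 + 2x + eps, Var(eps) grows with x.
    n = 200
    x = rng.uniform(0, 1, n)
    y = 1.0 + 2.0 * x + rng.normal(0, 0.2 + 0.8 * x, n)
    X = np.column_stack([np.ones(n), x])

    # Stage 1: ordinary least squares, then a nonparametric (kernel) estimate of
    # the variance function from the squared residuals.
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid2 = (y - X @ beta_ols) ** 2
    h = 0.1  # assumed bandwidth
    kernel = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    var_hat = (kernel @ resid2) / kernel.sum(axis=1)

    # Stage 2: generalized (weighted) least squares with the estimated variances.
    w = 1.0 / np.sqrt(var_hat)
    beta_gls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)

    print("OLS :", beta_ols)
    print("GLS :", beta_gls)
    ```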

  8. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2017-01-01

    Full Text Available We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we convert it into an equivalent one-stage stochastic model; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can obtain the global solution and avoid introducing new variables and constraints. Meanwhile, we investigate the convergence of the optimal value of the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
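
    The following is a toy illustration of the sample average approximation step described above, not the paper's supply chain model: a single first-stage decision is chosen by minimizing the sample mean of first-stage plus recourse cost over demand scenarios. The cost parameters, demand distribution, and grid search are assumptions, and the SSD constraint and smoothing step are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy two-stage problem: choose production q now (first stage); after demand
    # d is revealed, unmet demand is bought at a premium and surplus is held.
    c_prod, c_short, c_hold = 1.0, 4.0, 0.5

    # Sample average approximation: draw N demand scenarios and minimise the
    # sample mean of total cost over a grid of candidate first-stage decisions.
    N = 5000
    demand = rng.lognormal(mean=3.0, sigma=0.4, size=N)

    def saa_cost(q):
        recourse = c_short * np.maximum(demand - q, 0) + c_hold * np.maximum(q - demand, 0)
        return c_prod * q + recourse.mean()

    grid = np.linspace(0, 60, 601)
    costs = np.array([saa_cost(q) for q in grid])
    q_star = grid[costs.argmin()]
    print(f"SAA first-stage decision ~ {q_star:.2f}, cost ~ {costs.min():.2f}")
    ```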

  9. Exploitation of algal-bacterial associations in a two-stage biohydrogen and biogas generation process.

    Science.gov (United States)

    Wirth, Roland; Lakatos, Gergely; Maróti, Gergely; Bagi, Zoltán; Minárovics, János; Nagy, Katalin; Kondorosi, Éva; Rákhely, Gábor; Kovács, Kornél L

    2015-01-01

    The growing concern regarding the use of agricultural land for the production of biomass for food/feed or energy is dictating the search for alternative biomass sources. Photosynthetic microorganisms grown on marginal or deserted land present a promising alternative to the cultivation of energy plants and thereby may dampen the 'food or fuel' dispute. Microalgae offer diverse utilization routes. A two-stage energetic utilization, using a natural mixed population of algae (Chlamydomonas sp. and Scenedesmus sp.) and mutualistic bacteria (primarily Rhizobium sp.), was tested for coupled biohydrogen and biogas production. The microalgal-bacterial biomass generated hydrogen without sulfur deprivation. Algal hydrogen production in the mixed population started earlier but lasted for a shorter period relative to the benchmark approach. The residual biomass after hydrogen production was used for biogas generation and was compared with the biogas production from maize silage. The gas evolved from the microbial biomass was enriched in methane, but the specific gas production was lower than that of maize silage. Sustainable biogas production from the microbial biomass proceeded without noticeable difficulties in continuously stirred fed-batch laboratory-size reactors for an extended period of time. Co-fermentation of the microbial biomass and maize silage improved the biogas production: The metagenomic results indicated that pronounced changes took place in the domain Bacteria, primarily due to the introduction of a considerable bacterial biomass into the system with the substrate; this effect was partially compensated in the case of co-fermentation. The bacteria living in syntrophy with the algae apparently persisted in the anaerobic reactor and predominated in the bacterial population. The Archaea community remained virtually unaffected by the changes in the substrate biomass composition. Through elimination of cost- and labor-demanding sulfur deprivation, sustainable

  10. An inexact two-stage stochastic robust programming for residential micro-grid management-based on random demand

    International Nuclear Information System (INIS)

    Ji, L.; Niu, D.X.; Huang, G.H.

    2014-01-01

    In this paper a stochastic robust optimization problem of residential micro-grid energy management is presented. Combined cooling, heating and electricity technology (CCHP) is introduced to satisfy various energy demands. Two-stage programming is utilized to find the optimal installed capacity investment and operation control of CCHP (combined cooling, heating and power). Moreover, interval programming and robust stochastic optimization methods are exploited to obtain interval robust solutions under different robustness levels which are feasible for uncertain data. The obtained results can help micro-grid managers minimize the investment and operation cost with lower system failure risk when facing a fluctuant energy market and uncertain technology parameters. The different robustness levels reflect the risk preference of the micro-grid manager. The proposed approach is applied to residential area energy management in North China. Detailed computational results under different robustness levels are presented and analyzed for providing investment decisions and operation strategies. - Highlights: • An inexact two-stage stochastic robust programming model for CCHP management. • The energy market and technical parameter uncertainties were considered. • Investment decision, operation cost, and system safety were analyzed. • Uncertainties expressed as discrete intervals and probability distributions

  11. Productive efficiency of tea industry: A stochastic frontier approach ...

    African Journals Online (AJOL)

    In an economy where recourses are scarce and opportunities for a new technology are lacking, studies will be able to show the possibility of raising productivity by improving the industry's efficiency. This study attempts to measure the status of technical efficiency of tea-producing industry for panel data in Bangladesh using ...

  12. Falling Leaves Inspired ZnO Nanorods-Nanoslices Hierarchical Structure for Implant Surface Modification with Two Stage Releasing Features.

    Science.gov (United States)

    Liao, Hang; Miao, Xinxin; Ye, Jing; Wu, Tianlong; Deng, Zhongbo; Li, Chen; Jia, Jingyu; Cheng, Xigao; Wang, Xiaolei

    2017-04-19

    Inspired by falling leaves, a ZnO nanorods-nanoslices hierarchical structure (NHS) was constructed to modify the surfaces of two widely used implant materials, titanium (Ti) and tantalum (Ta). In this way, a two-stage release of antibacterial active substances was realized to address the clinical importance of long-term broad-spectrum antibacterial activity. At the early stage (within 48 h), the NHS exhibited rapid release to kill the bacteria around the implant immediately. At the second stage (over 2 weeks), the NHS exhibited slow release to realize long-term inhibition. The excellent antibacterial activity of the ZnO NHS was confirmed once again by an in vivo animal test. According to the subsequent experiments, the ZnO NHS coating exhibited the advantages of high efficiency, low toxicity, and long-term durability, which could make it a feasible way to prevent the abuse of antibiotics in implant-related surgery.

  13. A Two-Stage Robust Optimization for Centralized-Optimal Dispatch of Photovoltaic Inverters in Active Distribution Networks

    DEFF Research Database (Denmark)

    Ding, Tao; Li, Cheng; Yang, Yongheng

    2017-01-01

    Optimally dispatching Photovoltaic (PV) inverters is an efficient way to avoid overvoltage in active distribution networks, which may occur when PV generation exceeds load demand. Typically, the dispatching optimization objective is to identify critical PV inverters that have the most...... nature of solar PV energy may affect the selection of the critical PV inverters and also the final optimal objective value. In order to address this issue, a two-stage robust optimization model is proposed in this paper to achieve a robust optimal solution to the PV inverter dispatch, which can hedge...... against any possible realization within the uncertain PV outputs. In addition, the conic relaxation-based branch flow formulation and a second-order cone programming based column-and-constraint generation algorithm are employed to deal with the proposed robust optimization model. Case studies on a 33-bus...

  14. Production of long chain alkyl esters from carbon dioxide and electricity by a two-stage bacterial process.

    Science.gov (United States)

    Lehtinen, Tapio; Efimova, Elena; Tremblay, Pier-Luc; Santala, Suvi; Zhang, Tian; Santala, Ville

    2017-11-01

    Microbial electrosynthesis (MES) is a promising technology for the reduction of carbon dioxide into value-added multicarbon molecules. In order to broaden the product profile of MES processes, we developed a two-stage process for microbial conversion of carbon dioxide and electricity into long chain alkyl esters. In the first stage, the carbon dioxide is reduced to organic compounds, mainly acetate, in a MES process by Sporomusa ovata. In the second stage, the liquid end-products of the MES process are converted to the final product by a second microorganism, Acinetobacter baylyi in an aerobic bioprocess. In this proof-of-principle study, we demonstrate for the first time the bacterial production of long alkyl esters (wax esters) from carbon dioxide and electricity as the sole sources of carbon and energy. The process holds potential for the efficient production of carbon-neutral chemicals or biofuels. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Biogas Upgrading via Hydrogenotrophic Methanogenesis in Two-Stage Continuous Stirred Tank Reactors at Mesophilic and Thermophilic Conditions

    DEFF Research Database (Denmark)

    Bassani, Ilaria; Kougias, Panagiotis; Treu, Laura

    2015-01-01

    This study proposes an innovative setup composed by two stage reactors to achieve biogas upgrading coupling the CO2 in the biogas with external H2 and subsequent conversion into CH4 by hydrogenotrophic methanogenesis. In this configuration, the biogas produced in the first reactor was transferred...... production and CO2 conversion was recorded. The consequent increase of pH did not inhibit the process indicating adaptation of microorganisms to higher pH levels. The effects of H2 on the microbial community were studied using high-throughput Illumina random sequences and full-length 16S rRNA genes extracted...... to the second one, where H2 was injected. This configuration was tested at both mesophilic and thermophilic conditions. After H2 addition, the produced biogas was upgraded to average CH4 content of 89% in the mesophilic reactor and 85% in the thermophilic. At thermophilic conditions, a higher efficiency of CH4...

  16. A CFD Analysis of Steam Flow in the Two-Stage Experimental Impulse Turbine with the Drum Rotor Arrangement

    Directory of Open Access Journals (Sweden)

    Yun Kukchol

    2016-01-01

    Full Text Available The aim of the paper is to present the CFD analysis of the steam flow in the two-stage turbine with a drum rotor and balancing slots. The balancing slot is a part of every rotor blade and it can be used in the same way as balancing holes on the classical rotor disc. The main attention is focused on the explanation of the experimental knowledge about the impact of the slot covering and uncovering on the efficiency of the individual stages and the entire turbine. The pressure and temperature fields and the mass steam flows through the shaft seals, slots and blade cascades are calculated. The impact of the balancing slots covering or uncovering on the reaction and velocity conditions in the stages is evaluated according to the pressure and temperature fields. We have also concentrated on the analysis of the seal steam flow through the balancing slots. The optimized design of the balancing slots has been suggested.

  17. Minimizing makespan in a two-stage flow shop with parallel batch-processing machines and re-entrant jobs

    Science.gov (United States)

    Huang, J. D.; Liu, J. J.; Chen, Q. X.; Mao, N.

    2017-06-01

    Against a background of heat-treatment operations in mould manufacturing, a two-stage flow-shop scheduling problem is described for minimizing makespan with parallel batch-processing machines and re-entrant jobs. The weights and release dates of jobs are non-identical, but job processing times are equal. A mixed-integer linear programming model is developed and tested with small-scale scenarios. Given that the problem is NP-hard, three heuristic construction methods with polynomial complexity are proposed. The worst case of the new constructive heuristic is analysed in detail. A method for computing lower bounds is proposed to test heuristic performance. Heuristic efficiency is tested with sets of scenarios. Compared with the two improved heuristics, the performance of the new constructive heuristic is superior.
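
    As a rough illustration of scheduling on parallel batch-processing machines in a two-stage flow shop, the sketch below greedily batches jobs in ready-time order and assigns each batch to the earliest-available machine at each stage. It is a generic construction under assumed data (capacities, machine counts, processing times), not one of the three heuristics proposed in the paper, and the re-entrant and weighting aspects are omitted.

    ```python
    import heapq

    # Hypothetical instance: release dates per job; equal processing times per
    # stage; parallel batch machines of capacity B at each stage.
    releases = [0, 0, 1, 2, 2, 3, 5, 5]
    B = 3                  # assumed batch capacity
    p1, p2 = 4.0, 3.0      # assumed stage processing times
    m1, m2 = 2, 2          # assumed machine counts per stage

    def schedule_stage(ready_times, p, machines, cap):
        """Greedy batching: group jobs in ready-time order, start each batch on
        the earliest-available machine once all its jobs are ready.  Only the
        multiset of completion times (enough for the makespan) is returned."""
        avail = [0.0] * machines
        heapq.heapify(avail)
        completions = []
        jobs = sorted(ready_times)
        for i in range(0, len(jobs), cap):
            batch = jobs[i:i + cap]
            start = max(heapq.heappop(avail), max(batch))
            finish = start + p
            heapq.heappush(avail, finish)
            completions += [finish] * len(batch)
        return completions

    stage1_done = schedule_stage(releases, p1, m1, B)
    stage2_done = schedule_stage(stage1_done, p2, m2, B)
    print("makespan:", max(stage2_done))
    ```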

  18. A Two-stage Improvement Method for Robot Based 3D Surface Scanning

    Science.gov (United States)

    He, F. B.; Liang, Y. D.; Wang, R. F.; Lin, Y. S.

    2018-03-01

    Since the surface of an unknown object is difficult to measure or recognize precisely, 3D laser scanning technology was introduced and is widely used in surface reconstruction. Usually, slower surface scanning gives better scanning quality, while faster scanning gives worse quality. Against this background, the paper presents a new two-stage scanning method that pursues high surface scanning quality at a higher speed. The first stage is a rough scan to obtain general point cloud data of the object's surface; the second stage is a specific scan to repair the missing regions, which are determined by a chord length discretization method. Meanwhile, a system containing a robotic manipulator and a handy scanner was developed to implement the two-stage scanning method, and the relevant paths were planned according to minimum enclosing ball and regional coverage theories.
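
    A minimal sketch of the gap-detection idea mentioned above: missing regions in an ordered scan profile can be flagged where the chord length between consecutive points exceeds a threshold. The sample points and the threshold rule are assumptions, not the paper's chord length discretization method.

    ```python
    import numpy as np

    # Hypothetical ordered profile points from the rough scan (x, y); a gap in
    # the point cloud shows up as an unusually long chord between neighbours.
    points = np.array([[0.0, 0.0], [1.0, 0.1], [2.1, 0.0], [5.0, 0.2],
                       [6.0, 0.1], [7.1, 0.0]])

    chords = np.linalg.norm(np.diff(points, axis=0), axis=1)
    threshold = 2.0 * np.median(chords)   # assumed gap criterion

    gaps = np.where(chords > threshold)[0]
    for i in gaps:
        print("rescan between", points[i], "and", points[i + 1])
    ```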

  19. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool for determining the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.

  20. Target tracking system based on preliminary and precise two-stage compound cameras

    Science.gov (United States)

    Shen, Yiyan; Hu, Ruolan; She, Jun; Luo, Yiming; Zhou, Jie

    2018-02-01

    Early target detection and high tracking precision are two important performance indicators that need to be balanced in an actual target search and tracking system. This paper proposes a target tracking system that compounds a preliminary and a precise stage. The system uses a large field of view to perform the target search; after the target is found and confirmed, it switches to a small field of view for target tracking. In this system, an appropriate field-switching strategy is the key to achieving tracking. At the same time, two groups of PID parameters are added to the system to reduce tracking error. This preliminary-and-precise two-stage compound approach can extend the search scope of the target and improve the target tracking accuracy, and the method has practical value.

  1. Influence of capacity- and time-constrained intermediate storage in two-stage food production systems

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter; Gaalman, Gerard

    2007-01-01

    In food processing, two-stage production systems with a batch processor in the first stage and packaging lines in the second stage are common and mostly separated by capacity- and time-constrained intermediate storage. This combination of constraints is common in practice, but the literature hardly...... of systems like this. Contrary to the common sense in operations management, the LPT rule is able to maximize the total production volume per day. Furthermore, we show that adding one tank has considerable effects. Finally, we conclude that the optimal setup frequency for batches in the first stage...... pays any attention to this. In this paper, we show how various capacity and time constraints influence the performance of a specific two-stage system. We study the effects of several basic scheduling and sequencing rules in the presence of these constraints in order to learn the characteristics...

  2. A simple two stage optimization algorithm for constrained power economic dispatch

    International Nuclear Information System (INIS)

    Huang, G.; Song, K.

    1994-01-01

    A simple two-stage optimization algorithm is proposed and investigated for fast computation of constrained power economic dispatch control problems. The method is a simple demonstration of the hierarchical aggregation-disaggregation (HAD) concept. The algorithm first solves an aggregated problem to obtain an initial solution. This aggregated problem turns out to be the classical economic dispatch formulation, and it can be solved in 1% of the overall computation time. In the second stage, a linear programming method finds the optimal solution which satisfies power balance constraints, generation and transmission inequality constraints and security constraints. Implementation of the algorithm for IEEE systems and EPRI Scenario systems shows that the two-stage method obtains an average speedup ratio of 10.64 compared to the classical LP-based method
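
    The aggregated first-stage problem mentioned above is the classical economic dispatch, which can be illustrated with an equal-incremental-cost (lambda iteration) sketch for quadratic cost curves. The generator data below are assumed, and the second-stage network, generation and security constraints handled by the paper's LP step are omitted.

    ```python
    # Classical economic dispatch by lambda iteration (bisection on the common
    # incremental cost).  Generator cost: a + b*P + c*P^2, with output limits.
    gens = [  # (b, c, Pmin, Pmax) -- hypothetical units
        (2.0, 0.010, 50.0, 300.0),
        (1.8, 0.012, 40.0, 250.0),
        (2.2, 0.008, 30.0, 200.0),
    ]
    demand = 500.0

    def dispatch(lmbda):
        # At incremental cost lmbda, each unit produces P = (lmbda - b) / (2c),
        # clipped to its limits.
        return [min(max((lmbda - b) / (2.0 * c), pmin), pmax)
                for b, c, pmin, pmax in gens]

    lo, hi = 0.0, 20.0
    for _ in range(60):                      # bisection on lambda
        mid = 0.5 * (lo + hi)
        if sum(dispatch(mid)) < demand:
            lo = mid
        else:
            hi = mid

    print("lambda ~", round(mid, 4), "dispatch:", [round(p, 1) for p in dispatch(mid)])
    ```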

  3. Two-stage combustion for reducing pollutant emissions from gas turbine combustors

    Science.gov (United States)

    Clayton, R. M.; Lewis, D. H.

    1981-01-01

    Combustion and emission results are presented for a premix combustor fueled with admixtures of JP5 with neat H2 and of JP5 with simulated partial-oxidation product gas. The combustor was operated with inlet-air state conditions typical of cruise power for high performance aviation engines. Ultralow NOx, CO and HC emissions and extended lean burning limits were achieved simultaneously. Laboratory scale studies of the non-catalyzed rich-burning characteristics of several paraffin-series hydrocarbon fuels and of JP5 showed sooting limits at equivalence ratios of about 2.0 and that in order to achieve very rich sootless burning it is necessary to premix the reactants thoroughly and to use high levels of air preheat. The application of two-stage combustion for the reduction of fuel NOx was reviewed. An experimental combustor designed and constructed for two-stage combustion experiments is described.

  4. A Sensorless Power Reserve Control Strategy for Two-Stage Grid-Connected PV Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2017-01-01

    Due to the still increasing penetration of grid-connected Photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A power reserve control, where namely the active power from the PV panels is reserved during operation, is required for grid...... support. In this paper, a cost-effective solution to realize the power reserve for two-stage grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Tracking (MPPT) control to estimate the available PV power and a Constant Power Generation (CPG) control...... performed on a 3-kW two-stage single-phase grid-connected PV system, where the power reserve control is achieved upon demands....

  5. A two staged condensation of vapors of an isobutane tower in installations for sulfuric acid alkylation

    Energy Technology Data Exchange (ETDEWEB)

    Smirnov, N.P.; Feyzkhanov, R.I.; Idrisov, A.D.; Navalikhin, P.G.; Sakharov, V.D.

    1983-01-01

    In order to increase the concentration of isobutane to greater than 72 to 76 percent in an installation for sulfuric acid alkylation, a system of two staged condensation of vapors from an isobutane tower is placed into operation. The first stage condenses the heavier part of the upper distillate of the tower, which is achieved through somewhat of an increase in the condensate temperature. The product which is condensed in the first stage is completely returned to the tower as a live irrigation. The vapors of the isobutane fraction which did not condense in the first stage are sent to two newly installed condensers, from which the product after condensation passes through intermediate tanks to further depropanization. The two staged condensation of vapors of the isobutane tower reduces the content of the inert diluents, the propane and n-butane in the upper distillate of the isobutane tower and creates more favorable conditions for the operation of the isobutane and propane tower.

  6. Optimising the refrigeration cycle with a two-stage centrifugal compressor and a flash intercooler

    Energy Technology Data Exchange (ETDEWEB)

    Roeyttae, Pekka; Turunen-Saaresti, Teemu; Honkatukia, Juha [Lappeenranta University of Technology, Laboratory of Energy and Environmental Technology, PO Box 20, 53851 Lappeenranta (Finland)

    2009-09-15

    The optimisation of a refrigeration process with a two-stage centrifugal compressor and flash intercooler is presented in this paper. The two-stage centrifugal compressor stages are on the same shaft and the electric motor is cooled with the refrigerant. The performance of the centrifugal compressor is evaluated based on semi-empirical specific-speed curves and the effect of the Reynolds number, surface roughness and tip clearance have also been taken into account. The thermodynamic and transport properties of the working fluids are modelled with a real-gas model. The condensing and evaporation temperatures, the temperature after the flash intercooler, and cooling power have been chosen as fixed values in the process. The aim is to gain a maximum coefficient of performance (COP). The method of optimisation, the operation of the compressor and flash intercooler, and the method for estimating the electric motor cooling are also discussed in the article. (author)

  7. Development of a low-temperature two-stage fluidized bed incinerator for controlling heavy-metal emission in flue gases

    International Nuclear Information System (INIS)

    Peng, Tzu-Huan; Lin, Chiou-Liang; Wey, Ming-Yen

    2014-01-01

    This study develops a low-temperature two-stage fluidized bed system for treating municipal solid waste. This new system can decrease the emission of heavy metals, has low construction costs, and can save energy owing to its lower operating temperature. To confirm the treatment efficiency of this system, the combustion efficiency and heavy-metal emission were determined. An artificial waste containing heavy metals (chromium, lead, and cadmium) was used in this study. The tested parameters included first-stage temperature and system gas velocity. Results obtained using a thermogravimetric analyzer with a differential scanning calorimeter indicated that the first-stage temperature should be controlled to at least 400 °C. Although a large amount of carbon monoxide was emitted after the first stage, it was efficiently consumed in the second. Loss-on-ignition values of the ash residues were between 0.005% and 0.166%, and they exhibited a negative correlation with temperature and gas velocity. Furthermore, the emission concentration of heavy metals in the two-stage system was lower than that of the traditional one-stage fluidized bed system. The heavy-metal emissions can be decreased by between 16% and 82% using the low-temperature operating process, silica sand adsorption, and the filtration of the secondary stage. -- Graphical abstract: Heavy-metal emission concentrations in flue gases under different temperatures and gas velocities (dashed line: average of the heavy-metal emission in flue gases in the one-stage fluidized-bed incinerator). Highlights: • A low-temperature two-stage system is developed to control heavy metals. • Different first-stage temperatures affect the combustion efficiency. • Surplus CO was destroyed efficiently by the secondary fluidized bed combustor. • Metal emission in the two-stage system is lower than in the traditional system. • Temperature, bed adsorption, and filtration are the main control mechanisms

  8. Multiple heavy metals extraction and recovery from hazardous electroplating sludge waste via ultrasonically enhanced two-stage acid leaching.

    Science.gov (United States)

    Li, Chuncheng; Xie, Fengchun; Ma, Yang; Cai, Tingting; Li, Haiying; Huang, Zhiyuan; Yuan, Gaoqing

    2010-06-15

    An ultrasonically enhanced two-stage acid leaching process for extracting and recovering multiple heavy metals from actual electroplating sludge was studied in lab tests. It provided an effective technique for the separation of valuable metals (Cu, Ni and Zn) from less valuable metals (Fe and Cr) in electroplating sludge. The efficiency of the process was measured with the leaching efficiencies and recovery rates of the metals. Enhanced by ultrasonic power, the first-stage acid leaching demonstrated leaching rates of 96.72%, 97.77%, 98.00%, 53.03%, and 0.44% for Cu, Ni, Zn, Cr, and Fe respectively, effectively separating half of the Cr and almost all of the Fe from the mixed metals. The subsequent second-stage leaching achieved leaching rates of 75.03%, 81.05%, 81.39%, 1.02%, and 0% for Cu, Ni, Zn, Cr, and Fe, which further separated Cu, Ni, and Zn from the mixed metals. With the stabilized two-stage ultrasonically enhanced leaching, the resulting overall recovery rates of Cu, Ni, Zn, Cr and Fe from electroplating sludge could be achieved at 97.42%, 98.46%, 98.63%, 98.32% and 100% respectively, with Cr and Fe in solids and the rest of the metals in an aqueous solution discharged from the leaching system. The process performance parameters studied were pH, ultrasonic power, and contact time. The results were also confirmed in an industrial pilot-scale test, and the same high metal recoveries were achieved. Copyright 2010 Elsevier B.V. All rights reserved.

  9. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    OpenAIRE

    He, Xinhua; Hu, Wenfa

    2017-01-01

    Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems and uses a two-stage single-counter queue method M/M/1→M/D/1 to model the urban drainage system. The model emphasizes randomness of extreme rainstorms, fuzziness of the draining process, and construction and operation cost of the drainage system. Its two objectives are total c...
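
    As a generic illustration of the queueing part of such a model, the sketch below computes steady-state mean waits for a tandem of an M/M/1 station followed by an M/D/1 station, using the textbook M/M/1 result and the Pollaczek-Khinchine formula. The arrival and service rates are hypothetical, and the paper's fuzziness and cost terms are not modelled here.

```python
# Steady-state mean waiting times for a two-stage M/M/1 -> M/D/1 tandem.
# Departures from a stable M/M/1 queue are Poisson (Burke's theorem), so a
# Poisson input to the second, deterministic-service station is consistent.

lam = 0.8          # arrival rate of storm events per unit time (hypothetical)
mu1 = 1.2          # stage-1 service rate (exponential service)
mu2 = 1.0          # stage-2 service rate (deterministic service)

rho1, rho2 = lam / mu1, lam / mu2
assert rho1 < 1 and rho2 < 1, "both stations must be stable"

wq1 = rho1 / (mu1 - lam)               # M/M/1 mean wait in queue
wq2 = rho2 / (2 * mu2 * (1 - rho2))    # M/D/1 mean wait in queue (P-K formula)

w_total = wq1 + 1 / mu1 + wq2 + 1 / mu2   # mean time in the whole tandem
print(f"Wq1={wq1:.3f}, Wq2={wq2:.3f}, total sojourn={w_total:.3f}")
```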

  10. Generation of dense, pulsed beams of refractory metal atoms using two-stage laser ablation

    International Nuclear Information System (INIS)

    Kadar-Kallen, M.A.; Bonin, K.D.

    1994-01-01

    We report a technique for generating a dense, pulsed beam of refractory metal atoms using two-stage laser ablation. An atomic beam of uranium was produced with a peak ground-state number density of 1×10^12 cm^-3 at a distance of z = 27 cm from the source. This density can be scaled as 1/z^3 to estimate the density at other distances which are also far from the source
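
    A small sketch of that scaling rule, using the density and reference distance quoted in the abstract:

```python
# Scale the quoted peak ground-state density with source distance using the
# 1/z^3 far-field rule given in the abstract (valid only far from the source).
n_ref = 1e12   # cm^-3, measured at z_ref
z_ref = 27.0   # cm

def density(z_cm: float) -> float:
    """Estimated on-axis number density (cm^-3) at distance z_cm."""
    return n_ref * (z_ref / z_cm) ** 3

for z in (27.0, 40.0, 60.0):
    print(f"z = {z:4.0f} cm  ->  n ~ {density(z):.2e} cm^-3")
```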

  11. Two-stage hepatectomy: who will not jump over the second hurdle?

    Science.gov (United States)

    Turrini, O; Ewald, J; Viret, F; Sarran, A; Goncalves, A; Delpero, J-R

    2012-03-01

    Two-stage hepatectomy uses compensatory liver regeneration after a first noncurative hepatectomy to enable a second curative resection in patients with bilobar colorectal liver metastasis (CLM). The aim was to determine the predictive factors of failure of two-stage hepatectomy. Between 2000 and 2010, 48 patients with irresectable CLM were eligible for two-stage hepatectomy. The planned strategy was a) cleaning of the left hepatic lobe (first hepatectomy), b) right portal vein embolisation and c) right hepatectomy (second hepatectomy). Six patients had occult CLM (n = 5) or extra-hepatic disease (n = 1), which was discovered during the first hepatectomy. Thus, 42 patients completed the first hepatectomy and underwent portal vein embolisation in order to receive the second hepatectomy. Eight patients did not undergo a second hepatectomy due to disease progression. Upon univariate analysis, two factors were identified that precluded patients from having the second hepatectomy: the combined resection of a primary tumour during the first hepatectomy (p = 0.01) and administration of chemotherapy between the two hepatectomies (p = 0.03). Multivariate analysis demonstrated an independent association with failure to complete the two-stage strategy only for the combined resection of the primary colorectal cancer during the first hepatectomy (p = 0.04). Due to the small number of patients and the absence of equivalent conclusions in other studies, we cannot recommend performance of an isolated colorectal resection prior to chemotherapy. However, resection of an asymptomatic primary tumour before chemotherapy should not be considered as an outdated procedure. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Single-stage-to-orbit versus two-stage-two-orbit: A cost perspective

    Science.gov (United States)

    Hamaker, Joseph W.

    1996-03-01

    This paper considers the possible life-cycle costs of single-stage-to-orbit (SSTO) and two-stage-to-orbit (TSTO) reusable launch vehicles (RLV's). The analysis parametrically addresses the issue such that the preferred economic choice comes down to the relative complexity of the TSTO compared to the SSTO. The analysis defines the boundary complexity conditions at which the two configurations have equal life-cycle costs, and finally, makes a case for the economic preference of SSTO over TSTO.

  13. Control strategy research of two stage topology for pulsed power supply

    International Nuclear Information System (INIS)

    Shi Chunfeng; Wang Rongkun; Huang Yuzhen; Chen Youxin; Yan Hongbin; Gao Daqing

    2013-01-01

    A kind of pulsed power supply for HIRFL-CSR was introduced; the ripple and the current error of the power supply's topological structure during operation were analyzed, and a two-stage topology for the pulsed power supply was given. The control strategy was simulated and the experiment was done on a digital power platform. The results show that the main circuit structure and control method are feasible. (authors)

  14. A novel flow sensor based on resonant sensing with two-stage microleverage mechanism

    Science.gov (United States)

    Yang, B.; Guo, X.; Wang, Q. H.; Lu, C. F.; Hu, D.

    2018-04-01

    The design, simulation, fabrication, and experiments of a novel flow sensor based on resonant sensing with a two-stage microleverage mechanism are presented in this paper. Different from the conventional detection methods for flow sensors, two differential resonators are adopted to implement air flow rate transformation through two-stage leverage magnification. The proposed flow sensor has a high sensitivity since the adopted two-stage microleverage mechanism possesses a higher amplification factor than a single-stage microleverage mechanism. The modal distribution and geometric dimensions of the two-stage leverage mechanism and hair are analyzed and optimized by Ansys simulation. A digital closed-loop driving technique with a phase frequency detector-based coordinate rotation digital computer algorithm is implemented for the detection and locking of the resonance frequency. The sensor, fabricated by a standard deep dry silicon-on-glass process, has a device dimension of 5100 μm (length) × 5100 μm (width) × 100 μm (height) with a hair diameter of 1000 μm. The preliminary experimental results demonstrate that the maximal mechanical sensitivity of the flow sensor is approximately 7.41 Hz/(m/s)^2 at a resonant frequency of 22 kHz for a hair height of 9 mm and increases by 2.42 times as the hair height extends from 3 mm to 9 mm. Simultaneously, a detection limit of 3.23 mm/s air-flow amplitude at 60 Hz is confirmed. The proposed flow sensor has great application prospects in micro-autonomous systems and technology, self-stabilizing micro-air vehicles, and environmental monitoring.
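
    Taking the quoted sensitivity at face value, the snippet below converts a measured resonance-frequency shift into an estimated flow speed. Treating the response as a single quadratic frequency-shift law is a simplification of the differential two-resonator scheme, and the measured frequency used in the example is made up.

```python
# Convert a measured resonance-frequency shift into an estimated air-flow
# speed using the quadratic sensitivity quoted in the abstract
# (about 7.41 Hz per (m/s)^2 at a 9 mm hair height and 22 kHz resonance).
S = 7.41          # Hz / (m/s)^2, mechanical sensitivity
f0 = 22_000.0     # Hz, nominal resonance frequency at zero flow

def flow_speed(f_measured: float) -> float:
    """Estimated flow speed (m/s) from the shifted resonance frequency."""
    df = abs(f_measured - f0)
    return (df / S) ** 0.5

print(f"{flow_speed(22_030.0):.2f} m/s for a 30 Hz shift")
```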

  15. Two Stage Fuzzy Methodology to Evaluate the Credit Risks of Investment Projects

    OpenAIRE

    O. Badagadze; G. Sirbiladze; I. Khutsishvili

    2014-01-01

    The work proposes a decision support methodology for the credit risk minimization in selection of investment projects. The methodology provides two stages of projects’ evaluation. Preliminary selection of projects with minor credit risks is made using the Expertons Method. The second stage makes ranking of chosen projects using the Possibilistic Discrimination Analysis Method. The latter is a new modification of a well-known Method of Fuzzy Discrimination Analysis.

  16. A Two-Stage Rural Household Demand Analysis: Microdata Evidence from Jiangsu Province, China

    OpenAIRE

    X.M. Gao; Eric J. Wailes; Gail L. Cramer

    1996-01-01

    In this paper we evaluate economic and demographic effects on China's rural household demand for nine food commodities: vegetables, pork, beef and lamb, poultry, eggs, fish, sugar, fruit, and grain; and five nonfood commodity groups: clothing, fuel, stimulants, housing, and durables. A two-stage budgeting allocation procedure is used to obtain an empirically tractable amalgamative demand system for food commodities which combines an upper-level AIDS model and a lower-level GLES as a modeling f...

  17. Latent Inhibition as a Function of US Intensity in a Two-Stage CER Procedure

    Science.gov (United States)

    Rodriguez, Gabriel; Alonso, Gumersinda

    2004-01-01

    An experiment is reported in which the effect of unconditioned stimulus (US) intensity on latent inhibition (LI) was examined, using a two-stage conditioned emotional response (CER) procedure in rats. A tone was used as the pre-exposed and conditioned stimulus (CS), and a foot-shock of either a low (0.3 mA) or high (0.7 mA) intensity was used as…

  18. Two-stage meta-analysis of survival data from individual participants using percentile ratios

    Science.gov (United States)

    Barrett, Jessica K; Farewell, Vern T; Siannis, Fotios; Tierney, Jayne; Higgins, Julian P T

    2012-01-01

    Methods for individual participant data meta-analysis of survival outcomes commonly focus on the hazard ratio as a measure of treatment effect. Recently, Siannis et al. (2010, Statistics in Medicine 29:3030–3045) proposed the use of percentile ratios as an alternative to hazard ratios. We describe a novel two-stage method for the meta-analysis of percentile ratios that avoids distributional assumptions at the study level. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22825835
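
    A generic sketch of the two-stage idea, assuming the per-study log percentile ratios and their standard errors have already been estimated in stage one (for example from a survival-model or Kaplan-Meier fit within each study); stage two then pools them with inverse-variance weights. The numbers are hypothetical, and this fixed-effect pooling is only a stand-in for the authors' estimator.

```python
import numpy as np

# Stage 1 (per study): estimate the log ratio of a chosen survival percentile
# (e.g. the median) between treatment arms, with its standard error. The
# values below are hypothetical stage-1 outputs, not data from the paper.
log_pr = np.array([0.18, 0.05, 0.31, 0.12])   # log percentile ratios
se     = np.array([0.10, 0.08, 0.15, 0.12])   # their standard errors

# Stage 2: inverse-variance (fixed-effect) pooling across studies.
w = 1.0 / se**2
pooled = np.sum(w * log_pr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(f"pooled percentile ratio = {np.exp(pooled):.3f}, "
      f"95% CI = ({np.exp(ci[0]):.3f}, {np.exp(ci[1]):.3f})")
```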

  19. Two-staged management for all types of congenital pouch colon

    Directory of Open Access Journals (Sweden)

    Rajendra K Ghritlaharey

    2013-01-01

    Full Text Available Background: The aim of this study was to review our experience with two-staged management for all types of congenital pouch colon (CPC). Patients and Methods: This retrospective study included CPC cases that were managed with two-staged procedures in the Department of Paediatric Surgery, over a period of 12 years from 1 January 2000 to 31 December 2011. Results: CPC comprised 13.71% (97 of 707) of all anorectal malformations (ARM) and 28.19% (97 of 344) of high ARM. Eleven CPC cases (all males) were managed with two-staged procedures. The distribution of cases (Narsimha Rao et al.'s classification) into types I, II, III, and IV was 1, 2, 6, and 2, respectively. Initial operative procedures performed were window colostomy (n = 6), colostomy proximal to pouch (n = 4), and ligation of colovesical fistula and end colostomy (n = 1). As definitive procedures, pouch excision with abdomino-perineal pull through (APPT) of colon in eight, and pouch excision with APPT of ileum in three were performed. The mean age at the time of definitive procedures was 15.6 months (range, 3 to 53 months) and the mean weight was 7.5 kg (range, 4 to 11 kg). Good fecal continence was observed in six and fair in two cases during follow-up, while three of our cases were lost to follow-up. There was no mortality following definitive procedures amongst the above 11 cases. Conclusions: Two-staged procedures for all types of CPC can also be performed safely with good results. Most importantly, the definitive procedure is done without a protective stoma and therefore avoids stoma closure, stoma-related complications, the related cost of stoma closure, and hospital stay.

  20. Modelling of an air-cooled two-stage Rankine cycle for electricity production

    International Nuclear Information System (INIS)

    Liu, Bo

    2014-01-01

    This work considers a two-stage Rankine cycle architecture slightly different from a standard Rankine cycle for electricity generation. Instead of expanding the steam to extremely low pressure, the vapor leaves the turbine at a higher pressure and thus has a much smaller specific volume. It is therefore possible to greatly reduce the size of the steam turbine. The remaining energy is recovered by a bottoming cycle using a working fluid which has a much higher density than the water steam. Thus, the turbines and heat exchangers are more compact and the turbine exhaust velocity loss is lower. This configuration makes it possible to greatly reduce the overall size of the water-steam turbine and facilitates the use of a dry cooling system. The main advantage of such an air-cooled two-stage Rankine cycle is the possibility of choosing the installation site of a large or medium power plant without the need for a large and constantly available water source; in addition, as compared to water-cooled cycles, the risk regarding future operations is reduced (climate conditions may affect water availability or temperature, and imply changes in the water supply regulatory rules). The concept has been investigated by EDF R and D. A 22 MW prototype was developed in the 1970s using ammonia as the working fluid of the bottoming cycle for its high density and high latent heat. However, this fluid is toxic. In order to find more suitable working fluids for the two-stage Rankine cycle application and to identify the optimal cycle configuration, we have established a working fluid selection methodology. Some potential candidates have been identified. We have evaluated the performance of the two-stage Rankine cycles operating with different working fluids in both design and off-design conditions. For the most acceptable working fluids, components of the cycle have been sized. The power plant concept can then be evaluated on a life-cycle cost basis. (author)

  1. A Sensorless Power Reserve Control Strategy for Two-Stage Grid-Connected PV Systems

    OpenAIRE

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2017-01-01

    Due to the still increasing penetration of grid-connected Photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A power reserve control, where part of the active power from the PV panels is reserved during operation, is required for grid support. In this paper, a cost-effective solution to realize the power reserve for two-stage grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Track...

  2. Actuator Fault Diagnosis in a Boeing 747 Model via Adaptive Modified Two-Stage Kalman Filter

    Directory of Open Access Journals (Sweden)

    Fikret Caliskan

    2014-01-01

    Full Text Available An adaptive modified two-stage linear Kalman filtering algorithm is utilized to identify the loss of control effectiveness and the magnitude of low degree of stuck faults in a closed-loop nonlinear B747 aircraft. Control effectiveness factors and stuck magnitudes are used to quantify faults entering control systems through actuators. Pseudorandom excitation inputs are used to help distinguish partial loss and stuck faults. The partial loss and stuck faults in the stabilizer are isolated and identified successfully.
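
    This record concerns bias-type fault estimation with Kalman filtering. As a rough illustration of the underlying idea, the sketch below runs an ordinary augmented-state Kalman filter that estimates a constant actuator bias alongside the state of a toy linear system; it is not the paper's adaptive modified two-stage filter (which decouples the bias filter from the state filter), and all matrices and noise levels are made up.

```python
import numpy as np

# Toy augmented-state Kalman filter: estimate a constant actuator bias b
# together with the state x of  x_+ = A x + B (u + b) + w,  y = C x + v.
A = np.array([[0.9, 0.1],
              [0.0, 0.95]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])

# Augmented system: z = [x; b], with a slow random-walk model for the bias.
Aa = np.block([[A, B], [np.zeros((1, 2)), np.eye(1)]])
Ba = np.vstack([B, np.zeros((1, 1))])
Ca = np.hstack([C, np.zeros((1, 1))])
Q = np.diag([1e-4, 1e-4, 1e-6])   # process noise
R = np.array([[1e-2]])            # measurement noise

def kf_step(z_hat, P, u, y):
    z_pred = Aa @ z_hat + Ba * u                  # predict
    P_pred = Aa @ P @ Aa.T + Q
    S = Ca @ P_pred @ Ca.T + R                    # update
    K = P_pred @ Ca.T @ np.linalg.inv(S)
    z_new = z_pred + K @ (y - Ca @ z_pred)
    P_new = (np.eye(3) - K @ Ca) @ P_pred
    return z_new, P_new

# Simulate a plant with a true actuator bias of 0.5 and filter it.
rng = np.random.default_rng(0)
x_true, b_true = np.zeros((2, 1)), 0.5
z_hat, P = np.zeros((3, 1)), np.eye(3)
for _ in range(300):
    u = 1.0
    x_true = A @ x_true + B * (u + b_true) + rng.normal(0.0, 1e-2, (2, 1))
    y = C @ x_true + rng.normal(0.0, 0.1, (1, 1))
    z_hat, P = kf_step(z_hat, P, u, y)

print(f"estimated actuator bias = {z_hat[2, 0]:.3f} (true value 0.5)")
```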

  3. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling.

    Science.gov (United States)

    Terza, Joseph V; Basu, Anirban; Rathouz, Paul J

    2008-05-01

    The paper focuses on two estimation methods that have been widely used to address endogeneity in empirical research in health economics and health services research-two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI). 2SPS is the rote extension (to nonlinear models) of the popular linear two-stage least squares estimator. The 2SRI estimator is similar except that in the second-stage regression, the endogenous variables are not replaced by first-stage predictors. Instead, first-stage residuals are included as additional regressors. In a generic parametric framework, we show that 2SRI is consistent and 2SPS is not. Results from a simulation study and an illustrative example also recommend against 2SPS and favor 2SRI. Our findings are important given that there are many prominent examples of the application of inconsistent 2SPS in the recent literature. This study can be used as a guide by future researchers in health economics who are confronted with endogeneity in their empirical work.
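
    A minimal sketch of the 2SRI recipe on simulated data, using statsmodels: stage one regresses the endogenous regressor on the instrument and the exogenous covariate, and stage two fits a nonlinear (here Poisson) outcome model with the stage-one residual included as an extra regressor, rather than replacing the endogenous regressor by its prediction as 2SPS would. Variable names and the data-generating process are made up.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data with an unobserved confounder u that makes x_endog endogenous.
rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                 # instrument
x_exog = rng.normal(size=n)            # exogenous covariate
u = rng.normal(size=n)                 # unobserved confounder
x_endog = 0.8 * z + 0.5 * u + rng.normal(size=n)
lam = np.exp(0.2 + 0.4 * x_endog + 0.3 * x_exog + 0.5 * u)
y = rng.poisson(lam)                   # count outcome

# Stage 1: first-stage regression and its residuals.
X1 = sm.add_constant(np.column_stack([z, x_exog]))
stage1 = sm.OLS(x_endog, X1).fit()
resid1 = stage1.resid

# Stage 2: nonlinear outcome model including the first-stage residual.
X2 = sm.add_constant(np.column_stack([x_endog, x_exog, resid1]))
stage2 = sm.GLM(y, X2, family=sm.families.Poisson()).fit()
print(stage2.params)   # coefficient on x_endog is the 2SRI estimate
```

    Note that the second-stage standard errors reported this way ignore the first-stage estimation step; in practice they would be corrected, for example by bootstrapping both stages together.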

  4. Production of endo-pectate lyase by two stage cultivation of Erwinia carotovora

    Energy Technology Data Exchange (ETDEWEB)

    Fukuoka, Satoshi; Kobayashi, Yoshiaki

    1987-02-26

    The productivity of endo-pectate lyase from Erwinia carotovora GIR 1044 was found to be greatly improved by two stage cultivation: in the first stage the bacterium was grown with an inducing carbon source, e.g., pectin, and in the second stage it was cultivated with glycerol, xylose, or fructose with the addition of monosodium L-glutamate as nitrogen source. In the two stage cultivation using pectin or glycerol as the carbon source the enzyme activity reached 400 units/ml, almost 3 times as much as that of one stage cultivation in a 10 liter fermentor. Using two stage cultivation in the 200 liter fermentor improved enzyme productivity over that in the 10 liter fermentor, with 500 units/ml of activity. Compared with the cultivation in Erlenmeyer flasks, fermentor cultivation improved enzyme productivity. The optimum cultivating conditions were agitation of 480 rpm with aeration of 0.5 vvm at 28 °C. (4 figs, 4 tabs, 14 refs)

  5. A two-stage extraction procedure for insensitive munition (IM) explosive compounds in soils.

    Science.gov (United States)

    Felt, Deborah; Gurtowski, Luke; Nestler, Catherine C; Johnson, Jared; Larson, Steven

    2016-12-01

    The Department of Defense (DoD) is developing a new category of insensitive munitions (IMs) that are more resistant to detonation or propagation from external stimuli than traditional munition formulations. The new explosive constituent compounds are 2,4-dinitroanisole (DNAN), nitroguanidine (NQ), and nitrotriazolone (NTO). The production and use of IM formulations may result in interaction of IM component compounds with soil. The chemical properties of these IM compounds present unique challenges for extraction from environmental matrices such as soil. A two-stage extraction procedure was developed and tested using several soil types amended with known concentrations of IM compounds. This procedure incorporates both an acidified phase and an organic phase to account for the chemical properties of the IM compounds. The method detection limits (MDLs) for all IM compounds in all soil types were below the regulatory risk-based Regional Screening Level (RSL) criteria for soil proposed by the U.S. Army Public Health Center. At defined environmentally relevant concentrations, the average recovery of each IM compound in each soil type was consistent and greater than 85%. The two-stage extraction method decreased the influence of soil composition on IM compound recovery. UV analysis of NTO established an isosbestic point based on varied pH at a detection wavelength of 341 nm. The two-stage soil extraction method is equally effective for traditional munition compounds, a potentially important point when examining soils exposed to both traditional and insensitive munitions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    International Nuclear Information System (INIS)

    Yang, Won Sik; Lin, C. S.; Hader, J. S.; Park, T. K.; Deng, P.; Yang, G.; Jung, Y. S.; Kim, T. K.; Stauff, N. E.

    2016-01-01

    This report presents the performance characteristics of two "two-stage" fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supporting LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not directly sent to ADS, but partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option of transuranic (TRU) elements

  7. Effects of earthworm casts and zeolite on the two-stage composting of green waste

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lu, E-mail: zhanglu1211@gmail.com; Sun, Xiangyang, E-mail: xysunbjfu@gmail.com

    2015-05-15

    Highlights: • Earthworm casts (EWCs) and clinoptilolite (CL) were used in green waste composting. • Addition of EWCs + CL improved physico-chemical and microbiological properties. • Addition of EWCs + CL extended the duration of thermophilic periods during composting. • Addition of EWCs + CL enhanced humification, cellulose degradation, and nutrients. • Combined addition of 0.30% EWCs + 25% CL reduced composting time to 21 days. - Abstract: Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90–270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL.

  8. Effects of earthworm casts and zeolite on the two-stage composting of green waste

    International Nuclear Information System (INIS)

    Zhang, Lu; Sun, Xiangyang

    2015-01-01

    Highlights: • Earthworm casts (EWCs) and clinoptilolite (CL) were used in green waste composting. • Addition of EWCs + CL improved physico-chemical and microbiological properties. • Addition of EWCs + CL extended the duration of thermophilic periods during composting. • Addition of EWCs + CL enhanced humification, cellulose degradation, and nutrients. • Combined addition of 0.30% EWCs + 25% CL reduced composting time to 21 days. - Abstract: Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90–270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL

  9. Is the continuous two-stage anaerobic digestion process well suited for all substrates?

    Science.gov (United States)

    Lindner, Jonas; Zielonka, Simon; Oechsner, Hans; Lemmer, Andreas

    2016-01-01

    Two-stage anaerobic digestion systems are often considered to be advantageous compared to one-stage processes. Although process conditions and fermenter setups are well examined, overall substrate degradation in these systems is still controversially discussed. Therefore, the aim of this study was to investigate how substrates with different fibre and sugar contents (hay/straw, maize silage, sugar beet) influence the degradation rate and methane production. Intermediates and gas compositions, as well as methane yields and VS-degradation degrees, were recorded. The sugar beet substrate led to a larger pH-value drop (to 5.67) in the acidification reactor, which resulted in a six times higher hydrogen production in comparison to the hay/straw substrate (pH-value drop to 5.34). The achieved yields in the two-stage system showed a difference of 70.6% for the hay/straw substrate, but only 7.8% for the sugar beet substrate. Therefore, two-stage systems seem to be recommendable only for digesting sugar-rich substrates. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Two-Stage Liver Transplantation with Temporary Porto-Middle Hepatic Vein Shunt

    Directory of Open Access Journals (Sweden)

    Giovanni Varotti

    2010-01-01

    Full Text Available Two-stage liver transplantation (LT has been reported for cases of fulminant liver failure that can lead to toxic hepatic syndrome, or massive hemorrhages resulting in uncontrollable bleeding. Technically, the first stage of the procedure consists of a total hepatectomy with preservation of the recipient's inferior vena cava (IVC, followed by the creation of a temporary end-to-side porto-caval shunt (TPCS. The second stage consists of removing the TPCS and implanting a liver graft when one becomes available. We report a case of a two-stage total hepatectomy and LT in which a temporary end-to-end anastomosis between the portal vein and the middle hepatic vein (TPMHV was performed as an alternative to the classic end-to-end TPCS. The creation of a TPMHV proved technically feasible and showed some advantages compared to the standard TPCS. In cases in which a two-stage LT with side-to-side caval reconstruction is utilized, TPMHV can be considered as a safe and effective alternative to standard TPCS.

  11. Productive efficiency of tea industry: A stochastic frontier approach

    African Journals Online (AJOL)

    USER

    2010-06-21

    Key words: Technical efficiency, stochastic frontier, translog ... present low performance of the tea industry in Bangladesh. ... The Technical inefficiency effect ... administrative, technical, clerical, sales and purchase staff.

  12. A computationally efficient approach for template matching-based ...

    Indian Academy of Sciences (India)

    In this paper, a new computationally efficient image registration method is ... the proposed method requires less computational time as compared to traditional methods. ... Zitová B and Flusser J 2003 Image registration methods: A survey.

  13. Efficient channel estimation in massive MIMO systems - a distributed approach

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2016-01-01

    We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic, and 2) sparse channels are considered. The algorithms estimate the impulse response for each channel observed

  14. Evaluating efficiency of passenger railway stations: a DEA approach

    OpenAIRE

    Khadem Sameni, Melody; Preston, John; Khadem Sameni, Mona

    2016-01-01

    Stations are bottlenecks for railway transportation as they are where traffic merges and diverges. Numerous activities such as passenger boarding, alighting and interchanging, train formation and technical checks are also done at these points. The number of platforms is limited and it is vital to do all the work efficiently. For the first time in the literature, we implement a methodology based on data envelopment analysis which is benchmarked from ports and airport efficiency studies. It can...

  15. A Modern Approach to the Efficient-Market Hypothesis

    OpenAIRE

    Gabriel Frahm

    2013-01-01

    Market efficiency at least requires the absence of weak arbitrage opportunities, but this is not sufficient to establish a situation where the market is sensitive, i.e., where it "fully reflects" or "rapidly adjusts to" some information flow including the evolution of asset prices. By contrast, No Weak Arbitrage together with market sensitivity is sufficient and necessary for a market to be informationally efficient.

  16. Treatment of natural rubber processing wastewater using a combination system of a two-stage up-flow anaerobic sludge blanket and down-flow hanging sponge system.

    Science.gov (United States)

    Tanikawa, D; Syutsubo, K; Hatamoto, M; Fukuda, M; Takahashi, M; Choeisai, P K; Yamaguchi, T

    2016-01-01

    A pilot-scale experiment of natural rubber processing wastewater treatment was conducted using a combination system consisting of a two-stage up-flow anaerobic sludge blanket (UASB) and a down-flow hanging sponge (DHS) reactor for more than 10 months. The system achieved a chemical oxygen demand (COD) removal efficiency of 95.7% ± 1.3% at an organic loading rate of 0.8 kg COD/(m^3·d). Bacterial activity measurement of retained sludge from the UASB showed that sulfate-reducing bacteria (SRB), especially hydrogen-utilizing SRB, possessed high activity compared with methane-producing bacteria (MPB). Conversely, the acetate-utilizing activity of MPB was superior to SRB in the second stage of the reactor. The two-stage UASB-DHS system can reduce power consumption by 95% and excess sludge by 98%. In addition, it is possible to prevent emissions of greenhouse gases (GHG), such as methane, using this system. Furthermore, recovered methane from the two-stage UASB can completely cover the electricity needs for the operation of the two-stage UASB-DHS system, accounting for approximately 15% of the electricity used in the natural rubber manufacturing process.

  17. Empirical study of classification process for two-stage turbo air classifier in series

    Science.gov (United States)

    Yu, Yuan; Liu, Jiaxiang; Li, Gang

    2013-05-01

    The suitable process parameters for a two-stage turbo air classifier are important for obtaining the ultrafine powder that has a narrow particle-size distribution; however, little has been published internationally on the classification process for the two-stage turbo air classifier in series. The influence of the process parameters of a two-stage turbo air classifier in series on classification performance is empirically studied by using aluminum oxide powders as the experimental material. The experimental results show the following: 1) When the rotor cage rotary speed of the first-stage classifier is increased from 2 300 r/min to 2 500 r/min with a constant rotor cage rotary speed of the second-stage classifier, classification precision is increased from 0.64 to 0.67. However, in this case, the final ultrafine powder yield is decreased from 79% to 74%, which means the classification precision and the final ultrafine powder yield can be regulated through adjusting the rotor cage rotary speed of the first-stage classifier. 2) When the rotor cage rotary speed of the second-stage classifier is increased from 2 500 r/min to 3 100 r/min with a constant rotor cage rotary speed of the first-stage classifier, the cut size is decreased from 13.16 μm to 8.76 μm, which means the cut size of the ultrafine powder can be regulated through adjusting the rotor cage rotary speed of the second-stage classifier. 3) When the feeding speed is increased from 35 kg/h to 50 kg/h, the "fish-hook" effect is strengthened, which makes the ultrafine powder yield decrease. 4) To weaken the "fish-hook" effect, the equalization of the two-stage wind speeds or the combination of a high first-stage wind speed with a low second-stage wind speed should be selected. This empirical study provides a criterion of process parameter configurations for a two-stage or multi-stage classifier in series, which offers a theoretical basis for practical production.

  18. Numerical analysis of flow interaction of turbine system in two-stage turbocharger of internal combustion engine

    Science.gov (United States)

    Liu, Y. B.; Zhuge, W. L.; Zhang, Y. J.; Zhang, S. Y.

    2016-05-01

    To reach the goal of energy conservation and emission reduction, a high intake pressure is needed to meet the demand of high power density and a high EGR rate for internal combustion engines. The present power density of diesel engines has reached 90 kW/L, and the required intake pressure ratio is over 5. A two-stage turbocharging system is an effective way to realize such a high compression ratio. Because the compression work of the turbocharging system derives from exhaust gas energy, the efficiency of exhaust gas energy use, which is influenced by the design and matching of the turbine system, is important to the performance of a highly supercharged engine. A conventional turbine system is assembled from single-stage turbocharger turbines, and turbine matching is based on turbine maps measured on a test rig. The flow between the turbines is assumed to be uniform, and the outlet physical quantities of the turbine are regarded as equal to ambient values. However, there are three-dimensional flow field distortions and changes in the outlet physical quantities which will influence the performance of the turbine system, as has been demonstrated by several studies. For an engine equipped with a two-stage turbocharging system, optimization of the turbine system design will increase the efficiency of exhaust gas energy use and thereby increase the engine power density. However, the flow interaction of the turbine system will change the flow in each turbine and influence turbine performance. To recognize the interaction characteristics between the high-pressure turbine and the low-pressure turbine, the flow in the turbine system is modeled and simulated numerically. The calculation results suggested that the static pressure field at the inlet to the low-pressure turbine increases the back pressure of the high-pressure turbine, although the efficiency of the high-pressure turbine changes little; the distorted velocity field at the outlet of the high-pressure turbine results in swirl at the inlet to the low-pressure turbine. Clockwise swirl results in a large negative angle of attack at the inlet to the rotor, which causes flow loss in the turbine impeller passages and decreases turbine

  19. Measuring economy-wide energy efficiency performance: A parametric frontier approach

    International Nuclear Information System (INIS)

    Zhou, P.; Ang, B.W.; Zhou, D.Q.

    2012-01-01

    This paper proposes a parametric frontier approach to estimating economy-wide energy efficiency performance from a production efficiency point of view. It uses the Shephard energy distance function to define an energy efficiency index and adopts the stochastic frontier analysis technique to estimate the index. A case study of measuring the economy-wide energy efficiency performance of a sample of OECD countries using the proposed approach is presented. It is found that the proposed parametric frontier approach has higher discriminating power in energy efficiency performance measurement compared to its nonparametric frontier counterparts.

  20. New approaches for improving energy efficiency in the Brazilian industry

    Directory of Open Access Journals (Sweden)

    Paulo Henrique de Mello Santana

    2016-11-01

    Full Text Available The Brazilian government has been promoting energy efficiency measures for industry since the eighties but with very limited returns, as shown in this paper. The governments of some other countries have dedicated much more effort and funds to this area and achieved excellent results. The institutional arrangements and types of programmes adopted in these countries are briefly evaluated in the paper and provide valuable insights for several proposals put forward here to make the Brazilian government's actions directed at overcoming market barriers and improving energy efficiency in the local industry more effective. The proposed measures include the creation of Industrial Assessment Centres and an executive agency charged with the coordination of all energy efficiency programmes run by the Federal government. A large share of the Brazilian industry's energy consumption comes from energy-intensive industrial branches. According to a recent survey, most of them have substantial energy conservation potentials. To materialize a fair share of these potentials, voluntary targets concerning energy efficiency gains should start to be negotiated between the Government and the associations representing these industrial branches. Credit facilities and tax exemptions for energy-efficient equipment should be provided to stimulate the interest of entrepreneurs and the setting-up of bolder targets.

  1. Scaling production and improving efficiency in DEA: an interactive approach

    Science.gov (United States)

    Rödder, Wilhelm; Kleine, Andreas; Dellnitz, Andreas

    2017-10-01

    DEA models help a DMU to detect its (in-)efficiency and to improve activities, if necessary. Efficiency is only one economic aim for a decision-maker; however, up- or downsizing might be a second one. Improving efficiency is the main topic in DEA; the long-term strategy towards the right production size should attract our attention as well. The management of a DMU does not always focus primarily on technical efficiency but is rather interested in gaining scale effects. In this paper, a formula for returns to scale (RTS) is developed, and this formula is even applicable for interior points of technology. In particular, technically and scale-inefficient DMUs need sophisticated instruments to improve their situation. Considering RTS as well as efficiency, in this paper, we give advice for each DMU to find an economically reliable path from its actual situation to better activities and finally, perhaps, to the most productive scale size (mpss). For realizing this path, we propose an interactive algorithm, thus harmonizing the scientific findings and the interests of the management. Small numerical examples illustrate such paths for selected DMUs; an empirical application in theatre management completes the contribution.
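
    For readers unfamiliar with the DEA building block behind such analyses, the sketch below solves the plain input-oriented CCR (constant returns to scale) efficiency LP for each DMU with scipy. It is not the authors' interactive RTS/mpss algorithm, and the input/output data are made up.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA: for each DMU o, minimise theta subject to
#   sum_j lambda_j * x_ij <= theta * x_io   (inputs)
#   sum_j lambda_j * y_rj >= y_ro           (outputs),  lambda_j >= 0.
X = np.array([[2.0, 3.0, 6.0, 5.0],    # inputs: rows = inputs, cols = DMUs
              [4.0, 2.0, 8.0, 6.0]])
Y = np.array([[1.0, 1.0, 2.0, 1.5]])   # outputs: rows = outputs, cols = DMUs

m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(o: int) -> float:
    # Decision vector: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.hstack([-X[:, [o]], X])          # input constraints
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # output constraints (flipped sign)
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```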

  2. A two-stage method for microcalcification cluster segmentation in mammography by deformable models

    International Nuclear Information System (INIS)

    Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.; Karahaliou, A.; Costaridou, L.; Vassiou, K.

    2015-01-01

    Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is a prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method of MC clusters is investigated. The first stage is targeted to accurate and time-efficient segmentation of the majority of the particles of an MC cluster, by means of a level set method. The second stage is targeted to shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method in terms of inter- and intraobserver agreement was evaluated in a case sample of 80 MC clusters originating from the Digital Database for Screening Mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists' segmentations quantitatively by two distance metrics (Hausdorff distance, HDIST_cluster; average of minimum distance, AMINDIST_cluster) and the area overlap measure (AOM_cluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted and a correlation-based feature selection method yielded a feature subset to feed in a support vector machine classifier. Classification performance of the MC cluster features was estimated by means of the area under receiver operating characteristic curve (Az ± Standard Error) utilizing tenfold cross

  3. A Decision-making Model for a Two-stage Production-delivery System in SCM Environment

    Science.gov (United States)

    Feng, Ding-Zhong; Yamashiro, Mitsuo

    A decision-making model is developed for an optimal production policy in a two-stage production-delivery system that incorporates a fixed-quantity supply of finished goods to a buyer at a fixed interval of time. First, a general cost model is formulated considering both the supplier (of raw materials) and buyer (of finished products) sides. Then an optimal solution to the problem is derived on the basis of the cost model. Using the proposed model and its optimal solution, one can determine the optimal production lot size for each stage, the optimal number of transportations of semi-finished goods, and the optimal quantity of semi-finished goods transported each time to meet the lumpy demand of consumers. Also, we examine the sensitivity of raw materials ordering and production lot size to changes in ordering cost, transportation cost and manufacturing setup cost. A pragmatic computational approach for operational situations is proposed to obtain an integer approximate solution. Finally, we give some numerical examples.

  4. Nanocomposite YSZ-NiO Particles with Tailored Structure Synthesized in a Two-Stage Continuous Hydrothermal Flow Reactor

    DEFF Research Database (Denmark)

    Zielke, Philipp; Xu, Yu; Kiebach, Wolff-Ragnar

    2016-01-01

    Improving the performance of energy storage and conversion devices such as fuel cells, electrolyzers and batteries is important. One promising approach to further improve these devices is the use of carefully structured nanosized materials. Nano-composite particles combining different materials in advanced geometries like core-shell structures or surface decorated particles could exhibit better performance compared with single phase materials. To obtain such advanced structures is the aim of the ProEco project (www.proeco.dk). In this project, a two-stage continuous reactor is built and used to synthesize such nano... ...-of-the-art solid oxide fuel and electrolysis cells. The prepared particles were characterized by X-ray powder diffraction, (high resolution) transmission electron microscopy, scanning tunnel transmission microscopy and Raman spectroscopy in order to determine crystal structure, particle size, surface morphology...

  5. Are the global REIT markets efficient by a new approach?

    Directory of Open Access Journals (Sweden)

    Fang Hao

    2013-01-01

    Full Text Available This study uses a panel KSS test by Nuri Ucar and Tolga Omay (2009), with a Fourier function based on the sequential panel selection method (SPSM) procedure proposed by Georgios Chortareas and George Kapetanios (2009), to test the efficiency of REIT markets in 16 countries from 28 March 2008 to 27 June 2011. A Fourier approximation often captures the behavior of an unknown break, and testing for a unit root increases its power to do so. Moreover, SPSM can determine the mix of I(0) and I(1) series in a panel setting to clarify how many and which are random walk processes. Our empirical results demonstrate that REIT markets are efficient in all sampled countries except the UK. Our results imply that investors in countries with efficient REIT markets can adopt more passive portfolio strategies.

  6. Efficient Variational Approaches for Deformable Registration of Images

    Directory of Open Access Journals (Sweden)

    Mehmet Ali Akinlar

    2012-01-01

    Full Text Available Dirichlet, anisotropic, and Huber regularization terms are presented for efficient registration of deformable images. Image registration, an ill-posed optimization problem, is solved using a gradient-descent-based method and some fundamental theorems in calculus of variations. Euler-Lagrange equations with homogeneous Neumann boundary conditions are obtained. These equations are discretized by multigrid and finite difference numerical techniques. The method is applied to the registration of brain MR images of size 65×65. Computational results indicate that the presented method is quite fast and efficient in the registration of deformable medical images.

  7. Implementation and efficiency of two geometric stiffening approaches

    International Nuclear Information System (INIS)

    Lugris, Urbano; Naya, Miguel A.; Perez, Jose A.; Cuadrado, Javier

    2008-01-01

    When the modeling of flexible bodies is required in multibody systems, the floating frame of reference formulations are probably the most efficient methods available. In the case of beams undergoing high speed rotations, the geometric stiffening effect can appear due to geometric nonlinearities, and it is often not captured by the aforementioned methods, since it is common to linearize the elastic forces assuming small deformations. The present work discusses the implementation of different existing methods developed to consider such geometric nonlinearities within a floating frame of reference formulation in natural coordinates, with emphasis on the relation between efficiency and accuracy of the resulting algorithms, seeking to provide practical criteria of use

  8. Two-stage gas-phase bioreactor for the combined removal of hydrogen sulphide, methanol and alpha-pinene.

    Science.gov (United States)

    Rene, Eldon R; Jin, Yaomin; Veiga, María C; Kennes, Christian

    2009-11-01

    Biological treatment systems have emerged as cost-effective and eco-friendly techniques for treating waste gases from process industries at moderately high gas flow rates and low pollutant concentrations. In this study, we have assessed the performance of a two-stage bioreactor, namely a biotrickling filter packed with pall rings (BTF, 1st stage) and a perlite + pall ring mixed biofilter (BF, 2nd stage) operated in series, for handling a complex mixture of hydrogen sulphide (H2S), methanol (CH3OH) and alpha-pinene (C10H16). It has been reported that the presence of H2S can reduce the biofiltration efficiency of volatile organic compounds (VOCs) when both are present in the gas mixture. Hydrogen sulphide and methanol were removed in the first-stage BTF, previously inoculated with H2S-adapted populations and a culture containing Candida boidinii, an acid-tolerant yeast, whereas, in the second stage, alpha-pinene was removed predominantly by the fungus Ophiostoma stenoceras. Experiments were conducted in five different phases, corresponding to inlet loading rates varying between 2.1 and 93.5 g m^-3 h^-1 for H2S, 55.3 and 1260.2 g m^-3 h^-1 for methanol, and 2.8 and 161.1 g m^-3 h^-1 for alpha-pinene. Empty bed residence times were varied between 83.4 and 10 s in the first stage and 146.4 and 17.6 s in the second stage. The BTF, working at a pH as low as 2.7 as a result of H2S degradation, removed most of the H2S and methanol but only very little alpha-pinene. On the other hand, the BF, at a pH around 6.0, removed the rest of the H2S, the non-degraded methanol and most of the alpha-pinene vapours. Attempts were originally made to remove the three pollutants in a single acidophilic bioreactor, but the Ophiostoma strain was hardly active at such a low pH. The elimination capacities (ECs) reached by the two-stage bioreactor for individual pollutants were 894.4 g m^-3 h^-1 for methanol, 45.1 g m^-3 h^-1 for H2S and 138.1 g m^-3 h^-1 for alpha-pinene. The results from this

  9. Silicon concentrator cells in a two-stage photovoltaic system with a concentration factor of 300x

    Energy Technology Data Exchange (ETDEWEB)

    Mohr, A.

    2005-06-15

    In this work a rear contacted silicon concentrator cell was developed for an application in a two stage concentrator photovoltaic system. This system was developed at Fraunhofer ISE some years ago. The innovation of this one-axis tracked system is that it enables a high geometrical concentration of 300x in combination with a high optical efficiency (around 78%) and a large acceptance angle of ±23.5° all year through. For this, the system uses a parabolic mirror (40.4x) and a three dimensional second stage consisting of compound parabolic concentrators (CPCs, 7.7x). For the concentrator concept and particularly for an easy cell integration, the rear line contacted concentrator (RLCC) cells with a maximum efficiency of 25% were developed and a hybrid mounting concept for the RLCC cells is presented. The optical performance of different CPC materials was tested and analysed in this work. Finally, small modules consisting of six series interconnected RLCC cells and six CPCs were integrated into the concentrator system and tested outdoor. A system efficiency of 16.2% was reached at around 800 W/m² direct irradiance under realistic outdoor conditions. (orig.)

  10. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    Science.gov (United States)

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
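
    As background for the signal detection quantities involved, the snippet below computes the equal-variance Gaussian estimates of sensitivity (d') and criterion from hit and false-alarm counts for a single yes/no stage; the paper's maximum likelihood machinery generalises this to two-stage strategies, and the counts here are hypothetical.

```python
from scipy.stats import norm

# Equal-variance Gaussian signal detection model, single decision stage:
# closed-form estimates of d' and criterion from hit / false-alarm counts.
hits, misses = 42, 8                  # signal trials
false_alarms, correct_rej = 12, 38    # noise trials

p_hit = hits / (hits + misses)
p_fa = false_alarms / (false_alarms + correct_rej)

d_prime = norm.ppf(p_hit) - norm.ppf(p_fa)
criterion = -0.5 * (norm.ppf(p_hit) + norm.ppf(p_fa))
print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
```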

  11. Alternative approaches to evaluation of cow efficiency | MacNeil ...

    African Journals Online (AJOL)

    Estimated breeding values based on the preceding results and using the maternal genetic effect on ADG as a proxy for the direct genetic effect on milk production were combined in six indexes of cow efficiency. These indexes sought to increase output and decrease input simultaneously, to increase output holding input ...

  12. A direct mining approach to efficient constrained graph pattern discovery

    DEFF Research Database (Denmark)

    Zhu, Feida; Zhang, Zequn; Qu, Qiang

    2013-01-01

    Despite the wealth of research on frequent graph pattern mining, how to efficiently mine the complete set of those with constraints still poses a huge challenge to the existing algorithms mainly due to the inherent bottleneck in the mining paradigm. In essence, mining requests with explicitly-spe...

  13. Efficient learning strategy of Chinese characters based on network approach.

    Directory of Open Access Journals (Sweden)

    Xiaoyong Yan

    Full Text Available We develop an efficient learning strategy of Chinese characters based on the network of the hierarchical structural relations between Chinese characters. A more efficient strategy is one that teaches the same number of useful Chinese characters with less effort or in less time. We construct a node-weighted network of Chinese characters, where character usage frequencies are used as node weights. Using this hierarchical node-weighted network, we propose a new learning method, the distributed node weight (DNW) strategy, which is based on a new measure of node importance that considers both the weight of a node and its location in the network hierarchical structure. Chinese character learning strategies, particularly their learning order, are analyzed as dynamical processes over the network. We compare the efficiency of three theoretical learning methods and two commonly used methods from mainstream Chinese textbooks, one for Chinese elementary school students and the other for students learning Chinese as a second language. We find that the DNW method significantly outperforms the others, implying that the efficiency of current learning methods of major textbooks can be greatly improved.
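
    To make the node-weighted-network idea concrete, the toy sketch below scores characters on a small composition graph by adding a character's own usage frequency to the frequencies of the characters built from it. This is only a stand-in for the general idea of combining node weight with hierarchical position; it is not the paper's distributed node weight (DNW) measure, and the tiny graph and frequencies are made up.

```python
import networkx as nx

# Edges point from a component character to characters composed of it.
G = nx.DiGraph()
freq = {"口": 0.9, "木": 0.8, "日": 0.7, "森": 0.2, "晶": 0.1, "呆": 0.15}
G.add_nodes_from(freq)
G.add_edges_from([("木", "森"), ("日", "晶"), ("口", "呆"), ("木", "呆")])

def importance(node: str) -> float:
    # Own usage frequency plus that of every character built from this one.
    return freq[node] + sum(freq[d] for d in nx.descendants(G, node))

order = sorted(G.nodes, key=importance, reverse=True)
print("suggested learning order:", " ".join(order))
```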

  14. The Relative Efficiency of Charter Schools: A Cost Frontier Approach

    Science.gov (United States)

    Gronberg, Timothy J.; Jansen, Dennis W.; Taylor, Lori L.

    2012-01-01

    Charters represent an expansion of public school choice, offering free, publicly funded educational alternatives to traditional public schools. One relatively unexplored research question concerning charter schools asks whether charter schools are more efficient suppliers of educational services than are traditional public schools. The potential…

  15. Efficient flow and human centred assembly by an interactive approach

    NARCIS (Netherlands)

    Eikhout, S.M.; Helmes, R.B.M.; Rhijn, J.W. van

    2004-01-01

    Due to fluctuations in the market, the manufacturing of many product variations, and the wish for fine-tuning between production and assembly, a fan and motor manufacturing company wanted to improve its assembly line. The aims were an efficient flow and human-centred assembly in the new product line.

  16. The KS-KT-100 plant for two-stage vitrification of radioactive waste: results of tests with simulators

    International Nuclear Information System (INIS)

    Davydov, V.I.; Dobrygin, P.G.; Dolgov, V.V.; Sergeev, G.A.

    1976-01-01

    The Soviet Union has developed a two-stage process for phosphate vitrification of liquid radioactive waste involving the use, at the initial stage, of calcination in the pseudo-liquefied layer, followed by melting of the calcinate in a ceramic crucible (second stage). On the basis of the laboratory studies and bench tests using experimental equipment, the authors have developed and tried out an enlarged plant - the KS-KT-100. The plant includes units for preparing the solution, evaporation, calcination, melting and gas purification. The initial solution containing 240 g/litre of aluminium nitrate, 125 g/litre of sodium nitrate, 120 to 130 g/litre of orthophosphoric acid, and 90 to 150 g/litre of industrial molasses simulated fluxed nitrate waste. The tests have shown that the various units operate satisfactorily. The authors have determined the technological parameters for evaporation, calcination of the solution and melting of the calcinate. The presence of molasses in the solution (150 g/litre) makes it possible to decompose and distil 40% of the nitrate ion during evaporation. The calcination temperature is 350 to 400 °C, and the fluidization rate 1.5 m/s. The capacity of the plant for the initial solution is 100 litres/h, for the evaporated solution 65 litres/h, and for the glass 20 kg/h. The efficiency of the gas purification system ranges between 10^7 and 10^9. The test results show the feasibility of the two-stage method of vitrification in actual practice. (author)

  17. Pilot investigation of two-stage biofiltration for removal of natural organic matter in drinking water treatment.

    Science.gov (United States)

    Fu, Jie; Lee, Wan-Ning; Coleman, Clark; Meyer, Melissa; Carter, Jason; Nowack, Kirk; Huang, Ching-Hua

    2017-01-01

    A pilot study employing two parallel trains of two-stage biofiltration, i.e., a sand/anthracite (SA) biofilter followed by a biologically-active granular activated carbon (GAC) contactor, was conducted to test the efficiency, feasibility and stability of biofiltration for removing natural organic matter (NOM) after coagulation in a drinking water treatment plant. Results showed the biofiltration process could effectively remove turbidity and NOM (24% of dissolved organic carbon (DOC), >57% of UV254, and >44% of SUVA254), where the SA biofilters showed a strong capacity for turbidity removal, while the GAC contactors played the dominant role in NOM removal. The vertical profile of water quality in the GAC contactors indicated the middle-upper portion was the critical zone for the removal of NOM, where relatively higher adsorption and enhanced biological removal were afforded. Fluorescence excitation-emission matrix (EEM) analysis of NOM showed that the GAC contactors effectively decreased the content of the humic-like component, while the protein-like component was refractory to the biofiltration process. Nutrient (NH4-N and PO4-P) supplementation applied upstream of one of the two-stage biofiltration trains (called engineered biofiltration) stimulated the growth of microorganisms and showed a modest effect on promoting the biological removal of small non-aromatic compositions in NOM. Redundancy analysis (RDA) indicated influent UV254 was the most explanatory water quality parameter for the GAC contactors' treatment performance, and a high load of UV254 would result in significantly reduced removals of UV254 and SUVA254. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Two-Stage Enzymatic Preparation of Eicosapentaenoic Acid (EPA) And Docosahexaenoic Acid (DHA) Enriched Fish Oil Triacylglycerols.

    Science.gov (United States)

    Zhang, Zhen; Liu, Fang; Ma, Xiang; Huang, Huihua; Wang, Yong

    2018-01-10

    Fish oil products in the form of triacylglycerols generally have relatively low contents of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), so it is of research and industrial interest to enrich these contents in commercial products. An economical and efficient two-stage preparation of EPA- and DHA-enriched fish oil triacylglycerols is therefore proposed in this study. The first stage was the partial hydrolysis of fish oil by only 0.2 wt.‰ AY "Amano" 400SD, which increased the EPA and DHA contents in the acylglycerols from 19.30 and 13.09 wt % to 25.95 and 22.06 wt %, respectively. Subsequently, the products of the first stage were subjected to transesterification with EPA- and DHA-enriched fatty acid ethyl esters (EDEE) as the second stage, affording EPA- and DHA-enriched fish oil triacylglycerols with as little as 2 wt % Novozyme 435. EDEEs prepared from fish oil ethyl ester and from recycled DHA and EPA, respectively, were applied in this stage. The final products prepared with the two different sources of EDEE were composed of 97.62 and 95.92 wt % triacylglycerols, respectively, with EPA and DHA contents of 28.20 and 21.41 wt % for the former and 25.61 and 17.40 wt % for the latter. The results not only demonstrate the capability and industrial value of this two-stage process for enriching EPA and DHA in fish oil products, but also offer new opportunities for the development of fortified fish oil products.

  19. Diagnosis Of Persistent Infection In Prosthetic Two-Stage Exchange: PCR analysis of Sonication fluid From Bone Cement Spacers.

    Science.gov (United States)

    Mariaux, Sandrine; Tafin, Ulrika Furustrand; Borens, Olivier

    2017-01-01

    Introduction: When treating periprosthetic joint infections with a two-stage procedure, antibiotic-impregnated spacers are used in the interval between removal of the prosthesis and reimplantation. In our experience, cultures of sonicated spacers are most often negative. The objective of our study was to investigate whether PCR analysis would improve the detection of bacteria in the spacer sonication fluid. Methods: A prospective monocentric study was performed from September 2014 to January 2016. Inclusion criteria were a two-stage procedure for prosthetic infection and agreement of the patient to participate in the study. Besides tissue samples and sonication, broad-range bacterial PCRs, specific S. aureus PCRs and Unyvero multiplex PCRs were performed on the sonicated spacer fluid. Results: 30 patients were identified (15 hip, 14 knee and 1 ankle replacements). At reimplantation, cultures of tissue samples and spacer sonication fluid were all negative. Broad-range PCRs were all negative. Specific S. aureus PCRs were positive in 5 cases. Two persistent infections and four cases of infection recurrence were observed, with bacteria different from those of the initial infection in three cases. Conclusion: The three different types of PCR did not detect any bacteria in spacer sonication fluid that was culture-negative. In our study, PCR did not improve bacterial detection and did not help to predict whether the patient would present a persistent or recurrent infection. Prosthetic two-stage exchange with a short interval and an antibiotic-impregnated spacer is an efficient treatment to eradicate infection, as both culture- and molecular-based methods were unable to detect bacteria in spacer sonication fluid after reimplantation.

  20. Experiences from the full-scale implementation of a new two-stage vertical flow constructed wetland design.

    Science.gov (United States)

    Langergraber, Guenter; Pressl, Alexander; Haberl, Raimund

    2014-01-01

    This paper describes the results of the first full-scale implementation of a two-stage vertical flow constructed wetland (CW) system developed to increase nitrogen removal. The full-scale system was constructed for the Bärenkogelhaus, which is located in Styria at the top of a mountain, 1,168 m above sea level. The Bärenkogelhaus has a restaurant with 70 seats, 16 rooms for overnight guests and is a popular site for day visits, especially during weekends and public holidays. The CW treatment system was designed for a hydraulic load of 2,500 L/d with a specific surface area requirement of 2.7 m² per person equivalent (PE). It was built in fall 2009 and started operation in April 2010 when the restaurant was re-opened. Samples were taken between July 2010 and June 2013 and were analysed in the laboratory of the Institute of Sanitary Engineering at BOKU University using standard methods. During 2010 the restaurant at Bärenkogelhaus was open 5 days a week whereas from 2011 the Bärenkogelhaus was open only on demand for events. This resulted in decreased organic loads of the system in the later period. In general, the measured effluent concentrations were low and the removal efficiencies high. During the whole period the ammonia nitrogen effluent concentration was below 1 mg/L even at effluent water temperatures below 3 °C. Investigations during high-load periods, i.e. events like weddings and festivals at weekends, with more than 100 visitors, showed a very robust treatment performance of the two-stage CW system. Effluent concentrations of chemical oxygen demand and NH4-N were not affected by these events with high hydraulic loads.

  1. Clinical evaluation of two-stage mandibular wisdom tooth extraction method to avoid mental nerve paresthesia

    International Nuclear Information System (INIS)

    Nozoe, Etsuro; Nakamura, Yasunori; Okawachi, Takako; Ishihata, Kiyohide; Shinnakasu, Mana; Nakamura, Norifumi

    2011-01-01

    Clinical courses following two-stage mandibular wisdom tooth extraction (TMWTE), carried out to prevent postoperative mental nerve paresthesia (MNP), were analyzed. When panoramic X-ray showed overlapping of the wisdom tooth root on the superior half or more of the mandibular canal, interruption of the white line of the superior wall of the canal, or diversion of the canal, CT examination was then performed. In cases where contact between the tooth root and the canal was demonstrated on CT examination, TMWTE was selected after gaining the patient's consent. TMWTE consisted of removing more than half of the tooth crown in the first step and extracting the tooth root in a second step after 2-3 months. The clinical features of the extracted wisdom teeth and the postoperative courses, including tooth movement and occurrence of MNP during two-stage MWTE, were evaluated. TMWTE was carried out for 40 teeth among 811 wisdom teeth (4.9%) extracted from 2007 to 2009. Among them, complete procedures were accomplished in 39 teeth, and crown removal was performed insufficiently at the first-stage operation in one tooth. Tooth movement was detected in 37 of 40 cases (92.5%). No postoperative MNP was observed in cases in which complete two-stage MWTE was carried out, but one case with insufficient crown removal was complicated by postoperative MNP. Seven mild complications (dehiscence, cold sensitivity, etc.) were noted after the first-stage operation. Therefore, we conclude that TMWTE for high-risk cases assessed by X-ray findings is useful to avoid MNP after MWTE. (author)

  2. Recent developments of a two-stage light gas gun for pellet injection

    International Nuclear Information System (INIS)

    Reggiori, A.

    1984-01-01

    A report is given on a two-stage pneumatic gun operated with ambient air as the first-stage driver which has been built and tested. Cylindrical polyethylene pellets of 1 mm diameter and 1 mm length have been launched at velocities up to 1800 m/s, with divergence angles of the pellet trajectory less than 1°. It is possible to optimize the pressure pulse for pellets of different masses, simply by changing the mass of the piston and/or the initial pressures in the second stage. (author)

  3. Grids heat loading of an ion source in two-stage acceleration system

    International Nuclear Information System (INIS)

    Okumura, Yoshikazu; Ohara, Yoshihiro; Ohga, Tokumichi

    1978-05-01

    Heat loading of the extraction grids, which is one of the critical problems limiting the beam pulse duration at high power level, has been investigated experimentally, with an ion source in a two-stage acceleration system of four multi-aperture grids. The loading of each grid depends largely on extraction current and grid gap pressures; it decreases with improvement of the beam optics and with decrease of the pressures. In optimum operating modes, its level is typically less than ~2% of the total beam power, or ~200 W/cm², at beam energies of 50-70 kV. (auth.)

  4. Two-Stage Maximum Likelihood Estimation (TSMLE) for MT-CDMA Signals in the Indoor Environment

    Directory of Open Access Journals (Sweden)

    Sesay Abu B

    2004-01-01

    This paper proposes a two-stage maximum likelihood estimation (TSMLE) technique suited for multitone code division multiple access (MT-CDMA) system. Here, an analytical framework is presented in the indoor environment for determining the average bit error rate (BER) of the system, over Rayleigh and Ricean fading channels. The analytical model is derived for quadrature phase shift keying (QPSK) modulation technique by taking into account the number of tones, signal bandwidth (BW), bit rate, and transmission power. Numerical results are presented to validate the analysis, and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.
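
    The closed-form BER analysis above is specific to the MT-CDMA receiver, but the Rayleigh-fading QPSK baseline it is validated against is easy to reproduce numerically. The sketch below is a minimal Monte Carlo check of coherent QPSK over a flat Rayleigh channel with perfect channel knowledge; it is not the multitone system itself, and all simulation parameters are illustrative.

```python
# Minimal Monte Carlo check of QPSK bit error rate over a flat Rayleigh fading
# channel with perfect channel knowledge. This reproduces only the single-user
# baseline, not the MT-CDMA receiver analysed in the paper.
import numpy as np

rng = np.random.default_rng(1)

def qpsk_rayleigh_ber(ebn0_db, n_bits=200_000):
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, size=n_bits)
    # Gray-mapped QPSK: one bit on I, one on Q, unit symbol energy (Es = 1, Eb = 1/2)
    i = 1 - 2 * bits[0::2]
    q = 1 - 2 * bits[1::2]
    s = (i + 1j * q) / np.sqrt(2)
    # Flat Rayleigh fading and complex AWGN with N0 = Eb / (Eb/N0)
    h = (rng.standard_normal(s.size) + 1j * rng.standard_normal(s.size)) / np.sqrt(2)
    n0 = 0.5 / ebn0
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(s.size) + 1j * rng.standard_normal(s.size))
    y = (h * s + noise) / h                 # zero-forcing equalisation (perfect CSI)
    bits_hat = np.empty(n_bits, dtype=int)
    bits_hat[0::2] = (y.real < 0).astype(int)
    bits_hat[1::2] = (y.imag < 0).astype(int)
    return np.mean(bits_hat != bits)

for snr in (0, 5, 10, 15, 20):
    g = 10 ** (snr / 10)
    theory = 0.5 * (1 - np.sqrt(g / (1 + g)))   # closed-form Rayleigh BER per bit
    print(f"Eb/N0 = {snr:2d} dB   simulated {qpsk_rayleigh_ber(snr):.4f}   theory {theory:.4f}")
```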

  5. The global stability of a delayed predator-prey system with two stage-structure

    International Nuclear Information System (INIS)

    Wang Fengyan; Pang Guoping

    2009-01-01

    Based on the classical delayed stage-structured model and the Lotka-Volterra predator-prey model, we introduce and study a delayed predator-prey system in which both prey and predator have two stages, an immature stage and a mature stage. The time delays are the time lengths between birth and maturity of the prey and predator species. Results on the global asymptotic stability of nonnegative equilibria of the delayed system are given; these generalize earlier results and suggest that good continuity exists between the predator-prey system and its corresponding stage-structured system.

  6. Two-Stage Load Shedding for Secondary Control in Hierarchical Operation of Islanded Microgrids

    DEFF Research Database (Denmark)

    Zhou, Quan; Li, Zhiyi; Wu, Qiuwei

    2018-01-01

    A two-stage load shedding scheme is presented to cope with the severe power deficit caused by microgrid islanding. Coordinated with the fast response of inverter-based distributed energy resources (DERs), the load shedding at each stage and the resulting power flow redistribution are estimated. The first stage of load shedding ceases the rapid frequency decline, with the measured frequency deviation employed to guide the load shedding level and process. Once a new steady state is reached, the second stage is activated, which performs load shedding according to the priorities of loads...
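
    As a rough illustration of the staged logic described above, the sketch below separates a fast, frequency-deviation-driven shedding step from a slower, priority-ordered step applied once a new steady state is reached. The thresholds, gain and load list are invented for the example and are not taken from the paper.

```python
# Schematic of a two-stage under-frequency load shedding decision, loosely
# following the staged logic described above. Thresholds, gains and the
# load list are illustrative assumptions, not values from the paper.
F_NOM = 50.0          # nominal frequency (Hz)
DEADBAND = 0.2        # no shedding for small deviations (Hz)
STAGE1_GAIN = 0.5     # fraction of remaining load shed per Hz of deviation

def stage1_shed(freq_hz, served_load_kw):
    """Fast stage: shed load in proportion to the measured frequency deviation."""
    deviation = F_NOM - freq_hz
    if deviation <= DEADBAND:
        return 0.0
    return min(served_load_kw, STAGE1_GAIN * deviation * served_load_kw)

def stage2_shed(loads, power_deficit_kw):
    """Steady-state stage: shed lowest-priority loads first until the deficit is covered."""
    shed, remaining = [], power_deficit_kw
    for name, kw, priority in sorted(loads, key=lambda l: l[2], reverse=True):
        if remaining <= 0:
            break
        shed.append(name)
        remaining -= kw
    return shed, max(remaining, 0.0)

# Example: island with 120 kW served load, frequency sagging to 49.1 Hz,
# and a residual 25 kW deficit once the frequency has settled.
loads = [("critical", 40, 1), ("commercial", 50, 2), ("deferrable", 30, 3)]
print("stage 1 shed:", stage1_shed(49.1, 120.0), "kW")
print("stage 2 shed:", stage2_shed(loads, 25.0))
```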

  7. The rearrangement process in a two-stage broadcast switching network

    DEFF Research Database (Denmark)

    Jacobsen, Søren B.

    1988-01-01

    The rearrangement process in the two-stage broadcast switching network presented by F.K. Hwang and G.W. Richards (ibid., vol. COM-33, no. 10, p. 1025-1035, Oct. 1985) is considered. By defining a certain function it is possible to calculate an upper bound on the number of connections to be moved during a rearrangement. When each inlet channel appears twice, the maximum number of connections to be moved is found. For a special class of inlet assignment patterns in which each inlet channel appears three times, the maximum number of connections to be moved is also found. In the general...

  8. Risk-Averse Suppliers’ Optimal Pricing Strategies in a Two-Stage Supply Chain

    Directory of Open Access Journals (Sweden)

    Rui Shen

    2013-01-01

    Risk-averse suppliers' optimal pricing strategies in two-stage supply chains under a competitive environment are discussed. The suppliers in this paper focus more on losses than on profits, and they care about their long-term relationships with their customers. We introduce for the suppliers a loss function that covers both current loss and future loss. The optimal wholesale price is solved under risk-neutral, risk-averse, and combined loss-minimization and risk-control settings, respectively. Besides, some properties of and relations among these optimal wholesale prices are given as well. A numerical example is given to illustrate the performance of the proposed method.

  9. Modelling of Two-Stage Methane Digestion With Pretreatment of Biomass

    Science.gov (United States)

    Dychko, A.; Remez, N.; Opolinskyi, I.; Kraychuk, S.; Ostapchuk, N.; Yevtieieva, L.

    2018-04-01

    Systems of anaerobic digestion should be used for the processing of organic waste. Managing the process of anaerobic recycling of organic waste requires reliable prediction of biogas production. A mathematical model of the organic waste digestion process is developed that allows the rate of biogas output to be determined for the two-stage anaerobic digestion process, taking the first stage into account. Verification of Konto's model, based on the studied anaerobic processing of organic waste, is implemented. The dependences of biogas output and of its rate on time are established and may be used to predict the process of anaerobic processing of organic waste.
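
    For readers who want to reproduce this kind of prediction, the sketch below fits a generic first-order kinetic curve to cumulative biogas data and derives the output rate from it. This is a common simple choice for such data, not necessarily the exact two-stage model verified in the study, and the measurements shown are hypothetical.

```python
# Minimal sketch: fitting a first-order kinetic curve to cumulative biogas data
# and deriving the production rate. This is a generic one-stage kinetic form
# often used for such data, not necessarily the exact model verified here.
import numpy as np
from scipy.optimize import curve_fit

def cumulative_biogas(t, b_max, k):
    """Cumulative yield B(t) = B_max * (1 - exp(-k t))."""
    return b_max * (1.0 - np.exp(-k * t))

# Hypothetical measurements: time (days) and cumulative biogas (L per kg VS)
t_obs = np.array([0, 2, 5, 10, 15, 20, 30])
b_obs = np.array([0, 90, 190, 300, 360, 395, 430])

(b_max, k), _ = curve_fit(cumulative_biogas, t_obs, b_obs, p0=(450.0, 0.1))
rate = b_max * k * np.exp(-k * t_obs)          # dB/dt, the biogas output rate
print(f"B_max = {b_max:.1f} L/kg VS, k = {k:.3f} 1/day")
print("rate at observation times:", np.round(rate, 1))
```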

  10. Simple Digital Control of a Two-Stage PFC Converter Using DSPIC30F Microprocessor

    DEFF Research Database (Denmark)

    Török, Lajos; Munk-Nielsen, Stig

    2010-01-01

    The use of dsPIC digital signal controllers (DSCs) in switch-mode power supply (SMPS) applications opens new perspectives for cheap and flexible digital control solutions. This paper presents the digital control of a two-stage power factor corrector (PFC) converter. The PFC circuit is designed and built for a 70 W rated output power. Average current mode control for the boost converter and current-programmed control for the forward converter are implemented on a dsPIC30F1010. The pulse width modulation (PWM) technique is used to drive the switching MOSFETs. Results show that digital solutions with ds...

  11. An Investigation on the Formation of Carbon Nanotubes by Two-Stage Chemical Vapor Deposition

    Directory of Open Access Journals (Sweden)

    M. S. Shamsudin

    2012-01-01

    A high density of carbon nanotubes (CNTs) has been synthesized from an agricultural hydrocarbon, camphor oil, using a one-hour synthesis time and a titanium dioxide sol-gel catalyst. The pyrolysis temperature is studied in the range of 700–900°C at increments of 50°C. The synthesis process is carried out using a custom-made two-stage catalytic chemical vapor deposition apparatus. The CNT characteristics are investigated by field emission scanning electron microscopy and micro-Raman spectroscopy. The experimental results show that the structural properties of the CNTs are highly dependent on the pyrolysis temperature.

  12. The Design, Construction and Operation of a 75 kW Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Henriksen, Ulrik Birk; Ahrenfeldt, Jesper; Jensen, Torben Kvist

    2003-01-01

    The Two-Stage Gasifier was operated for several weeks (465 hours), 190 hours of which were continuous. The gasifier is operated automatically, unattended, day and night, and only small adjustments of the feeding rate were necessary once or twice a day. The operation was successful, and the output was as expected. The engine operated well on the produced gas, and no deposits were observed in the engine afterwards. The bag house filter was an excellent and well operating gas cleaning system. Small amounts of deposits consisting of salts and carbonates were observed in the hot gas heat exchangers. The top...

  13. High-speed pellet injection with a two-stage pneumatic gun

    International Nuclear Information System (INIS)

    Reggiori, A.; Carlevaro, R.; Riva, G.; Daminelli, G.B.; Scaramuzzi, F.; Frattolillo, A.; Martinis, L.; Cardoni, P.; Mori, L.

    1988-01-01

    The injection of pellets of frozen hydrogen isotopes into fusion plasmas is envisioned as a fueling technique for future fusion reactors. Research is underway to obtain high injection speeds for solid H2 and D2 pellets. The optimization of a two-stage light gas gun is being pursued by the Milano group; the search for a convenient method of creating pellets with good mechanical properties and a secure attachment to the cold surface on which they are formed is carried out in Frascati. Velocities >2000 m/s have been obtained, but reproducibility is not yet satisfactory

  14. Whole genome sequencing: an efficient approach to ensuring food safety

    Science.gov (United States)

    Lakicevic, B.; Nastasijevic, I.; Dimitrijevic, M.

    2017-09-01

    Whole genome sequencing is an effective, powerful tool that can be applied to a wide range of public health and food safety applications. A major difference between WGS and the traditional typing techniques is that WGS allows all genes to be included in the analysis, instead of a well-defined subset of genes or variable intergenic regions. Also, the use of WGS can facilitate the understanding of contamination/colonization routes of foodborne pathogens within the food production environment, and can also afford efficient tracking of pathogens’ entry routes and distribution from farm-to-consumer. Tracking foodborne pathogens in the food processing-distribution-retail-consumer continuum is of the utmost importance for facilitation of outbreak investigations and rapid action in controlling/preventing foodborne outbreaks. Therefore, WGS likely will replace most of the numerous workflows used in public health laboratories to characterize foodborne pathogens into one consolidated, efficient workflow.

  15. Generalized Hurst exponent approach to efficiency in MENA markets

    Science.gov (United States)

    Sensoy, A.

    2013-10-01

    We study the time-varying efficiency of 15 Middle East and North African (MENA) stock markets by generalized Hurst exponent analysis of daily data with a rolling window technique. The study covers a time period of six years from January 2007 to December 2012. The results reveal that all MENA stock markets exhibit different degrees of long-range dependence varying over time and that the Arab Spring has had a negative effect on market efficiency in the region. The least inefficient market is found to be Turkey, followed by Israel, while the most inefficient markets are Iran, Tunisia, and UAE. Turkey and Israel show characteristics of developed financial markets. Reasons and implications are discussed.
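
    A minimal sketch of the underlying computation, assuming the usual definition of the generalized Hurst exponent from the scaling of q-th order moments of absolute increments, is given below. The window length, lags and q are illustrative choices rather than those used in the study.

```python
# Minimal sketch of a rolling-window generalized Hurst exponent, based on the
# scaling E|x(t+tau) - x(t)|^q ~ tau^(q H(q)). Window length, lags and q are
# illustrative choices, not those of the study.
import numpy as np

def generalized_hurst(x, q=2, max_lag=20):
    lags = np.arange(2, max_lag + 1)
    moments = np.array([np.mean(np.abs(x[lag:] - x[:-lag]) ** q) for lag in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(moments), 1)
    return slope / q            # H(q)

def rolling_hurst(series, window=250, step=10, q=2):
    return [
        (end, generalized_hurst(series[end - window:end], q=q))
        for end in range(window, len(series) + 1, step)
    ]

# Demo on a synthetic random walk: H(2) should hover around 0.5,
# the usual efficient-market benchmark.
rng = np.random.default_rng(0)
prices = np.cumsum(rng.standard_normal(1500))
for end, h in rolling_hurst(prices, window=500, step=250):
    print(f"window ending at {end}: H(2) = {h:.2f}")
```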

  16. Measuring efficiency of international crude oil markets: A multifractality approach

    Science.gov (United States)

    Niere, H. M.

    2015-01-01

    The three major international crude oil markets are treated as complex systems and their multifractal properties are explored. The study covers daily prices of Brent crude, the OPEC reference basket and West Texas Intermediate (WTI) crude from January 2, 2003 to January 2, 2014. A multifractal detrended fluctuation analysis (MFDFA) is employed to extract the generalized Hurst exponents in each of the time series. The generalized Hurst exponent is used to measure the degree of multifractality, which in turn is used to quantify the efficiency of the three international crude oil markets. To identify whether the source of multifractality is long-range correlations or broad fat-tail distributions, shuffled data and surrogate data corresponding to each of the time series are generated. Shuffled data are obtained by randomizing the order of the price returns data; this destroys any long-range correlation of the time series. Surrogate data are produced using the Fourier-detrended fluctuation analysis (F-DFA), by randomizing the phases of the price returns data in Fourier space; this normalizes the distribution of the time series. The study found that for the three crude oil markets there is a strong dependence of the generalized Hurst exponents on the order of fluctuations, which shows that the daily price time series of the markets under study have signs of multifractality. Using the degree of multifractality as a measure of efficiency, the results show that WTI is the most efficient while OPEC is the least efficient market. This implies that OPEC has the highest likelihood of being manipulated among the three markets, and reflects the fact that Brent and WTI are very competitive markets with a higher level of complexity compared with OPEC, which has large monopoly power. Comparing with the shuffled and surrogate data, the findings suggest that for all three crude oil markets the multifractality is mainly due to long-range correlations.
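
    The two surrogate constructions mentioned above are straightforward to reproduce; the sketch below shows shuffling (destroys correlations, keeps the distribution) and Fourier phase randomization (keeps the spectrum, normalizes the distribution). The MFDFA step itself is omitted, and the synthetic series is only for illustration.

```python
# Minimal sketch of the two surrogate constructions mentioned above: shuffling
# destroys long-range correlations, while phase randomization in Fourier space
# preserves the power spectrum but Gaussianises the distribution.
import numpy as np

rng = np.random.default_rng(7)

def shuffled_surrogate(returns):
    """Random permutation: same distribution, correlations destroyed."""
    return rng.permutation(returns)

def phase_randomized_surrogate(returns):
    """Randomize Fourier phases: same spectrum, fat tails removed."""
    spectrum = np.fft.rfft(returns)
    phases = rng.uniform(0, 2 * np.pi, size=spectrum.size)
    phases[0] = 0.0                      # keep the mean (DC component) real
    if returns.size % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist component real
    randomized = np.abs(spectrum) * np.exp(1j * phases)
    return np.fft.irfft(randomized, n=returns.size)

# Example on heavy-tailed synthetic "returns": the surrogate kurtosis drops toward 3
r = rng.standard_t(df=3, size=2048)
s = phase_randomized_surrogate(r)
kurt = lambda x: float(np.mean((x - x.mean()) ** 4) / x.var() ** 2)
print("original kurtosis:", round(kurt(r), 2), " surrogate kurtosis:", round(kurt(s), 2))
```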

  17. Efficient channel estimation in massive MIMO systems - a distributed approach

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2016-01-21

    We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of (1) generic and (2) sparse channels are considered. The algorithms estimate the impulse response for each channel observed by the antennas at the receiver (base station) in a coordinated manner by sharing minimal information among neighboring antennas. Simulations demonstrate the superior performance of the proposed methods as compared to other methods.

  18. An exergy approach to efficiency evaluation of desalination

    KAUST Repository

    Ng, Kim Choon

    2017-05-02

    This paper presents an evaluation of process efficiency based on the consumption of primary energy for all types of practical desalination methods available hitherto. The conventional performance ratio has, thus far, been defined with respect to the consumption of derived energy, such as electricity or steam, which is subject to the conversion losses of the power plants and boilers that burned the input primary fuels. As derived energies are usually expressed in units of either kWh or Joules, these units cannot accurately differentiate the grade of energy supplied to the processes. In this paper, the specific energy consumption is revisited for the efficacy of all large-scale desalination plants. In today's combined production of electricity and desalinated water, accomplished with the advanced cogeneration concept, the input exergy of fuels is utilized optimally and efficiently in a temperature-cascaded manner. By discerning the exergy destruction successively in the turbines and desalination processes, the relative contribution of primary energy to the processes can be accurately apportioned to the input primary energy. Although efficiency is not a law of thermodynamics, a common platform for expressing figures of merit specific to the efficacy of desalination processes can be developed meaningfully, with thermodynamic rigor up to the ideal or thermodynamic limit of seawater desalination, for all scientists and engineers to aspire to.

  19. Efficient and robust cell detection: A structured regression approach.

    Science.gov (United States)

    Xie, Yuanpu; Xing, Fuyong; Shi, Xiaoshuang; Kong, Xiangfei; Su, Hai; Yang, Lin

    2018-02-01

    Efficient and robust cell detection serves as a critical prerequisite for many subsequent biomedical image analysis methods and computer-aided diagnosis (CAD). It remains a challenging task due to touching cells, inhomogeneous background noise, and large variations in cell sizes and shapes. In addition, the ever-increasing amount of available datasets and the high resolution of whole-slide scanned images pose a further demand for efficient processing algorithms. In this paper, we present a novel structured regression model based on a proposed fully residual convolutional neural network for efficient cell detection. For each testing image, our model learns to produce a dense proximity map that exhibits higher responses at locations near cell centers. Our method only requires a few training images with weak annotations (just one dot indicating the cell centroids). We have extensively evaluated our method using four different datasets, covering different microscopy staining methods (e.g., H & E or Ki-67 staining) or image acquisition techniques (e.g., bright-field imaging or phase contrast). Experimental results demonstrate the superiority of our method over existing state-of-the-art methods in terms of both detection accuracy and running time. Copyright © 2017. Published by Elsevier B.V.
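
    The training target described above, a dense proximity map peaking at annotated cell centres, can be sketched as below from dot annotations. The Gaussian-of-distance form and the sigma value are plausible assumptions for illustration, not the exact definition used in the paper.

```python
# Minimal sketch of turning dot annotations (cell centroids) into a dense
# proximity map that peaks at cell centres. The Gaussian-of-distance form and
# sigma used here are assumptions, not the exact definition in the paper.
import numpy as np
from scipy.ndimage import distance_transform_edt

def proximity_map(shape, centers, sigma=4.0):
    """Return a map in [0, 1] with higher responses near annotated centres."""
    seeds = np.ones(shape, dtype=bool)
    for r, c in centers:
        seeds[r, c] = False                      # zeros mark the dot annotations
    dist = distance_transform_edt(seeds)         # distance to the nearest annotation
    return np.exp(-(dist ** 2) / (2.0 * sigma ** 2))

# Example: a 64x64 patch with three annotated cells
target = proximity_map((64, 64), [(10, 12), (30, 40), (50, 20)])
print(target.shape, float(target.max()), round(float(target[10, 12]), 3))
```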

  20. An exergy approach to efficiency evaluation of desalination

    Science.gov (United States)

    Ng, Kim Choon; Shahzad, Muhammad Wakil; Son, Hyuk Soo; Hamed, Osman A.

    2017-05-01

    This paper presents an evaluation of process efficiency based on the consumption of primary energy for all types of practical desalination methods available hitherto. The conventional performance ratio has, thus far, been defined with respect to the consumption of derived energy, such as electricity or steam, which is subject to the conversion losses of the power plants and boilers that burned the input primary fuels. As derived energies are usually expressed in units of either kWh or Joules, these units cannot accurately differentiate the grade of energy supplied to the processes. In this paper, the specific energy consumption is revisited for the efficacy of all large-scale desalination plants. In today's combined production of electricity and desalinated water, accomplished with the advanced cogeneration concept, the input exergy of fuels is utilized optimally and efficiently in a temperature-cascaded manner. By discerning the exergy destruction successively in the turbines and desalination processes, the relative contribution of primary energy to the processes can be accurately apportioned to the input primary energy. Although efficiency is not a law of thermodynamics, a common platform for expressing figures of merit specific to the efficacy of desalination processes can be developed meaningfully, with thermodynamic rigor up to the ideal or thermodynamic limit of seawater desalination, for all scientists and engineers to aspire to.

  1. Evaluating the Management System Approach for Industrial Energy Efficiency Improvements

    Directory of Open Access Journals (Sweden)

    Thomas Zobel

    2016-09-01

    Voluntary environmental management systems (EMS) based on the international standard ISO 14001 have become widespread globally in recent years. The purpose of this study is to assess the impact of voluntary management systems on energy efficiency in the Swedish manufacturing industry by means of objective industrial energy data derived from mandatory annual environmental reports. The study focuses on changes in energy efficiency over a period of 12 years and includes both ISO 14001-certified companies and non-certified companies. Consideration is given to energy improvement efforts in the companies before the adoption of ISO 14001. The analysis has been carried out using statistical methods for two different industrial energy parameters: electricity and fossil fuel consumption. The results indicate that ISO 14001 adoption and certification has increased energy efficiency regarding the use of fossil fuel. In contrast, no effect of the management systems has been found concerning the use of electricity. The mixed results of this study are only partly in line with the results of previous studies based on perceptions of company representatives.

  2. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which considers the treatment effect to be a random variable having some distribution, may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, now adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.
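
    A minimal Beta-Binomial sketch of the interim step in such a two-arm design is given below: it computes the posterior probability that the experimental arm beats control and then chooses a second-stage sample size by posterior-predictive simulation. The priors, thresholds and success criterion are illustrative assumptions, not the specific design proposed in the paper or the Whitehead et al. condition.

```python
# Minimal Beta-Binomial sketch of the interim step in a two-arm Bayesian design:
# (1) posterior probability that the experimental arm beats control, and
# (2) choosing a second-stage sample size by posterior-predictive simulation.
# Priors, thresholds and the success criterion are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

def prob_superior(x_e, n_e, x_c, n_c, a=1.0, b=1.0, draws=50_000):
    pe = rng.beta(a + x_e, b + n_e - x_e, draws)
    pc = rng.beta(a + x_c, b + n_c - x_c, draws)
    return float(np.mean(pe > pc))

def predictive_power(x_e, n_e, x_c, n_c, n2, success_cut=0.95, sims=1000):
    """Probability that, after n2 more patients per arm, P(pe > pc | all data) >= cut."""
    wins = 0
    for _ in range(sims):
        pe = rng.beta(1 + x_e, 1 + n_e - x_e)
        pc = rng.beta(1 + x_c, 1 + n_c - x_c)
        fe = rng.binomial(n2, pe)                  # future responses, experimental arm
        fc = rng.binomial(n2, pc)                  # future responses, control arm
        if prob_superior(x_e + fe, n_e + n2, x_c + fc, n_c + n2, draws=4000) >= success_cut:
            wins += 1
    return wins / sims

# Interim data: 12/20 responders vs 7/20; scan candidate second-stage sizes
print("interim P(superior):", round(prob_superior(12, 20, 7, 20), 3))
for n2 in (20, 40, 60):
    print(f"n2 = {n2}: predictive power ~ {predictive_power(12, 20, 7, 20, n2):.2f}")
```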

  3. A hybrid approach for efficient anomaly detection using metaheuristic methods

    Directory of Open Access Journals (Sweden)

    Tamer F. Ghanem

    2015-07-01

    Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used for anomaly detector generation. Yet, the reported literature has not studied the use of the multi-start metaheuristic method for detector generation. This paper proposes a hybrid approach for anomaly detection in large-scale datasets using detectors generated by a multi-start metaheuristic method and genetic algorithms. The proposed approach takes some inspiration from negative selection-based detector generation. The evaluation of this approach is performed using the NSL-KDD dataset, which is a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors with an accuracy of 96.1%, compared to competing machine learning algorithms.

  4. Design of efficient and safe neural stimulators a multidisciplinary approach

    CERN Document Server

    van Dongen, Marijn

    2016-01-01

    This book discusses the design of neural stimulator systems which are used for the treatment of a wide variety of brain disorders such as Parkinson's, depression and tinnitus. Whereas many existing books treating neural stimulation focus on one particular design aspect, such as the electrical design of the stimulator, this book uses a multidisciplinary approach: by combining the fields of neuroscience, electrophysiology and electrical engineering, a thorough understanding of the complete neural stimulation chain is created (from the stimulation IC down to the neural cell). This multidisciplinary approach enables readers to gain new insights into stimulator design, while context is provided by presenting innovative design examples. Provides a single-source, multidisciplinary reference to the field of neural stimulation, bridging an important knowledge gap among the fields of bioelectricity, neuroscience, neuroengineering and microelectronics; Uses a top-down approach to understanding the neural activation proc...

  5. PRODUCTIVITY AND EFFICIENCY OF AGRICULTURAL AND NON AGRICULTURAL BANKS IN THE UNITED STATES: DEA APPROACH

    OpenAIRE

    Dias, Weeratilake

    1998-01-01

    Efficient operation of agricultural credit markets is very important both for producers and for policy makers. A DEA approach is used for the productivity analysis, which allows decomposition of the sources of productivity change into efficiency change and technical change. The measured efficiencies are comparable to those of most recent parametric studies.

  6. Matrix approach to consistency of the additive efficient normalization of semivalues

    NARCIS (Netherlands)

    Xu, G.; Driessen, Theo; Sun, H.; Sun, H.

    2007-01-01

    In fact, the Shapley value is the unique efficient semivalue. This motivated Ruiz et al. to carry out additive efficient normalization for semivalues. In this paper, using a matrix approach, we derive the relationship between the additive efficient normalization of semivalues and the Shapley value. Based on the

  7. Simulative design and process optimization of the two-stage stretch-blow molding process

    Energy Technology Data Exchange (ETDEWEB)

    Hopmann, Ch.; Rasche, S.; Windeck, C. [Institute of Plastics Processing at RWTH Aachen University (IKV) Pontstraße 49, 52062 Aachen (Germany)

    2015-05-22

    The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.

  8. Simulative design and process optimization of the two-stage stretch-blow molding process

    International Nuclear Information System (INIS)

    Hopmann, Ch.; Rasche, S.; Windeck, C.

    2015-01-01

    The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress

  9. Hedgehog pathway mediates early acceleration of liver regeneration induced by a novel two-staged hepatectomy in mice.

    Science.gov (United States)

    Langiewicz, Magda; Schlegel, Andrea; Saponara, Enrica; Linecker, Michael; Borger, Pieter; Graf, Rolf; Humar, Bostjan; Clavien, Pierre A

    2017-03-01

    ALPPS, a novel two-staged approach for the surgical removal of large/multiple liver tumors, combines portal vein ligation (PVL) with parenchymal transection. This causes acceleration of compensatory liver growth, enabling faster and more extensive tumor removal. We sought to identify the plasma factors thought to mediate the regenerative acceleration following ALPPS. We compared a mouse model of ALPPS against PVL and additional control surgeries (n=6 per group). RNA deep sequencing was performed to identify candidate molecules unique to ALPPS liver (n=3 per group). Recombinant protein and a neutralizing antibody combined with appropriate surgeries were used to explore candidate functions in ALPPS (n=6 per group). Indian hedgehog (IHH/Ihh) levels were assessed in human ALPPS patient plasma (n=6). ALPPS in mice confirmed significant acceleration of liver regeneration relative to PVL. Ihh mRNA, coding for a secreted ligand inducing hedgehog signaling, was uniquely upregulated in ALPPS liver, and Ihh plasma levels rose 4 h after surgery. Ihh alone was sufficient to induce ALPPS-like acceleration of liver growth. Conversely, blocking Ihh markedly inhibited the accelerating effects of ALPPS. In the small cohort of ALPPS patients, IHH tended to be elevated early after surgery. Ihh and hedgehog pathway activation provide the first mechanistic insight into the acceleration of liver regeneration triggered by ALPPS surgery. The accelerating potency of recombinant Ihh, and its potential effect in human ALPPS, may lead to a clinical role for this protein. ALPPS, a novel two-staged hepatectomy, accelerates liver regeneration, thereby helping to treat patients with otherwise unresectable liver tumors. The molecular mechanisms behind this accelerated regeneration are unknown. Here, we elucidate that Indian hedgehog, a secreted ligand important for fetal development, is a crucial mediator of the regenerative acceleration triggered by ALPPS surgery. Copyright © 2016.

  10. Success rates for initial eradication of peri-prosthetic knee infection treated with a two-stage procedure.

    Science.gov (United States)

    Kaminski, Andrzej; Citak, Mustafa; Schildhauer, Thomas Armin; Fehmer, Tobias

    2014-01-01

    In Germany, rates of primary total knee arthroplasty procedures and exchange arthroplasty procedures continue to rise. Late-onset peri-prosthetic infection constitutes a serious complication whose management may be dependent upon the spectrum of micro-organisms involved. The aim of this study was to provide a retrospective analysis of the effectiveness of initial eradication measures performed as part of a two-stage procedure. Between 2002 and 2008, a total of 328 patients who had received a first-time diagnosis of chronic peri-prosthetic knee infection following total knee arthroplasty (TKA) subsequently underwent surgery at our clinic. The surgical approach consisted of a two-stage procedure, with the initial procedure consisting of the removal of the prosthesis and radical debridement, followed by insertion of an antibiotic-loaded static spacer. The effectiveness of the procedure was assessed after six weeks, with each patient undergoing a number of clinical and laboratory-based tests, including knee joint aspiration. Staphylococcus aureus strains were responsible for 68% (n=223) of the total number of cases of peri-prosthetic knee infection. 19% of cases (n=62) showed evidence of gram-negative bacteria, while MRSA accounted for 15% (n=49) of cases. Six weeks after completion of the above-named treatment regimen, eradication of infection was considered successful in 289 patients (88.1%). Eradication was unsuccessful in 22% of MRSA infections (n=11) and 7% of MSSA infections (n=23). The treatment regimen outlined in this report is capable of achieving satisfactory results in the management of late-onset peri-prosthetic knee infection, with one exception: patients with infections caused by MRSA showed high failure rates.

  11. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    Science.gov (United States)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
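
    The basic Bayesian structure described above can be sketched with a toy problem: a random-walk Metropolis sampler estimating a damage location and size from noisy strain readings produced by a cheap analytic forward model. The sparse-grid surrogate, weighted likelihood and DRAM sampler of the actual study are not reproduced here, and all numbers are illustrative.

```python
# Minimal random-walk Metropolis sketch for damage parameters (location x, size s)
# given noisy "strain" readings from a cheap analytic surrogate. The actual study
# uses a sparse-grid surrogate of a finite element model, a weighted likelihood and
# the DRAM sampler; this toy keeps only the basic Bayesian structure.
import numpy as np

rng = np.random.default_rng(42)
sensor_x = np.linspace(0.0, 1.0, 8)              # sensor positions along the part

def surrogate_strain(loc, size):
    """Toy forward model: a damage 'bump' raises strain near its location."""
    return 1.0 + size * np.exp(-((sensor_x - loc) ** 2) / 0.02)

# Synthetic measurements from a "true" damage state, polluted with noise
true_loc, true_size, noise_sd = 0.62, 0.8, 0.05
data = surrogate_strain(true_loc, true_size) + rng.normal(0, noise_sd, sensor_x.size)

def log_post(theta):
    loc, size = theta
    if not (0.0 <= loc <= 1.0 and 0.0 <= size <= 2.0):   # uniform prior bounds
        return -np.inf
    resid = data - surrogate_strain(loc, size)
    return -0.5 * np.sum(resid ** 2) / noise_sd ** 2

theta, lp, samples = np.array([0.5, 0.5]), None, []
lp = log_post(theta)
for it in range(20_000):
    prop = theta + rng.normal(0, 0.03, size=2)           # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:             # Metropolis accept/reject
        theta, lp = prop, lp_prop
    if it >= 5_000:                                      # discard burn-in
        samples.append(theta.copy())

samples = np.array(samples)
print("posterior mean (loc, size):", samples.mean(axis=0).round(3))
print("posterior std  (loc, size):", samples.std(axis=0).round(3))
```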

  12. AN EFFICIENT WEB PERSONALIZATION APPROACH TO DISCOVER USER INTERESTED DIRECTORIES

    Directory of Open Access Journals (Sweden)

    M. Robinson Joel

    2014-04-01

    Web Usage Mining is the application of data mining techniques to retrieve web usage from web proxy log files. Web Usage Mining consists of three major stages: preprocessing, clustering and pattern analysis. This paper explains each of these stages in detail. In the proposed approach, web directories are discovered based on the user's interest. The web proxy log file undergoes a preprocessing phase to improve the quality of the data. A fuzzy clustering algorithm is used to cluster the users and sessions into disjoint clusters. In this paper, an effective approach is presented for web personalization based on an advanced Apriori algorithm, which is used to select the user-interested web directories. The proposed method is compared with existing web personalization methods such as Objective Probabilistic Directory Miner (OPDM), Objective Community Directory Miner (OCDM) and Objective Clustering and Probabilistic Directory Miner (OCPDM). The results show that the proposed approach provides better results than the aforementioned existing approaches. Finally, an application is developed with the user-interested directories and web usage details.

  13. Energy Efficiency of ORM Approaches: an Empirical Evaluation

    NARCIS (Netherlands)

    Procaccianti, G.; Lago, P.; Diesveld, W.

    2016-01-01

    Context. Object-Relational Mapping (ORM) frameworks are widely used in business software applications to interact with database systems. Even if ORMs introduce several benefits when compared to a plain SQL approach, these techniques have known disadvantages. Goal. In this paper, we present an

  14. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase of the external coefficient or the internal coefficient has a negative influence on the sampling level. The changing rate of the potential market has no significant influence on the sampling level, whereas repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a whole-picture analysis of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when the parameter estimates are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847

  15. Multiobjective Two-Stage Stochastic Programming Problems with Interval Discrete Random Variables

    Directory of Open Access Journals (Sweden)

    S. K. Barik

    2012-01-01

    Most real-life decision-making problems have more than one conflicting and incommensurable objective function. In this paper, we present a multiobjective two-stage stochastic linear programming problem in which some parameters of the linear constraints are interval-type discrete random variables with known probability distributions. Randomness of the discrete intervals is considered for the model parameters. Further, the concepts of best optimum and worst optimum solution are analyzed in two-stage stochastic programming. To solve the stated problem, we first remove the randomness of the problem and formulate an equivalent deterministic linear programming model with multiobjective interval coefficients. Then the deterministic multiobjective model is solved using the weighting method, where we apply the solution procedure of interval linear programming. We obtain the upper and lower bounds of the objective function as the best and the worst values, respectively, which highlights the possible risk involved in the decision-making tool. A numerical example is presented to demonstrate the proposed solution procedure.

  16. Two-stage acid saccharification of fractionated Gelidium amansii minimizing the sugar decomposition.

    Science.gov (United States)

    Jeong, Tae Su; Kim, Young Soo; Oh, Kyeong Keun

    2011-11-01

    Two-stage acid hydrolysis was conducted on the easily reacting cellulose and the resistant cellulose of fractionated Gelidium amansii (f-GA). Acid hydrolysis of f-GA was performed at between 170 and 200 °C for a period of 0-5 min and an acid concentration of 2-5% (w/v, H2SO4) to determine the optimal conditions for acid hydrolysis. In the first stage of the acid hydrolysis, an optimum glucose yield of 33.7% was obtained at a reaction temperature of 190 °C, an acid concentration of 3.0%, and a reaction time of 3 min. In the second stage, a glucose yield of 34.2%, on the basis of the amount of residual cellulose from the f-GA, was obtained at a temperature of 190 °C, a sulfuric acid concentration of 4.0%, and a reaction time of 3.7 min. Finally, 68.58% of the cellulose derived from f-GA was converted into glucose through two-stage acid saccharification under the aforementioned conditions. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Use of a two-stage light-gas gun as an injector for electromagnetic railguns

    International Nuclear Information System (INIS)

    Shahinpoor, M.

    1989-01-01

    Ablation of wall materials is known to be a major factor limiting the performance of railguns. To minimize this effect, it is desirable to inject projectiles into the railgun at velocities greater than the ablation threshold velocity (6-8 km/s for copper rails). Because two-stage light-gas guns are capable of achieving such velocities, a program was initiated to design, build and evaluate the performance of a two-stage light gas gun, utilizing hydrogen gas, for use as an injector to an electromagnetic railgun. This effort is part of a project to develop a hypervelocity electromagnetic launcher (HELEOS) for use in equation-of-state studies. In this paper, the specific design features that enhance compatibility of the injector with the railgun are described, including a slip-joint between the injector launch tube and the coupling section to the railgun. The operational capabilities for using all major projectile velocity measuring techniques, such as in-bore pressure gauges, laser and CW x-ray interrupt techniques, flash x-ray, and continuous in-bore velocity measurement using VISAR interferometry, are also discussed. Finally, an internal ballistics code for optimizing gun performance has been utilized to interpret performance data of the gun

  18. Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.

    Science.gov (United States)

    Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping

    2015-06-07

    Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in oncology, to reduce the number of patients placed on ineffective experimental therapies. Recently Koyama and Chen (2008) discussed how to conduct proper inference for such studies, because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies in which the actual second-stage sample sizes differ from the planned ones. We consider an alternative inference method based on the likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the resulting estimate based on our method appears to have certain advantages in terms of inference properties in many numerical simulations. It generally led to smaller biases and narrower confidence intervals while maintaining similar coverages. We also illustrated the two methods in a real data setting. Inference procedures used with Simon's designs almost always ignore the actual sampling plan. Reported p-values, point estimates and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.
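
    The "common definition" of the p-value under a Simon design can be sketched as below for a trial that continued to the second stage, using the conventional ordering by total number of responses. This is the stagewise calculation the paper's likelihood-ratio ordering is compared against, not the proposed method itself; the design parameters in the example are illustrative, not from the paper.

```python
# Sketch of a stagewise p-value under a Simon two-stage design for a trial that
# continued to stage 2: sum over stage-1 outcomes x1 > r1 of the probability of
# seeing at least the observed total number of responses under H0: p = p0.
# This implements the conventional ordering by total responses, not the
# likelihood-ratio ordering proposed in the paper.
from scipy.stats import binom

def simon_p_value(x_total, n1, r1, n, p0):
    """P-value for x_total responses observed after completing both stages."""
    n2 = n - n1
    p = 0.0
    for x1 in range(r1 + 1, n1 + 1):                 # stage-1 outcomes that continue
        needed = max(0, x_total - x1)                # stage-2 responses still required
        tail2 = 1.0 - binom.cdf(needed - 1, n2, p0)  # P(X2 >= needed)
        p += binom.pmf(x1, n1, p0) * tail2
    return p

# Illustrative design: n1 = 19, r1 = 4, total n = 54, 16 responses observed, p0 = 0.2
print(round(simon_p_value(16, n1=19, r1=4, n=54, p0=0.2), 4))
```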

  19. Two-stage single-volume exchange transfusion in severe hemolytic disease of the newborn.

    Science.gov (United States)

    Abbas, Wael; Attia, Nayera I; Hassanein, Sahar M A

    2012-07-01

    Evaluation of two-stage single-volume exchange transfusion (TSSV-ET) in decreasing the post-exchange rebound increase in serum bilirubin level, with subsequent reduction of the need for repeated exchange transfusions. The study included 104 neonates with hyperbilirubinemia needing exchange transfusion. They were randomly enrolled into two equal groups, each comprising 52 neonates. TSSV-ET was performed for 52 neonates and the traditional single-stage double-volume exchange transfusion (SSDV-ET) was performed for the other 52 neonates. TSSV-ET significantly lowered the rebound serum bilirubin level (12.7 ± 1.1 mg/dL) compared to SSDV-ET (17.3 ± 1.7 mg/dL), p < 0.001. The need for repeated exchange transfusions was significantly lower in the TSSV-ET group (13.5%), compared to 32.7% in the SSDV-ET group, p < 0.05. No significant difference was found between the two groups as regards morbidity (11.5% and 9.6%, respectively) and mortality (1.9% for both groups). Two-stage single-volume exchange transfusion proved to be more effective in reducing the rebound serum bilirubin level post-exchange and in decreasing the need for repeated exchange transfusions.

  20. QUICKGUN: An algorithm for estimating the performance of two-stage light gas guns

    International Nuclear Information System (INIS)

    Milora, S.L.; Combs, S.K.; Gouge, M.J.; Kincaid, R.W.

    1990-09-01

    An approximate method is described for solving the equation of motion of a projectile accelerated by a two-stage light gas gun that uses high-pressure (<100 bar) gas from a storage reservoir to drive a piston to moderate speed (<400 m/s) for the purpose of compressing the low molecular weight propellant gas (hydrogen or helium) to high pressure (1000 to 10,000 bar) and temperature (1000 to 10,000 K). Zero-dimensional, adiabatic (isentropic) processes are used to describe the time dependence of the ideal gas thermodynamic properties of the storage reservoir and the first and second stages of the system. A one-dimensional model based on an approximate method of characteristics, or wave diagram analysis, for flow with friction (nonisentropic) is used to describe the nonsteady compressible flow processes in the launch tube. Linear approximations are used for the characteristic and fluid particle trajectories by averaging the values of the flow parameters at the breech and at the base of the projectile. An assumed functional form for the Mach number at the breech provides the necessary boundary condition. Results of the calculation are compared with data obtained from two-stage light gas gun experiments at Oak Ridge National Laboratory for solid deuterium and nylon projectiles with masses ranging from 10 to 35 mg and for projectile speeds between 1.6 and 4.5 km/s. The predicted and measured velocities generally agree to within 15%. 19 refs., 3 figs., 2 tabs
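
    A much-simplified, zero-dimensional illustration of the launch-tube phase is sketched below: pre-compressed hydrogen expands isentropically behind the projectile and Newton's law is integrated with an explicit Euler step. The piston dynamics, friction and the wave-diagram treatment used by QUICKGUN are all omitted, so the result is an idealized upper bound; the numbers are illustrative, not taken from the report.

```python
# Much-simplified zero-dimensional sketch of the launch-tube phase only:
# pre-compressed hydrogen expands isentropically behind the projectile,
# P = P0 * (V0 / V)**gamma, and Newton's law is integrated with explicit Euler.
# Gas inertia, friction and the wave-diagram treatment are neglected, so this
# overestimates the muzzle velocity; all numbers are illustrative.
import numpy as np

gamma = 1.4                 # ratio of specific heats for diatomic hydrogen
p0 = 1.0e8                  # initial breech pressure, Pa (~1000 bar)
v0 = 2.0e-6                 # initial propellant volume, m^3 (2 cm^3)
bore_d = 4.0e-3             # launch-tube bore, m
length = 1.0                # launch-tube length, m
m_proj = 2.5e-5             # projectile mass, kg (25 mg)

area = np.pi * (bore_d / 2) ** 2
x, v, t, dt = 0.0, 0.0, 0.0, 1.0e-8
while x < length:
    p = p0 * (v0 / (v0 + area * x)) ** gamma    # isentropic base pressure
    v += (p * area / m_proj) * dt               # acceleration step
    x += v * dt
    t += dt

print(f"muzzle velocity ~ {v:.0f} m/s after {t * 1e6:.0f} microseconds")
```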

  1. Fate of dissolved organic nitrogen in two stage trickling filter process.

    Science.gov (United States)

    Simsek, Halis; Kasi, Murthy; Wadhawan, Tanush; Bye, Christopher; Blonigen, Mark; Khan, Eakalak

    2012-10-15

    Dissolved organic nitrogen (DON) represents a significant portion of nitrogen in the final effluent of wastewater treatment plants (WWTPs). Biodegradable portion of DON (BDON) can support algal growth and/or consume dissolved oxygen in the receiving waters. The fate of DON and BDON has not been studied for trickling filter WWTPs. DON and BDON data were collected along the treatment train of a WWTP with a two-stage trickling filter process. DON concentrations in the influent and effluent were 27% and 14% of total dissolved nitrogen (TDN). The plant removed about 62% and 72% of the influent DON and BDON mainly by the trickling filters. The final effluent BDON values averaged 1.8 mg/L. BDON was found to be between 51% and 69% of the DON in raw wastewater and after various treatment units. The fate of DON and BDON through the two-stage trickling filter treatment plant was modeled. The BioWin v3.1 model was successfully applied to simulate ammonia, nitrite, nitrate, TDN, DON and BDON concentrations along the treatment train. The maximum growth rates for ammonia oxidizing bacteria (AOB) and nitrite oxidizing bacteria, and AOB half saturation constant influenced ammonia and nitrate output results. Hydrolysis and ammonification rates influenced all of the nitrogen species in the model output, including BDON. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Two stage heterotrophy/photoinduction culture of Scenedesmus incrassatulus: potential for lutein production.

    Science.gov (United States)

    Flórez-Miranda, Liliana; Cañizares-Villanueva, Rosa Olivia; Melchy-Antonio, Orlando; Martínez-Jerónimo, Fernando; Flores-Ortíz, Cesar Mateo

    2017-11-20

    A two-stage heterotrophy/photoinduction (TSHP) biomass production process was developed to improve biomass and lutein production by the green microalga Scenedesmus incrassatulus. To determine the effects of different nitrogen sources (yeast extract and urea) and temperature in the heterotrophic stage, shake-flask experiments with glucose as the carbon source were carried out. The highest biomass productivity and specific pigment concentrations were reached using urea + vitamins (U+V) at 30 °C. The first stage of the TSHP process was carried out in a 6 L bioreactor, and the photoinduction stage in a 3 L airlift photobioreactor. At the end of the heterotrophic stage, S. incrassatulus reached its maximal biomass concentration, which increased from 7.22 g/L to 17.98 g/L as the initial glucose concentration was raised from 10.6 g/L to 30.3 g/L. However, the higher initial glucose concentration resulted in a lower specific growth rate (μ) and lower cell yield (Y_x/s), possibly due to substrate inhibition. After 24 h of photoinduction, the lutein content of S. incrassatulus biomass was 7 times higher than that obtained at the end of heterotrophic cultivation, and the lutein productivity was 1.6 times higher than in autotrophic culture of this microalga. Hence, two-stage heterotrophy/photoinduction culture is an effective strategy for high cell density and lutein production in S. incrassatulus. Copyright © 2017. Published by Elsevier B.V.
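
    One common way to rationalise a lower specific growth rate at higher initial glucose is Haldane-type substrate-inhibition kinetics; the sketch below illustrates the qualitative effect at the two glucose levels quoted in the abstract. The parameter values (mu_max, Ks, Ki) are illustrative assumptions, not values fitted to S. incrassatulus.

        # Haldane (substrate-inhibition) growth kinetics sketch; parameters are assumptions.
        def mu_haldane(s, mu_max=0.12, ks=0.5, ki=40.0):
            """Specific growth rate (1/h) as a function of glucose concentration s (g/L)."""
            return mu_max * s / (ks + s + s * s / ki)

        for s0 in (10.6, 30.3):   # the two initial glucose levels quoted in the abstract
            print(f"initial glucose {s0:4.1f} g/L -> mu ~ {mu_haldane(s0):.3f} 1/h")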

  3. Hydrodeoxygenation of oils from cellulose in single and two-stage hydropyrolysis

    Energy Technology Data Exchange (ETDEWEB)

    Rocha, J.D.; Snape, C.E. [Strathclyde Univ., Glasgow (United Kingdom); Luengo, C.A. [Universidade Estadual de Campinas, SP (Brazil). Dept. de Fisica Aplicada

    1996-09-01

    To investigate the removal of oxygen (hydrodeoxygenation) during the hydropyrolysis of cellulose, single and two-stage experiments on pure cellulose have been carried out using hydrogen pressures up to 10 MPa and temperatures over the range 300-520 °C. Carbon, oxygen and aromaticity balances have been determined from the product yields and compositions. For the two-stage tests, the primary oils were passed through a bed of commercial Ni/Mo γ-alumina-supported catalyst (Criterion 424, presulphided) at 400 °C. Raising the hydrogen pressure from atmospheric to 10 MPa increased the carbon conversion by 10 mole %, which was roughly equally divided between the oil and hydrocarbon gases. The oxygen content of the primary oil was reduced by over 10% to below 20% w/w. The addition of a dispersed iron sulphide catalyst further increased the oil yield at 10 MPa and reduced the oxygen content of the oil by a further 10%. The effect of hydrogen pressure on oil yields was most pronounced at low flow rates, where it is beneficial in helping to overcome diffusional resistances. Unlike the dispersed iron sulphide in the first stage, the use of the Ni-Mo catalyst in the second stage reduced both the oxygen content and the aromaticity of the oils. (Author)

  4. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Won Sik [Purdue Univ., West Lafayette, IN (United States); Lin, C. S. [Purdue Univ., West Lafayette, IN (United States); Hader, J. S. [Purdue Univ., West Lafayette, IN (United States); Park, T. K. [Purdue Univ., West Lafayette, IN (United States); Deng, P. [Purdue Univ., West Lafayette, IN (United States); Yang, G. [Purdue Univ., West Lafayette, IN (United States); Jung, Y. S. [Purdue Univ., West Lafayette, IN (United States); Kim, T. K. [Argonne National Lab. (ANL), Argonne, IL (United States); Stauff, N. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-30

    This report presents the performance characteristics of two “two-stage” fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without LEU support. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged ADS fuel is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not sent directly to the ADS but are partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option for transuranic (TRU) elements.

  5. Many-Objective Particle Swarm Optimization Using Two-Stage Strategy and Parallel Cell Coordinate System.

    Science.gov (United States)

    Hu, Wang; Yen, Gary G; Luo, Guangchun

    2017-06-01

    Balancing the convergence and diversity of an approximate Pareto front is a daunting challenge in many-objective evolutionary optimization. A novel algorithm, many-objective particle swarm optimization with a two-stage strategy and a parallel cell coordinate system (PCCS), is proposed in this paper to improve overall performance in terms of both convergence and diversity. In the proposed two-stage strategy, convergence and diversity are emphasized separately at different stages by a single-objective optimizer and a many-objective optimizer, respectively. A PCCS is exploited to manage diversity: maintaining a diverse archive, identifying dominance-resistant solutions, and selecting diversified solutions. In addition, a leader group is used to select the global best solutions, balancing exploitation and exploration of the population. Experimental results illustrate that the proposed algorithm outperforms six state-of-the-art designs in terms of inverted generational distance and hypervolume on the DTLZ test suite.
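
    A skeleton sketch of the two-stage idea is given below; it is not the authors' implementation. Stage 1 pushes the swarm toward the front with a single aggregated objective (convergence pressure), while stage 2 switches to Pareto-dominance selection with a simple non-dominated archive standing in for the PCCS-managed archive and leader group. The toy three-objective problem, switch point and swarm parameters are illustrative assumptions.

        # Two-stage many-objective PSO skeleton (illustrative; not the paper's algorithm).
        import random

        def objectives(x):                      # toy 3-objective minimization problem on [0,1]^n
            g = sum((xi - 0.5) ** 2 for xi in x[2:])
            return [(1 + g) * x[0] * x[1],
                    (1 + g) * x[0] * (1 - x[1]),
                    (1 + g) * (1 - x[0])]

        def dominates(a, b):
            return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))

        def two_stage_pso(n=30, dim=6, iters=200, switch=0.5):
            pos = [[random.random() for _ in range(dim)] for _ in range(n)]
            vel = [[0.0] * dim for _ in range(n)]
            pbest = [p[:] for p in pos]
            archive = []                                        # non-dominated solutions
            for it in range(iters):
                stage1 = it < switch * iters
                for i in range(n):
                    f = objectives(pos[i])
                    if stage1:                                  # stage 1: scalarised convergence pressure
                        if sum(f) < sum(objectives(pbest[i])):
                            pbest[i] = pos[i][:]
                    else:                                       # stage 2: dominance-based selection + archive
                        if dominates(f, objectives(pbest[i])):
                            pbest[i] = pos[i][:]
                        if not any(dominates(objectives(a), f) for a in archive):
                            archive = [a for a in archive if not dominates(f, objectives(a))]
                            archive.append(pos[i][:])
                gbest = (min(pos, key=lambda p: sum(objectives(p)))
                         if stage1 or not archive else random.choice(archive))   # crude leader selection
                for i in range(n):
                    for d in range(dim):
                        vel[i][d] = (0.5 * vel[i][d]
                                     + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                                     + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                        pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            return archive

        if __name__ == "__main__":
            front = two_stage_pso()
            print(f"non-dominated archive size after the two-stage run: {len(front)}")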

  6. Risk averse optimal operation of a virtual power plant using two stage stochastic programming

    International Nuclear Information System (INIS)

    Tajeddini, Mohammad Amin; Rahimi-Kian, Ashkan; Soroudi, Alireza

    2014-01-01

    A VPP (Virtual Power Plant) is a cluster of energy conversion/storage units that are centrally operated in order to improve technical and economic performance. This paper addresses the optimal operation of a VPP considering the risk factors affecting its daily operation profits. The optimal operation in both the day-ahead and balancing markets is modelled as a two-stage stochastic mixed-integer linear program that maximizes the GenCo's (generation company's) expected profit. Furthermore, CVaR (Conditional Value at Risk) is used as a risk measure to control the risk of low-profit scenarios. The uncertain parameters, including PV power output, wind power output and day-ahead market prices, are modelled through scenarios. The proposed model is successfully applied to a real case study to show its applicability, and the results are presented and thoroughly discussed. - Highlights: • Virtual power plant modelling considering a set of energy generation and conversion units. • Uncertainty modelling using a two-stage stochastic programming technique. • Risk modelling using conditional value at risk. • Flexible operation of renewable energy resources. • Electricity price uncertainty in day-ahead energy markets
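
    The core of such a risk-averse two-stage formulation can be illustrated with a toy scenario set, as in the sketch below: the first-stage decision is the day-ahead commitment q, the second stage settles the deviation between the realized renewable output and q at the scenario's balancing price, and the objective is a weighted combination of expected profit and the CVaR of profit. A grid search stands in here for the paper's MILP solver; the prices, scenarios, confidence level and risk weight are all illustrative assumptions.

        # Risk-averse two-stage commitment sketch with scenario-based CVaR (illustrative only).
        scenarios = [                       # (probability, renewable output MWh, balancing price $/MWh)
            (0.2,  8.0, 70.0),              # low output, expensive balancing energy
            (0.5, 12.0, 40.0),
            (0.3, 16.0, 45.0),
        ]
        LAMBDA_DA = 50.0                    # day-ahead price, $/MWh
        ALPHA, BETA = 0.95, 0.5             # CVaR confidence level and risk weight

        def profit(q, w, lam_bal):
            """First stage sells q day-ahead; second stage settles the deviation (w - q)."""
            return LAMBDA_DA * q + lam_bal * (w - q)

        def cvar(pairs, alpha):
            """CVaR of profit: probability-weighted mean of the worst (1 - alpha) tail."""
            tail_mass = 1.0 - alpha
            remaining, acc = tail_mass, 0.0
            for prob, prof in sorted(pairs, key=lambda t: t[1]):    # worst profits first
                take = min(prob, remaining)
                acc += take * prof
                remaining -= take
                if remaining <= 1e-12:
                    break
            return acc / tail_mass

        best = None
        for q in [0.5 * i for i in range(41)]:                      # candidate commitments 0..20 MWh
            pairs = [(p, profit(q, w, lb)) for p, w, lb in scenarios]
            expected = sum(p * prof for p, prof in pairs)
            score = (1 - BETA) * expected + BETA * cvar(pairs, ALPHA)
            if best is None or score > best[0]:
                best = (score, q, expected)

        print(f"risk-averse commitment q = {best[1]:.1f} MWh, expected profit ${best[2]:.0f}")

    Setting BETA to zero recovers the risk-neutral solution, which in this toy data commits the maximum quantity; increasing BETA pulls the commitment back toward a level that protects the low-output, high-balancing-price scenario.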

  7. A preventive maintenance policy based on dependent two-stage deterioration and external shocks

    International Nuclear Information System (INIS)

    Yang, Li; Ma, Xiaobing; Peng, Rui; Zhai, Qingqing; Zhao, Yu

    2017-01-01

    This paper proposes a preventive maintenance policy for a single-unit system whose failure has two competing and dependent causes, i.e., internal deterioration and sudden shocks. The internal failure process is divided into two stages, i.e., normal and defective. Shocks arrive according to a non-homogeneous Poisson process (NHPP), and each shock causes immediate failure of the system. The shock occurrence rate is affected by the state of the system. An age-based replacement and a finite number of periodic inspections are scheduled jointly to deal with the competing failures. The objective of this study is to determine the optimal preventive replacement interval, inspection interval and number of inspections such that the expected cost per unit time is minimized. A case study on oil pipeline maintenance is presented to illustrate the maintenance policy. - Highlights: • A maintenance model based on two-stage deterioration and sudden shocks is developed. • The impact of the internal system state on the external shock process is studied. • A new preventive maintenance strategy combining age-based replacement and periodic inspections is proposed. • Postponed replacement of a defective system is enabled by restricting the number of inspections.
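
    A Monte-Carlo sketch of the cost-rate evaluation behind such a policy is given below. It deliberately simplifies the paper's model: the two stage durations are exponential, the shock process is Poisson with a rate that depends only on the internal state (normal vs. defective) rather than a general NHPP, and the cap on the number of inspections is not enforced. The costs, rates, inspection interval tau and replacement age T are illustrative assumptions.

        # Renewal-reward Monte Carlo for an inspection + age-replacement policy (illustrative only).
        import random

        L_NORMAL, L_DEFECT = 1.0 / 5.0, 1.0 / 2.0    # defect-arrival and defect-to-failure rates (1/time)
        SHOCK_NORMAL, SHOCK_DEFECT = 0.02, 0.10      # shock rates in the normal / defective state
        C_INSPECT, C_PREVENT, C_FAIL = 1.0, 10.0, 50.0

        def one_cycle(tau, T):
            """Simulate one renewal cycle; return (cycle cost, cycle length)."""
            t_defect = random.expovariate(L_NORMAL)               # normal -> defective
            t_fail_int = t_defect + random.expovariate(L_DEFECT)  # defective -> internal failure
            # first shock: rate SHOCK_NORMAL until the defect arrives, SHOCK_DEFECT afterwards
            s1 = random.expovariate(SHOCK_NORMAL)
            t_shock = s1 if s1 < t_defect else t_defect + random.expovariate(SHOCK_DEFECT)
            t_fail = min(t_fail_int, t_shock)
            cost, t = 0.0, tau
            while t < min(t_fail, T):
                cost += C_INSPECT
                if t > t_defect:                 # inspection detects the defect -> preventive replacement
                    return cost + C_PREVENT, t
                t += tau
            if t_fail <= T:                      # failure before the age-based replacement
                return cost + C_FAIL, t_fail
            return cost + C_PREVENT, T           # age-based preventive replacement

        def cost_rate(tau, T, n=20000):
            tot_c = tot_t = 0.0
            for _ in range(n):
                c, length = one_cycle(tau, T)
                tot_c += c
                tot_t += length
            return tot_c / tot_t                 # expected cost per unit time (renewal-reward)

        print(f"cost rate at tau=1, T=6: {cost_rate(1.0, 6.0):.3f}")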

  8. Two-stage supercharging of a passenger car diesel engine; Zweistufige Aufladung eines Pkw-Dieselmotors

    Energy Technology Data Exchange (ETDEWEB)

    Wittmer, A.; Albrecht, P.; Becker, B.; Vogt, G.; Fischer, R. [Erphi Elektronik GmbH, Holzkirchen (Germany)

    2004-07-01

    Two-stage supercharging of internal combustion engines, with specific power outputs beyond 70 kW/l, opens up further options for displacement reduction. A low-pressure and a high-pressure charger are connected in series with bypass lines. The control strategy required for a controlled transition from one stage to the next is developed in this contribution on the basis of a model of the exhaust back-pressure: the control element is actuated so that the desired pressure upstream of the turbines is established. Steady-state results on two engines demonstrated the power potential of a two-stage supercharged diesel engine with common-rail injection, and dynamic driving tests impressively confirmed the fast boost-pressure build-up even from low engine speeds, together with good transition behaviour from the high-pressure to the low-pressure stage. The two-stage supercharged diesel engine with the control method presented here thus offers optimum preconditions for downsizing, provided that no losses in driving performance have to be accepted. (orig.)
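
    A minimal sketch of the control idea described above: a PI controller drives the high-pressure-stage bypass so that the exhaust back-pressure upstream of the turbines tracks a setpoint. The first-order plant model and all gains, time constants and setpoints are illustrative assumptions, not the authors' controller.

        # PI tracking of an exhaust back-pressure setpoint via a bypass valve (illustrative only).
        KP, KI = 1.5, 3.0                  # PI gains
        DT = 0.01                          # controller step, s
        TAU_PLANT, GAIN_PLANT = 0.2, 1.5   # crude back-pressure response to bypass closing

        def simulate(setpoint=2.2, t_end=3.0):          # setpoint: desired back-pressure, bar
            p, integral, valve, t = 1.0, 0.0, 0.0, 0.0  # back-pressure, integrator, bypass (0 open .. 1 closed)
            while t < t_end:
                error = setpoint - p
                integral += error * DT
                valve = min(1.0, max(0.0, KP * error + KI * integral))
                # first-order plant: closing the bypass raises back-pressure toward 1 + GAIN_PLANT * valve bar
                p += ((1.0 + GAIN_PLANT * valve) - p) * DT / TAU_PLANT
                t += DT
            return p, valve

        p_final, valve_final = simulate()
        print(f"back-pressure after 3 s: {p_final:.2f} bar (setpoint 2.2), bypass position {valve_final:.2f}")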

  9. A low-voltage sense amplifier with two-stage operational amplifier clamping for flash memory

    Science.gov (United States)

    Guo, Jiarong

    2017-04-01

    A low-voltage sense amplifier for flash memory with a reference current generator utilizing a two-stage operational amplifier clamp structure is presented in this paper, capable of operating with a minimum supply voltage of 1 V. A new reference current generation circuit, composed of a reference cell and a two-stage operational amplifier clamping the drain node of the reference cell, is used to generate the reference current; this avoids the threshold limitation caused by the current mirror transistor in the traditional sense amplifier. A novel reference voltage generation circuit using a dummy bit-line structure without pull-down current is also adopted, which not only improves the sense window, enhancing read precision, but also saves power. The sense amplifier was implemented in a flash memory fabricated in 90 nm flash technology. Experimental results show an access time of 14.7 ns with a 1.2 V supply at the slow corner and 125 °C. Project supported by the National Natural Science Foundation of China (No. 61376028).

  10. Two-stage agglomeration of fine-grained herbal nettle waste

    Science.gov (United States)

    Obidziński, Sławomir; Joka, Magdalena; Fijoł, Olga

    2017-10-01

    This paper compares the densification work necessary for the pressure agglomeration of fine-grained dusty nettle waste with the densification work involved in two-stage agglomeration of the same material. In the first stage, the material was pre-densified by coating it with a binder in the form of a 5% potato starch solution, and then subjected to pressure agglomeration. A number of tests were conducted to determine the effect of the moisture content of the nettle waste (15, 18 and 21%) and the process temperature (50, 70 and 90 °C) on the densification work and the density of the obtained pellets. For pre-densified pellets made from a mixture of nettle waste and starch solution, the tests determined the effect of pellet particle size (1, 2 and 3 mm) and process temperature (50, 70 and 90 °C) on the same quantities. On the basis of the tests, we concluded that introducing a binder and using two-stage agglomeration in nettle waste densification increased the densification work (compared with the densification of nettle waste alone) and increased the pellet density.

  11. A two-stage heating scheme for heat assisted magnetic recording

    Science.gov (United States)

    Xiong, Shaomin; Kim, Jeongmin; Wang, Yuan; Zhang, Xiang; Bogy, David

    2014-05-01

    Heat-Assisted Magnetic Recording (HAMR) has been proposed to extend storage areal density beyond 1 Tb/in² for next-generation magnetic storage. A near-field transducer (NFT) is widely used in HAMR systems to locally heat the magnetic disk during the writing process. However, much of the laser power is absorbed around the NFT, which causes overheating of the NFT and reduces its reliability. In this work, a two-stage heating scheme is proposed to reduce the thermal load by splitting the heating process into two individual stages, one from an optical waveguide and one from the NFT. In the first stage, the optical waveguide, placed in front of the NFT, delivers part of the laser energy directly onto the disk surface and heats it to a peak temperature somewhat below the Curie temperature of the magnetic material. The NFT then acts as the second heating stage, further heating a smaller area inside the waveguide-heated spot up to the Curie point. The energy applied to the NFT in the second stage is therefore reduced compared with a typical single-stage NFT heating system. With this reduced thermal load on the NFT, its lifetime under cyclic load conditions can be extended by orders of magnitude.
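
    A back-of-the-envelope sketch of the power-split idea: the waveguide pre-heats a wide spot to just below the Curie temperature, and the NFT only supplies the remaining temperature rise over a much smaller spot. Temperature rise is taken as linear in absorbed power, and all numbers (temperatures, heating coefficient) are illustrative assumptions.

        # Power split between waveguide pre-heating and NFT top-up heating (illustrative only).
        T_AMBIENT, T_CURIE = 300.0, 700.0        # K
        T_PREHEAT = 650.0                        # waveguide-stage peak temperature, K
        K_NFT = 40.0                             # assumed NFT heating coefficient, K per mW absorbed

        # single stage: the NFT alone must provide the full rise above ambient
        p_nft_single = (T_CURIE - T_AMBIENT) / K_NFT
        # two stage: the NFT only tops up from the pre-heated temperature to the Curie point
        p_nft_two_stage = (T_CURIE - T_PREHEAT) / K_NFT

        print(f"NFT absorbed power: single-stage ~{p_nft_single:.1f} mW, "
              f"two-stage ~{p_nft_two_stage:.1f} mW "
              f"({100 * p_nft_two_stage / p_nft_single:.0f}% of the single-stage load)")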

  12. A New Concept of Two-Stage Multi-Element Resonant-/Cyclo-Converter for Two-Phase IM/SM Motor

    Directory of Open Access Journals (Sweden)

    Mahmud Ali Rzig Abdalmula

    2013-01-01

    The paper deals with a new concept of a power electronic two-phase system with a two-stage DC/AC/AC converter and a two-phase IM/PMSM motor. The proposed two-stage converter system comprises an input resonant boost converter with AC output, a two-phase half-bridge cyclo-converter commutated by the HF AC input voltage, and an induction or synchronous motor. Such a system with an AC interlink, taken as a whole, has better properties than a reference 3-phase VSI inverter: higher efficiency due to soft switching of both converter stages, a higher switching frequency, smaller dimensions and weight, fewer power semiconductor switches, and lower cost. In comparison with currently used conventional configurations, the proposed system offers good converter efficiency and good torque overload capability of the two-phase AC induction or synchronous motor. The design of the two-stage multi-element resonant converter and the results of simulation experiments are presented in the paper.

  13. Modified septic tank-anaerobic filter unit as a two-stage onsite domestic wastewater treatment system.

    Science.gov (United States)

    Sharma, Meena Kumari; Khursheed, Anwar; Kazmi, Absar Ahmad

    2014-01-01

    This study presents the performance evaluation of a uniquely designed two-stage system for onsite treatment of domestic wastewater. The system consisted of two upflow anaerobic bioreactors, a modified septic tank followed by an upflow anaerobic filter, accommodated within a single cylindrical unit. The system was started up without inoculation at a 24 h hydraulic retention time (HRT) and reached steady state after 120 days. Under steady-state conditions it was remarkably efficient in removing pollutants, with average removal efficiencies of 88.6 ± 3.7% for chemical oxygen demand, 86.3 ± 4.9% for biochemical oxygen demand and 91.2 ± 9.7% for total suspended solids. Microbial analysis revealed a high reduction capacity (>90%) of the system for indicator organisms and pathogens. The system also showed very good endurance against an imposed hydraulic shock load. A tracer study showed that the flow pattern was close to that of a plug-flow reactor, and the mean HRT was close to the design value.

  14. Reprint of "Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency".

    Science.gov (United States)

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-08-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on the Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary built from the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed; the reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, the initial result is improved by calculating the reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to that in the first stage, but effectively highlights the salient objects uniformly against the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
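
    A skeleton sketch of the two-stage reconstruction-error idea is given below. Plain least-squares reconstruction on Euclidean feature vectors stands in here for the paper's sparse coding of region covariances via Log-Euclidean kernels, and the border/foreground split and feature dimensions are illustrative assumptions.

        # Two-stage reconstruction-error saliency on toy superpixel features (illustrative only).
        import numpy as np

        def recon_error(features, dictionary):
            """Per-region reconstruction error on a dictionary (rows = atoms)."""
            # least-squares codes: for each feature row f, c = f D^+ minimizes ||c D - f||
            codes = features @ np.linalg.pinv(dictionary)
            return np.linalg.norm(features - codes @ dictionary, axis=1)

        def two_stage_saliency(features, is_border, fg_quantile=0.8):
            # stage 1: background dictionary from image-border regions; high error = salient
            err_bg = recon_error(features, features[is_border])
            saliency1 = (err_bg - err_bg.min()) / (np.ptp(err_bg) + 1e-12)
            # stage 2: foreground dictionary from the most salient stage-1 regions; low error = salient
            fg = saliency1 >= np.quantile(saliency1, fg_quantile)
            err_fg = recon_error(features, features[fg])
            return 1.0 - (err_fg - err_fg.min()) / (np.ptp(err_fg) + 1e-12)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            feats = rng.normal(size=(200, 16))          # 200 superpixels, 16-dim features (assumed)
            border = np.zeros(200, dtype=bool)
            border[:40] = True                          # first 40 regions taken as image-border regions
            print(two_stage_saliency(feats, border)[:5])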

  15. An approach to efficient mobility management in intelligent networks

    Science.gov (United States)

    Murthy, K. M. S.

    1995-01-01

    Providing personal communication systems that support full mobility requires intelligent networks for tracking mobile users and facilitating outgoing and incoming calls over different physical and network environments. Databases play a major role in realizing the intelligent network functionalities. Currently proposed network architectures envision using the SS7-based signaling network for linking these databases and for interconnecting them with switches. If the network is to support ubiquitous, seamless mobile services, it must additionally support the mobile application parts, viz. mobile origination calls, mobile destination calls, mobile location updates and inter-switch handovers. These functions will generate a significant amount of data and require it to be transferred very efficiently between databases (HLR, VLR) and switches (MSCs). In the future, users (fixed or mobile) may use and communicate with sophisticated CPEs (e.g., multimedia, multipoint and multisession calls), which may require complex signaling functions. This will generate voluminous service-handling data and require efficient transfer of these messages between databases and switches. With such efficient transfer in place, network providers would be able to add new services and capabilities to their networks incrementally, quickly and cost-effectively.

  16. Buildings Energy Efficiency: Interventions Analysis under a Smart Cities Approach

    Directory of Open Access Journals (Sweden)

    Gabriele Battista

    2014-07-01

    Most of the world’s population lives in urban areas, and in buildings that are inefficient from an energy point of view. Starting from these observations, there is a need to identify methodologies and innovations able to improve social development and the quality of life of people living in cities; smart cities can be a viable solution. The methodology traditionally adopted to evaluate building energy efficiency starts from an analysis of the structure’s energy demands and an evaluation of how they can be reduced; the energy savings are then assessed through a cascade of interventions. Regarding the building envelope, the first intervention is usually a reduction of the thermal transmittance value, but the energy savings achievable through other parameters, such as the solar gain factor and the solar absorptance coefficient, also deserve emphasis. In this contribution, a standard building has been modeled by means of the well-known dynamic simulation software TRNSYS. The study presents a parametric analysis through which it is possible to evaluate the effect of each single intervention and, consequently, its influence on the building energy demand. Through this analysis, an intervention chart has been produced to assess the effectiveness of each intervention in terms of the percentage variation of the energy demand.

  17. On a multiscale approach for filter efficiency simulations

    KAUST Repository

    Iliev, Oleg

    2014-07-01

    Filtration in general, and the dead-end depth filtration of solid particles out of a fluid in particular, is an intrinsically multiscale problem. The deposition (capture) of particles depends essentially on the local velocity, on the microgeometry (pore-scale geometry) of the filtering medium and on the diameter distribution of the particles. The deposited (captured) particles change the microstructure of the porous medium, which leads to a change of permeability. The changed permeability directly influences the velocity field and pressure distribution inside the filter element; to close the loop, the velocity in turn influences the transport and deposition of particles. In certain cases one can evaluate the filtration efficiency using only microscale or only macroscale models, but in general an accurate prediction of the filtration efficiency requires multiscale models and algorithms. This paper discusses the single-scale and multiscale models, and presents a fractional time step discretization algorithm for the multiscale problem. The velocity within the filter element is computed at the macroscale and is used as input for the solution of microscale problems at selected locations of the porous medium. The microscale problem is solved with respect to the transport and capture of individual particles, and its solution is postprocessed to provide permeability values for the macroscale computations. Results from computational experiments with an oil filter are presented and discussed.
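
    A minimal one-dimensional sketch of the macro/micro splitting is given below. The macroscale step computes the flow through the filter from the current cell permeabilities (Darcy flow with a fixed pressure drop across cells in series); the microscale step is reduced to a local capture law per cell that converts captured mass into a permeability loss, which feeds back into the next macroscale step. The geometry, capture law and clogging coefficient are illustrative assumptions, not the paper's pore-scale model.

        # Fractional-step macro/micro coupling for depth filtration, 1-D toy version (illustrative only).
        N_CELLS, DX = 10, 1e-3            # filter depth discretisation (10 cells of 1 mm)
        MU, DP = 1e-3, 5e3                # fluid viscosity (Pa s), fixed pressure drop (Pa)
        K0 = 1e-11                        # clean permeability, m^2
        ETA0, U_REF = 0.15, 5e-3          # capture-law parameters (per cell)
        C_IN, DT_STEP = 50.0, 1.0         # inlet particle concentration (g/m^3), time step (s)

        k = [K0] * N_CELLS                # per-cell permeability
        deposit = [0.0] * N_CELLS         # captured mass per unit filter area, g/m^2

        def darcy_velocity(k_cells):
            """Macroscale step: superficial velocity for a fixed pressure drop over cells in series."""
            resistance = sum(MU * DX / ki for ki in k_cells)
            return DP / resistance

        for step in range(201):
            u = darcy_velocity(k)                        # macroscale flow from current permeability
            c = C_IN
            for i in range(N_CELLS):                     # microscale surrogate: capture + clogging
                captured = ETA0 / (1.0 + u / U_REF) * c  # capture depends on the local velocity
                deposit[i] += captured * u * DT_STEP     # g/m^3 * m/s * s -> g/m^2
                c -= captured
                k[i] = K0 / (1.0 + 0.2 * deposit[i])     # deposits reduce the cell permeability
            if step % 50 == 0:
                print(f"t={step:4d} s  u={u * 1e3:.3f} mm/s  filter efficiency={1 - c / C_IN:.3f}")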

  18. Distribution of extracellular potassium and electrophysiologic changes during two-stage coronary ligation in the isolated, perfused canine heart

    NARCIS (Netherlands)

    Coronel, R.; Fiolet, J. W.; Wilms-Schopman, J. G.; Opthof, T.; Schaapherder, A. F.; Janse, M. J.

    1989-01-01

    We studied the relation between [K+]o and the electrophysiologic changes during a "Harris two-stage ligation," which is an occlusion of a coronary artery, preceded by a 30-minute period of 50% reduction of flow through the artery. This two-stage ligation has been reported to be antiarrhythmic. Local

  19. Performance of an iterative two-stage bayesian technique for population pharmacokinetic analysis of rich data sets

    NARCIS (Netherlands)

    Proost, Johannes H.; Eleveld, Douglas J.

    2006-01-01

    Purpose. To test the suitability of an Iterative Two-Stage Bayesian (ITSB) technique for population pharmacokinetic analysis of rich data sets, and to compare ITSB with Standard Two-Stage (STS) analysis and nonlinear Mixed Effect Modeling (MEM). Materials and Methods. Data from a clinical study with

  20. Rapid Two-stage Versus One-stage Surgical Repair of Interrupted Aortic Arch with Ventricular Septal Defect in Neonates

    Directory of Open Access Journals (Sweden)

    Meng-Lin Lee

    2008-11-01

    Conclusion: The outcome of rapid two-stage repair is comparable to that of one-stage repair. Rapid two-stage repair has the advantages of significantly shorter cardiopulmonary bypass duration and AXC time, and avoids deep hypothermic circulatory arrest. LVOTO remains an unresolved issue, and postoperative aortic arch restenosis can be dilated effectively by percutaneous balloon angioplasty.