WorldWideScience

Sample records for monitoring optimization approaches

  1. Optimizing Groundwater Monitoring Networks Using Integrated Statistical and Geostatistical Approaches

    Directory of Open Access Journals (Sweden)

    Jay Krishna Thakur

    2015-08-01

The aim of this work is to investigate new approaches, based on statistical and geostatistical methods, for the spatio-temporal optimization of groundwater monitoring networks. The formulated and integrated methods were tested with the groundwater quality data set of Bitterfeld/Wolfen, Germany. Spatially, the monitoring network was optimized using geostatistical methods; temporal optimization was carried out using Sen's method (1968). For geostatistical network optimization, a spatio-temporal algorithm was used to identify redundant wells in 2- and 2.5-D Quaternary and Tertiary aquifers. The influences of interpolation block width, dimension, contaminant association, groundwater flow direction and aquifer homogeneity on the statistical and geostatistical optimization methods were analysed. The integrated approach shows 37% and 28% redundancy in the monitoring networks of the Quaternary and Tertiary aquifers, respectively. The geostatistical method also recommends 41 and 22 new monitoring wells in the Quaternary and Tertiary aquifers, respectively. In temporal optimization, an overall optimized sampling interval was recommended in terms of the lower quartile (238 days), median (317 days) and upper quartile (401 days) in the Bitterfeld/Wolfen research area. The demonstrated methods for improving groundwater monitoring networks can be used in real network optimization, with due consideration given to the influencing factors.
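Sen's (1968) estimator cited above computes a robust trend as the median of all pairwise slopes, which is the quantity the temporal-optimization step relies on. A minimal sketch in Python (the series below is illustrative, not data from the study):

```python
import itertools
import statistics

def sens_slope(times, values):
    """Sen's (1968) estimator: the median of all pairwise slopes.
    Robust against outliers, hence popular for hydrological trends."""
    slopes = [
        (values[j] - values[i]) / (times[j] - times[i])
        for i, j in itertools.combinations(range(len(values)), 2)
        if times[j] != times[i]
    ]
    return statistics.median(slopes)

# Hypothetical sampling days vs. concentration values
t = [0, 30, 60, 90, 120]
c = [10.0, 10.4, 10.9, 11.2, 11.8]
print(sens_slope(t, c))  # per-day trend estimate
```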

  2. A probabilistic approach for optimal sensor allocation in structural health monitoring

    International Nuclear Information System (INIS)

    Azarbayejani, M; Reda Taha, M M; El-Osery, A I; Choi, K K

    2008-01-01

Recent advances in sensor technology promote the use of large sensor networks to efficiently and economically monitor, identify and quantify damage in structures. In structural health monitoring (SHM) systems, the effectiveness and reliability of the sensor network hinge on determining the optimal number and locations of sensors. Here, we suggest a probabilistic approach for identifying the optimal number and locations of sensors for SHM. We demonstrate a methodology to establish the probability distribution function that identifies the optimal sensor locations such that damage detection is enhanced. The approach uses the weights of a neural network, trained on simulations with a priori knowledge about damage locations and severities, to generate a normalized probability distribution function for optimal sensor allocation. We also demonstrate that the optimal sensor network can be related to the highest probability of detection (POD). The redundancy of the proposed sensor network is examined using a 'leave one sensor out' analysis. A prestressed concrete bridge is selected as a case study to demonstrate the effectiveness of the proposed method. The results show that the proposed approach can provide a robust design for sensor networks that is more efficient than a uniform distribution of sensors on a structure.

  3. Optimization of hydrometric monitoring network in urban drainage systems using information theory.

    Science.gov (United States)

    Yazdi, J

    2017-10-01

Regular and continuous monitoring of urban runoff, in both quality and quantity, is of great importance for controlling and managing surface runoff. Given the considerable cost of establishing new gauges, optimization of the monitoring network is essential. This research proposes an approach for siting new discharge stations in urban areas based on entropy theory in conjunction with multi-objective optimization tools and numerical models. The modeling framework provides an optimal trade-off between the maximum possible information content and the minimum shared information among stations. The approach was applied to the main surface-water collection system in Tehran to determine new optimal monitoring points under cost considerations. Experimental results on this drainage network show that the obtained cost-effective designs noticeably outperform the consulting engineers' proposal in terms of both information content and shared information. The research also identified the most frequently selected sites on the Pareto front, which may help decision makers prioritize gauge installation at those locations.
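The entropy-based trade-off described here, maximizing a station's information content while minimizing the information it shares with already-selected stations, can be sketched with histogram estimates of entropy and mutual information. This is a simplified greedy stand-in for the paper's multi-objective search; the function names and bin count are illustrative assumptions:

```python
import numpy as np

def entropy(x, bins=8):
    """Marginal entropy (bits) from a histogram estimate."""
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

def mutual_info(x, y, bins=8):
    """Shared information (transinformation) between two series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

def select_stations(series, k):
    """Greedy pick: start with the most informative station, then add
    the one maximizing own entropy minus worst overlap with the set."""
    chosen = [max(range(len(series)), key=lambda i: entropy(series[i]))]
    while len(chosen) < k:
        best = max(
            (i for i in range(len(series)) if i not in chosen),
            key=lambda i: entropy(series[i])
            - max(mutual_info(series[i], series[j]) for j in chosen),
        )
        chosen.append(best)
    return chosen
```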

  4. A combined geostatistical-optimization model for the optimal design of a groundwater quality monitoring network

    Science.gov (United States)

    Kolosionis, Konstantinos; Papadopoulou, Maria P.

    2017-04-01

Monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation due to extensive agricultural activities. In this work, a simulation-optimization framework is developed, based on heuristic optimization methodologies and geostatistical modeling approaches, to obtain an optimal design for a groundwater quality monitoring network. Groundwater quantity and quality data obtained from 43 existing observation locations at 3 different hydrological periods in the Mires basin in Crete, Greece are used in the proposed framework, via regression kriging, to develop the spatial distribution of nitrate concentration in the aquifer of interest. Based on the existing groundwater quality mapping, the proposed optimization tool determines a cost-effective observation well network that contributes significant information to water managers and authorities. Observation wells that add little or no beneficial information to the groundwater level and quality mapping of the area can be eliminated, using estimation uncertainty and statistical error metrics, without affecting the assessment of groundwater quality. Given the high maintenance cost of groundwater monitoring networks, the proposed tool could be used by water regulators in the decision-making process to obtain an efficient network design.

  5. Stepped MS(All) Relied Transition (SMART): An approach to rapidly determine optimal multiple reaction monitoring mass spectrometry parameters for small molecules.

    Science.gov (United States)

    Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping

    2016-02-11

Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains a time- and labor-intensive task, particularly in multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART first requires a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. Parameters optimized on a triple quadrupole from a different vendor were also employed for comparison and found to be linearly correlated with the SMART-predicted parameters, suggesting the potential application of the SMART approach across instrumental platforms. The approach was further validated by applying it to the simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components, independent of standards, the SMART approach is expected to find wide application in the multiplexed quantitative analysis of complex mixtures.
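The core SMART step, picking the collision energy (CE) whose extracted ion chromatogram gives the strongest response for an ion pair, reduces to a maximum over the stepped-CE scans. A toy sketch (the CE steps and intensity values are invented for illustration):

```python
def optimal_ce(xic_by_ce):
    """Given peak areas of the extracted ion chromatogram for one ion
    pair at each stepped CE, return the CE with the highest response."""
    return max(xic_by_ce, key=xic_by_ce.get)

# Hypothetical peak areas for one ion pair across stepped CE values (eV)
scan = {10: 1.2e4, 20: 8.7e4, 30: 1.5e5, 40: 9.1e4, 50: 3.0e4}
print(optimal_ce(scan))  # CE step with the maximum response
```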

  6. Enhanced Multi-Objective Optimization of Groundwater Monitoring Networks

    DEFF Research Database (Denmark)

    Bode, Felix; Binning, Philip John; Nowak, Wolfgang

Drinking-water well catchments include many sources of potential contamination, such as gas stations or agriculture. Finding optimal positions of monitoring wells for such purposes is challenging because there are various parameters (and their uncertainties) that influence the reliability and optimality of any suggested monitoring location or monitoring network. The goal of this project is to develop and establish a concept to assess, design, and optimize early-warning systems within well catchments. Such optimal monitoring networks need to optimize three competing objectives: (1) a high … be reduced to a minimum. The method is based on numerical simulation of flow and transport in heterogeneous porous media coupled with geostatistics and Monte-Carlo, wrapped up within the framework of formal multi-objective optimization. In order to gain insight into the flow and transport physics …

  7. Optimal river monitoring network using optimal partition analysis: a case study of Hun River, Northeast China.

    Science.gov (United States)

    Wang, Hui; Liu, Chunyue; Rong, Luge; Wang, Xiaoxu; Sun, Lina; Luo, Qing; Wu, Hao

    2018-01-09

River monitoring networks play an important role in water environmental management and assessment, and it is critical to develop an appropriate method to optimize the monitoring network. In this study, an effective method was proposed based on the attainment rate of National Grade III water quality, optimal partition analysis and Euclidean distance, with the Hun River taken as a validation case. There were 7 sampling sites in the monitoring network of the Hun River, and 17 monitoring items were analyzed once a month from January 2009 to December 2010. The results showed that the main monitoring items in the surface water of the Hun River were ammonia nitrogen (NH4+-N), chemical oxygen demand, and biochemical oxygen demand. After optimization, the required number of monitoring sites was reduced from seven to three, saving 57% of the cost. In addition, there were no significant differences between the non-optimized and optimized monitoring networks, and the optimized network could correctly represent the original one. The duplicate setting degree of monitoring sites decreased after optimization, and the rationality of the monitoring network was improved. The optimization method therefore proved feasible, efficient, and economic.

  8. Model Based Optimal Sensor Network Design for Condition Monitoring in an IGCC Plant

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Rajeeva; Kumar, Aditya; Dai, Dan; Seenumani, Gayathri; Down, John; Lopez, Rodrigo

    2012-12-31

This report summarizes the achievements and final results of this program. The objective of this program is to develop a general model-based sensor network design methodology and tools to address key issues in the design of an optimal sensor network configuration: the type, location and number of sensors used in a network for online condition monitoring. In particular, the focus of this work is to develop software tools for optimal sensor placement (OSP) and to use these tools to design optimal sensor network configurations for online condition monitoring of gasifier refractory wear and radiant syngas cooler (RSC) fouling. The methodology developed will be applicable to sensing system design for online condition monitoring in a broad range of applications. The overall approach consists of (i) defining condition monitoring requirements in terms of OSP and mapping these requirements into mathematical terms for the OSP algorithm, (ii) analyzing trade-offs of alternative OSP algorithms, down-selecting the most relevant ones and developing them for IGCC applications, (iii) enhancing the gasifier and RSC models as required by the OSP algorithms, and (iv) applying the developed OSP algorithms to design the optimal sensor network required for condition monitoring of IGCC gasifier refractory and RSC fouling. Two key requirements for OSP for condition monitoring are the desired precision for the monitored variables (e.g. refractory wear) and the reliability of the proposed sensor network in the presence of expected sensor failures. The OSP problem is naturally posed within a Kalman filtering approach as an integer programming problem in which the key requirements of precision and reliability are imposed as constraints, while the optimization is performed over the overall network cost. Based on an extensive literature survey, two formulations were identified as relevant to OSP for condition monitoring: one based on an LMI formulation, the other a standard INLP formulation. Various algorithms to solve …

  9. Processing Approaches for DAS-Enabled Continuous Seismic Monitoring

    Science.gov (United States)

    Dou, S.; Wood, T.; Freifeld, B. M.; Robertson, M.; McDonald, S.; Pevzner, R.; Lindsey, N.; Gelvin, A.; Saari, S.; Morales, A.; Ekblaw, I.; Wagner, A. M.; Ulrich, C.; Daley, T. M.; Ajo Franklin, J. B.

    2017-12-01

Distributed Acoustic Sensing (DAS) is creating a "field as laboratory" capability for seismic monitoring of subsurface changes. By providing unprecedented spatial and temporal sampling at a relatively low cost, DAS enables field-scale seismic monitoring to have durations and temporal resolutions that are comparable to those of laboratory experiments. Here we report on seismic processing approaches developed during data analysis of three case studies, all using DAS-enabled seismic monitoring, with applications ranging from shallow permafrost to deep reservoirs: (1) 10-hour downhole monitoring of cement curing at Otway, Australia; (2) 2-month surface monitoring of controlled permafrost thaw at Fairbanks, Alaska; (3) multi-month downhole and surface monitoring of carbon sequestration at Decatur, Illinois. We emphasize the data management and processing components relevant to DAS-based seismic monitoring, which include scalable approaches to data management, pre-processing, denoising, filtering, and wavefield decomposition. DAS has dramatically increased the data volume to the extent that terabyte-per-day data loads are now typical, straining conventional approaches to data storage and processing. To achieve more efficient use of disk space and network bandwidth, we explore improved file structures and data compression schemes. Because the noise floor of DAS measurements is higher than that of conventional sensors, optimal processing workflows involving advanced denoising, deconvolution (of the source signatures), and stacking approaches are being established to maximize the signal content of DAS data. The resulting data management and processing workflow could accelerate the broader adoption of DAS for continuous monitoring of critical processes.

  10. WiMAX network performance monitoring & optimization

    DEFF Research Database (Denmark)

    Zhang, Qi; Dam, H

    2008-01-01

In this paper we present our WiMAX (worldwide interoperability for microwave access) network performance monitoring and optimization solution. As a new and small WiMAX network operator, there are many demanding issues that we have to deal with, such as limited available frequency resource, tight frequency reuse, capacity planning, proper network dimensioning, multi-class data services and so on. Furthermore, as a small operator we also want to reduce the demand for sophisticated technicians and man labour hours. To meet these critical demands, we design a generic integrated network performance … this integrated network performance monitoring and optimization system in our WiMAX networks. This integrated monitoring and optimization system has such good flexibility and scalability that individual function components can be used by other operators with special needs, and more advanced function components can …

  11. Expert system and process optimization techniques for real-time monitoring and control of plasma processes

    Science.gov (United States)

    Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.

    1991-03-01

To meet the ever-increasing demand of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end we have accomplished an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the task of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level, qualitative descriptions of processes and thus make process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages: G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).

  12. FREQUENCY OPTIMIZATION FOR SECURITY MONITORING OF COMPUTER SYSTEMS

    Directory of Open Access Journals (Sweden)

Bogatyrev V.A.

    2015-03-01

The subject area of the proposed research is monitoring facilities for the protection of computer systems exposed to destructive attacks of accidental and malicious nature. An interval optimization model of test monitoring for detecting hazardous states of security breach caused by destructive attacks is proposed. The optimization objective is to maximize the profit from servicing requests under uncertainty and variance in the intensity of destructive attacks, including penalties when requests are serviced in hazardous conditions. The vector problem of maximizing system availability and minimizing the probabilities of downtime and hazardous states is reduced to a scalar optimization problem based on the criterion of maximizing profit from information services (request servicing), which integrates these individual criteria. Optimization variants are considered that either fix an averaged monitoring period or adapt the period to changes in the intensity of destructive attacks. The efficiency of adapting the monitoring frequency to changes in attack activity is shown. The proposed solutions can be applied to optimize test monitoring intervals for detecting hazardous states of security breach, which makes it possible to increase system effectiveness and, specifically, to maximize the expected profit from information services.
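The adaptation of the test-monitoring period to attack intensity can be illustrated with a deliberately stylized cost model (my own simplification, not the paper's profit criterion): each test costs c_mon, an undetected dangerous state accrues penalty at rate c_pen, and attacks arrive with intensity lam, so the expected cost rate c_mon/T + c_pen*lam*T/2 is minimized at T* = sqrt(2*c_mon/(c_pen*lam)):

```python
import math

def optimal_interval(c_mon, c_pen, lam):
    """Stylized model: minimizing cost rate c_mon/T + c_pen*lam*T/2
    over the test interval T gives T* = sqrt(2*c_mon/(c_pen*lam))."""
    return math.sqrt(2 * c_mon / (c_pen * lam))

# As attack intensity grows, the optimal test interval shrinks
print(optimal_interval(1.0, 10.0, 0.1))
print(optimal_interval(1.0, 10.0, 1.0))
```

The closed form makes the adaptation effect in the abstract explicit: a tenfold rise in attack intensity shortens the optimal interval by a factor of sqrt(10).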

  13. Optimizing an estuarine water quality monitoring program through an entropy-based hierarchical spatiotemporal Bayesian framework

    Science.gov (United States)

    Alameddine, Ibrahim; Karmakar, Subhankar; Qian, Song S.; Paerl, Hans W.; Reckhow, Kenneth H.

    2013-10-01

    The total maximum daily load program aims to monitor more than 40,000 standard violations in around 20,000 impaired water bodies across the United States. Given resource limitations, future monitoring efforts have to be hedged against the uncertainties in the monitored system, while taking into account existing knowledge. In that respect, we have developed a hierarchical spatiotemporal Bayesian model that can be used to optimize an existing monitoring network by retaining stations that provide the maximum amount of information, while identifying locations that would benefit from the addition of new stations. The model assumes the water quality parameters are adequately described by a joint matrix normal distribution. The adopted approach allows for a reduction in redundancies, while emphasizing information richness rather than data richness. The developed approach incorporates the concept of entropy to account for the associated uncertainties. Three different entropy-based criteria are adopted: total system entropy, chlorophyll-a standard violation entropy, and dissolved oxygen standard violation entropy. A multiple attribute decision making framework is adopted to integrate the competing design criteria and to generate a single optimal design. The approach is implemented on the water quality monitoring system of the Neuse River Estuary in North Carolina, USA. The model results indicate that the high priority monitoring areas identified by the total system entropy and the dissolved oxygen violation entropy criteria are largely coincident. The monitoring design based on the chlorophyll-a standard violation entropy proved to be less informative, given the low probabilities of violating the water quality standard in the estuary.

  14. Optimizing the spatial pattern of networks for monitoring radioactive releases

    NARCIS (Netherlands)

    Melles, S.J.; Heuvelink, G.B.M.; Twenhofel, C.J.W.; Dijk, van A.; Hiemstra, P.H.; Baume, O.P.; Stohlker, U.

    2011-01-01

This study presents a method to optimize the sampling design of environmental monitoring networks in a multi-objective setting. We optimize the permanent network of radiation monitoring stations in the Netherlands and parts of Germany as an example. The optimization method proposed combines …

  15. How to study optimal timing of PET/CT for monitoring of cancer treatment

    DEFF Research Database (Denmark)

    Vach, Werner; Høilund-Carlsen, Poul Flemming; Fischer, Barbara Malene Bjerregaard

    2011-01-01

    Purpose: The use of PET/CT for monitoring treatment response in cancer patients after chemo- or radiotherapy is a very promising approach to optimize cancer treatment. However, the timing of the PET/CT-based evaluation of reduction in viable tumor tissue is a crucial question. We investigated how...

  16. Reconnecting Stochastic Methods With Hydrogeological Applications: A Utilitarian Uncertainty Analysis and Risk Assessment Approach for the Design of Optimal Monitoring Networks

    Science.gov (United States)

    Bode, Felix; Ferré, Ty; Zigelli, Niklas; Emmert, Martin; Nowak, Wolfgang

    2018-03-01

Collaboration between academics and practitioners promotes knowledge transfer between research and industry, with both sides benefiting greatly. However, academic approaches are often not feasible given real-world limits on time, cost and data availability, especially for risk and uncertainty analyses. Although the need for uncertainty quantification and risk assessment is clear, there are few published studies examining how scientific methods can be used in practice. In this work, we introduce possible strategies for transferring and communicating academic approaches to real-world applications, countering the current disconnect between increasingly sophisticated academic methods and methods that work and are accepted in practice. We analyze a collaboration between academics and water suppliers in Germany who wanted to design optimal groundwater monitoring networks for drinking-water well catchments. Our key conclusions are: prefer multiobjective over single-objective optimization; replace Monte-Carlo analyses with scenario methods; and replace data-hungry quantitative risk assessment with easy-to-communicate qualitative methods. For improved communication, it is critical to set up common glossaries of terms to avoid misunderstandings, to use striking visualization to communicate key concepts, and to jointly and continually revisit the project objectives. Ultimately, these approaches and recommendations are simple and utilitarian enough to be transferred directly to other practical water-resources problems.

  17. Optimal unemployment insurance with monitoring and sanctions

    NARCIS (Netherlands)

    Boone, J.; Fredriksson, P.; Holmlund, B.; van Ours, J.C.

    2007-01-01

    This article analyses the design of optimal unemployment insurance in a search equilibrium framework where search effort among the unemployed is not perfectly observable. We examine to what extent the optimal policy involves monitoring of search effort and benefit sanctions if observed search is

  18. Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.

    Science.gov (United States)

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang

    2016-11-01

Monitoring networks are expensive to establish and maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types, with both treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multi-location, multi-type search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently, with easily accessible tools, prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types.
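A genetic algorithm for discrete multi-location search, including the archive of past samples mentioned above, can be sketched as follows. This is a generic GA over index sets with a caller-supplied objective, not the PEST-based data-worth computation used in the paper; population size, mutation rate, and the toy objective in the usage note are all illustrative assumptions:

```python
import random

def ga_design(n_candidates, k, objective, pop=30, gens=60, seed=0):
    """Minimize `objective` over sets of k distinct candidate-location
    indices using selection, crossover, and mutation."""
    rng = random.Random(seed)
    def new():
        return tuple(sorted(rng.sample(range(n_candidates), k)))
    population = [new() for _ in range(pop)]
    archive = {}  # past evaluations are cached and reused across generations
    def fit(ind):
        if ind not in archive:
            archive[ind] = objective(ind)
        return archive[ind]
    for _ in range(gens):
        population.sort(key=fit)
        keep = population[: pop // 2]          # elitist selection
        children = []
        while len(keep) + len(children) < pop:
            a, b = rng.sample(keep, 2)
            pool = list(set(a) | set(b))       # crossover: union of parents
            child = rng.sample(pool, k)
            if rng.random() < 0.2:             # mutation: swap one location
                child[rng.randrange(k)] = rng.randrange(n_candidates)
            child = tuple(sorted(set(child)))
            children.append(child if len(child) == k else new())
        population = keep + children
    return min(population, key=fit)
```

For instance, with a toy objective `lambda ind: sum(ind)` over 20 candidate locations, the search converges toward the lowest-index triple.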

  19. HEURISTIC APPROACHES FOR PORTFOLIO OPTIMIZATION

    OpenAIRE

    Manfred Gilli, Evis Kellezi

    2000-01-01

    The paper first compares the use of optimization heuristics to the classical optimization techniques for the selection of optimal portfolios. Second, the heuristic approach is applied to problems other than those in the standard mean-variance framework where the classical optimization fails.

  20. Design and optimization of a ground water monitoring system using GIS and multicriteria decision analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, D.; Gupta, A.D.; Ramnarong, V.

    1998-12-31

    A GIS-based methodology has been developed to design a ground water monitoring system and implemented for a selected area in Mae-Klong River Basin, Thailand. A multicriteria decision-making analysis has been performed to optimize the network system based on major criteria which govern the monitoring network design such as minimization of cost of construction, reduction of kriging standard deviations, etc. The methodology developed in this study is a new approach to designing monitoring networks which can be used for any site considering site-specific aspects. It makes it possible to choose the best monitoring network from various alternatives based on the prioritization of decision factors.

1. Optimal Joint Liability Lending with Costly Peer Monitoring

    NARCIS (Netherlands)

    Carli, Francesco; Uras, R.B.

    2014-01-01

    This paper characterizes an optimal group loan contract with costly peer monitoring. Using a fairly standard moral hazard framework, we show that the optimal group lending contract could exhibit a joint-liability scheme. However, optimality of joint-liability requires the involvement of a group

  2. Signal processing for solar array monitoring, fault detection, and optimization

    CERN Document Server

    Braun, Henry; Spanias, Andreas

    2012-01-01

    Although the solar energy industry has experienced rapid growth recently, high-level management of photovoltaic (PV) arrays has remained an open problem. As sensing and monitoring technology continues to improve, there is an opportunity to deploy sensors in PV arrays in order to improve their management. In this book, we examine the potential role of sensing and monitoring technology in a PV context, focusing on the areas of fault detection, topology optimization, and performance evaluation/data visualization. First, several types of commonly occurring PV array faults are considered and detection algorithms are described. Next, the potential for dynamic optimization of an array's topology is discussed, with a focus on mitigation of fault conditions and optimization of power output under non-fault conditions. Finally, monitoring system design considerations such as type and accuracy of measurements, sampling rate, and communication protocols are considered. It is our hope that the benefits of monitoring presen...

  3. Optimizing Liquid Effluent Monitoring at a Large Nuclear Complex

    International Nuclear Information System (INIS)

    Chou, Charissa J.; Johnson, V.G.; Barnett, Brent B.; Olson, Phillip M.

    2003-01-01

Monitoring data for a centralized effluent treatment and disposal facility at the Hanford Site, a defense nuclear complex undergoing cleanup and decommissioning in southeast Washington State, were evaluated to optimize liquid effluent monitoring efficiency. Wastewater from several facilities is collected and discharged to the ground at a common disposal site. The discharged water infiltrates through 60 m of soil column to the groundwater, which eventually flows into the Columbia River, the second largest river in the contiguous United States. Protection of this important natural resource is the major objective of both cleanup and groundwater and effluent monitoring activities at the Hanford Site. Four years of effluent data were evaluated for this study. More frequent sampling was conducted during the first year of operation to assess temporal variability in analyte concentrations, to determine operational factors contributing to waste stream variability, and to assess the probability of exceeding permit limits. The study was subsequently updated to include an evaluation of the sampling and analysis regime. It was concluded that the probability of exceeding permit limits was one in a million under normal operating conditions, that sampling frequency could be reduced, and that several analytes could be eliminated, while indicators could be substituted for more expensive analyses. The findings were used by the state regulatory agency to modify monitoring requirements for a new discharge permit. The primary focus of this paper is on the statistical approaches and rationale that led to the successful permit modification and to a more cost-effective effluent monitoring program.
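An exceedance-probability calculation of the kind underlying this permit analysis can be sketched by fitting a lognormal distribution to observed effluent concentrations, a common assumption for concentration data; the sample values and limit below are hypothetical, not Hanford data:

```python
import math
import statistics

def exceedance_probability(samples, limit):
    """Fit a lognormal to concentration samples and estimate the
    probability that a future sample exceeds the permit limit."""
    logs = [math.log(s) for s in samples]
    mu = statistics.mean(logs)
    sigma = statistics.stdev(logs)
    z = (math.log(limit) - mu) / sigma
    # Upper-tail probability of a standard normal via erfc
    return 0.5 * math.erfc(z / math.sqrt(2))

conc = [0.8, 1.1, 0.9, 1.3, 1.0, 0.7, 1.2]  # hypothetical mg/L values
print(exceedance_probability(conc, 5.0))    # limit far above observed range
```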

  4. OPTIMIZATION METHODS FOR HYDROECOLOGICAL MONITORING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Inna Pivovarova

    2016-09-01

The paper describes current approaches to the rational distribution of monitoring stations. A short review of the organization of hydro-geological observation systems in different countries is presented. On the basis of real data, we propose a solution to the problem of calculating the average area per hydrological station, which is the main indicator of the efficiency and performance of the monitoring system as a whole. We conclude that a comprehensive approach to organizing the monitoring system is important, because only hydrometric and hydrochemical activities coordinated in time make it possible to analyse the underlying causes of the observed dynamics of pollutant content in water bodies over the long term.

  5. Optimization of rootkit revealing system resources – A game theoretic approach

    Directory of Open Access Journals (Sweden)

    K. Muthumanickam

    2015-10-01

    Full Text Available A malicious rootkit is a collection of programs designed to infect and monitor a victim computer without the user’s permission. Once the victim has been compromised, the remote attacker can easily cause further damage. To infect, compromise and monitor, rootkits adopt Native Application Programming Interface (API) hooking techniques. To reveal hidden rootkits, current rootkit detection techniques check the various data structures that hold references to Native APIs. Verifying these data structures requires a large amount of system resources, because the number of APIs they reference is quite large. Game theory is a useful mathematical tool for simulating network attacks. In this paper, a mathematical model is framed to optimize resource consumption using game theory. To the best of our knowledge, this is the first work to optimize resource consumption while revealing rootkit presence using game theory. A non-cooperative game model is used to formulate the problem. Analysis and simulation results show that our game-theoretic model can effectively reduce resource consumption by selectively monitoring the number of APIs on the Windows platform.
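A minimal sketch of the non-cooperative setting is a 2x2 inspection game between a detector (monitor APIs or stay idle) and an attacker (install hooks or refrain). The payoffs below are illustrative, not taken from the paper; the mixed-strategy equilibrium is found by making each player indifferent between its pure strategies:

```python
def mixed_equilibrium(defender, attacker):
    """Mixed-strategy Nash equilibrium of a 2x2 non-cooperative inspection game.
    Rows: defender plays 0 = monitor APIs, 1 = stay idle.
    Columns: attacker plays 0 = install hooks, 1 = refrain.
    Returns (p_monitor, q_hook): the probabilities that make the
    opponent indifferent between its two pure strategies."""
    a, d = attacker, defender
    p_monitor = (a[1][0] - a[1][1]) / ((a[1][0] - a[1][1]) - (a[0][0] - a[0][1]))
    q_hook = (d[0][1] - d[1][1]) / ((d[0][1] - d[1][1]) - (d[0][0] - d[1][0]))
    return p_monitor, q_hook

# Illustrative payoffs: monitoring costs resources, an undetected rootkit is
# expensive for the defender, a detected one is costly to the attacker.
defender = [[4, -1], [-10, 0]]
attacker = [[-5, 0], [10, 0]]
p, q = mixed_equilibrium(defender, attacker)
```

The equilibrium monitoring probability `p` captures the paper's idea of selective monitoring: the defender need not inspect every API on every round.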

  6. Preventive radioecological assessment of territory for optimization of monitoring and countermeasures after radiation accidents.

    Science.gov (United States)

    Prister, B S; Vinogradskaya, V D; Lev, T D; Talerko, M M; Garger, E K; Onishi, Y; Tischenko, O G

    2018-04-01

    A methodology of a preventive radioecological assessment of the territory has been developed for optimizing post-emergency monitoring and countermeasure implementation in an event of a severe radiation accident. Approaches and main stages of integrated radioecological zoning of the territory are described. An algorithm for the assessment of the potential radioecological criticality (sensitivity) of the area is presented. The proposed approach is validated using data of the dosimetric passportization in Ukraine after the Chernobyl accident for the test site settlements. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Dynamical System Approaches to Combinatorial Optimization

    DEFF Research Database (Denmark)

    Starke, Jens

    2013-01-01

    Several dynamical system approaches to combinatorial optimization problems are described and compared. These include dynamical systems derived from penalty methods; the approach of Hopfield and Tank; self-organizing maps, that is, Kohonen networks; coupled selection equations; and hybrid methods thereof. Solutions are obtained in the limit of large times as asymptotically stable points of the dynamics. The obtained solutions are often not globally optimal but good approximations of it. Dynamical system and neural network approaches are appropriate methods for distributed and parallel processing. Because of the parallelization, they can be used as models for many industrial problems like manufacturing planning and optimization of flexible manufacturing systems. This is illustrated for an example in distributed robotic systems.
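The idea of a combinatorial solution emerging as an asymptotically stable point of continuous dynamics can be sketched on a toy problem. The snippet below is a Hopfield-style gradient flow for max cut on a 4-cycle (my own illustration of the general technique, not an example from the chapter): the binary partition is read off from the signs of the state after the dynamics settle.

```python
import math

# Adjacency matrix of a 4-cycle; the max cut separates {0, 2} from {1, 3}.
W = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]

def hopfield_maxcut(W, u, beta=5.0, dt=0.05, steps=400):
    """Continuous Hopfield-style dynamics minimizing sum_ij W_ij s_i s_j
    with s = tanh(beta * u); Euler integration until the state settles."""
    n = len(W)
    for _ in range(steps):
        s = [math.tanh(beta * ui) for ui in u]
        du = [-u[i] - sum(W[i][j] * s[j] for j in range(n)) for i in range(n)]
        u = [u[i] + dt * du[i] for i in range(n)]
    return [1 if ui > 0 else -1 for ui in u]

partition = hopfield_maxcut(W, [0.1, -0.05, 0.2, -0.15])
```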

  8. A system approach to nuclear facility monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Argo, P.E.; Doak, J.E.; Howse, J.W.

    1996-09-01

    Sensor technology for use in nuclear facility monitoring has reached an advanced stage of development. Research on where to place these sensors in a facility and how to combine their outputs in a meaningful fashion does not appear to be keeping pace. In this paper, we take a global view of the problem where sensor technology is viewed as only one piece of a large puzzle. Other pieces of this puzzle include the optimal location and type of sensors used in a specific facility, the rate at which sensors record information, and the risk associated with the materials/processes at a facility. If the data are analyzed off-site, how will they be transmitted? Is real-time analysis necessary? Are we monitoring only the facility itself, or might we also monitor the processing that occurs there? How are we going to combine the output from the various sensors to give us an accurate picture of the state of the facility? This paper will not try to answer all these questions, but rather it will attempt to stimulate thought in this area by formulating a systems approach to the problem, demonstrated by a prototype system and a system proposed for an actual facility. Our focus will be on the data analysis aspect of the problem.

  9. Monitoring and optimization of ATLAS Tier 2 center GoeGrid

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219638; Quadt, Arnulf; Yahyapour, Ramin

    The demand on computational and storage resources is growing along with the amount of information that needs to be processed and preserved. In order to ease the provisioning of digital services to the growing number of consumers, more and more distributed computing systems and platforms are actively developed and employed. The building blocks of the distributed computing infrastructure are single computing centers, such as GoeGrid, a Tier 2 center of the Worldwide LHC Computing Grid. The main motivation of this thesis was the optimization of GoeGrid performance through efficient monitoring. The goal was achieved by means of analysis of the GoeGrid monitoring information. The data analysis approach was based on the adaptive-network-based fuzzy inference system (ANFIS) and a machine learning algorithm, the linear Support Vector Machine (SVM). The main object of the research was the digital service, since availability, reliability and serviceability of the computing platform can be measured according to the const...

  10. Optimization of in-vivo monitoring program for radiation emergency response

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Wi Ho; Kim, Jong Kyung [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of)

    2016-12-15

    In case of radiation emergencies, internal exposure monitoring of members of the public will be required to confirm internal contamination of each individual. The in-vivo monitoring technique using a portable gamma spectrometer can easily be applied for internal exposure monitoring in the vicinity of the on-site area. In this study, minimum detectable doses (MDDs) for {sup 134}Cs, {sup 137}Cs, and {sup 131}I were calculated, adjusting minimum detectable activities (MDAs) from 50 to 1,000 Bq, to find the optimal in-vivo counting condition. DCAL software was used to derive the retention fractions of Cs and I isotopes in the whole body and thyroid, respectively. The minimum detectable level was determined against a committed effective dose of 0.1 mSv for emergency response. We found that MDDs at each MDA increased with elapsed time. MDAs of 1,000 Bq for {sup 134}Cs and {sup 137}Cs, and 100 Bq for {sup 131}I, were suggested as optimal for providing an in-vivo monitoring service in case of radiation emergencies. An in-vivo monitoring program for emergency response should be designed to achieve the optimal MDA suggested by the present work. We expect that a reduction of counting time compared with the routine monitoring program can achieve a high-throughput system in case of radiation emergencies.
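The MDA-to-MDD conversion described here is a short calculation: the measured activity is scaled up by the retention fraction m(t) to an intake, and the intake is multiplied by a dose coefficient. The numbers below are illustrative only (the retention fraction is invented and the Cs-137 ingestion dose coefficient of 1.3e-8 Sv/Bq is the commonly quoted ICRP value, not necessarily the one used in the paper):

```python
def minimum_detectable_dose(mda_bq, retention_fraction, dose_coeff_sv_per_bq):
    """Minimum detectable committed effective dose (Sv) from a minimum
    detectable activity (Bq in the measured region at counting time).
    retention_fraction is m(t), the fraction of the intake still present
    in the measured region at time t after intake."""
    intake_bq = mda_bq / retention_fraction
    return intake_bq * dose_coeff_sv_per_bq

# Illustrative: MDA of 1,000 Bq for Cs-137 counted when m(t) = 0.5.
mdd_sv = minimum_detectable_dose(1_000, 0.5, 1.3e-8)
below_criterion = mdd_sv < 1.0e-4  # the 0.1 mSv emergency-response criterion
```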

  11. A systems approach to nuclear facility monitoring

    International Nuclear Information System (INIS)

    Argo, P.E.; Doak, J.E.; Howse, J.W.

    1996-01-01

    Sensor technology for use in nuclear facility monitoring has reached an advanced stage of development. Research on where to place these sensors in a facility and how to combine their outputs in a meaningful fashion does not appear to be keeping pace. In this paper, the authors take a global view of the problem where sensor technology is viewed as only one piece of a large puzzle. Other pieces of this puzzle include the optimal location and type of sensors used in a specific facility, the rate at which sensors record information, and the risk associated with the materials/processes at a facility. If the data are analyzed off-site, how will they be transmitted? Is real-time analysis necessary? Is one monitoring only the facility itself, or might one also monitor the processing that occurs there (e.g., tank levels and concentrations)? How is one going to combine the outputs from the various sensors to give an accurate picture of the state of the facility? This paper will not try to answer all these questions, but rather it will attempt to stimulate thought in this area by formulating a systems approach to the problem, demonstrated by a prototype system and a system proposed for an actual facility. The focus will be on the data analysis aspect of the problem. Future work in this area should focus on recommendations and guidelines for a monitoring system based upon the type of facility and processing that occurs there.

  12. Identifying optimal remotely-sensed variables for ecosystem monitoring in Colorado Plateau drylands

    Science.gov (United States)

    Poitras, Travis; Villarreal, Miguel; Waller, Eric K.; Nauman, Travis; Miller, Mark E.; Duniway, Michael C.

    2018-01-01

    Water-limited ecosystems often recover slowly following anthropogenic or natural disturbance. Multitemporal remote sensing can be used to monitor ecosystem recovery after disturbance; however, dryland vegetation cover can be challenging to measure accurately due to sparse cover and spectral confusion between soils and non-photosynthetic vegetation. With the goal of optimizing a monitoring approach for identifying both abrupt and gradual vegetation changes, we evaluated the ability of Landsat-derived spectral variables to characterize surface variability of vegetation cover and bare ground across a range of vegetation community types. Using three-year composites of Landsat data, we modeled relationships between spectral information and field data collected at monitoring sites near Canyonlands National Park, UT. We also developed multiple regression models to assess improvement over single variables. We found that for all vegetation types, percent cover bare ground could be accurately modeled with single indices that included a combination of red and shortwave infrared bands, while near infrared-based vegetation indices like NDVI worked best for quantifying tree cover and total live vegetation cover in woodlands. We applied four models to characterize the spatial distribution of putative grassland ecological states across our study area, illustrating how this approach can be implemented to guide dryland ecosystem management.
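The two families of indices contrasted in the abstract are both normalized band differences. The sketch below shows the standard NDVI form and a red/SWIR analogue of the kind found useful for bare ground; the reflectance values are hypothetical, and the exact bare-ground index used in the study may differ:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index (the NIR-based index that
    worked best for tree and total live vegetation cover)."""
    return (nir - red) / (nir + red)

def red_swir_index(swir, red):
    """A normalized red/shortwave-infrared difference of the kind the
    study found useful for bare ground (illustrative form)."""
    return (swir - red) / (swir + red)

# Hypothetical Landsat surface reflectances for one dryland pixel.
red, nir, swir = 0.12, 0.35, 0.25
veg = ndvi(nir, red)
bare = red_swir_index(swir, red)
```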

  13. [Study on the optimization of monitoring indicators of drinking water quality during health supervision].

    Science.gov (United States)

    Ye, Bixiong; E, Xueli; Zhang, Lan

    2015-01-01

    To optimize the non-regular drinking water quality indices (except Giardia and Cryptosporidium) of urban drinking water, several methods were applied, including the rate of exceeding the standard, the risk of exceeding the standard, the frequency of concentrations below the detection limit, a comprehensive water quality index evaluation, and the attribute reduction algorithm of rough set theory. Redundant water quality indicators were eliminated, and the control factors that play a leading role in drinking water safety were identified. The optimization showed that, of 62 non-regular water quality monitoring indicators of urban drinking water, 42 could be removed through comprehensive evaluation combined with rough set attribute reduction. Optimizing the water quality monitoring indicators and reducing the number of indicators and the monitoring frequency can ensure the safety of drinking water quality while lowering monitoring costs and reducing the monitoring burden on the sanitation supervision departments.

  14. Optimal redistribution of an urban air quality monitoring network using atmospheric dispersion model and genetic algorithm

    Science.gov (United States)

    Hao, Yufang; Xie, Shaodong

    2018-03-01

    Air quality monitoring networks play a significant role in identifying the spatiotemporal patterns of air pollution, and they need to be deployed efficiently, with a minimum number of sites. The revision and optimal adjustment of existing monitoring networks is crucial for cities that have undergone rapid urban expansion and experience temporal variations in pollution patterns. An approach based on the Weather Research and Forecasting-California PUFF (WRF-CALPUFF) model and a genetic algorithm (GA) was developed to design an optimal monitoring network. Maximization of coverage with minimum overlap and the ability to detect violations of standards were taken as the design objectives for the redistributed networks. The non-dominated sorting genetic algorithm was applied to optimize the network size and site locations simultaneously for Shijiazhuang city, one of the most polluted cities in China. The assessment of the current network identified insufficient spatial coverage of SO2 and NO2 monitoring for the expanding city. The optimization results showed that significant improvements in multiple objectives were achieved by redistributing the original network. Efficient coverage of the resulting designs improved to 60.99% and 76.06% of the urban area for SO2 and NO2, respectively. A redistributed design for multiple pollutants, comprising 8 sites, was also proposed; its spatial representation covered 52.30% of the urban area, and the overlapped areas decreased by 85.87% compared with the original network. The ability to detect violations of standards was not improved as much as the other two objectives due to the conflicting nature of the multiple objectives. Additionally, the results demonstrated that the algorithm was slightly sensitive to the parameter settings, with the number of generations having the most significant effect. Overall, our study presents an effective and feasible procedure for air quality network optimization at a city scale.
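The coverage-with-minimum-overlap objective can be sketched as a simple score over a grid of receptor cells: cells covered at least once count for the design, cells covered more than once count against it. This is a toy stand-in for the paper's GA fitness, with made-up geometry:

```python
def coverage_score(sites, radius, grid):
    """Cells covered by at least one site minus cells covered by more
    than one (overlap); a toy version of coverage-with-minimum-overlap."""
    counts = []
    for cx, cy in grid:
        c = sum(1 for sx, sy in sites
                if (cx - sx) ** 2 + (cy - sy) ** 2 <= radius ** 2)
        counts.append(c)
    covered = sum(1 for c in counts if c >= 1)
    overlap = sum(1 for c in counts if c >= 2)
    return covered - overlap

grid = [(x, y) for x in range(10) for y in range(10)]
compact = coverage_score([(2, 2), (2, 3)], 3.0, grid)  # nearly coincident sites
spread = coverage_score([(2, 2), (7, 7)], 3.0, grid)   # well-separated sites
```

A GA would then search over site sets to maximize such a score, alongside the violation-detection objective.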

  15. Population Modeling Approach to Optimize Crop Harvest Strategy. The Case of Field Tomato.

    Science.gov (United States)

    Tran, Dinh T; Hertog, Maarten L A T M; Tran, Thi L H; Quyen, Nguyen T; Van de Poel, Bram; Mata, Clara I; Nicolaï, Bart M

    2017-01-01

    In this study, the aim is to develop a population model-based approach to optimize fruit harvesting strategies with regard to fruit quality and its derived economic value. This approach was applied to the case of tomato fruit harvesting under Vietnamese conditions. Fruit growth and development of tomato (cv. "Savior") was monitored in terms of fruit size and color during both the Vietnamese winter and summer growing seasons. A kinetic tomato fruit growth model was applied to quantify biological fruit-to-fruit variation in terms of physiological maturation. This model was successfully calibrated. Finally, the model was extended to translate the fruit-to-fruit variation at harvest into the economic value of the harvested crop. It can be concluded that a model-based approach to the optimization of harvest date and harvest frequency with regard to the economic value of the crop is feasible. This approach allows growers to optimize their harvesting strategy by harvesting the crop at more uniform maturity stages, meeting the stringent retail demands for homogeneous, high-quality product. The total farm profit would still depend on the impact a change in harvesting strategy might have on related expenditures. This model-based harvest optimization approach can be easily transferred to other fruit and vegetable crops, improving the homogeneity of postharvest product streams.

  16. Optimization of the monitoring of landfill gas and leachate in closed methanogenic landfills.

    Science.gov (United States)

    Jovanov, Dejan; Vujić, Bogdana; Vujić, Goran

    2018-06-15

    Monitoring of gas and leachate parameters in a closed landfill is a long-term activity defined by national legislation worldwide. The Serbian Waste Disposal Law requires monitoring of a landfill for at least 30 years after its closure, but its definition of the monitoring extent (number and type of parameters) is incomplete. In order to resolve these uncertainties, this research focuses on the process of monitoring optimization, using the closed landfill in Zrenjanin, Serbia, as the experimental model. The aim of the optimization was to find representative parameters that describe the physical, chemical and biological processes in a closed methanogenic landfill, and to make monitoring less expensive. The research included the development of five monitoring models with different numbers of gas and leachate parameters, and each model was processed in the open-source software GeoGebra, which is often used for solving optimization problems. The optimization identified the most favorable monitoring model, which fulfills all the defined criteria not only from the point of view of mathematical analysis, but also from the point of view of environmental protection. The final outcome of this research is a precise definition of the minimal set of parameters that should be included in landfill monitoring. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Optimal design of hydrometric monitoring networks with dynamic components based on Information Theory

    Science.gov (United States)

    Alfonso, Leonardo; Chacon, Juan; Solomatine, Dimitri

    2016-04-01

    The EC-FP7 WeSenseIt project proposes the development of a Citizen Observatory of Water, aiming at enhancing environmental monitoring and forecasting with the help of citizens equipped with low-cost sensors and personal devices such as smartphones and smart umbrellas. In this regard, Citizen Observatories may complement the limited data availability in terms of spatial and temporal density, which is of interest, among other areas, for improving hydraulic and hydrological models. At this point, the following question arises: how can citizens who are part of a citizen observatory be optimally guided so that the data they collect and send are useful for improving modelling and water management? This research proposes a new methodology to identify the optimal location and timing of potential observations coming from moving sensors of hydrological variables. The methodology is based on Information Theory, which has been widely used in hydrometric monitoring design [1-4]. In particular, it uses the concept of Joint Entropy as a measure of the amount of information contained in a set of random variables, which, in our case, correspond to the time series of hydrological variables captured at given locations in a catchment. The methodology presented is a step forward in the state of the art because it solves the multiobjective optimisation problem of simultaneously obtaining the minimum number of informative and non-redundant sensors needed for a given time, so that the best configuration of monitoring sites is found at every particular moment in time. To this end, the existing algorithms have been improved to make them efficient. The method is applied to cases in The Netherlands, UK and Italy and proves to have great potential to complement existing in-situ monitoring networks. [1] Alfonso, L., A. Lobbrecht, and R. Price (2010a), Information theory-based approach for location of monitoring water level gauges in polders, Water Resour. Res., 46(3), W03528. [2] Alfonso, L., A
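Joint entropy as a sensor-selection criterion can be sketched with quantized time series: greedily add the candidate that maximizes the joint entropy of the chosen set, so a redundant duplicate station adds nothing. The series below are invented for illustration; station B duplicates station A:

```python
from collections import Counter
from math import log2

def joint_entropy(series):
    """Shannon joint entropy (bits) of several quantized time series:
    H(X1..Xk) = -sum p(x) log2 p(x) over joint symbol tuples."""
    tuples = list(zip(*series))
    n = len(tuples)
    return -sum((c / n) * log2(c / n) for c in Counter(tuples).values())

def greedy_select(candidates, k):
    """Greedily add the sensor that maximizes joint entropy of the set,
    a common surrogate for 'informative and non-redundant'."""
    chosen, remaining = [], dict(candidates)
    for _ in range(k):
        best = max(remaining, key=lambda s: joint_entropy(
            [candidates[c] for c in chosen] + [remaining[s]]))
        chosen.append(best)
        del remaining[best]
    return chosen

# Hypothetical quantized water-level series; B is a duplicate of A.
obs = {"A": [0, 1, 2, 1, 0, 2, 1, 0],
       "B": [0, 1, 2, 1, 0, 2, 1, 0],
       "C": [1, 1, 0, 2, 2, 0, 0, 1]}
picked = greedy_select(obs, 2)
```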

  18. A case study of optimization in the decision process: Siting groundwater monitoring wells

    International Nuclear Information System (INIS)

    Cardwell, H.; Huff, D.; Douthitt, J.; Sale, M.

    1993-12-01

    Optimization is one of the tools available to assist decision makers in balancing multiple objectives and concerns. In a case study of the siting decision for groundwater monitoring wells, we look at the influence of the optimization models on the decisions made by the responsible groundwater specialist. This paper presents a multi-objective integer programming model for determining the locations of monitoring wells associated with a groundwater pump-and-treat remediation. After presenting the initial optimization results, we analyze the actual decision and revise the model to incorporate elements of the problem that were later identified as important in the decision-making process. The results of the revised model are compared to the actual siting plans, the recommendations from the initial optimization runs, and the initial monitoring network proposed by the decision maker.

  19. Optimizing urine drug testing for monitoring medication compliance in pain management.

    Science.gov (United States)

    Melanson, Stacy E F; Ptolemy, Adam S; Wasan, Ajay D

    2013-12-01

    It can be challenging to successfully monitor medication compliance in pain management. Clinicians and laboratorians need to collaborate to optimize patient care and maximize operational efficiency. The test menu, assay cutoffs, and testing algorithms used in urine drug testing panels should be periodically reviewed and tailored to the patient population to effectively assess compliance and avoid unnecessary testing and cost to the patient. Pain management and pathology collaborated on an important quality improvement initiative to optimize urine drug testing for monitoring medication compliance in pain management. We retrospectively reviewed 18 months of data from our pain management center. We gathered data on test volumes, positivity rates, and the frequency of false positive results. We also reviewed the clinical utility of our testing algorithms, assay cutoffs, and adulterant panel. In addition, the cost of each component was calculated. The positivity rates for ethanol and 3,4-methylenedioxymethamphetamine were low, which enabled us to optimize our testing panel for monitoring medication compliance in pain management and reduce cost. Wiley Periodicals, Inc.

  20. A novel approach for optimal chiller loading using particle swarm optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ardakani, A. Jahanbani; Ardakani, F. Fattahi; Hosseinian, S.H. [Department of Electrical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Hafez Avenue, Tehran 15875-4413 (Iran, Islamic Republic of)

    2008-07-01

    This study employs two new methods to solve the optimal chiller loading (OCL) problem: the continuous genetic algorithm (GA) and particle swarm optimization (PSO). Because of the continuous nature of the variables in the OCL problem, continuous GA and PSO easily overcome deficiencies of other conventional optimization methods. The partial load ratio (PLR) of each chiller is chosen as the variable to be optimized, and the consumption power of the chillers is taken as the fitness function. Both methods find the optimal solution while the equality constraint is exactly satisfied. Major advantages of the proposed approaches over conventional methods include fast convergence, escape from local optima, simple implementation, and independence of the solution procedure from the specific problem. The abilities of the proposed methods are examined with reference to an example system. To demonstrate these abilities, results are compared with the binary genetic algorithm method. The proposed approaches can be readily applied to air-conditioning systems. (author)
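A small PSO for chiller loading can keep the equality constraint exactly satisfied by rescaling each particle so the PLRs sum to the demand, echoing the abstract's claim. The quadratic power curves and all coefficients below are invented for illustration, not taken from the paper:

```python
import random

# Illustrative quadratic power curves P_i(PLR) = a + b*PLR + c*PLR^2 (kW).
CHILLERS = [(100.0, 150.0, 80.0), (120.0, 130.0, 95.0), (110.0, 140.0, 70.0)]
DEMAND = 1.8  # required sum of partial load ratios (the equality constraint)

def power(plrs):
    return sum(a + b * x + c * x * x for (a, b, c), x in zip(CHILLERS, plrs))

def repair(plrs):
    """Clip to a positive range, then rescale so sum(PLR) == DEMAND exactly."""
    clipped = [min(max(x, 0.05), 1.0) for x in plrs]
    s = sum(clipped)
    return [x * DEMAND / s for x in clipped]

def pso(n_particles=20, iters=200, seed=1):
    rng = random.Random(seed)
    dim = len(CHILLERS)
    xs = [repair([rng.uniform(0.3, 1.0) for _ in range(dim)])
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = list(xs)
    gbest = min(xs, key=power)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * r1 * (pbest[i][d] - xs[i][d])
                            + 1.5 * r2 * (gbest[d] - xs[i][d]))
            xs[i] = repair([x + v for x, v in zip(xs[i], vs[i])])
            if power(xs[i]) < power(pbest[i]):
                pbest[i] = xs[i]
                if power(xs[i]) < power(gbest):
                    gbest = xs[i]
    return gbest

best = pso()
```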

  1. Optimal spatio-temporal design of water quality monitoring networks for reservoirs: Application of the concept of value of information

    Science.gov (United States)

    Maymandi, Nahal; Kerachian, Reza; Nikoo, Mohammad Reza

    2018-03-01

    This paper presents a new methodology for optimizing Water Quality Monitoring (WQM) networks of reservoirs and lakes using the concept of the value of information (VOI) and utilizing results of a calibrated numerical water quality simulation model. With reference to value of information theory, the water quality of every checkpoint, with a specific prior probability, differs in time. After analyzing water quality samples taken from potential monitoring points, the posterior probabilities are updated using Bayes' theorem, and the VOI of the samples is calculated. In the next step, the stations with maximum VOI are selected as optimal stations. This process is repeated for each sampling interval to obtain optimal monitoring network locations for each interval. The results of the proposed VOI-based methodology are compared with those obtained using an entropy-theoretic approach. As the results of the two methodologies can be partially different, in the next step, the results are combined using a weighting method. Finally, the optimal sampling interval and locations of WQM stations are chosen using the Evidential Reasoning (ER) decision-making method. The efficiency and applicability of the methodology are evaluated using available water quantity and quality data of the Karkheh Reservoir in the southwestern part of Iran.
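The Bayes-update-then-VOI step can be sketched with a toy two-state, two-observation setting. VOI is the prior Bayes loss minus the expected posterior Bayes loss over the observations a sample could produce; all probabilities and losses below are hypothetical, not the Karkheh data:

```python
def bayes_update(prior, likelihoods, observation):
    """Bayes' theorem: posterior over water-quality states given one sample.
    likelihoods[state][observation] = P(observation | state)."""
    unnorm = {s: prior[s] * likelihoods[s][observation] for s in prior}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

def bayes_loss(probs, losses):
    """Expected loss of the best management action under `probs`."""
    return min(sum(probs[s] * row[s] for s in probs) for row in losses.values())

def value_of_information(prior, likelihoods, losses):
    """VOI of sampling at a station: prior Bayes loss minus the expected
    posterior Bayes loss, averaged over possible observations."""
    states = list(prior)
    observations = {o for s in states for o in likelihoods[s]}
    expected_posterior = 0.0
    for o in observations:
        p_o = sum(prior[s] * likelihoods[s].get(o, 0.0) for s in states)
        if p_o > 0.0:
            expected_posterior += p_o * bayes_loss(
                bayes_update(prior, likelihoods, o), losses)
    return bayes_loss(prior, losses) - expected_posterior

# Hypothetical setting: is the checkpoint clean or polluted?
prior = {"clean": 0.7, "polluted": 0.3}
likelihoods = {"clean": {"low": 0.9, "high": 0.1},
               "polluted": {"low": 0.2, "high": 0.8}}
losses = {"ignore": {"clean": 0.0, "polluted": 10.0},
          "treat": {"clean": 3.0, "polluted": 1.0}}
voi = value_of_information(prior, likelihoods, losses)
```

Stations whose samples yield the highest VOI would be the ones selected for that interval.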

  2. A Machine-Learning and Filtering Based Data Assimilation Framework for Geologic Carbon Sequestration Monitoring Optimization

    Science.gov (United States)

    Chen, B.; Harp, D. R.; Lin, Y.; Keating, E. H.; Pawar, R.

    2017-12-01

    Monitoring is a crucial aspect of geologic carbon sequestration (GCS) risk management. It has gained importance as a means to ensure CO2 is safely and permanently stored underground throughout the lifecycle of a GCS project. Three issues are often involved in a monitoring project: (i) where is the optimal location to place the monitoring well(s), (ii) what type of data (pressure, rate and/or CO2 concentration) should be measured, and (iii) what is the optimal frequency at which to collect the data. In order to address these important issues, a filtering-based data assimilation procedure is developed to perform the monitoring optimization. The optimal monitoring strategy is selected based on the uncertainty reduction of the objective of interest (e.g., cumulative CO2 leak) across all potential monitoring strategies. To reduce the computational cost of the filtering-based data assimilation process, two machine-learning algorithms, Support Vector Regression (SVR) and Multivariate Adaptive Regression Splines (MARS), are used to develop computationally efficient reduced-order models (ROMs) from full numerical simulations of CO2 and brine flow. The proposed framework for GCS monitoring optimization is demonstrated with two examples: a simple 3D synthetic case and a real field case, the Rock Springs Uplift carbon storage site in southwestern Wyoming.
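The core filtering step, weighting a prior ensemble by the likelihood of a new measurement and checking how much the uncertainty in the objective shrinks, can be sketched in a few lines. The leak amounts, the linear observation model and the noise level below are all invented:

```python
import math
import random

def filter_update(ensemble, observe, obs_value, obs_sigma):
    """One filtering-style assimilation step: weight each prior ensemble
    member by the Gaussian likelihood of the new measurement. `observe`
    maps a model state to its predicted measurement."""
    weights = []
    for state in ensemble:
        misfit = (observe(state) - obs_value) / obs_sigma
        weights.append(math.exp(-0.5 * misfit * misfit))
    z = sum(weights)
    return [w / z for w in weights]

def weighted_std(values, weights):
    m = sum(v * w for v, w in zip(values, weights))
    return math.sqrt(sum(w * (v - m) ** 2 for v, w in zip(values, weights)))

# Toy setting: uncertain cumulative CO2 leak (tonnes) and a pressure-like
# measurement assumed proportional to it.
rng = random.Random(0)
prior_leaks = [rng.uniform(0.0, 10.0) for _ in range(500)]
weights = filter_update(prior_leaks, lambda leak: 2.0 * leak,
                        obs_value=6.0, obs_sigma=2.0)
prior_sd = weighted_std(prior_leaks, [1.0 / 500] * 500)
posterior_sd = weighted_std(prior_leaks, weights)
```

Ranking candidate strategies by the resulting uncertainty reduction (prior_sd minus posterior_sd) is the selection principle the abstract describes; the ROMs stand in for `observe` when full simulations are too costly.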

  3. The use of hierarchical clustering for the design of optimized monitoring networks

    Science.gov (United States)

    Soares, Joana; Makar, Paul Andrew; Aklilu, Yayne; Akingunola, Ayodeji

    2018-05-01

    Associativity analysis is a powerful tool to deal with large-scale datasets by clustering the data on the basis of (dis)similarity and can be used to assess the efficacy and design of air quality monitoring networks. We describe here our use of Kolmogorov-Zurbenko filtering and hierarchical clustering of NO2 and SO2 passive and continuous monitoring data to analyse and optimize air quality networks for these species in the province of Alberta, Canada. The methodology applied in this study assesses dissimilarity between monitoring station time series based on two metrics: 1 - R, R being the Pearson correlation coefficient, and the Euclidean distance; we find that both should be used in evaluating monitoring site similarity. We have combined the analytic power of hierarchical clustering with the spatial information provided by deterministic air quality model results, using the gridded time series of model output as potential station locations, as a proxy for assessing monitoring network design and for network optimization. We demonstrate that clustering results depend on the air contaminant analysed, reflecting the difference in the respective emission sources of SO2 and NO2 in the region under study. Our work shows that much of the signal identifying the sources of NO2 and SO2 emissions resides in shorter timescales (hourly to daily) due to short-term variation of concentrations and that longer-term averages in data collection may lose the information needed to identify local sources. However, the methodology identifies stations mainly influenced by seasonality, if larger timescales (weekly to monthly) are considered. We have performed the first dissimilarity analysis based on gridded air quality model output and have shown that the methodology is capable of generating maps of subregions within which a single station will represent the entire subregion, to a given level of dissimilarity. We have also shown that our approach is capable of identifying different
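The two dissimilarity metrics the study combines are easy to compute side by side. In the invented example below, station B follows station A's temporal pattern at higher concentrations (1 - R near zero, large Euclidean distance), while station C differs in pattern, which is why the authors argue both metrics are needed:

```python
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def dissimilarity(x, y):
    """The paper's two metrics: 1 - R (disagreement in temporal pattern)
    and Euclidean distance (disagreement in magnitude)."""
    one_minus_r = 1.0 - pearson_r(x, y)
    euclid = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return one_minus_r, euclid

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]  # same pattern as A, larger magnitude
c = [2.0, 1.0, 4.0, 3.0]      # different pattern
shape_ab, size_ab = dissimilarity(a, b)
shape_ac, size_ac = dissimilarity(a, c)
```

Hierarchical clustering then merges the stations (or model grid cells) with the smallest dissimilarities first.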

  4. Game-theoretic approaches to optimal risk sharing

    NARCIS (Netherlands)

    Boonen, T.J.

    2014-01-01

    This Ph.D. thesis studies optimal risk capital allocation and optimal risk sharing. The first chapter deals with the problem of optimally allocating risk capital across divisions within a financial institution. To do so, an asymptotic approach is used to generalize the well-studied Aumann-Shapley

  5. Empirical data and optimal monitoring policies: the case of four Russian sea harbours

    Energy Technology Data Exchange (ETDEWEB)

    Deissenberg, C. [CEFI-CNRS, Les Milles (France); Gurman, V.; Shevchuk, E. [RAS, Program Systems Inst., Pereslavl-Zalessky (Russian Federation); Ryumina, E. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Economic Market Problems; Shevlyagin, K. [State Committee of the Environment Protection of the Russian Federation, Moscow (Russian Federation). Marine Environment Dept.

    2001-07-01

    In this paper, we describe the present state of empirical information about oil spills and oil monitoring activities in Russian harbours. We explain how we gathered, organized, and estimated the data needed to run the monitoring efforts optimization model of Deissenberg et al. (2001). We present, analyse, and discuss the results of the optimizations carried out with this model on the basis of the empirical data. These results show, in particular, that the economic efficiency of the monitoring activities decreases rapidly as the corresponding budget increases. This suggests that, rather urgently, measures other than monitoring should be initiated to control sea harbour pollution. (Author)

  6. Optimizing Seismic Monitoring Networks for EGS and Conventional Geothermal Projects

    Science.gov (United States)

    Kraft, Toni; Herrmann, Marcus; Bethmann, Falko; Stefan, Wiemer

    2013-04-01

    In the past several years, geological energy technologies receive growing attention and have been initiated in or close to urban areas. Some of these technologies involve injecting fluids into the subsurface (e.g., oil and gas development, waste disposal, and geothermal energy development) and have been found or suspected to cause small to moderate sized earthquakes. These earthquakes, which may have gone unnoticed in the past when they occurred in remote sparsely populated areas, are now posing a considerable risk for the public acceptance of these technologies in urban areas. The permanent termination of the EGS project in Basel, Switzerland after a number of induced ML~3 (minor) earthquakes in 2006 is one prominent example. It is therefore essential for the future development and success of these geological energy technologies to develop strategies for managing induced seismicity and keeping the size of induced earthquakes at a level that is acceptable to all stakeholders. Most guidelines and recommendations on induced seismicity published since the 1970ies conclude that an indispensable component of such a strategy is the establishment of seismic monitoring in an early stage of a project. This is because an appropriate seismic monitoring is the only way to detect and locate induced microearthquakes with sufficient certainty to develop an understanding of the seismic and geomechanical response of the reservoir to the geotechnical operation. In addition, seismic monitoring lays the foundation for the establishment of advanced traffic light systems and is therefore an important confidence building measure towards the local population and authorities. We have developed an optimization algorithm for seismic monitoring networks in urban areas that allows to design and evaluate seismic network geometries for arbitrary geotechnical operation layouts. 
The algorithm is based on the D-optimal experimental design that aims to minimize the error ellipsoid of the linearized
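The D-optimality criterion in this record can be illustrated with a small, self-contained sketch: choose the subset of candidate stations whose stacked travel-time sensitivity matrix G maximizes det(GᵀG), which minimizes the volume of the linearized location error ellipsoid. The sensitivities, network size, and exhaustive search below are invented for illustration and are not the authors' algorithm.

```python
import itertools
import numpy as np

def d_optimality(jacobian_rows):
    """D-optimality score: det(G^T G) for the stacked sensitivity matrix G."""
    G = np.vstack(jacobian_rows)
    return np.linalg.det(G.T @ G)

def best_network(candidate_rows, k):
    """Exhaustively pick the k candidate stations maximizing det(G^T G)."""
    best_score, best_idx = -np.inf, None
    for idx in itertools.combinations(range(len(candidate_rows)), k):
        score = d_optimality([candidate_rows[i] for i in idx])
        if score > best_score:
            best_score, best_idx = score, idx
    return best_idx, best_score

# Toy sensitivities: each row is d(travel time)/d(x, y, z) at one candidate station.
rng = np.random.default_rng(0)
rows = [rng.normal(size=3) for _ in range(6)]
idx, score = best_network(rows, 4)
```

For realistic candidate counts the exhaustive search would be replaced by a greedy or exchange heuristic; the criterion itself stays the same.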

  7. Optimal layout of radiological environment monitoring based on TOPSIS method

    International Nuclear Information System (INIS)

    Li Sufen; Zhou Chunlin

    2006-01-01

    TOPSIS is a method for multi-objective decision-making that can be applied to comprehensive assessment of environmental quality. This paper adopts it to obtain the optimal layout of radiological environment monitoring. The method is shown to be correct, simple, convenient, and practical, and it helps supervision departments lay out radiological environment monitoring sites scientifically and reasonably. (authors)
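For readers unfamiliar with TOPSIS, the ranking step can be sketched in a few lines: vector-normalize and weight the decision matrix, form the ideal and anti-ideal solutions, and rank alternatives by relative closeness to the ideal. The candidate-site scores and weights below are hypothetical, not taken from the paper.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution (TOPSIS).

    matrix: alternatives x criteria; weights sum to 1;
    benefit[j] is True if criterion j is to be maximized.
    """
    M = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    V = M / np.linalg.norm(M, axis=0) * np.asarray(weights)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)   # closeness in [0, 1]; higher is better

# Hypothetical candidate monitoring sites scored on coverage (max) and cost (min).
scores = topsis([[0.9, 120.0], [0.6, 60.0], [0.8, 200.0]],
                weights=[0.5, 0.5], benefit=[True, False])
```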

  8. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve portfolio optimization. This approach caters for both normal and non-normal data distributions. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each level of return compared with the mean-variance approach.
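The core idea, replacing the mean with the median as the location estimate when return distributions are skewed, can be sketched as follows. The return series and the location-per-risk score are illustrative stand-ins, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
# Right-skewed returns: lognormal shocks break the normality assumed by mean-variance.
skewed = rng.lognormal(mean=0.0, sigma=0.5, size=5000) - 1.0
normal = rng.normal(loc=0.15, scale=0.3, size=5000)   # near-normal comparison asset

def location_per_risk(r, center=np.median):
    """Location/variance score; center=np.mean recovers the mean-variance analogue."""
    return center(r) / r.var()

mean_score = location_per_risk(skewed, np.mean)       # mean-variance view
median_score = location_per_risk(skewed, np.median)   # median-variance view
```

For the right-skewed asset the two location estimates disagree, so the two approaches can rank the same asset differently; that divergence is exactly what the median-variance approach is meant to handle.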

  9. Statistical sampling approaches for soil monitoring

    NARCIS (Netherlands)

    Brus, D.J.

    2014-01-01

    This paper describes three statistical sampling approaches for regional soil monitoring, a design-based, a model-based and a hybrid approach. In the model-based approach a space-time model is exploited to predict global statistical parameters of interest such as the space-time mean. In the hybrid

  10. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.

    Science.gov (United States)

    Yang, Shaofu; Liu, Qingshan; Wang, Jun

    2018-04-01

    This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for a discretized approximation of the Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.
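The objective-weighting idea underlying the approach can be illustrated with weighted-sum scalarization: for a convex bi-objective problem, each weight vector yields one Pareto-optimal point, and sweeping the weights traces a discretized approximation of the Pareto front. The toy objectives below are chosen so the minimizer has a closed form; they are not the paper's test problem.

```python
import numpy as np

def pareto_point(w1):
    """Minimize w1*f1 + w2*f2 with f1=(x-1)^2, f2=(x+1)^2 (scalar x).

    Setting the derivative to zero gives the closed-form minimizer
    x = (w1 - w2) / (w1 + w2)."""
    w2 = 1.0 - w1
    x = (w1 - w2) / (w1 + w2)
    return x, (x - 1.0) ** 2, (x + 1.0) ** 2

# Sweep the weight on f1 to trace ten points of the Pareto front.
front = [pareto_point(w) for w in np.linspace(0.05, 0.95, 10)]
```

Along the front, increasing the weight on f1 improves f1 at the expense of f2, the defining trade-off of Pareto optimality.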

  11. Microseismic Monitoring Design Optimization Based on Multiple Criteria Decision Analysis

    Science.gov (United States)

    Kovaleva, Y.; Tamimi, N.; Ostadhassan, M.

    2017-12-01

    Borehole microseismic monitoring of hydraulic fracture treatments of unconventional reservoirs is a widely used method in the oil and gas industry. Sometimes, the quality of the acquired microseismic data is poor. One of the reasons for poor data quality is poor survey design. We attempt to provide a comprehensive and thorough workflow, using multiple criteria decision analysis (MCDA), to optimize the planning of microseismic monitoring. So far, microseismic monitoring has been used extensively as a powerful tool for determining fracture parameters that affect the influx of formation fluids into the wellbore. The factors that affect the quality of microseismic data and their final results include the average distance between microseismic events and receivers, complexity of the recorded wavefield, signal-to-noise ratio, data aperture, etc. These criteria often conflict with each other. In a typical microseismic monitoring project, those factors should be considered to choose the best monitoring well(s), the optimum number of required geophones, and their depth. We use MCDA to address these design challenges and develop a method that offers an optimized design out of all possible combinations to produce the best data acquisition results. We believe that this will be the first research to include the above-mentioned factors in a 3D model. Such a tool would assist companies and practicing engineers in choosing the best design parameters for future microseismic projects.
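A minimal weighted-sum MCDA ranking over hypothetical candidate monitoring wells might look like the following; the criteria values, weights, and min-max normalization are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Rows: candidate wells. Columns: SNR (maximize), event distance in km
# (minimize), aperture (maximize). All values are invented.
criteria = np.array([
    [12.0, 1.5, 0.8],
    [ 8.0, 0.7, 0.9],
    [15.0, 2.5, 0.5],
])
benefit = np.array([True, False, True])   # direction of each criterion
weights = np.array([0.5, 0.3, 0.2])       # hypothetical priorities

# Min-max normalize every criterion to [0, 1], flipping cost criteria so
# that 1 is always "best", then take the weighted sum.
lo, hi = criteria.min(axis=0), criteria.max(axis=0)
norm = (criteria - lo) / (hi - lo)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]
scores = norm @ weights
```

The well with the highest score is the recommended monitoring well; adding more criteria or alternatives only grows the matrix.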

  12. Using a water-food-energy nexus approach for optimal irrigation management during drought events in Nebraska

    Science.gov (United States)

    Campana, P. E.; Zhang, J.; Yao, T.; Melton, F. S.; Yan, J.

    2017-12-01

    Climate change and drought have severe impacts on the agricultural sector affecting crop yields, water availability, and energy consumption for irrigation. Monitoring, assessing and mitigating the effects of climate change and drought on the agricultural and energy sectors are fundamental challenges that require investigation for water, food, and energy security issues. Using an integrated water-food-energy nexus approach, this study is developing a comprehensive drought management system through integration of real-time drought monitoring with real-time irrigation management. The spatially explicit model developed, GIS-OptiCE, can be used for simulation, multi-criteria optimization and generation of forecasts to support irrigation management. To demonstrate the value of the approach, the model has been applied to one major corn region in Nebraska to study the effects of the 2012 drought on crop yield and irrigation water/energy requirements as compared to a wet year such as 2009. The water-food-energy interrelationships evaluated show that significant water volumes and energy are required to halt the negative effects of drought on the crop yield. The multi-criteria optimization problem applied in this study indicates that the optimal solutions of irrigation do not necessarily correspond to those that would produce the maximum crop yields, depending on both water and economic constraints. In particular, crop pricing forecasts are extremely important to define the optimal irrigation management strategy. The model developed shows great potential in precision agriculture by providing near real-time data products including information on evapotranspiration, irrigation volumes, energy requirements, predicted crop growth, and nutrient requirements.

  13. Optimizing the Energy and Throughput of a Water-Quality Monitoring System.

    Science.gov (United States)

    Olatinwo, Segun O; Joubert, Trudi-H

    2018-04-13

    This work presents a new approach to the maximization of energy and throughput in a wireless sensor network (WSN), with the intention of applying the approach to water-quality monitoring. Water-quality monitoring using WSN technology has become an interesting research area. Energy scarcity is a critical issue that plagues the widespread deployment of WSN systems. Alternative power supplies that harvest energy from sustainable sources have been explored. However, when energy-efficient models are not put in place, energy-harvesting-based WSN systems may experience an unstable energy supply, resulting in interruptions in communication and low system throughput. To alleviate these problems, this paper presents the joint maximization of the energy harvested by sensor nodes and their information-transmission rate using a sum-throughput technique. A wireless information and power transfer (WIPT) method is considered by harvesting energy from dedicated radio frequency sources. Due to the doubly near-far condition that confronts WIPT systems, a new WIPT system is proposed to improve the fairness of resource utilization in the network. Numerical simulation results are presented to validate the mathematical formulations for the optimization problem, which maximize the energy harvested and the overall throughput rate. Defining the performance metrics of achievable throughput and fairness in resource sharing, the proposed WIPT system outperforms an existing state-of-the-art WIPT system, with the comparison based on numerical simulations of both systems. The improved energy efficiency of the proposed WIPT system contributes to addressing the problem of energy scarcity.
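A toy harvest-then-transmit schedule illustrates the kind of trade-off the sum-throughput optimization resolves: a longer energy-transfer phase increases the energy each node harvests, but shortens the uplink slots in which that energy can be spent. All channel gains and constants below are hypothetical, and the equal uplink split is a simplification of the paper's fairness-aware allocation.

```python
import numpy as np

eta, P, noise = 0.6, 1.0, 1e-3     # harvester efficiency, AP transmit power, noise power
h = np.array([0.8, 0.2])           # downlink (energy) channel gains per sensor
g = np.array([0.5, 0.4])           # uplink (information) channel gains per sensor

def sum_throughput(t0):
    """Sum rate when fraction t0 of the block is energy transfer and the
    remainder is split equally among the sensors' uplink transmissions."""
    ti = (1.0 - t0) / len(h)
    energy = eta * P * h * t0                       # energy harvested per node
    return np.sum(ti * np.log2(1.0 + energy * g / (ti * noise)))

# Simple grid search over the energy-transfer fraction.
grid = np.linspace(0.01, 0.99, 99)
best_t0 = grid[np.argmax([sum_throughput(t) for t in grid])]
```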

  14. Optimized autonomous space in-situ sensor web for volcano monitoring

    Science.gov (United States)

    Song, W.-Z.; Shirazi, B.; Huang, R.; Xu, M.; Peterson, N.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.; Kedar, S.; Chien, S.; Webb, F.; Kiely, A.; Doubleday, J.; Davies, A.; Pieri, D.

    2010-01-01

    In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)) has developed a prototype of a dynamic and scalable hazard monitoring sensor-web and applied it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) has two-way communication capability between ground and space assets, uses both space and ground data for optimal allocation of limited bandwidth resources on the ground, and uses smart management of competing demands for limited space assets. It also enables scalability and seamless infusion of future space and in-situ assets into the sensor-web. The space and in-situ control components of the system are integrated such that each element is capable of autonomously tasking the other. The ground in-situ component was deployed into the craters and around the flanks of Mount St. Helens in July 2009, and linked to the command and control of the Earth Observing One (EO-1) satellite. © 2010 IEEE.

  15. Optimal Design of Air Quality Monitoring Network and its Application in an Oil Refinery Plant: An Approach to Keep Health Status of Workers

    Directory of Open Access Journals (Sweden)

    Khaled Zoroufchi Benis

    2015-12-01

    Full Text Available Background: Industrial air pollution is a growing challenge to human health, especially in developing countries, where there is no systematic monitoring of air pollution. Given the importance of the availability of valid information on population exposure to air pollutants, it is important to design an optimal Air Quality Monitoring Network (AQMN) for assessing population exposure to air pollution and predicting the magnitude of the health risks to the population. Methods: A multi-pollutant method (implemented as a MATLAB program) was explored for configuring an AQMN to detect the highest level of pollution around an oil refinery plant. The method ranks potential monitoring sites (grids) according to their ability to represent the ambient concentration. A cluster of contiguous grids exceeding a threshold value was used to calculate the station dosage. Selection of the best configuration of the AQMN was done based on the ratio of a station's dosage to the total dosage in the network. Results: Six monitoring stations were needed to detect the pollutant concentrations around the study area for estimating the level and distribution of exposure in the population, with a total network efficiency of about 99%. An analysis of the design procedure showed that wind regimes have the greatest effect on the location of monitoring stations. Conclusion: The optimal AQMN enables authorities to implement an effective program of air quality management for protecting human health.
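As one possible reading of the station-dosage idea, the sketch below greedily adds stations until a target fraction of the total above-threshold dosage is represented by the network. The concentration field, threshold, 3x3 coverage window, and 60% target are all invented for illustration; the paper's MATLAB method uses its own cluster definition.

```python
import numpy as np

rng = np.random.default_rng(2)
conc = rng.gamma(2.0, 1.0, size=(10, 10))    # synthetic concentration field on a grid
dose = np.where(conc > 2.0, conc, 0.0)       # dosage: concentration above the threshold
covered = np.zeros_like(dose, dtype=bool)

def window(i, j):
    """3x3 neighbourhood a station at grid cell (i, j) is assumed to represent."""
    return slice(max(i - 1, 0), i + 2), slice(max(j - 1, 0), j + 2)

def gain(i, j):
    """Uncovered dosage a new station at (i, j) would add to the network."""
    si, sj = window(i, j)
    return dose[si, sj][~covered[si, sj]].sum()

stations, total = [], dose.sum()
while dose[covered].sum() < 0.6 * total:     # target ~60% network efficiency
    i, j = max(np.ndindex(10, 10), key=lambda ij: gain(*ij))
    stations.append((i, j))
    si, sj = window(i, j)
    covered[si, sj] = True

efficiency = dose[covered].sum() / total     # covered dosage / total dosage
```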

  16. Scientific Opportunities for Monitoring at Environmental Remediation Sites (SOMERS): Integrated Systems-Based Approaches to Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Bunn, Amoret L.; Wellman, Dawn M.; Deeb, Rula A.; Hawley, Elizabeth L.; Truex, Michael J.; Peterson, Mark; Freshley, Mark D.; Pierce, Eric M.; McCord, John; Young, Michael H.; Gilmore, Tyler J.; Miller, Rick; Miracle, Ann L.; Kaback, Dawn; Eddy-Dilek, Carol; Rossabi, Joe; Lee, Michelle H.; Bush, Richard P.; Beam , Paul; Chamberlain, G. M.; Marble, Justin; Whitehurst, Latrincy; Gerdes, Kurt D.; Collazo, Yvette

    2012-05-15

    Through an inter-disciplinary effort, DOE is addressing a need to advance monitoring approaches from sole reliance on cost- and labor-intensive point-source monitoring to integrated systems-based approaches such as flux-based approaches and the use of early indicator parameters. Key objectives include identifying current scientific, technical and implementation opportunities and challenges, prioritizing science and technology strategies to meet current needs within the DOE complex for the most challenging environments, and developing an integrated and risk-informed monitoring framework.

  17. On-line monitoring applications at nuclear power plants. A risk informed approach to calibration reduction

    International Nuclear Information System (INIS)

    Shankar, Ramesh; Hussey, Aaron; Davis, Eddie

    2003-01-01

    On-line monitoring of instrument channels provides increased information about the condition of monitored channels through accurate, more frequent evaluation of each channel's performance over time. This type of performance monitoring offers an alternative approach to traditional time-directed calibration. EPRI's strategic role in on-line monitoring is to facilitate its implementation and cost-effective use in numerous applications at power plants. To this end, EPRI has sponsored an on-line monitoring implementation project at multiple nuclear plants specifically intended to install and use on-line monitoring technology. The selected on-line monitoring method is based on the Multivariate State Estimation Technique. The project has a planned three-year life; seven plants are participating in the project. The goal is to apply on-line monitoring to all types of power plant applications and document all aspects of the implementation process in a series of EPRI reports. These deliverables cover installation, modeling, optimization, and proven cost-benefit. This paper discusses the actual implementation of on-line monitoring in various nuclear plant instrument systems. Examples of detected instrument drift are provided. (author)

  18. Optimized Enhanced Bioremediation Through 4D Geophysical Monitoring and Autonomous Data Collection, Processing and Analysis

    Science.gov (United States)

    2014-09-01

    (Project ER-200717) Optimized Enhanced Bioremediation Through 4D Geophysical Monitoring and Autonomous Data Collection, Processing and Analysis.

  19. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    Science.gov (United States)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
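The Lagrange multiplier step for the budget-constrained primal problem has a well-known closed form in the minimum-variance case, w* = S⁻¹1 / (1ᵀS⁻¹1). The sketch below checks it on a toy covariance with identical variances (as in the record) but distinct correlations; the numbers are illustrative, not from the paper.

```python
import numpy as np

# Toy covariance: unit variances on the diagonal, assorted correlations off it.
S = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
ones = np.ones(3)

# Stationarity of the Lagrangian L(w, m) = w'Sw/2 - m(1'w - 1) gives
# S w = m 1, so w is proportional to S^{-1} 1; the budget fixes the scale.
w = np.linalg.solve(S, ones)
w /= ones @ w

risk = w @ S @ w   # minimum attainable variance under the budget constraint
```

Any other budget-feasible portfolio, e.g. equal weights, has variance at least `risk`, which is a quick sanity check on the derivation.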

  20. A Hybrid Heuristic Optimization Approach for Leak Detection in Pipe Networks Using Ordinal Optimization Approach and the Symbiotic Organism Search

    Directory of Open Access Journals (Sweden)

    Chao-Chih Lin

    2017-10-01

    Full Text Available A new transient-based hybrid heuristic approach is developed to optimize a transient generation process and to detect leaks in pipe networks. The approach couples the ordinal optimization approach (OOA) and the symbiotic organism search (SOS) to solve the optimization problem by means of iterations. A pipe network analysis model (PNSOS) is first used to determine steady-state head distribution and pipe flow rates. The best transient generation point and its relevant valve operation parameters are optimized by maximizing the objective function of transient energy. The transient event is created at the chosen point, and the method of characteristics (MOC) is used to analyze the transient flow. The OOA is applied to sift through the candidate pipes and the initial organisms with leak information. The SOS is employed to determine the leaks by minimizing the sum of differences between simulated and computed heads at the observation points. Two synthetic leak scenarios, a simple pipe network and a water distribution network (WDN), are chosen to test the performance of leak detection ordinal symbiotic organism search (LDOSOS). Leak information can be accurately identified by the proposed approach in both scenarios. The presented technique makes a remarkable contribution to the success of leak detection in pipe networks.
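The misfit the search minimizes, a sum of squared differences between simulated and observed heads at the observation points, can be sketched with a toy linear stand-in for the transient solver. The sensitivity matrix, true leak, candidate grid, and brute-force search below are all invented; in the paper the heads come from the MOC model and the search is performed by the SOS.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy stand-in for the hydraulic model: observed heads respond linearly to a
# leak of given size at a given node (4 observation points, 6 candidate nodes).
sensitivity = rng.normal(size=(4, 6))

def simulated_heads(node, size):
    return size * sensitivity[:, node]

true_node, true_size = 2, 0.7
observed = simulated_heads(true_node, true_size)   # synthetic "measurements"

def misfit(node, size):
    """Sum of squared head differences, the quantity the optimizer minimizes."""
    return np.sum((simulated_heads(node, size) - observed) ** 2)

# Brute-force search over candidate leak locations and sizes.
best = min(((n, s) for n in range(6) for s in np.linspace(0.1, 1.0, 10)),
           key=lambda p: misfit(*p))
```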

  1. Vector-model-supported approach in prostate plan optimization

    International Nuclear Information System (INIS)

    Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi

    2017-01-01

    The lengthy time consumed by traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base for retrieving similar radiotherapy cases was developed from the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, vector-model-supported optimization reduced planning time and iteration count by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shorter planning times and fewer iterations.
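The retrieval step can be sketched as nearest-neighbour search over case feature vectors. Cosine similarity, the feature values, and the parameter payloads below are assumptions for illustration; the paper's exact similarity measure and DICOM-derived features are not reproduced here.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical case database: feature vector -> planning parameters to reuse.
reference = {
    "case_a": (np.array([0.9, 0.2, 0.4]), {"max_iter": 80}),
    "case_b": (np.array([0.1, 0.8, 0.9]), {"max_iter": 120}),
}

# Features of the new (test) case; the most similar reference case donates
# its planning parameters as the optimization starting point.
test_case = np.array([0.85, 0.25, 0.35])
best = max(reference, key=lambda k: cosine(reference[k][0], test_case))
starting_params = reference[best][1]
```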

  2. Vector-model-supported approach in prostate plan optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Eva Sau Fan [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Wu, Vincent Wing Cheung [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Harris, Benjamin [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Lehman, Margot; Pryor, David [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); School of Medicine, University of Queensland (Australia); Chan, Lawrence Wing Chi, E-mail: wing.chi.chan@polyu.edu.hk [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong)

    2017-07-01

    The lengthy time consumed by traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base for retrieving similar radiotherapy cases was developed from the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, vector-model-supported optimization reduced planning time and iteration count by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shorter planning times and fewer iterations.

  3. Renal function monitoring in heart failure - what is the optimal frequency? A narrative review.

    Science.gov (United States)

    Al-Naher, Ahmed; Wright, David; Devonald, Mark Alexander John; Pirmohamed, Munir

    2018-01-01

    The second most common cause of hospitalization due to adverse drug reactions in the UK is renal dysfunction due to diuretics, particularly in patients with heart failure, where diuretic therapy is a mainstay of treatment regimens. Therefore, the optimal frequency for monitoring renal function in these patients is an important consideration for preventing renal failure and hospitalization. This review looks at the current evidence for optimal monitoring practices of renal function in patients with heart failure according to national and international guidelines on the management of heart failure (AHA/NICE/ESC/SIGN). Current guidance on renal function monitoring is in large part based on expert opinion, with a lack of clinical studies that have specifically evaluated the optimal frequency of renal function monitoring in patients with heart failure. Furthermore, there is variability between guidelines, and recommendations are typically nonspecific. Safer prescribing of diuretics in combination with other anti-heart-failure treatments requires better evidence for the frequency of renal function monitoring. We suggest developing more personalized monitoring rather than relying on the current medication-based guidance. Such flexible clinical guidelines could be implemented using intelligent clinical decision support systems. Personalized renal function monitoring would be more effective in preventing renal decline than reacting to it. © 2017 The Authors. British Journal of Clinical Pharmacology published by John Wiley & Sons Ltd on behalf of British Pharmacological Society.

  4. Reliability-based optimal structural design by the decoupling approach

    International Nuclear Information System (INIS)

    Royset, J.O.; Der Kiureghian, A.; Polak, E.

    2001-01-01

    A decoupling approach for solving optimal structural design problems involving reliability terms in the objective function, the constraint set or both is discussed and extended. The approach employs a reformulation of each problem, in which reliability terms are replaced by deterministic functions. The reformulated problems can be solved by existing semi-infinite optimization algorithms and computational reliability methods. It is shown that the reformulated problems produce solutions that are identical to those of the original problems when the limit-state functions defining the reliability problem are affine. For nonaffine limit-state functions, approximate solutions are obtained by solving series of reformulated problems. An important advantage of the approach is that the required reliability and optimization calculations are completely decoupled, thus allowing flexibility in the choice of the optimization algorithm and the reliability computation method
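The affine case mentioned above is the one where the reformulation is exact: for a limit-state g(U) = b - aᵀU with U standard normal, the failure probability has the closed form Φ(-β) with reliability index β = b/‖a‖, so the reliability term can be replaced by a deterministic function of the design. A quick Monte Carlo check with toy numbers (the vector a and margin b are invented):

```python
import numpy as np
from math import erf, sqrt

a = np.array([1.0, 2.0, 2.0])   # gradient of the affine limit-state function
b = 6.0                          # deterministic safety margin
beta = b / np.linalg.norm(a)     # reliability index; here 6 / 3 = 2

phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))   # standard normal CDF
pf_exact = phi(-beta)            # exact failure probability P[g(U) <= 0]

# Monte Carlo estimate of the same probability.
rng = np.random.default_rng(6)
U = rng.standard_normal((200_000, 3))
pf_mc = (U @ a >= b).mean()
```

For nonaffine limit states this identity only holds approximately, which is why the record describes solving a series of reformulated problems.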

  5. Optimal taxation and welfare benefits with monitoring of job search

    NARCIS (Netherlands)

    Boone, J.; Bovenberg, A.L.

    2013-01-01

    In order to investigate the interaction between tax policy, welfare benefits, the government technology for monitoring and sanctioning inadequate search, workfare, and externalities from work, we incorporate endogenous job search and involuntary unemployment into a model of optimal nonlinear income

  6. A Practical Approach to Governance and Optimization of Structured Data Elements.

    Science.gov (United States)

    Collins, Sarah A; Gesner, Emily; Morgan, Steven; Mar, Perry; Maviglia, Saverio; Colburn, Doreen; Tierney, Diana; Rocha, Roberto

    2015-01-01

    Definition and configuration of clinical content in an enterprise-wide electronic health record (EHR) implementation is highly complex. Sharing of data definitions across applications within an EHR implementation project may be constrained by practical limitations, including time, tools, and expertise. However, maintaining rigor in an approach to data governance is important for sustainability and consistency. With this understanding, we have defined a practical approach for governance of structured data elements to optimize data definitions given limited resources. This approach includes a 10 step process: 1) identification of clinical topics, 2) creation of draft reference models for clinical topics, 3) scoring of downstream data needs for clinical topics, 4) prioritization of clinical topics, 5) validation of reference models for clinical topics, and 6) calculation of gap analyses of EHR compared against reference model, 7) communication of validated reference models across project members, 8) requested revisions to EHR based on gap analysis, 9) evaluation of usage of reference models across project, and 10) Monitoring for new evidence requiring revisions to reference model.

  7. A Risk-Based Multi-Objective Optimization Concept for Early-Warning Monitoring Networks

    Science.gov (United States)

    Bode, F.; Loschko, M.; Nowak, W.

    2014-12-01

    Groundwater is a resource for drinking water and hence needs to be protected from contaminations. However, many well catchments include an inventory of known and unknown risk sources which cannot be eliminated, especially in urban regions. As a matter of risk control, all these risk sources should be monitored. A one-to-one monitoring situation for each risk source would lead to a cost explosion and is even impossible for unknown risk sources. However, smart optimization concepts could help to find promising low-cost monitoring network designs. In this work we develop a concept to plan monitoring networks using multi-objective optimization. The objectives considered are to maximize the probability of detecting all contaminations, to maximize the early-warning time, and to minimize the installation and operating costs of the monitoring network. A qualitative risk ranking is used to prioritize the known risk sources for monitoring. The unknown risk sources can neither be located nor ranked. Instead, we represent them by a virtual line of risk sources surrounding the production well. We classify risk sources into four different categories: severe, medium and tolerable for known risk sources, and an extra category for the unknown ones. With that, early-warning time and detection probability become individual objectives for each risk class. Thus, decision makers can identify monitoring networks which are valid for controlling the top risk sources, and evaluate the capabilities (or search for least-cost upgrades) to also cover moderate, tolerable and unknown risk sources. Monitoring networks which are valid for the remaining risk also cover all other risk sources, but the early-warning time suffers. The data provided for the optimization algorithm are calculated in a preprocessing step by a flow and transport model. Uncertainties due to hydro(geo)logical phenomena are taken into account by Monte-Carlo simulations. 
To avoid numerical dispersion during the transport simulations we use the
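The detection-probability objective can be estimated by Monte Carlo over plume realizations, as in this deliberately simplified 1-D sketch; the plume geometry, domain, and well layouts are invented and stand in for the flow-and-transport realizations described above.

```python
import numpy as np

rng = np.random.default_rng(4)

def detection_probability(wells, n_real=5000):
    """Fraction of random plume realizations intercepted by at least one well.

    Each realization is a plume with a random center and width on a
    100-unit transect; a well 'detects' it if it lies inside the plume."""
    centers = rng.uniform(0.0, 100.0, n_real)
    widths = rng.uniform(5.0, 15.0, n_real)
    hit = np.zeros(n_real, dtype=bool)
    for w in wells:
        hit |= np.abs(w - centers) <= widths / 2.0
    return hit.mean()

# Two candidate designs: a single well versus a denser (costlier) layout.
sparse = detection_probability([50.0])
dense = detection_probability([20.0, 40.0, 60.0, 80.0])
```

In the multi-objective setting, each design's estimated detection probability is traded off against its cost and early-warning time rather than maximized alone.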

  8. Ant colony optimization and neural networks applied to nuclear power plant monitoring

    International Nuclear Information System (INIS)

    Santos, Gean Ribeiro dos; Andrade, Delvonei Alves de; Pereira, Iraci Martinez

    2015-01-01

    A recurring challenge in production processes is the development of monitoring and diagnosis systems. Those systems help detect unexpected changes and interruptions, preventing losses and mitigating risks. Artificial Neural Networks (ANNs) have been extensively used in creating monitoring systems. Usually the ANNs created to solve this kind of problem take into account only parameters such as the number of inputs, outputs, and hidden layers. The resulting networks are generally fully connected, with no optimization of their topology. This work intends to use an Ant Colony Optimization (ACO) algorithm to create a tuned neural network. The ACO search algorithm will use Back Error Propagation (BP) to optimize the network topology by suggesting the best neuron connections. The resulting ANN will be applied to monitoring the IEA-R1 research reactor at IPEN. (author)
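A stripped-down ACO loop of the kind described might look like the following sketch, where each "ant" samples a binary mask over candidate neuron connections from pheromone levels, and a cheap proxy fitness stands in for the back-propagation training error the authors would use. The proxy fitness, problem sizes, and update constants are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n_conn, n_ants, n_iter = 12, 20, 30
ideal = rng.random(n_conn) < 0.5        # hidden "good" topology (stand-in only)
pher = np.full(n_conn, 0.5)             # pheromone = P(include each connection)
gbest, gbest_fit = None, -1.0           # global best mask found so far

def fitness(mask):
    """Proxy for 1 - BP validation error: agreement with the hidden topology."""
    return (mask == ideal).mean()

for _ in range(n_iter):
    # Each ant samples a connection mask according to the pheromone levels.
    ants = rng.random((n_ants, n_conn)) < pher
    scores = np.array([fitness(a) for a in ants])
    best = ants[scores.argmax()]
    if scores.max() > gbest_fit:
        gbest, gbest_fit = best.copy(), float(scores.max())
    # Evaporate pheromone, reinforce the iteration-best mask, keep in bounds.
    pher = (0.9 * pher + 0.1 * best).clip(0.05, 0.95)
```

In the paper's setting, evaluating `fitness` would mean training the masked network with BP and scoring its error, which is why ACO's cheap sampling of candidate topologies is attractive.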

  9. Ant colony optimization and neural networks applied to nuclear power plant monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Gean Ribeiro dos; Andrade, Delvonei Alves de; Pereira, Iraci Martinez, E-mail: gean@usp.br, E-mail: delvonei@ipen.br, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    A recurring challenge in production processes is the development of monitoring and diagnosis systems. Such systems help detect unexpected changes and interruptions, preventing losses and mitigating risks. Artificial Neural Networks (ANNs) have been extensively used in creating monitoring systems. Usually, the ANNs created to solve this kind of problem are designed by taking into account only parameters such as the number of inputs, outputs, and hidden layers. The resulting networks are generally fully connected, with no refinement of their topology. This work uses an Ant Colony Optimization (ACO) algorithm to create a tuned neural network. The ACO search algorithm uses Back Error Propagation (BP) to optimize the network topology by suggesting the best neuron connections. The resulting ANN is applied to monitoring the IEA-R1 research reactor at IPEN. (author)

  10. Optimizing the Energy and Throughput of a Water-Quality Monitoring System

    Directory of Open Access Journals (Sweden)

    Segun O. Olatinwo

    2018-04-01

    Full Text Available This work presents a new approach to the maximization of energy and throughput in a wireless sensor network (WSN), with the intention of applying the approach to water-quality monitoring. Water-quality monitoring using WSN technology has become an interesting research area. Energy scarcity is a critical issue that plagues the widespread deployment of WSN systems. Power supplies that harvest energy from sustainable sources have been explored. However, when energy-efficient models are not put in place, energy-harvesting WSN systems may experience an unstable energy supply, resulting in interrupted communication and low system throughput. To alleviate these problems, this paper presents the joint maximization of the energy harvested by sensor nodes and their information-transmission rate using a sum-throughput technique. A wireless information and power transfer (WIPT) method is considered, harvesting energy from dedicated radio-frequency sources. Due to the doubly near-far condition that confronts WIPT systems, a new WIPT system is proposed to improve the fairness of resource utilization in the network. Numerical simulation results are presented to validate the mathematical formulations of the optimization problem, which maximizes the energy harvested and the overall throughput rate. On the performance metrics of achievable throughput and fairness in resource sharing, the proposed WIPT system outperforms an existing state-of-the-art WIPT system in numerical simulations of both systems. The improved energy efficiency of the proposed WIPT system contributes to addressing the problem of energy scarcity.

  11. Strain sensors optimal placement for vibration-based structural health monitoring. The effect of damage on the initially optimal configuration

    Science.gov (United States)

    Loutas, T. H.; Bourikas, A.

    2017-12-01

    We revisit the optimal sensor placement problem for engineering structures, with an emphasis on in-plane dynamic strain measurements, aiming at modal identification as well as vibration-based damage detection for structural health monitoring purposes. The approach utilized is based on maximizing a norm of the Fisher Information Matrix (FIM) built with numerically obtained mode shapes of the structure, while at the same time prohibiting the sensorization of neighboring degrees of freedom as well as those carrying similar information, in order to obtain satisfactory coverage. A new convergence criterion of the FIM norm is proposed in order to deal with the issue of choosing an appropriate sensor redundancy threshold, a concept recently introduced but not further investigated concerning its choice. The sensor configurations obtained via a forward sequential placement algorithm are sub-optimal in terms of FIM norm values, but because the selected sensors are not allowed to occupy neighboring degrees of freedom they provide better coverage of the structure and a subsequently better identification of the experimental mode shapes. The issue of how service-induced damage affects the initially nominated optimal sensor configuration is also investigated and reported. The numerical model of a composite sandwich panel serves as a representative aerospace structure for our investigations.
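
    A forward sequential placement with a neighbor-exclusion rule can be sketched in a few lines. The mode-shape matrix below is random and purely illustrative; a real application would take it from a finite-element model of the panel:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical mode-shape matrix: 30 candidate strain DOFs x 4 modes.
Phi = rng.standard_normal((30, 4))
n_sensors = 6

def fim_norm(rows):
    Ps = Phi[rows]
    return float(np.trace(Ps.T @ Ps))  # trace norm of the Fisher Information Matrix

# Forward sequential placement: greedily add the DOF that most increases
# the FIM norm, banning already-sensorized DOFs and their neighbours.
selected = []
for _ in range(n_sensors):
    banned = {j for i in selected for j in (i - 1, i, i + 1)}
    best_row = max((r for r in range(len(Phi)) if r not in banned),
                   key=lambda r: fim_norm(selected + [r]))
    selected.append(best_row)
print(sorted(selected))
```

    The paper's redundancy threshold would additionally reject candidates whose information gain falls below the convergence criterion; here only the neighbor ban is shown.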

  12. Availability analysis of mechanical systems with condition-based maintenance using semi-Markov and evaluation of optimal condition monitoring interval

    Science.gov (United States)

    Kumar, Girish; Jain, Vipul; Gandhi, O. P.

    2018-03-01

    Maintenance helps to extend equipment life by improving its condition and avoiding catastrophic failures. An appropriate model or mechanism is thus needed to quantify system availability vis-a-vis a given maintenance strategy, which will assist in decision-making for optimal utilization of maintenance resources. This paper deals with semi-Markov process (SMP) modeling for steady-state availability analysis of mechanical systems that follow condition-based maintenance (CBM), and with evaluation of the optimal condition monitoring interval. The developed SMP model is solved using a two-stage analytical approach for steady-state availability analysis of the system, and the CBM interval that maximizes system availability is determined using a Genetic Algorithm approach. The main contribution of the paper is a predictive tool for system availability that helps in deciding the optimum CBM policy. The proposed methodology is demonstrated for a centrifugal pump.
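
    The trade-off behind an optimal monitoring interval can be shown with a deliberately simple availability model (all rates and downtimes invented, and a grid search standing in for the paper's GA and semi-Markov analysis): frequent inspections waste downtime, rare inspections leave failures undetected.

```python
import math

# Toy stand-in for the semi-Markov model: per monitoring cycle of length T
# days, inspection costs d_i days; failures arrive at rate 1/theta, and an
# undetected failure sits ~half a cycle before detection plus d_r repair days.
theta, d_i, d_r = 100.0, 0.5, 10.0

def availability(T):
    p_fail = 1.0 - math.exp(-T / theta)
    uptime = T - p_fail * (T / 2.0)
    cycle = T + d_i + p_fail * d_r
    return uptime / cycle

# Grid search over candidate CBM intervals (the paper uses a GA instead).
T_opt = max((t / 10.0 for t in range(10, 2001)), key=availability)
print(round(T_opt, 1), round(availability(T_opt), 3))
```

    The interior maximum (neither inspecting constantly nor never) is exactly what the GA searches for against the full SMP availability expression.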

  13. Optimization of nonlinear controller with an enhanced biogeography approach

    Directory of Open Access Journals (Sweden)

    Mohammed Salem

    2014-07-01

    Full Text Available This paper is dedicated to the optimization of nonlinear controllers based on an enhanced Biogeography Based Optimization (BBO) approach. The BBO is combined with a predator and prey model in which several predators are used, and a modified migration operator is introduced to increase diversification along the optimization process, so as to avoid local optima and reach the optimal solution quickly. The proposed approach is used to tune the gains of a PID controller for nonlinear systems. Simulations carried out on a mass-spring-damper and an inverted pendulum give remarkable results when compared to a genetic algorithm and standard BBO.

  14. Robust Portfolio Optimization using CAPM Approach

    Directory of Open Access Journals (Sweden)

    mohsen gharakhani

    2013-08-01

    Full Text Available In this paper, a new robust model of the multi-period portfolio problem is developed. One of the key concerns in any asset-allocation problem is how to cope with uncertainty about future returns. There are several approaches in the literature for this purpose, including stochastic programming and robust optimization. Applying these techniques to the multi-period portfolio problem may increase the problem size to the point that the resulting model is intractable. In this paper, a novel approach is proposed to formulate the multi-period portfolio problem as an uncertain linear program, assuming that asset returns follow the single-index factor model. Robust optimization is then used to solve the problem. To evaluate the performance of the proposed model, a numerical example is given using simulated data.

  15. Soft computing approach for reliability optimization: State-of-the-art survey

    International Nuclear Information System (INIS)

    Gen, Mitsuo; Yun, Young Su

    2006-01-01

    In the broadest sense, reliability is a measure of performance of systems. As systems have grown more complex, the consequences of their unreliable behavior have become severe in terms of cost, effort, lives, etc., and the interest in assessing system reliability and the need for improving the reliability of products and systems have become very important. Most solution methods for reliability optimization assume that systems have redundant components in series and/or parallel configurations and that alternative designs are available. Reliability optimization problems concentrate on optimal allocation of redundant components and optimal selection of alternative designs to meet system requirements. In the past two decades, numerous reliability optimization techniques have been proposed. Generally, these techniques can be classified as linear programming, dynamic programming, integer programming, geometric programming, heuristic methods, the Lagrangean multiplier method and so on. A Genetic Algorithm (GA), as a soft computing approach, is a powerful tool for solving various reliability optimization problems. In this paper, we briefly survey GA-based approaches to various reliability optimization problems, such as reliability optimization of redundant systems, reliability optimization with alternative designs, reliability optimization with time-dependent reliability, reliability optimization with interval coefficients, bicriteria reliability optimization, and reliability optimization with fuzzy goals. We also introduce hybrid approaches that combine GA with fuzzy logic, neural networks and other conventional search techniques. Finally, we present experiments on various reliability optimization problems using the hybrid GA approach
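
    A minimal GA for the redundancy-allocation problem mentioned above can be sketched as follows. The three-subsystem series system, component reliabilities, costs and budget are all illustrative; real problems add alternative designs and more constraints:

```python
import random

random.seed(7)

# Toy redundancy allocation: 3 subsystems in series, component
# reliabilities r, unit costs c, redundancy levels 1..4, cost budget.
r = [0.80, 0.90, 0.85]
c = [2, 3, 1]
budget = 14

def fitness(n):
    if sum(ci * ni for ci, ni in zip(c, n)) > budget:
        return 0.0                     # infeasible designs score zero
    rel = 1.0
    for ri, ni in zip(r, n):
        rel *= 1.0 - (1.0 - ri) ** ni  # parallel redundancy per subsystem
    return rel

# Minimal GA: elitist selection, uniform crossover, random-reset mutation.
pop = [[random.randint(1, 4) for _ in range(3)] for _ in range(20)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = [random.choice(g) for g in zip(a, b)]  # uniform crossover
        if random.random() < 0.3:                      # mutation
            child[random.randrange(3)] = random.randint(1, 4)
        children.append(child)
    pop = parents + children
best = max(pop, key=fitness)
print(best, round(fitness(best), 4))
```

    The hybrid approaches surveyed in the paper would replace the plain mutation/crossover with fuzzy-logic- or local-search-guided operators.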

  16. Treatment of chronic myeloid leukemia: assessing risk, monitoring response, and optimizing outcome.

    Science.gov (United States)

    Shanmuganathan, Naranie; Hiwase, Devendra Keshaorao; Ross, David Morrall

    2017-12-01

    Over the past two decades, tyrosine kinase inhibitors have become the foundation of chronic myeloid leukemia (CML) treatment. The choice between imatinib and newer tyrosine kinase inhibitors (TKIs) needs to be balanced against the known toxicity and efficacy data for each drug, the therapeutic goal being to maximize molecular response assessed by BCR-ABL RQ-PCR assay. There is accumulating evidence that the early achievement of molecular targets is a strong predictor of superior long-term outcomes. Early response assessment provides the opportunity to intervene early with the aim of ensuring an optimal response. Failure to achieve milestones or loss of response can have diverse causes. We describe how clinical and laboratory monitoring can be used to ensure that each patient is achieving an optimal response and, in patients who do not reach optimal response milestones, how the monitoring results can be used to detect resistance and understand its origins.

  17. Monitoring and optimization of thermal recovery wells at Nexen's Long Lake project

    Energy Technology Data Exchange (ETDEWEB)

    Furtado, S.; Howe, A.; Wozney, G.; Zaffar, S. [Nexen Inc. (Canada); Nelson, A. [Matrikon Inc. (Canada)

    2011-07-01

    The Long Lake project, operated by Nexen and situated in the Athabasca Oil Sands area in Alberta, Canada, is a steam assisted gravity drainage scheme. In such thermal recovery processes, access to real-time information is crucial. Nexen used specific tools to optimize monitoring in its Long Lake project, and the aim of this paper is to present those customized well and facilities dashboards and reservoir trends. Real-time and historical data on pressure, temperature, injection and production rates are stored in a Honeywell PHD Historian connected to a Delta-V DCS system and used to optimize recovery from the deposit. Results showed that these enhanced monitoring capabilities gave Nexen the ability to react rapidly to abnormal conditions, which resulted in significant financial benefits. The implementation of dashboards and reservoir trends in its Long Lake project helped Nexen to better monitor the reservoir and thus to optimize bitumen recovery.

  18. A Dynamic Programming Model for Optimizing Frequency of Time-Lapse Seismic Monitoring in Geological CO2 Storage

    Science.gov (United States)

    Bhattacharjya, D.; Mukerji, T.; Mascarenhas, O.; Weyant, J.

    2005-12-01

    programming to optimize over the entire finite time horizon. We use a Monte Carlo approach to explore trade-offs between survey costs, remediation costs, and survey frequency and to analyze the sensitivity to leakage probabilities, and carbon tax. The model can be useful in determining a monitoring regime appropriate to a specific site's risk and set of remediation options, rather than a generic one based on a maximum downside risk threshold for CO2 storage as a whole. This may have implications on the overall costs associated with deploying Carbon capture and storage on a large scale.

  19. Condition Monitoring of Sensors in a NPP Using Optimized PCA

    Directory of Open Access Journals (Sweden)

    Wei Li

    2018-01-01

    Full Text Available An optimized principal component analysis (PCA) framework is proposed in this paper to implement condition monitoring for sensors in a nuclear power plant (NPP). Compared with the common PCA method in previous research, the PCA method in this paper is optimized at each modeling stage, including the data preprocessing stage, the modeling-parameter selection stage, and the fault detection and isolation stage; the model's performance is greatly improved through these optimizations. Finally, sensor measurements from a real NPP are used to train the optimized PCA model in order to guarantee the credibility and reliability of the simulation results. Meanwhile, artificial faults are sequentially imposed on the sensor measurements to estimate the fault detection and isolation ability of the proposed PCA model. Simulation results show that the optimized PCA model is capable of detecting and isolating faulty sensors regardless of whether they exhibit major or minor failures. The quantitative evaluation results also indicate that better performance is obtained with the optimized PCA method than with the common PCA method.
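
    The core of PCA-based sensor monitoring — a model of normal correlation structure plus a residual statistic with a control limit — can be sketched as follows. The data, the 2-component choice and the 99 % empirical limit are illustrative, not the paper's optimized selections:

```python
import numpy as np

rng = np.random.default_rng(0)

# Normal-operation training data: 200 samples x 5 correlated sensors
# (two latent factors plus noise; all numbers invented).
t = rng.standard_normal((200, 2))
X = t @ rng.standard_normal((2, 5)) + 0.05 * rng.standard_normal((200, 5))
mu, sd = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sd

_, _, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:2].T                           # retain 2 principal components

def spe(x):
    z = (x - mu) / sd
    resid = z - P @ (P.T @ z)          # part not explained by the PCA model
    return float(resid @ resid)        # squared prediction error

# Empirical 99 % control limit from training data, then inject a fault.
limit = np.percentile([spe(x) for x in X], 99)
faulty = X[0].copy()
faulty[3] += 5 * sd[3]                 # bias fault on sensor 3
print(spe(faulty) > limit)
```

    Isolation (which sensor caused the alarm) is typically done by inspecting the per-sensor contributions to the residual vector `z - P @ (P.T @ z)`.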

  20. Monitoring T-Cell Responses in Translational Studies: Optimization of Dye-Based Proliferation Assay for Evaluation of Antigen-Specific Responses

    Directory of Open Access Journals (Sweden)

    Anja Ten Brinke

    2017-12-01

    Full Text Available Adoptive therapy with regulatory T cells or tolerance-inducing antigen (Ag)-presenting cells is an innovative and promising therapeutic approach to control undesired and harmful activation of the immune system, as observed in autoimmune diseases and in solid organ and bone marrow transplantation. One of the critical issues in elucidating the mechanisms responsible for success or failure of these therapies, and in defining the specificity of the therapy, is the evaluation of Ag-specific T-cell responses. Several efforts have been made to develop suitable and reproducible assays. Here, we focus on dye-based proliferation assays. We highlight with practical examples the fundamental issues to take into consideration for the implementation of an effective and sensitive dye-based proliferation assay to monitor Ag-specific responses in patients. The most critical points were used to design a road map to set up and analyze the optimal assay to assess Ag-specific T-cell responses in patients undergoing different treatments. This is the first step toward optimized monitoring of tolerance induction, allowing comparison of outcomes of different clinical studies. The road map can also be applied to other therapeutic interventions, not limited to tolerance-induction therapies, in which Ag-specific T-cell responses are relevant, such as vaccination approaches and cancer immunotherapy.

  1. Rational risk-based decision support for drinking water well managers by optimized monitoring designs

    Science.gov (United States)

    Enzenhöfer, R.; Geiges, A.; Nowak, W.

    2011-12-01

    Advection-based well-head protection zones are commonly used to manage the contamination risk of drinking water wells. Considering the insufficient knowledge about hazards and transport properties within the catchment, current Water Safety Plans recommend that catchment managers and stakeholders know, control and monitor all possible hazards within the catchments and perform rational risk-based decisions. Our goal is to supply catchment managers with the required probabilistic risk information, and to generate tools that allow for optimal and rational allocation of resources between improved monitoring versus extended safety margins and risk mitigation measures. To support risk managers with the indispensable information, we address the epistemic uncertainty of advective-dispersive solute transport and well vulnerability (Enzenhoefer et al., 2011) within a stochastic simulation framework. Our framework can separate between uncertainty of contaminant location and actual dilution of peak concentrations by resolving heterogeneity with high-resolution Monte-Carlo simulation. To keep computational costs low, we solve the reverse temporal moment transport equation. Only in post-processing, we recover the time-dependent solute breakthrough curves and the deduced well vulnerability criteria from temporal moments by non-linear optimization. Our first step towards optimal risk management is optimal positioning of sampling locations and optimal choice of data types to reduce best the epistemic prediction uncertainty for well-head delineation, using the cross-bred Likelihood Uncertainty Estimator (CLUE, Leube et al., 2011) for optimal sampling design. Better monitoring leads to more reliable and realistic protection zones and thus helps catchment managers to better justify smaller, yet conservative safety margins. In order to allow an optimal choice in sampling strategies, we compare the trade-off in monitoring versus the delineation costs by accounting for ill

  2. Minimizing transient influence in WHPA delineation: An optimization approach for optimal pumping rate schemes

    Science.gov (United States)

    Rodriguez-Pretelin, A.; Nowak, W.

    2017-12-01

    For most groundwater protection management programs, Wellhead Protection Areas (WHPAs) have served as the primary protection measure. In their delineation, the influence of time-varying groundwater flow conditions is often underestimated because steady-state assumptions are commonly made. However, it has been demonstrated that temporal variations lead to significant changes in the required size and shape of WHPAs. Apart from natural transient groundwater drivers (e.g., changes in the regional angle of flow direction and seasonal natural groundwater recharge), anthropogenic causes such as transient pumping rates are among the most influential factors that require larger WHPAs. We hypothesize that WHPA programs that integrate adaptive and optimized pumping-injection management schemes can counter transient effects and thus reduce the additional areal demand in well protection under transient conditions. The main goal of this study is to present a novel management framework that optimizes pumping schemes dynamically, in order to minimize the impact triggered by transient conditions in WHPA delineation. For optimizing pumping schemes, we consider three objectives: 1) to minimize the risk of pumping water from outside a given WHPA, 2) to maximize the groundwater supply and 3) to minimize the involved operating costs. We solve transient groundwater flow with an available transient groundwater model and Lagrangian particle tracking. The optimization problem is formulated as a dynamic programming problem, and two different optimization approaches are explored: I) single-objective optimization under objective (1) only, and II) multi-objective optimization under all three objectives, where compromise pumping rates are selected from the current Pareto front. Finally, we look for WHPA outlines that are as small as possible, yet allow the optimization problem to find the most suitable solutions.

  3. RF cavity design exploiting a new derivative-free trust region optimization approach

    Directory of Open Access Journals (Sweden)

    Abdel-Karim S.O. Hassan

    2015-11-01

    Full Text Available In this article, a novel derivative-free (DF) surrogate-based trust-region optimization approach is proposed. In the proposed approach, quadratic surrogate models are constructed and successively updated; the generated surrogate model is then optimized, instead of the underlying objective function, over trust regions. Truncated conjugate gradients are employed to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using few data points, of order O(n), where n is the number of design variables. The proposed approach adopts weighted least-squares fitting for updating the surrogate model, instead of the interpolation commonly used in DF optimization. This makes the approach more suitable for stochastic optimization and for functions subject to numerical error. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical benchmark test problems. It is also employed to find the optimal design of an RF cavity linear accelerator, with a comparison analysis against a recent optimization technique.
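
    A one-dimensional caricature of the loop — weighted least-squares quadratic fit, surrogate minimization clipped to the trust region, accept/expand or reject/shrink — looks like this. The objective, weights and update constants are illustrative, and the surrogate is minimized in closed form rather than with truncated conjugate gradients:

```python
import numpy as np

# Illustrative noisy-looking objective for the sketch.
def f(x):
    return (x - 1.7) ** 2 + 0.3 * np.sin(5 * x)

x, radius = 0.0, 1.0
for _ in range(20):
    # Sample points across the trust region; weight those near the center.
    pts = x + radius * np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
    w = np.exp(-np.abs(pts - x) / radius)
    A = np.vstack([pts ** 2, pts, np.ones_like(pts)]).T
    a, b, _ = np.linalg.lstsq(A * w[:, None], f(pts) * w, rcond=None)[0]
    if a > 1e-12:
        step = -b / (2.0 * a)                     # surrogate minimizer
    else:
        step = x - np.sign(b) * radius            # fall back to a linear step
    cand = float(np.clip(step, x - radius, x + radius))
    if f(cand) < f(x):
        x, radius = cand, min(radius * 1.5, 2.0)  # accept, expand region
    else:
        radius *= 0.5                             # reject, shrink region
print(round(x, 2))
```

    In n dimensions the quadratic has O(n^2) coefficients, which is why the paper's O(n)-point initialization and least-squares (rather than interpolation) updates matter.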

  4. The Transcranial Doppler Sonography for Optimal Monitoring and Optimization of Cerebral Perfusion in Aortic Arch Surgery: A Case Series.

    Science.gov (United States)

    Ghazy, Tamer; Darwisch, Ayham; Schmidt, Torsten; Nguyen, Phong; Elmihy, Sohaila; Fajfrova, Zuzana; Zickmüller, Claudia; Matschke, Klaus; Kappert, Utz

    2017-06-16

    To analyze the feasibility and advantages of transcranial Doppler sonography (TCD) for monitoring and optimization of selective cerebral perfusion (SCP) in aortic arch surgery. From April 2013 to April 2014, nine patients with extensive aortic pathology underwent surgery in our institution under moderate hypothermic cardiac arrest with unilateral antegrade SCP under TCD monitoring. An adequate sonographic window and visualization of the circle of Willis were to be confirmed. Intraoperatively, cerebral cross-filling of the contralateral cerebral arteries on unilateral SCP was to be confirmed with TCD. If no cross-filling was confirmed, optimization of the SCP was performed by increasing cerebral flow and increasing PCO2; if this was not successful, the SCP was to be switched to bilateral perfusion. Air-bubble hits were recorded at the termination of SCP. A sonographic window was confirmed in all patients. Procedural success was 100%. The mean operative time was 298 ± 89 minutes. Adequate cross-filling was confirmed in 8 patients. In 1 patient, inadequate cross-filling was detected by TCD and optimization of cerebral flow was necessary, which was successfully confirmed by TCD. There was no conversion to bilateral perfusion. Extensive air-bubble hits were confirmed in 1 patient, who suffered a postoperative stroke. The 30-day mortality rate was 0. In conclusion, TCD is feasible for cerebral perfusion monitoring in aortic surgery. It enables confirmation of the adequacy of the cerebral perfusion strategy or of the need for its optimization. Documentation of calcific or air-bubble hits might add insight into patients suffering postoperative neurological deficits.

  5. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, referred to as the 'failure probability function (FPF)'. The FPF is expressed as a weighted sum of sample values obtained in the simulation-based reliability analysis, so the computational effort required for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. The proposed weighted approach is further combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology
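
    The key idea — one simulation reused to express failure probability as an explicit function of the design variable, which then decouples the optimization — can be shown on a deliberately trivial limit state (the toy g, sample size and target are invented; the paper's weighting scheme is more general):

```python
import bisect
import random

random.seed(4)

# A single Monte-Carlo sample set is generated once and reused to express
# P_f as an explicit function of the design variable d, for the toy limit
# state g(x, d) = d - x with x ~ N(0, 1).
xs = sorted(random.gauss(0.0, 1.0) for _ in range(200_000))

def pf(d):
    """P(g < 0) = P(x > d), estimated from the shared samples."""
    return (len(xs) - bisect.bisect_right(xs, d)) / len(xs)

# Decoupled RBO step: smallest (i.e. cheapest) d meeting the target.
target = 0.01
d_opt = min((d / 100.0 for d in range(0, 501)),
            key=lambda d: (pf(d) > target, d))
print(round(d_opt, 2), pf(d_opt))
```

    The deterministic search over `d` never reruns the reliability analysis; that is the decoupling the abstract describes.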

  6. Equipment reliability process improvement and preventive maintenance optimization

    International Nuclear Information System (INIS)

    Darragi, M.; Georges, A.; Vaillancourt, R.; Komljenovic, D.; Croteau, M.

    2004-01-01

    The Gentilly-2 Nuclear Power Plant wants to optimize its preventive maintenance program through an Integrated Equipment Reliability Process. All equipment-reliability-related activities should be reviewed and optimized in a systematic approach, especially for aging plants such as G2. This new approach has to be founded on best-practice methods, with the purpose of rationalizing the preventive maintenance program and monitoring the performance of on-site systems, structures and components (SSC). A rational preventive maintenance strategy is based on optimized task scopes and frequencies depending on their applicability, critical effects on system safety and plant availability, as well as cost-effectiveness. The efficiency of the preventive maintenance strategy is systematically monitored through degradation indicators. (author)

  7. Establishing an air pollution monitoring network for intra-urban population exposure assessment : a location-allocation approach

    Energy Technology Data Exchange (ETDEWEB)

    Kanaroglou, P.S. [McMaster Univ., Hamilton, ON (Canada). School of Geography and Geology; Jerrett, M.; Beckerman, B.; Arain, M.A. [McMaster Univ., Hamilton, ON (Canada). School of Geography and Geology]|[McMaster Univ., Hamilton, ON (Canada). McMaster Inst. of Environment and Health; Morrison, J. [Carleton Univ., Ottawa, ON (Canada). School of Computer Science; Gilbert, N.L. [Health Canada, Ottawa, ON (Canada). Air Health Effects Div; Brook, J.R. [Meteorological Service of Canada, Toronto, ON (Canada)

    2004-10-01

    A study was conducted to assess the relation between traffic-generated air pollution and health reactions ranging from childhood asthma to mortality from lung cancer. In particular, it developed a formal method of optimally locating a dense network of air pollution monitoring stations in order to derive an exposure assessment model based on the data obtained from the monitoring stations and related land use, population and biophysical information. The method for determining the locations of 100 nitrogen dioxide monitors in Toronto, Ontario focused on land use, transportation infrastructure and the distribution of at-risk populations. The exposure assessment produced reasonable estimates at the intra-urban scale. This method for locating air pollution monitors effectively maximizes sampling coverage in relation to important socio-demographic characteristics and likely pollution variability. The location-allocation approach integrates many variables into the demand surface to reconfigure a monitoring network and is especially useful for measuring traffic pollutants with fine-scale spatial variability. The method also shows great promise for improving the assessment of exposure to ambient air pollution in epidemiologic studies. 19 refs., 3 tabs., 4 figs.
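
    The location-allocation idea — siting a fixed number of monitors so that a demand surface is covered as well as possible — can be sketched with a greedy maximal-coverage heuristic. The demand points, weights, candidate grid and coverage radius are synthetic stand-ins for the census and land-use data used in the study:

```python
import random

random.seed(2)

# Synthetic demand surface: (x, y, population weight) points in a 10x10 area,
# and a grid of candidate monitor sites with a fixed coverage radius.
demand = [(random.uniform(0, 10), random.uniform(0, 10), random.randint(1, 9))
          for _ in range(300)]
sites = [(sx, sy) for sx in range(0, 11, 2) for sy in range(0, 11, 2)]
R2 = 2.0 ** 2

def covered(chosen):
    """Total demand weight within the radius of at least one chosen site."""
    return sum(w for x, y, w in demand
               if any((x - sx) ** 2 + (y - sy) ** 2 <= R2 for sx, sy in chosen))

chosen = []
for _ in range(5):                                # greedy maximal coverage
    best_site = max((s for s in sites if s not in chosen),
                    key=lambda s: covered(chosen + [s]))
    chosen.append(best_site)
print(chosen, covered(chosen))
```

    The study's demand surface additionally folds in traffic and pollution-variability layers; the greedy loop above is the simplest stand-in for its location-allocation solver.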

  8. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Science.gov (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can likewise be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
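
    The parametric Monte-Carlo idea can be caricatured with a one-parameter strategy scored by CVaR. The quadratic impact model, unit volatility and grid search are all invented for illustration; the paper optimizes richer parametric strategies:

```python
import random

random.seed(5)

# One-parameter execution strategy: sell fraction q immediately, the
# remainder later, exposed to an adverse price move. Numbers illustrative.
def simulate_costs(q, n=50_000):
    costs = []
    for _ in range(n):
        price_move = random.gauss(0.0, 1.0)          # move on the remainder
        impact = 2.0 * q ** 2 + 0.1 * (1.0 - q) ** 2  # quadratic market impact
        costs.append(impact + (1.0 - q) * price_move)
    return costs

def cvar(costs, alpha=0.95):
    tail = sorted(costs)[int(alpha * len(costs)):]   # worst (1-alpha) share
    return sum(tail) / len(tail)

# Static parametric optimization: pick q on a grid minimizing CVaR.
q_opt = min((i / 10.0 for i in range(11)),
            key=lambda q: cvar(simulate_costs(q)))
print(q_opt)
```

    The interior optimum reflects the trade-off the abstract targets: selling fast incurs impact cost, selling slowly incurs tail risk.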

  9. Distributed Cooperative Optimal Control for Multiagent Systems on Directed Graphs: An Inverse Optimal Approach.

    Science.gov (United States)

    Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing

    2015-07-01

    In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.

  10. A Clustering Based Approach for Observability and Controllability Analysis for Optimal Placement of PMU

    Science.gov (United States)

    Murthy, Ch; MIEEE; Mohanta, D. K.; SMIEE; Meher, Mahendra

    2017-08-01

    Continuous monitoring and control of the power system is essential for its healthy operation. This can be achieved by making the system observable as well as controllable. Many efforts have been made by researchers to make the system observable by placing Phasor Measurement Units (PMUs) at optimal locations, but so far the idea of controllability with PMUs has not been considered. This paper shows how to check whether the system is controllable and, if not, how to make it controllable using a clustering approach. The IEEE 14-bus system is used to illustrate the concept of controllability.

  11. Quantum Resonance Approach to Combinatorial Optimization

    Science.gov (United States)

    Zak, Michail

    1997-01-01

    It is shown that quantum resonance can be used for combinatorial optimization. The advantage of the approach is in independence of the computing time upon the dimensionality of the problem. As an example, the solution to a constraint satisfaction problem of exponential complexity is demonstrated.

  12. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    Science.gov (United States)

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The growth of data storage has become a strategic concern in the world of networking. It mainly involves the sensor nodes called producers, the base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data-storage position in wireless sensor networks. Earlier works did not utilize swarm-intelligence-based optimization approaches to find the optimal data-storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node: a hybrid particle swarm optimization algorithm finds suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized, solving the clustering problem with the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data-storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm-intelligence-based ODS strategy is more effective than earlier approaches. PMID:25734182
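
    A plain (non-hybrid) PSO placing a single storage node can be sketched as follows; node positions, data rates and the rate-weighted squared-distance energy model are invented stand-ins for the paper's network energy cost:

```python
import random

random.seed(6)

# Producer/consumer nodes: (x, y, data rate) in a 100x100 field.
nodes = [(random.uniform(0, 100), random.uniform(0, 100), random.uniform(1, 5))
         for _ in range(25)]

def energy(p):
    """Rate-weighted squared distance: a toy data-transport energy model."""
    return sum(w * ((p[0] - x) ** 2 + (p[1] - y) ** 2) for x, y, w in nodes)

# Standard PSO: inertia 0.7, cognitive/social coefficients 1.5.
swarm = [[random.uniform(0, 100), random.uniform(0, 100)] for _ in range(15)]
vel = [[0.0, 0.0] for _ in swarm]
pbest = [p[:] for p in swarm]
gbest = min(swarm, key=energy)[:]
for _ in range(100):
    for i, p in enumerate(swarm):
        for d in range(2):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - p[d])
                         + 1.5 * random.random() * (gbest[d] - p[d]))
            p[d] += vel[i][d]
        if energy(p) < energy(pbest[i]):
            pbest[i] = p[:]
        if energy(p) < energy(gbest):
            gbest = p[:]
print([round(v, 1) for v in gbest])
```

    For this convex toy objective the optimum is the rate-weighted centroid; the paper's hybrid variant and fuzzy-C-means clustering handle multiple storage nodes instead.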

  13. Optimization of deformation monitoring networks using finite element strain analysis

    Science.gov (United States)

    Alizadeh-Khameneh, M. Amin; Eshagh, Mehdi; Jensen, Anna B. O.

    2018-04-01

    An optimal design of a geodetic network can fulfill the requested precision and reliability of the network and decrease the expense of its execution by removing unnecessary observations. The role of optimal design is highlighted in deformation monitoring networks due to their repeated measurement campaigns. The core design problem is how to define the precision and reliability criteria. This paper proposes a solution in which the precision criterion is defined based on the precision of the deformation parameters, i.e. the precision of strain and differential rotations. A strain analysis can be performed to obtain information about the possible deformation of a deformable object. In this study, we split an area into a number of three-dimensional finite elements with the help of the Delaunay triangulation and performed the strain analysis on each element. According to the obtained precision of the deformation parameters in each element, the precision criterion for displacement detection at each network point is then determined. The developed criterion is implemented to optimize the observations from the Global Positioning System (GPS) in the Skåne monitoring network in Sweden. The network was established in 1989 and straddles the Tornquist zone, one of the most active faults in southern Sweden. The numerical results show that 17 out of the 21 possible GPS baseline observations are sufficient to detect a minimum 3 mm displacement at each network point.

  14. A robust quantitative near infrared modeling approach for blend monitoring.

    Science.gov (United States)

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

    This study demonstrates a material-sparing Near-Infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by traditional method development (blender-scale method) were compared by simultaneously monitoring a 1 kg batch size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop Near-Infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.

  15. Optimization of monitoring sewage with radionuclide contaminants

    International Nuclear Information System (INIS)

    Egorov, V.N.

    1991-01-01

    Recommendations are presented for optimizing the monitoring of contaminated sewage, aimed at protecting the environment against radioactive contamination at minimum cost. The choice of water sampling technique depends on the stability of the water composition and on the flow rate. Depending on the type of radionuclide distribution in the sewage, one can estimate the minimum sampling frequency, or the number of samples sufficient to conclude reliably whether permissible radioactive contamination levels are exceeded, as well as the assigned accuracy of the analysis. Where contaminated sewage is discharged irregularly and short-term releases of varying form and duration are possible, sampling should be accomplished through automatic devices of continuous or periodic operation.

  16. A participatory approach to design monitoring indicators of production diseases in organic dairy farms.

    Science.gov (United States)

    Duval, J E; Fourichon, C; Madouasse, A; Sjöström, K; Emanuelson, U; Bareille, N

    2016-06-01

    Production diseases have an important negative effect on the health and welfare of dairy cows. Although organic animal production systems aim for high animal health levels, compliance with European organic farming regulations does not guarantee that this is achieved. Herd health and production management (HHPM) programs aim at optimizing herd health by preventing disease and production problems, but as yet they have not been consistently implemented by farmers. We hypothesize that one reason is the mismatch between what scientists propose as indicators for herd health monitoring and what farmers would like to use. Herd health monitoring is a key element in HHPM programs, as it permits a regular assessment of the functioning of the different components of the production process. Planned observations or measurements of these components are indispensable for this monitoring. In this study, a participatory approach was used to create an environment in which farmers could adapt the indicators proposed by scientists for monitoring the five main production diseases on dairy cattle farms. The adaptations of the indicators were characterized and the farmers' explanations for the changes made were described. The study was conducted in France and Sweden, which differ in terms of their national organic regulations and existing advisory services. In both countries, twenty certified organic dairy farmers and their animal health management advisors participated in the study. All of the farmers adapted the initial monitoring plan proposed by scientists to the specific production and animal health situation on their farm. This resulted in forty unique and farm-specific combinations of indicators for herd health monitoring. All but three farmers intended to monitor five health topics simultaneously using the constructed indicators. The qualitative analysis of the explanations given by farmers for their choices enabled an understanding of farmers' reasons for selecting and adapting

  17. Optimization approaches to volumetric modulated arc therapy planning

    Energy Technology Data Exchange (ETDEWEB)

    Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu; Bortfeld, Thomas; Craft, David [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Alber, Markus [Department of Medical Physics and Department of Radiation Oncology, Aarhus University Hospital, Aarhus C DK-8000 (Denmark); Bangert, Mark [Department of Medical Physics in Radiation Oncology, German Cancer Research Center, Heidelberg D-69120 (Germany); Bokrantz, Rasmus [RaySearch Laboratories, Stockholm SE-111 34 (Sweden); Chen, Danny [Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Li, Ruijiang; Xing, Lei [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Men, Chunhua [Department of Research, Elekta, Maryland Heights, Missouri 63043 (United States); Nill, Simeon [Joint Department of Physics at The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG (United Kingdom); Papp, Dávid [Department of Mathematics, North Carolina State University, Raleigh, North Carolina 27695 (United States); Romeijn, Edwin [H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Salari, Ehsan [Department of Industrial and Manufacturing Engineering, Wichita State University, Wichita, Kansas 67260 (United States)

    2015-03-15

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  18. Optical biosensor optimized for continuous in-line glucose monitoring in animal cell culture.

    Science.gov (United States)

    Tric, Mircea; Lederle, Mario; Neuner, Lisa; Dolgowjasow, Igor; Wiedemann, Philipp; Wölfl, Stefan; Werner, Tobias

    2017-09-01

    Biosensors for continuous glucose monitoring in bioreactors could provide a valuable tool for optimizing culture conditions in biotechnological applications. We have developed an optical biosensor for long-term continuous glucose monitoring and demonstrated tight glucose level control during cell culture in disposable bioreactors. The in-line sensor is based on a commercially available oxygen sensor coated with cross-linked glucose oxidase (GOD). The dynamic range of the sensor was tuned by a hydrophilic perforated diffusion membrane with an optimized permeability for glucose and oxygen. The biosensor was thoroughly characterized by experimental data and numerical simulations, which enabled insights into the internal concentration profile of the deactivating by-product hydrogen peroxide. The simulations were carried out with a one-dimensional biosensor model and revealed that, in addition to the internal hydrogen peroxide concentration, the turnover rate of the enzyme GOD plays a crucial role in biosensor stability. In light of this finding, the glucose sensor was optimized to reach long functional stability (>52 days) under continuous glucose monitoring conditions, with a dynamic range of 0-20 mM and a response time of t90 ≤ 10 min. In addition, we demonstrated that the sensor is sterilizable with beta and UV irradiation and subject to only minor cross-sensitivity to oxygen when an oxygen reference sensor is applied. Graphical abstract: Measuring setup of a glucose biosensor in a shake flask for continuous glucose monitoring in mammalian cell culture.

  19. Two-Layer Hierarchy Optimization Model for Communication Protocol in Railway Wireless Monitoring Networks

    Directory of Open Access Journals (Sweden)

    Xiaoping Ma

    2018-01-01

    Full Text Available Wireless monitoring systems in railways are often disabled by the insufficient energy of their sensors. Hence, optimizing the communication protocol to extend the system lifetime is crucial to ensuring system stability. However, existing studies focused primarily on cluster-based or multihop protocols individually, which are ineffective in coping with the complex communication scenarios in the railway wireless monitoring system (RWMS). This study proposes a hybrid protocol which combines the cluster-based and multihop protocols (CMCP) to minimize and balance the energy consumption in different sections of the RWMS. In the first hierarchy, the total energy consumption is minimized by optimizing the cluster quantities in the cluster-based protocol and the number of hops and the corresponding hop distances in the multihop protocol. In the second hierarchy, the energy consumption is balanced by rotating the cluster head (CH) in the subnetworks and further optimizing the hops and the corresponding hop distances in the backbone network. On this basis, the system lifetime is maximized with minimal and balanced energy consumption among the sensors. Furthermore, a hybrid particle swarm optimization and genetic algorithm (PSO-GA) is adopted to optimize the energy consumption across the two-layer hierarchy. Finally, the effectiveness of the proposed CMCP is verified in simulation. The performance of the proposed CMCP in system lifetime, residual energy, and the corresponding variance is superior to the LEACH protocol widely applied in previous research. The effective protocol proposed in this study can facilitate the application of wireless monitoring networks in the railway system and enhance safe operation of the railway.
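The cluster-quantity optimization in the first hierarchy can be illustrated with the widely used first-order radio energy model from the LEACH literature (the constants and geometry below are textbook-style assumptions, not values from this study):

```python
import math

# Typical first-order radio model amplifier constants (textbook values)
eps_fs = 10e-12      # J/bit/m^2, free-space amplifier energy
eps_mp = 0.0013e-12  # J/bit/m^4, multipath amplifier energy

def optimal_cluster_count(n_nodes, side, d_to_bs):
    """LEACH-style optimum: k* = sqrt(n/(2*pi)) * sqrt(eps_fs/eps_mp)
    * side / d_to_bs**2, the cluster count minimizing total energy
    for n nodes spread over a side x side region."""
    return (math.sqrt(n_nodes / (2 * math.pi))
            * math.sqrt(eps_fs / eps_mp) * side / d_to_bs ** 2)

# 100 sensors over a 100 m x 100 m section, base station 120 m away
k = optimal_cluster_count(n_nodes=100, side=100.0, d_to_bs=120.0)
print(round(k, 2))
```

The paper's CMCP additionally optimizes hop counts and hop distances with a PSO-GA hybrid; this closed-form sketch covers only the cluster-count term of the energy model.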

  20. Health technology assessment to optimize health technology utilization: using implementation initiatives and monitoring processes.

    Science.gov (United States)

    Frønsdal, Katrine B; Facey, Karen; Klemp, Marianne; Norderhaug, Inger Natvig; Mørland, Berit; Røttingen, John-Arne

    2010-07-01

    The way in which a health technology is used in any particular health system depends on the decisions and actions of a variety of stakeholders, the local culture, and context. In 2009, the HTAi Policy Forum considered how health technology assessment (HTA) could be improved to optimize the use of technologies (in terms of uptake, change in use, or disinvestment) in such complex systems. In scoping, it was agreed to focus on initiatives to implement evidence-based guidance and on monitoring activities. A review identified systematic reviews of implementation initiatives and monitoring activities. A two-day deliberative workshop was held to discuss key papers, share members' experiences, and collectively address key questions. This consensus paper was developed by email and finalized at a post-workshop meeting. Evidence suggests that the impact and use of HTA could be increased by ensuring timely delivery of relevant reports to clearly determined policy receptor (decision-making) points. To achieve this, the breadth of assessment and implementation initiatives, such as incentives and targeted, intelligent dissemination of HTA results, need to be considered. HTA stakeholders undertake a variety of monitoring activities, which could inform the optimal use of a technology. However, the quality of these data varies and they are often not submitted to an HTA. Monitoring data should be sufficiently robust so that they can be used in HTA to inform the optimal use of technology. Evidence-based implementation initiatives should be developed for HTA to better inform decision makers at all levels in a health system about the optimal use of technology.

  1. Realizing an Optimization Approach Inspired from Piaget’s Theory on Cognitive Development

    Directory of Open Access Journals (Sweden)

    Utku Kose

    2015-09-01

    Full Text Available The objective of this paper is to introduce an artificial intelligence based optimization approach inspired by Piaget's theory of cognitive development. The approach has been designed according to the essential processes that an individual may experience while learning something new or improving his or her knowledge. These processes are associated with Piaget's ideas on an individual's cognitive development. The approach expressed in this paper is a simple algorithm employing swarm intelligence oriented tasks in order to solve single-objective optimization problems. To evaluate the effectiveness of this early version of the algorithm, tests were performed on a set of benchmark functions. The obtained results show that the approach can be an alternative to existing single-objective optimization approaches. The authors have suggested the name Cognitive Development Optimization Algorithm (CoDOA) for this intelligent optimization approach.

  2. Solving Unconstrained Global Optimization Problems via Hybrid Swarm Intelligence Approaches

    Directory of Open Access Journals (Sweden)

    Jui-Yu Wu

    2013-01-01

    Full Text Available Stochastic global optimization (SGO) algorithms such as the particle swarm optimization (PSO) approach have become popular for solving unconstrained global optimization (UGO) problems. The PSO approach, which belongs to the swarm intelligence domain, does not require gradient information, enabling it to overcome this limitation of traditional nonlinear programming methods. Unfortunately, PSO algorithm implementation and performance depend on several parameters, such as the cognitive parameter, social parameter, and constriction coefficient. These parameters are tuned by trial and error. To reduce the parametrization of a PSO method, this work presents two efficient hybrid SGO approaches, namely, a real-coded genetic algorithm-based PSO (RGA-PSO) method and an artificial immune algorithm-based PSO (AIA-PSO) method. The specific parameters of the internal PSO algorithm are optimized using the external RGA and AIA approaches, and then the internal PSO algorithm is applied to solve UGO problems. The performances of the proposed RGA-PSO and AIA-PSO algorithms are then evaluated using a set of benchmark UGO problems. Numerical results indicate that, besides their ability to converge to a global minimum for each test UGO problem, the proposed RGA-PSO and AIA-PSO algorithms outperform many hybrid SGO algorithms. Thus, the RGA-PSO and AIA-PSO approaches can be considered alternative SGO approaches for solving standard-dimensional UGO problems.

  3. Systematic approach to personnel neutron monitoring

    International Nuclear Information System (INIS)

    Griffith, R.V.; Hankins, D.E.

    1980-01-01

    NTA film and albedo detectors represent the major portion of personnel dosimeters now used for occupational neutron monitoring. However, recent attention to the spectral response of these systems has demonstrated the need for detectors that better match the fields being monitored. Recent developments in direct recoil track etch dosimeters present some intriguing alternatives, and careful use of 237Np fission fragment detectors offers the advantage of a good dose equivalent spectral match. Work continues on a number of other new detector mechanisms, but problems with sensitivity, energy response, gamma interference, etc., continue to prevent development of most mechanisms into viable personnel dosimeters. Current dosimeter limitations make a systematic approach to personnel neutron monitoring particularly important. Techniques have been developed and tested, using available portable survey instruments, that significantly improve the quality of dosimeter interpretation. Even simple spectrometry can be done with modest effort, significantly improving the health physicist's ability to provide accurate neutron monitoring.

  4. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    International Nuclear Information System (INIS)

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

    This paper proposes a comprehensive framework including a multi-objective interval optimization model and evidential reasoning (ER) approach to solve the unit sizing problem of small-scale integrated energy systems, with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. Aiming at simultaneously considering the cost and risk of a business investment, the average and deviation of life cycle cost (LCC) of the integrated energy system are formulated. In order to solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing an adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, the ER approach is applied to deal with the multiple interests of an investor at the business decision-making stage and to determine the final unit sizing solution from the Pareto-optimal solutions. This paper reports on the simulation results obtained using a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and ER approach in tackling the unit sizing problem of integrated energy systems considering the integration of uncertain wind and solar energies. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.

  5. A Study on the Optimal Positions of ECG Electrodes in a Garment for the Design of ECG-Monitoring Clothing for Male.

    Science.gov (United States)

    Cho, Hakyung; Lee, Joo Hyeon

    2015-09-01

    Smart clothing is a sort of wearable device used for ubiquitous health monitoring. It provides comfort and efficiency in vital sign measurement and has been studied and developed in various types of monitoring platforms such as the T-shirt and sports bra. However, despite these previous approaches, smart clothing for electrocardiography (ECG) monitoring has encountered a serious shortcoming related to motion artifacts caused by wearer movement. In effect, motion artifacts are one of the major problems in the practical implementation of most wearable health-monitoring devices. In ECG measurements collected by a garment, motion artifacts are usually caused by improper location of the electrode, leading to loss of contact between the electrode and the skin during body motion. The aim of this study was to suggest a design for ECG-monitoring clothing that contributes to the reduction of motion artifacts. Based on clothing science theory, it was assumed in this study that the stability of the electrode in a dynamic state differs depending on the electrode location in an ECG-monitoring garment. Based on this assumption, the effects of 56 electrode positions were determined by sectioning the surface of the garment into a grid with 6 cm intervals in the front and back of the bodice. In order to determine the optimal locations of the ECG electrodes among the 56 positions, ECG measurements were collected from 10 participants at every electrode position in the garment while the wearer was in motion. The electrode locations indicating both an ECG measurement rate higher than 80.0 % and a large amplitude during motion were selected as the optimal electrode locations. The results of this analysis show four electrode locations with consistently higher ECG measurement rates and larger amplitudes among the 56 locations. These four locations were abstracted to be least affected by wearer movement in this research. Based on this result, a design of the garment-formed ECG monitoring platform

  6. An Efficient PageRank Approach for Urban Traffic Optimization

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2012-01-01

    This work determines optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al., 1999). Our approach is similar to the work presented by Sheng-Chung et al. (2009) and Yousef et al. (2010). We consider that the traffic lights are controlled by servers, and a score for each road is computed using an efficient PageRank approach and used in a cost function to determine optimal decisions. We demonstrate that the cumulative contribution of each car in the traffic respects the main constraint of the PageRank approach, preserving all the properties of the matrix considered in our model.
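The score computation this record builds on can be sketched with the standard PageRank power iteration of Page et al. (1999); the toy three-road graph below is an illustrative assumption, not the paper's network:

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """Power iteration on a column-stochastic transition matrix,
    following the classic formulation of Page et al. (1999)."""
    n = adj.shape[0]
    # Normalize columns so each node distributes its score to successors
    out = adj.sum(axis=0)
    M = adj / np.where(out == 0, 1, out)
    r = np.full(n, 1.0 / n)
    while True:
        r_next = (1 - damping) / n + damping * M @ r
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

# Toy road network: adj[i, j] = 1 if traffic can flow from road j to road i
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 1, 0]], dtype=float)
scores = pagerank(adj)
print(scores)  # scores form a probability vector (sum to ~1)
```

In the paper's setting each road's score would then enter the traffic-light cost function; here only the ranking step is shown.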

  7. Geometry optimization of molecules within an LCGTO local-density functional approach

    International Nuclear Information System (INIS)

    Mintmire, J.W.

    1990-01-01

    We describe our implementation of geometry optimization techniques within the linear combination of Gaussian-type orbitals (LCGTO) approach to local-density functional theory. The algorithm for geometry optimization is based on the evaluation of the gradient of the total energy with respect to internal coordinates within the local-density functional scheme. We present optimization results for a range of small molecules which serve as test cases for our approach.
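The gradient-driven optimization loop described here can be sketched with a toy pair potential standing in for the LCGTO total energy (the Lennard-Jones form, units, and step size are illustrative assumptions, not the paper's method):

```python
# Toy stand-in for total-energy geometry optimization: minimize a
# Lennard-Jones pair energy E(r) = 4*eps*((s/r)^12 - (s/r)^6) by
# steepest descent on the analytic gradient dE/dr, in reduced units.
eps, s = 1.0, 1.0

def energy(r):
    return 4 * eps * ((s / r) ** 12 - (s / r) ** 6)

def gradient(r):
    return 4 * eps * (-12 * s ** 12 / r ** 13 + 6 * s ** 6 / r ** 7)

r, step = 1.5, 0.01  # initial bond length and descent step size
for _ in range(2000):
    r -= step * gradient(r)

# Analytic minimum of the LJ potential: r = 2**(1/6) * s ≈ 1.1225
print(round(r, 4))
```

A real LCGTO implementation evaluates the energy gradient from the electronic structure rather than an analytic pair potential, but the descent loop has the same shape.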

  8. Optimal design of monitoring networks for multiple groundwater quality parameters using a Kalman filter: application to the Irapuato-Valle aquifer.

    Science.gov (United States)

    Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J

    2016-01-01

    A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight was given to each parameter at most of the aquifer positions, with a higher weight in priority zones. The objective of the monitoring network in this application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and the optimal monitoring sites were selected to obtain a low-uncertainty estimate of these parameters for the entire aquifer, with greater certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter, and a heuristic optimization method. Results show that when monitoring the 69 locations with the highest priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90 % of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help to reduce monitoring costs by avoiding redundancy in data acquisition.
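A minimal sketch of the variance-minimizing well selection idea (a greedy stand-in for the paper's heuristic optimization, with an assumed exponential spatial covariance model and made-up noise level):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative prior covariance over 30 candidate well locations,
# built from an exponential spatial correlation model.
pts = rng.uniform(0, 10, size=(30, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
P = np.exp(-dist / 3.0)   # prior estimate-error covariance
noise_var = 0.1           # measurement noise variance

def greedy_select(P0, k, noise_var=0.1):
    """Greedily pick k wells; measuring location i applies the scalar
    Kalman update P <- P - P[:,i] P[i,:] / (P[i,i] + noise_var), and
    each step keeps the index minimizing the total variance (trace)."""
    P = P0.copy()
    chosen = []
    for _ in range(k):
        best_i, best_trace = -1, np.inf
        for i in range(P.shape[0]):
            if i in chosen:
                continue
            Pi = P - np.outer(P[:, i], P[i, :]) / (P[i, i] + noise_var)
            t = np.trace(Pi)
            if t < best_trace:
                best_i, best_trace = i, t
        P = P - np.outer(P[:, best_i], P[best_i, :]) / (P[best_i, best_i] + noise_var)
        chosen.append(best_i)
    return chosen, P

chosen, P_post = greedy_select(P, 5)
print(chosen, np.trace(P_post) / np.trace(P))  # remaining variance fraction
```

The same covariance-update machinery, applied well by well, yields the priority ordering the abstract describes; the real method works with geostatistically estimated covariances rather than this synthetic one.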

  9. An integrated sampling and analysis approach for improved biodiversity monitoring

    Science.gov (United States)

    DeWan, Amielle A.; Zipkin, Elise F.

    2010-01-01

    Successful biodiversity conservation requires high quality monitoring data and analyses to ensure scientifically defensible policy, legislation, and management. Although monitoring is a critical component in assessing population status and trends, many governmental and non-governmental organizations struggle to develop and implement effective sampling protocols and statistical analyses because of the magnitude and diversity of species in conservation concern. In this article we describe a practical and sophisticated data collection and analysis framework for developing a comprehensive wildlife monitoring program that includes multi-species inventory techniques and community-level hierarchical modeling. Compared to monitoring many species individually, the multi-species approach allows for improved estimates of individual species occurrences, including rare species, and an increased understanding of the aggregated response of a community to landscape and habitat heterogeneity. We demonstrate the benefits and practicality of this approach to address challenges associated with monitoring in the context of US state agencies that are legislatively required to monitor and protect species in greatest conservation need. We believe this approach will be useful to regional, national, and international organizations interested in assessing the status of both common and rare species.

  10. Horsetail matching: a flexible approach to optimization under uncertainty

    Science.gov (United States)

    Cook, L. W.; Jarrett, J. P.

    2018-04-01

    It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
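The CDF-matching objective can be sketched as follows (the response model, target CDF, and discretization below are illustrative assumptions; as the abstract notes, a kernel-smoothed version of the empirical CDF would be used when a differentiable objective is needed):

```python
import numpy as np

rng = np.random.default_rng(2)

def horsetail_metric(q_samples, target_cdf, grid):
    """Integrated squared difference between the empirical CDF of the
    quantity of interest and a target CDF, evaluated on a fixed grid."""
    ecdf = (q_samples[None, :] <= grid[:, None]).mean(axis=1)
    dx = grid[1] - grid[0]
    return float(np.sum((ecdf - target_cdf(grid)) ** 2) * dx)

# Uncertain input u and a toy response q(x; u) = (x - u)^2
u = rng.normal(0.0, 0.5, size=2000)
grid = np.linspace(0.0, 10.0, 400)
target = lambda q: np.clip(q / 2.0, 0.0, 1.0)  # target prefers small q

# The metric favors the design whose output distribution sits closer
# to the target: here x = 0 keeps the response centered near zero.
m_good = horsetail_metric((0.0 - u) ** 2, target, grid)
m_bad = horsetail_metric((2.0 - u) ** 2, target, grid)
print(m_good < m_bad)
```

Optimizing the design variable against this metric, rather than against a mean or variance alone, is what lets the formulation recover stochastically non-dominated designs.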

  11. Monitoring and Reporting HACs - A Federalist Approach

    Data.gov (United States)

    U.S. Department of Health & Human Services — Findings from a study entitled, Monitoring and Reporting Hospital-Acquired Conditions - A Federalist Approach, published in Volume 4, Issue 4 of Medicare and...

  12. Big Data Reduction and Optimization in Sensor Monitoring Network

    Directory of Open Access Journals (Sweden)

    Bin He

    2014-01-01

    Full Text Available Wireless sensor networks (WSNs) are increasingly being utilized to monitor the structural health of underground subway tunnels, showing many promising advantages over traditional monitoring schemes. Meanwhile, as the network size increases, the system becomes incapable of handling the resulting big data while ensuring efficient data communication, transmission, and storage. Considered a feasible solution to these issues, data compression can reduce the volume of data travelling between sensor nodes. In this paper, an optimization algorithm based on spatial and temporal data compression is proposed to cope with these issues in WSNs in the underground tunnel environment. Spatial and temporal correlation functions are introduced for data compression and data recovery. It is verified that the proposed algorithm is applicable to WSNs in the underground tunnel.
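A minimal illustration of temporal-correlation-based data reduction (a simple deadband filter; the tolerance and sensor readings are illustrative, not the paper's algorithm):

```python
def deadband_compress(samples, tol):
    """Keep (index, value) pairs only when a reading moves more than
    `tol` from the last transmitted value (temporal redundancy removal)."""
    kept = [(0, samples[0])]
    for i, v in enumerate(samples[1:], start=1):
        if abs(v - kept[-1][1]) > tol:
            kept.append((i, v))
    return kept

def reconstruct(kept, n):
    """Receiver side: hold the last transmitted value until the next one."""
    out, j = [], 0
    for i in range(n):
        if j + 1 < len(kept) and kept[j + 1][0] <= i:
            j += 1
        out.append(kept[j][1])
    return out

readings = [20.0, 20.1, 20.05, 21.0, 21.1, 23.0, 22.9, 22.95]
kept = deadband_compress(readings, tol=0.5)
recon = reconstruct(kept, len(readings))
# 8 readings reduced to fewer transmissions; reconstruction error <= tol
print(len(kept), max(abs(a - b) for a, b in zip(readings, recon)))
```

Spatial compression works analogously by exploiting correlation between neighboring nodes rather than between successive readings of one node.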

  13. Assessing and optimizing infra-sound networks to monitor volcanic eruptions

    International Nuclear Information System (INIS)

    Tailpied, Dorianne

    2016-01-01

    Understanding infra-sound signals is essential to monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty, and also to demonstrate the potential of the global infra-sound monitoring network for civil and scientific applications. The main objective of this thesis is to develop a robust tool to estimate and optimize the performance of any infra-sound network in monitoring explosive sources such as volcanic eruptions. Unlike previous studies, the developed method has the advantage of considering realistic atmospheric specifications along the propagation path, the source frequency, and noise levels at the stations. It allows prediction of the attenuation and the minimum detectable source amplitude. By simulating the performance of any infra-sound network, it is then possible to define the optimal configuration of the network to monitor a specific region during a given period. When a station is carefully added to the existing network, performance can be improved by a factor of 2. However, it is not always possible to extend the network, so good knowledge of detection capabilities at large distances is essential. To provide a more realistic picture of the performance, we integrate the longitudinal variability of the atmosphere along the infra-sound propagation path into our simulations. This thesis also contributes a confidence index taking into account the uncertainties related to propagation and atmospheric models. At high frequencies, the error can reach 40 dB. Volcanic eruptions are natural, powerful, and valuable calibration sources of infra-sound, detected worldwide. In this study, the well-instrumented volcanoes Yasur, in Vanuatu, and Etna, in Italy, offer a unique opportunity to validate our attenuation model. In particular, careful comparisons between near-field recordings and far-field detections of these volcanoes have highlighted the potential of our simulation tool to remotely monitor volcanoes. Such work could significantly help to prevent

  14. Participative approach to elicit water quality monitoring needs from stakeholder groups - An application of integrated watershed management.

    Science.gov (United States)

    Behmel, S; Damour, M; Ludwig, R; Rodriguez, M J

    2018-07-15

    Water quality monitoring programs (WQMPs) must be based on monitoring objectives originating from the real knowledge needs of all stakeholders in a watershed and users of the resource. This paper proposes a participative approach to elicit knowledge needs and preferred modes of communication from citizens and representatives of organized stakeholders (ROS) on water quality and quantity issues. The participative approach includes six steps and is adaptable and transferable to different types of watersheds. These steps are: (1) perform a stakeholder analysis; (2) conduct an adaptable survey accompanied by a user-friendly public participation geographical information system (PPGIS); (3) hold workshops to meet with ROS to inform them of the results of the survey and PPGIS; discuss attainment of past monitoring objectives; exchange views on new knowledge needs and concerns on water quality and quantity; (4) meet with citizens to obtain the same type of input (as from ROS); (5) analyze the data and information collected to identify new knowledge needs and modes of communication and (6) identify, in collaboration with the individuals in charge of the WQMPs, the short-, medium- and long-term monitoring objectives and communication strategies to be pursued. The participative approach was tested on two distinct watersheds in the province of Quebec, Canada. It resulted in a series of optimization objectives of the existing WQMPs, new monitoring objectives and recommendations regarding communication strategies of the WQMPs' results. The results of this study show that the proposed methodology is appreciated by all parties and that the outcomes and monitoring objectives are acceptable. We also conclude that successful integrated watershed management is a question of scale, and that every aspect of integrated watershed management needs to be adapted to the surface watershed, the groundwater watershed (aquifers) and the human catchment area. Copyright © 2018 Elsevier Ltd. All

  15. Optimization of remediation strategies using vadose zone monitoring systems

    Science.gov (United States)

    Dahan, Ofer

    2016-04-01

    In-situ bioremediation of the vadose zone depends mainly on the ability to change the subsurface hydrological, physical and chemical conditions in order to enable the development of specific, indigenous, pollutant-degrading bacteria. As such, the remediation efficiency is largely dependent on the ability to implement optimal hydraulic and chemical conditions in deep sections of the vadose zone. These conditions are usually determined in laboratory experiments where parameters such as the chemical composition of the soil water solution, redox potential and water content of the sediment are fully controlled. Usually, implementation of the desired optimal degradation conditions in the deep vadose zone at full-scale field setups is achieved through infiltration of water enriched with chemical additives at the land surface. It is assumed that deep percolation into the vadose zone will create chemical conditions that promote biodegradation of specific compounds. However, applying water with specific chemical conditions near the land surface does not necessarily produce the desired chemical and hydraulic conditions in deep sections of the vadose zone. A recently developed vadose-zone monitoring system (VMS) allows continuous monitoring of the hydrological and chemical properties of deep sections of the unsaturated zone. The VMS includes flexible time-domain reflectometry (FTDR) probes, which allow continuous monitoring of the temporal variation of the vadose zone water content, and vadose-zone sampling ports (VSPs), which are designed to allow frequent sampling of the sediment pore water and gas at multiple depths. Implementation of the vadose zone monitoring system at sites that undergo active remediation provides real-time information on the actual chemical and hydrological conditions in the vadose zone as the remediation process progresses. To date, the system has been successfully implemented in several studies on water flow and contaminant transport in

  16. Sampling optimization trade-offs for long-term monitoring of gamma dose rates

    NARCIS (Netherlands)

    Melles, S.J.; Heuvelink, G.B.M.; Twenhöfel, C.J.W.; Stöhlker, U.

    2008-01-01

    This paper applies a recently developed optimization method to examine the design of networks that monitor radiation under routine conditions. Annual gamma dose rates were modelled by combining regression with interpolation of the regression residuals using spatially exhaustive predictors and an

  17. Systems engineering approach towards performance monitoring of emergency diesel generator

    International Nuclear Information System (INIS)

    Nurhayati Ramli; Lee, Y.K.

    2013-01-01

    Full-text: Systems engineering is an interdisciplinary approach and means to enable the realization of successful systems. In this study, a systems engineering approach to performance monitoring of the Emergency Diesel Generator (EDG) is presented. Performance monitoring is part and parcel of predictive maintenance, whereby system and component conditions can be detected before they result in failures. To identify a proposal for addressing performance monitoring, the EDG boundary has been defined. Based on the Probabilistic Safety Analysis (PSA) results and industry operating experience, the most critical component is identified. This paper proposes a systems engineering concept development framework for EDG performance monitoring. The expected outcome of this study is that EDG reliability can be improved through the performance monitoring alternatives identified in the systems engineering concept development effort. (author)

  18. Optimization of a Coastal Environmental Monitoring Network Based on the Kriging Method: A Case Study of Quanzhou Bay, China

    Science.gov (United States)

    Chen, Kai; Ni, Minjie; Wang, Jun; Huang, Dongren; Chen, Huorong; Wang, Xiao; Liu, Mengyang

    2016-01-01

    Environmental monitoring is fundamental to assessing environmental quality and to fulfilling protection and management measures under permit conditions. However, coastal environmental monitoring faces many problems and challenges: monitoring information cannot be linked up with evaluation, monitoring data do not adequately reflect the current coastal environmental condition, and monitoring activities are limited by cost constraints. For these reasons, protection and management measures cannot be well developed and implemented by the policy makers who intend to address this issue. In this paper, Quanzhou Bay in southeastern China was selected as a case study, and the Kriging method and a geographic information system were employed to evaluate and optimize the existing monitoring network in a semienclosed bay. This study used coastal environmental monitoring data from 15 sites (including COD, DIN, and PO4-P) to analyze the water quality from 2009 to 2012 by applying the Trophic State Index. The monitoring network in Quanzhou Bay was evaluated and optimized, with the number of sites increased from 15 to 24 and the monitoring precision improved by 32.9%. The results demonstrated that the proposed monitoring network optimization is appropriate for environmental monitoring in Quanzhou Bay and might provide technical support for coastal management and pollutant reduction in similar areas. PMID:27777951

  19. Optimization of a Coastal Environmental Monitoring Network Based on the Kriging Method: A Case Study of Quanzhou Bay, China

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2016-01-01

    Full Text Available Environmental monitoring is fundamental to assessing environmental quality and to fulfilling protection and management measures under permit conditions. However, coastal environmental monitoring faces many problems and challenges: monitoring information cannot be linked up with evaluation, monitoring data do not adequately reflect the current coastal environmental condition, and monitoring activities are limited by cost constraints. For these reasons, protection and management measures cannot be well developed and implemented by the policy makers who intend to address this issue. In this paper, Quanzhou Bay in southeastern China was selected as a case study, and the Kriging method and a geographic information system were employed to evaluate and optimize the existing monitoring network in a semienclosed bay. This study used coastal environmental monitoring data from 15 sites (including COD, DIN, and PO4-P) to analyze the water quality from 2009 to 2012 by applying the Trophic State Index. The monitoring network in Quanzhou Bay was evaluated and optimized, with the number of sites increased from 15 to 24 and the monitoring precision improved by 32.9%. The results demonstrated that the proposed monitoring network optimization is appropriate for environmental monitoring in Quanzhou Bay and might provide technical support for coastal management and pollutant reduction in similar areas.
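The Kriging-based network evaluation described above can be sketched numerically: ordinary kriging yields an estimation variance at any unsampled point, and candidate sites with the highest variance are natural places to add stations. The variogram model, coordinates, and parameter values below are illustrative assumptions, not the Quanzhou Bay data:

```python
import numpy as np

def exp_variogram(h, sill=1.0, rng_m=2000.0):
    # Exponential variogram; sill and range are assumed, illustrative values
    return sill * (1.0 - np.exp(-3.0 * h / rng_m))

def ok_variance(stations, point, gamma=exp_variogram):
    """Ordinary kriging estimation variance at `point` given station coordinates."""
    n = len(stations)
    d = np.linalg.norm(stations[:, None, :] - stations[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0                      # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(stations - point, axis=1))
    lam = np.linalg.solve(A, b)        # kriging weights plus multiplier
    return float(lam @ b)              # sigma^2 = sum(w_i * gamma_i) + mu

stations = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1000.0, 1000.0]])
candidates = np.array([[500.0, 500.0], [100.0, 100.0]])
scores = [ok_variance(stations, c) for c in candidates]
# the gap at (500, 500) has a higher kriging variance than the point near
# an existing station, so it ranks first when siting a new monitoring station
```

Ranking all grid cells of a study area this way, and iteratively adding the highest-variance cell, is one common way such a network grows from 15 to 24 sites.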

  20. Design of pressure vessels using shape optimization: An integrated approach

    Energy Technology Data Exchange (ETDEWEB)

    Carbonari, R.C., E-mail: ronny@usp.br [Department of Mechatronic Engineering, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231 Sao Paulo, SP 05508-900 (Brazil); Munoz-Rojas, P.A., E-mail: pablo@joinville.udesc.br [Department of Mechanical Engineering, Universidade do Estado de Santa Catarina, Bom Retiro, Joinville, SC 89223-100 (Brazil); Andrade, E.Q., E-mail: edmundoq@petrobras.com.br [CENPES, PDP/Metodos Cientificos, Petrobras (Brazil); Paulino, G.H., E-mail: paulino@uiuc.edu [Newmark Laboratory, Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, 205 North Mathews Av., Urbana, IL 61801 (United States); Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, 158 Mechanical Engineering Building, 1206 West Green Street, Urbana, IL 61801-2906 (United States); Nishimoto, K., E-mail: knishimo@usp.br [Department of Naval Architecture and Ocean Engineering, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231 Sao Paulo, SP 05508-900 (Brazil); Silva, E.C.N., E-mail: ecnsilva@usp.br [Department of Mechatronic Engineering, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231 Sao Paulo, SP 05508-900 (Brazil)

    2011-05-15

    Previous papers related to the optimization of pressure vessels have considered the optimization of the nozzle independently from the dished end. This approach generates problems such as thickness variation from nozzle to dished end (coupling cylindrical region) and, as a consequence, it reduces the optimality of the final result, which may also be influenced by the boundary conditions. Thus, this work discusses shape optimization of axisymmetric pressure vessels considering an integrated approach in which the entire pressure vessel model is used in conjunction with a multi-objective function that aims to minimize the von Mises stress from nozzle to head. Representative examples are examined and solutions obtained for the entire vessel considering temperature and pressure loading. It is noteworthy that shapes different from the usual ones are obtained. Even though such shapes may not be profitable considering present manufacturing processes, they may be competitive for future manufacturing technologies, and contribute to a better understanding of the actual influence of shape on the behavior of pressure vessels. - Highlights: > Shape optimization of the entire pressure vessel considering an integrated approach. > By increasing the number of spline knots, the convergence stability is improved. > The null angle condition gives lower stress values resulting in a better design. > The cylinder stresses are very sensitive to the cylinder length. > The shape optimization of the entire vessel must be considered for cylinder length.

  1. Combining Modeling and Monitoring to Produce a New Paradigm of an Integrated Approach to Providing Long-Term Control of Contaminants

    Science.gov (United States)

    Fogwell, T. W.

    2009-12-01

    Sir David King, Chief Science Advisor to the British government and Cambridge University Professor, stated in October 2005, "The scientific community is considerably more capable than it has been in the past to assist governments to avoid and reduce risk to their own populations. Prime ministers and presidents ignore the advice from the science community at the peril of their own populations." Some of these greater capabilities can be found in better monitoring techniques applied to better modeling methods. These modeling methods can be combined with the information derived from monitoring data in order to decrease the risk of population exposure to dangerous substances and to promote efficient control or cleanup of the contaminants. An introduction is presented of the types of problems that exist for long-term control of radionuclides at DOE sites. A breakdown of the distributions at specific sites is given, together with the associated difficulties. A paradigm for remediation showing the integration of monitoring with modeling is presented. It is based on a feedback system in which the monitoring acts as the principal sensors in a control system, and the resulting system can be optimized to improve performance. Optimizing monitoring inherently links it with modeling: if monitoring designs were required to be more efficient, and thus optimized, the monitoring would automatically become linked to modeling. Records of decision could be written to accommodate revisions in monitoring as better modeling evolves. Currently, the establishment of very prescriptive monitoring programs lacks a mechanism for improving models and improving control of the contaminants. The technical pieces of the required paradigm are already available; they need only be implemented and applied to solve the long-term control of the contaminants. An integration of the various parts of the system is presented. 
Each part is described, and examples are

  2. A combined stochastic programming and optimal control approach to personal finance and pensions

    DEFF Research Database (Denmark)

    Konicz, Agnieszka Karolina; Pisinger, David; Rasmussen, Kourosh Marjani

    2015-01-01

    The paper presents a model that combines a dynamic programming (stochastic optimal control) approach and a multi-stage stochastic linear programming approach (SLP), integrated into one SLP formulation. Stochastic optimal control produces an optimal policy that is easy to understand and implement....

  3. A comparison of two closely-related approaches to aerodynamic design optimization

    Science.gov (United States)

    Shubin, G. R.; Frank, P. D.

    1991-01-01

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.
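The cost distinction the abstract draws, gradients obtained from the state equations versus black-box finite differences, can be illustrated on a scalar toy problem. The state equation and objective below are invented stand-ins, not the duct-flow model of the paper:

```python
# Discrete "state equation" R(u, a) = 0 with design variable a and
# objective J(u). Toy scalar stand-in: R = u - a**2, J = u**2.
def solve_state(a):
    # forward solve of R(u, a) = 0  ->  u = a**2
    return a * a

def grad_adjoint(a):
    """Adjoint (variational-style) gradient: one extra linear solve,
    with cost independent of the number of design variables."""
    u = solve_state(a)
    dJdu, dRdu, dRda = 2.0 * u, 1.0, -2.0 * a
    lam = -dJdu / dRdu                 # adjoint equation
    return lam * dRda                  # dJ/da = lambda * dR/da

def grad_fd(a, h=1e-6):
    """Black-box finite-difference gradient: one extra forward solve per
    design variable, and sensitive to the step size h."""
    J = lambda a_: solve_state(a_) ** 2
    return (J(a + h) - J(a - h)) / (2.0 * h)

# on this toy problem both approaches recover dJ/da = 4 a**3
```

The "implicit gradient" and "variational" approaches of the paper differ in whether this adjoint machinery is derived before or after discretization; the finite-difference route is the expensive black-box baseline they both try to avoid.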

  4. Energy Efficiency - Spectral Efficiency Trade-off: A Multiobjective Optimization Approach

    KAUST Repository

    Amin, Osama

    2015-04-23

    In this paper, we consider the resource allocation problem for the energy efficiency (EE) - spectral efficiency (SE) trade-off. Unlike traditional research that uses the EE as an objective function and imposes constraints either on the SE or the achievable rate, we propose a multiobjective optimization approach that can flexibly switch between the EE and SE functions or change the priority level of each function using a trade-off parameter. Our dynamic approach is more tractable than the conventional approaches and better suited to realistic communication applications and scenarios. We prove that the multiobjective optimization of the EE and SE is equivalent to a simpler problem that maximizes the achievable rate/SE and minimizes the total power consumption. We then apply the generalized resource allocation framework for the EE-SE trade-off to optimally allocate the subcarrier powers for orthogonal frequency division multiplexing (OFDM) with imperfect channel estimation. Finally, we use numerical results to discuss the choice of the trade-off parameter and study the effect of the estimation error, transmission power budget and channel-to-noise ratio on the multiobjective optimization.

  5. Energy Efficiency - Spectral Efficiency Trade-off: A Multiobjective Optimization Approach

    KAUST Repository

    Amin, Osama; Bedeer, Ebrahim; Ahmed, Mohamed; Dobre, Octavia

    2015-01-01

    In this paper, we consider the resource allocation problem for the energy efficiency (EE) - spectral efficiency (SE) trade-off. Unlike traditional research that uses the EE as an objective function and imposes constraints either on the SE or the achievable rate, we propose a multiobjective optimization approach that can flexibly switch between the EE and SE functions or change the priority level of each function using a trade-off parameter. Our dynamic approach is more tractable than the conventional approaches and better suited to realistic communication applications and scenarios. We prove that the multiobjective optimization of the EE and SE is equivalent to a simpler problem that maximizes the achievable rate/SE and minimizes the total power consumption. We then apply the generalized resource allocation framework for the EE-SE trade-off to optimally allocate the subcarrier powers for orthogonal frequency division multiplexing (OFDM) with imperfect channel estimation. Finally, we use numerical results to discuss the choice of the trade-off parameter and study the effect of the estimation error, transmission power budget and channel-to-noise ratio on the multiobjective optimization.
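The equivalence the authors prove, maximizing rate/SE while minimizing total power, suggests a scalarized per-subcarrier problem with a closed-form solution. The sketch below is a generic water-filling-type allocation under an assumed trade-off weight beta; it is not the paper's exact formulation (which also handles channel-estimation error):

```python
import numpy as np

def tradeoff_allocation(g, beta):
    """Per-subcarrier power maximizing sum(log2(1 + p*g)) - beta * sum(p).
    Setting the per-subcarrier derivative to zero gives the closed form
    p_i = max(0, 1/(beta*ln 2) - 1/g_i); beta plays the role of the
    trade-off parameter between rate/SE and power (EE)."""
    return np.maximum(0.0, 1.0 / (beta * np.log(2.0)) - 1.0 / np.asarray(g))

g = np.array([2.0, 1.0, 0.25])          # channel-to-noise ratios (illustrative)
p = tradeoff_allocation(g, beta=1.0)
# a larger beta penalizes power more, shrinking the allocation toward the
# EE end of the trade-off; weak subcarriers (small g) are switched off first
```

Sweeping beta traces out the EE-SE trade-off curve the paper studies.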

  6. Using models for the optimization of hydrologic monitoring

    Science.gov (United States)

    Fienen, Michael N.; Hunt, Randall J.; Doherty, John E.; Reeves, Howard W.

    2011-01-01

    Hydrologists are often asked what kind of monitoring network can most effectively support science-based water-resources management decisions. Currently (2011), hydrologic monitoring locations often are selected by addressing observation gaps in the existing network or non-science issues such as site access. A model might then be calibrated to available data and applied to a prediction of interest (regardless of how well-suited that model is for the prediction). However, modeling tools are available that can inform which locations and types of data provide the most 'bang for the buck' for a specified prediction. Put another way, the hydrologist can determine which observation data most reduce the model uncertainty around a specified prediction. An advantage of such an approach is the maximization of limited monitoring resources because it focuses on the difference in prediction uncertainty with or without additional collection of field data. Data worth can be calculated either through the addition of new data or through the subtraction of existing information by reducing monitoring efforts (Beven, 1993). The latter is generally not widely requested, as there is explicit recognition that the worth calculated is fundamentally dependent on the prediction specified. If a water manager needs a new prediction, the benefits of reducing the scope of a monitoring effort, based on an old prediction, may be erased by the loss of information important for the new prediction. This fact sheet focuses on the worth or value of new data collection by quantifying the reduction in prediction uncertainty achieved by adding a monitoring observation. This calculation of worth can be performed for multiple potential locations (and types) of observations, which can then be ranked for their effectiveness in reducing uncertainty around the specified prediction. This is implemented using a Bayesian approach with the PREDUNC utility in the parameter estimation software suite PEST (Doherty, 2010). The
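Under the linear-Gaussian assumptions that this kind of analysis makes, the worth of a candidate observation is the drop in prediction variance it buys. A minimal sketch of that calculation (the covariances and sensitivity vectors are toy values assumed for illustration, not PREDUNC itself):

```python
import numpy as np

def pred_var(C, y, X=None, r=None):
    """Prediction variance y^T C_post y under linear-Gaussian assumptions.
    C: prior parameter covariance; y: prediction sensitivity vector;
    X: observation sensitivity matrix (rows = observations);
    r: observation noise variances."""
    if X is None:
        return float(y @ C @ y)        # prior prediction variance
    R = np.diag(r)
    # Bayesian update: C_post = C - C X^T (X C X^T + R)^-1 X C
    G = X @ C @ X.T + R
    C_post = C - C @ X.T @ np.linalg.solve(G, X @ C)
    return float(y @ C_post @ y)

# toy model: 3 parameters; the prediction depends mostly on parameter 0
C = np.diag([1.0, 1.0, 1.0])
y = np.array([1.0, 0.2, 0.0])
base = pred_var(C, y)
# two candidate observations, each sensitive to a single parameter
cands = [np.array([[1.0, 0.0, 0.0]]), np.array([[0.0, 0.0, 1.0]])]
worth = [base - pred_var(C, y, X, r=[0.1]) for X in cands]
# observing parameter 0 reduces prediction uncertainty greatly; observing
# parameter 2, to which the prediction is insensitive, is worth nothing
```

Ranking many candidate observations by this variance reduction is exactly the "data worth" ordering the fact sheet describes.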

  7. Plug-and-play monitoring and performance optimization for industrial automation processes

    CERN Document Server

    Luo, Hao

    2017-01-01

    Dr.-Ing. Hao Luo demonstrates the developments of advanced plug-and-play (PnP) process monitoring and control systems for industrial automation processes. With aid of the so-called Youla parameterization, a novel PnP process monitoring and control architecture (PnP-PMCA) with modularized components is proposed. To validate the developments, a case study on an industrial rolling mill benchmark is performed, and the real-time implementation on a laboratory brushless DC motor is presented. Contents PnP Process Monitoring and Control Architecture Real-Time Configuration Techniques for PnP Process Monitoring Real-Time Configuration Techniques for PnP Performance Optimization Benchmark Study and Real-Time Implementation Target Groups Researchers and students of Automation and Control Engineering Practitioners in the area of Industrial and Production Engineering The Author Hao Luo received the Ph.D. degree at the Institute for Automatic Control and Complex Systems (AKS) at the University of Duisburg-Essen, Germany, ...

  8. Optimal investment for enhancing social concern about biodiversity conservation: a dynamic approach.

    Science.gov (United States)

    Lee, Joung Hun; Iwasa, Yoh

    2012-11-01

    To maintain biodiversity conservation areas, we need to invest in activities such as monitoring the condition of the ecosystem, preventing illegal exploitation, and removing harmful alien species. These require a constant supply of resources, the level of which is determined by the concern of society about biodiversity conservation. In this paper, we study the optimal fraction of resources to invest in activities for enhancing the social concern y(t) by environmental education, museum displays, publications, and media exposure. We search for the strategy that maximizes the time-integral of the quality of the conservation area x(t) with temporal discounting. Analyses based on dynamic programming and Pontryagin's maximum principle show that the optimal control consists of two phases: (1) in the first phase, the social concern level approaches the final optimal value y*; (2) in the second phase, resources are allocated to both activities, and the social concern level is kept constant, y(t) = y*. If the social concern starts from a low initial level, the optimal path includes a period in which the quality of the conservation area declines temporarily, because all the resources are invested in enhancing the social concern. When the support rate increases with the quality of the conservation area itself x(t) as well as with the level of social concern y(t), both variables may increase simultaneously in the second phase. We discuss the implications of the results for good management of biodiversity conservation areas. 2012 Elsevier Inc. All rights reserved

  9. Optimization of PZT ceramic IDT sensors for health monitoring of structures.

    Science.gov (United States)

    Takpara, Rafatou; Duquennoy, Marc; Ouaftouh, Mohammadi; Courtois, Christian; Jenot, Frédéric; Rguiti, Mohamed

    2017-08-01

    Surface acoustic waves (SAW) are particularly suited to effectively monitoring and characterizing structural surfaces (condition of the surface, coating, thin layer, micro-cracks…) as their energy is localized on the surface, within approximately one wavelength. Conventionally, in non-destructive testing, wedge sensors are used to generate guided waves, but they are especially suited to flat surfaces and sized for a given type of material (angle of refraction). Additionally, these sensors are quite expensive, so it is difficult to leave them permanently on the structure for health monitoring. In this study, we therefore consider another type of ultrasonic sensor able to generate SAWs: interdigital transducer (IDT) sensors. This paper focuses on the optimization of IDT sensors for non-destructive structural testing using PZT ceramics. The challenge was to optimize the dimensional parameters of the IDT sensors in order to efficiently generate surface waves. Acoustic tests then confirmed these parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
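The dimensional parameters being optimized follow from the basic SAW relation λ = v/f: the electrode geometry of an IDT is tied to the wavelength of the surface wave it must launch. A sketch using the standard design relations, with illustrative numbers (the 2 MHz frequency and 3000 m/s wave speed are assumptions, not the paper's data):

```python
def idt_geometry(f_hz, v_mps):
    """Basic IDT sizing from the SAW wavelength.
    Standard textbook relations: finger pitch = lambda/2 for alternating
    electrodes, finger width = lambda/4 for an equal mark/space ratio."""
    lam = v_mps / f_hz                       # acoustic wavelength, lambda = v / f
    return {"wavelength_m": lam,
            "finger_pitch_m": lam / 2.0,     # centre-to-centre finger spacing
            "finger_width_m": lam / 4.0}

# e.g. a 2 MHz Rayleigh wave at ~3000 m/s: lambda = 1.5 mm,
# finger pitch 0.75 mm, finger width 0.375 mm
g = idt_geometry(2e6, 3000.0)
```

The optimization in the paper then refines parameters such as the number of finger pairs and the aperture around this wavelength constraint.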

  10. A neuro-fuzzy inference system tuned by particle swarm optimization algorithm for sensor monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Mauro Vitor de [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil). Div. de Instrumentacao e Confiabilidade Humana]. E-mail: mvitor@ien.gov.br; Schirru, Roberto [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Lab. de Monitoracao de Processos

    2005-07-01

    A neuro-fuzzy inference system (ANFIS) tuned by a particle swarm optimization (PSO) algorithm has been developed to monitor a relevant sensor in a nuclear plant using the information from other sensors. The antecedent parameters of the ANFIS that estimates the relevant sensor signal are optimized by a PSO algorithm, while the consequent parameters are obtained with a least-squares algorithm. The proposed sensor-monitoring algorithm was demonstrated through the estimation of the nuclear power value in a pressurized water reactor, using six other correlated signals as inputs to the ANFIS. The obtained results are compared to those of two similar ANFIS models that use gradient descent (GD) and a genetic algorithm (GA), respectively, as the antecedent-parameter training algorithm. (author)

  11. A neuro-fuzzy inference system tuned by particle swarm optimization algorithm for sensor monitoring

    International Nuclear Information System (INIS)

    Oliveira, Mauro Vitor de; Schirru, Roberto

    2005-01-01

    A neuro-fuzzy inference system (ANFIS) tuned by a particle swarm optimization (PSO) algorithm has been developed to monitor a relevant sensor in a nuclear plant using the information from other sensors. The antecedent parameters of the ANFIS that estimates the relevant sensor signal are optimized by a PSO algorithm, while the consequent parameters are obtained with a least-squares algorithm. The proposed sensor-monitoring algorithm was demonstrated through the estimation of the nuclear power value in a pressurized water reactor, using six other correlated signals as inputs to the ANFIS. The obtained results are compared to those of two similar ANFIS models that use gradient descent (GD) and a genetic algorithm (GA), respectively, as the antecedent-parameter training algorithm. (author)
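The hybrid training split described in these records, stochastic search over the antecedent (membership) parameters with least squares for the consequent parameters, can be sketched on a toy one-input fuzzy model. Everything below (the data, membership widths, and PSO constants) is an illustrative assumption, not the reactor application:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: one input, one target signal
x = np.linspace(-1.0, 1.0, 50)
t = np.sin(2.0 * x)

def model_error(centers):
    """Fix the antecedent parameters (Gaussian membership centers), solve
    the consequent (linear) parameters by least squares, and return the
    RMS error -- the same two-stage split the records describe."""
    w = np.exp(-((x[:, None] - centers[None, :]) ** 2) / 0.1)   # memberships
    w = w / w.sum(axis=1, keepdims=True)                        # normalized firing
    Phi = np.hstack([w * x[:, None], w])       # first-order Sugeno regressors
    theta, *_ = np.linalg.lstsq(Phi, t, rcond=None)
    return float(np.sqrt(np.mean((Phi @ theta - t) ** 2)))

# minimal PSO over the two membership centers (antecedent parameters)
n, dim, iters = 20, 2, 60
pos = rng.uniform(-1.0, 1.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([model_error(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -2.0, 2.0)
    f = np.array([model_error(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
# gbest now holds antecedent centers giving a low fitting error
```

The least-squares stage keeps the search space small (only the antecedent parameters are evolved), which is what makes PSO competitive with gradient descent here.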

  12. Collaboration pathway(s) using new tools for optimizing `operational' climate monitoring from space

    Science.gov (United States)

    Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.

    2015-09-01

    Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a long-term solution requires transforming scientific missions into an optimized, robust 'operational' constellation that addresses the collective needs of policy makers, scientific communities and global academic users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent rule-based expert system (RBES) optimization modeling of the intended NPOESS architecture becomes a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures through the comparison and evaluation of alternatives and the sheer range of trade space explored. Optimization of climate monitoring architecture(s) for a partial list of ECVs (essential climate variables) is explored and described in detail, with dialogue on appropriate rule-based valuations. These optimization tools suggest global collaboration advantages and elicit responses from the audience and climate science community. This paper focuses on recent research exploring the joint requirement implications of the high-profile NPOESS architecture and extends the research and tools to optimization for a climate-centric case study. This reflects work from the SPIE RS Conferences of 2013 and 2014, abridged for simplification [30, 32]. First, the heavily scrutinized NPOESS architecture inspired the recent research question: was complexity (as a cost/risk factor) overlooked when considering the benefits of aggregating different missions onto a single platform? Now, years later, a complete reversal: should agencies consider disaggregation as the answer? We discuss what some academic research suggests. 
Second, using the GCOS requirements of earth climate observations via ECV (essential climate variables) many collected from space-based sensors; and accepting their

  13. Stochastic optimization in insurance a dynamic programming approach

    CERN Document Server

    Azcue, Pablo

    2014-01-01

    The main purpose of the book is to show how a viscosity approach can be used to tackle control problems in insurance. The problems covered are the maximization of survival probability as well as the maximization of dividends in the classical collective risk model. The authors consider the possibility of controlling the risk process by reinsurance as well as by investments. They show that optimal value functions are characterized as either the unique or the smallest viscosity solution of the associated Hamilton-Jacobi-Bellman equation; they also study the structure of the optimal strategies and show how to find them. The viscosity approach has been widely used in control problems related to mathematical finance, but until quite recently it was not used to solve control problems in actuarial science. This book is designed to familiarize the reader with this approach. The intended audience is graduate students as well as researchers in this area.

  14. Optimization of Remediation Conditions using Vadose Zone Monitoring Technology

    Science.gov (United States)

    Dahan, O.; Mandelbaum, R.; Ronen, Z.

    2010-12-01

    The success of in-situ bioremediation of the vadose zone depends mainly on the ability to change and control the hydrological, physical and chemical conditions of the subsurface. These manipulations enable the development of specific, indigenous, pollutant-degrading bacteria, or set the environmental conditions for seeded bacteria. As such, the remediation efficiency is dependent on the ability to implement optimal hydraulic and chemical conditions in deep sections of the vadose zone. Enhanced bioremediation of the vadose zone is achieved under field conditions through infiltration of water enriched with chemical additives. Yet water percolation and solute transport under unsaturated conditions are complex processes, and applying water with specific chemical conditions near the land surface does not necessarily produce the desired chemical and hydraulic conditions in deeper sections of the vadose zone. A newly developed vadose-zone monitoring system (VMS) allows continuous monitoring of the hydrological and chemical properties of the percolating water along deep sections of the vadose zone. Implementation of the VMS at sites that undergo active remediation provides real-time information on the chemical and hydrological conditions in the vadose zone as the remediation process progresses. Manipulating subsurface conditions for optimal biodegradation of hydrocarbons is demonstrated through enhanced bioremediation of the vadose zone at a site in Tel Aviv that had been contaminated with gasoline products. The vadose zone at the site is composed of a 6 m clay layer overlying a sandy formation extending to the water table at a depth of 20 m below land surface. The upper 5 m of contaminated soil were removed for ex-situ treatment, and the remaining 15 m vadose zone is treated in-situ through enhanced bioremediation. An underground drip irrigation system was installed below the surface at the bottom of the excavation. 
Oxygen and nutrients releasing powder (EHCO, Adventus) was spread below the

  15. A practical multiscale approach for optimization of structural damping

    DEFF Research Database (Denmark)

    Andreassen, Erik; Jensen, Jakob Søndergaard

    2016-01-01

    A simple and practical multiscale approach suitable for topology optimization of structural damping in a component ready for additive manufacturing is presented. The approach consists of two steps: First, the homogenized loss factor of a two-phase material is maximized. This is done in order...

  16. Multiobjective design of aquifer monitoring networks for optimal spatial prediction and geostatistical parameter estimation

    Science.gov (United States)

    Alzraiee, Ayman H.; Bau, Domenico A.; Garcia, Luis A.

    2013-06-01

    Effective sampling of hydrogeological systems is essential in guiding groundwater management practices. Optimal sampling of groundwater systems has previously been formulated based on the assumption that heterogeneous subsurface properties can be modeled using a geostatistical approach. Therefore, the monitoring schemes have been developed to concurrently minimize the uncertainty in the spatial distribution of systems' states and parameters, such as the hydraulic conductivity K and the hydraulic head H, and the uncertainty in the geostatistical model of system parameters using a single objective function that aggregates all objectives. However, it has been shown that the aggregation of possibly conflicting objective functions is sensitive to the adopted aggregation scheme and may lead to distorted results. In addition, the uncertainties in geostatistical parameters affect the uncertainty in the spatial prediction of K and H according to a complex nonlinear relationship, which has often been ineffectively evaluated using a first-order approximation. In this study, we propose a multiobjective optimization framework to assist the design of monitoring networks of K and H with the goal of optimizing their spatial predictions and estimating the geostatistical parameters of the K field. The framework stems from the combination of a data assimilation (DA) algorithm and a multiobjective evolutionary algorithm (MOEA). The DA algorithm is based on the ensemble Kalman filter, a Monte-Carlo-based Bayesian update scheme for nonlinear systems, which is employed to approximate the posterior uncertainty in K, H, and the geostatistical parameters of K obtained by collecting new measurements. Multiple MOEA experiments are used to investigate the trade-off among design objectives and identify the corresponding monitoring schemes. The methodology is applied to design a sampling network for a shallow unconfined groundwater system located in Rocky Ford, Colorado. Results indicate that
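The ensemble Kalman filter update at the core of the DA algorithm described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the state dimension, observation operator, ensemble size and noise levels are invented for the example.

```python
import numpy as np

def enkf_update(ensemble, H, obs, obs_err_std, rng):
    """One stochastic EnKF analysis step.

    ensemble : (n_members, n_state) prior samples of the state (e.g. log-K, H)
    H        : (n_obs, n_state) linear observation operator
    obs      : (n_obs,) measured values at the new monitoring wells
    """
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)            # state anomalies
    Y = X @ H.T                                     # predicted-observation anomalies
    # Sample covariances plus observation-error covariance
    Pyy = Y.T @ Y / (n_members - 1) + np.eye(len(obs)) * obs_err_std**2
    Pxy = X.T @ Y / (n_members - 1)
    K = Pxy @ np.linalg.inv(Pyy)                    # Kalman gain
    # Perturbed observations (one noisy copy per ensemble member)
    perturbed = obs + rng.normal(0.0, obs_err_std, size=(n_members, len(obs)))
    innovations = perturbed - ensemble @ H.T
    return ensemble + innovations @ K.T

rng = np.random.default_rng(0)
truth = np.array([1.0, 2.0, 3.0])
H = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])    # observe components 0 and 2
prior = rng.normal(0.0, 2.0, size=(200, 3))
posterior = enkf_update(prior, H, H @ truth, obs_err_std=0.1, rng=rng)
```

After the update, the posterior ensemble mean moves toward the observed components and the ensemble spread shrinks there, which is the uncertainty reduction a candidate monitoring scheme is scored on.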

  17. Optimizing liquid effluent monitoring at a large nuclear complex.

    Science.gov (United States)

    Chou, Charissa J; Barnett, D Brent; Johnson, Vernon G; Olson, Phil M

    2003-12-01

    Effluent monitoring typically requires a large number of analytes and samples during the initial or startup phase of a facility. Once a baseline is established, the analyte list and sampling frequency may be reduced. Although there is a large body of literature relevant to the initial design, few, if any, published papers exist on updating established effluent monitoring programs. This paper statistically evaluates four years of baseline data to optimize the liquid effluent monitoring efficiency of a centralized waste treatment and disposal facility at a large defense nuclear complex. Specific objectives were to: (1) assess temporal variability in analyte concentrations, (2) determine operational factors contributing to waste stream variability, (3) assess the probability of exceeding permit limits, and (4) streamline the sampling and analysis regime. Results indicated that the probability of exceeding permit limits was one in a million under normal facility operating conditions, sampling frequency could be reduced, and several analytes could be eliminated. Furthermore, indicators such as gross alpha and gross beta measurements could be used in lieu of more expensive specific isotopic analyses (radium, cesium-137, and strontium-90) for routine monitoring. Study results were used by the state regulatory agency to modify monitoring requirements for a new discharge permit, resulting in an annual cost savings of US$223,000. This case study demonstrates that statistical evaluation of effluent contaminant variability coupled with process knowledge can help plant managers and regulators streamline analyte lists and sampling frequencies based on detection history and environmental risk.

  18. An optimized strategy for real-time hemorrhage monitoring with electrical impedance tomography

    International Nuclear Information System (INIS)

    Xu, Canhua; Dai, Meng; You, Fusheng; Shi, Xuetao; Fu, Feng; Liu, Ruigang; Dong, Xiuzhen

    2011-01-01

    Delayed detection of an internal hemorrhage may result in serious disabilities and possibly death for a patient. Currently, there are no portable medical imaging instruments that are suitable for long-term monitoring of patients at risk of internal hemorrhage. Electrical impedance tomography (EIT) has the potential to monitor patients continuously as a novel functional image modality and instantly detect the occurrence of an internal hemorrhage. However, the low spatial resolution and high sensitivity to noise of this technique have limited its application in clinics. In addition, due to the circular boundary display mode used in current EIT images, it is difficult for clinicians to identify precisely which organ is bleeding using this technique. The aim of this study was to propose an optimized strategy for EIT reconstruction to promote the use of EIT for clinical studies, which mainly includes the use of anatomically accurate boundary shapes, rapid selection of optimal regularization parameters and image fusion of EIT and computed tomography images. The method was evaluated on retroperitoneal and intraperitoneal bleeding piglet data. Both traditional backprojection images and optimized images among different boundary shapes were reconstructed and compared. The experimental results demonstrated that EIT images with precise anatomical information can be reconstructed in which the image resolution and resistance to noise can be improved effectively

  19. A penalty guided stochastic fractal search approach for system reliability optimization

    International Nuclear Information System (INIS)

    Mellal, Mohamed Arezki; Zio, Enrico

    2016-01-01

    Modern industry requires components and systems with high reliability levels. In this paper, we address the system reliability optimization problem. A penalty guided stochastic fractal search approach is developed for solving reliability allocation, redundancy allocation, and reliability–redundancy allocation problems. Numerical results of ten case studies are presented as benchmark problems for highlighting the superiority of the proposed approach compared to others from literature. - Highlights: • System reliability optimization is investigated. • A penalty guided stochastic fractal search approach is developed. • Results of ten case studies are compared with previously published methods. • Performance of the approach is demonstrated.

  20. MVMO-based approach for optimal placement and tuning of supplementary damping controller

    NARCIS (Netherlands)

    Rueda Torres, J.L.; Gonzalez-Longatt, F.

    2015-01-01

    This paper introduces an approach based on the Swarm Variant of the Mean-Variance Mapping Optimization (MVMO-S) to solve the multi-scenario formulation of the optimal placement and coordinated tuning of power system supplementary damping controllers (POCDCs). The effectiveness of the approach is

  1. Adjoint current-based approaches to prostate brachytherapy optimization

    International Nuclear Information System (INIS)

    Roberts, J. A.; Henderson, D. L.

    2009-01-01

    This paper builds on previous work done at the Univ. of Wisconsin - Madison to employ the adjoint concept of nuclear reactor physics in the so-called greedy heuristic of brachytherapy optimization. Whereas that previous work focused on the adjoint flux, i.e. the importance, this work has included use of the adjoint current to increase the amount of information available in optimizing. Two current-based approaches were developed for 2-D problems, and each was compared to the most recent form of the flux-based methodology. The first method aimed to take a treatment plan from the flux-based greedy heuristic and adjust via application of the current-displacement, or a vector displacement based on a combination of tissue (adjoint) and seed (forward) currents acting as forces on a seed. This method showed promise in improving key urethral and rectal dosimetric quantities. The second method uses the normed current-displacement as the greedy criterion such that seeds are placed in regions of least force. This method, coupled with the dose-update scheme, generated treatment plans with better target irradiation and sparing of the urethra and normal tissues than the flux-based approach. Tables of these parameters are given for both approaches. In summary, these preliminary results indicate adjoint current methods are useful in optimization and further work in 3-D should be performed. (authors)

  2. An adaptive multi-agent-based approach to smart grids control and optimization

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Marco [Florida Institute of Technology, Melbourne, FL (United States); Perez, Carlos; Granados, Adrian [Institute for Human and Machine Cognition, Ocala, FL (United States)

    2012-03-15

    In this paper, we describe a reinforcement learning-based approach to power management in smart grids. The scenarios we consider are smart grid settings where renewable power sources (e.g. Photovoltaic panels) have unpredictable variations in power output due, for example, to weather or cloud transient effects. Our approach builds on a multi-agent system (MAS)-based infrastructure for the monitoring and coordination of smart grid environments with renewable power sources and configurable energy storage devices (battery banks). Software agents are responsible for tracking and reporting power flow variations at different points in the grid, and to optimally coordinate the engagement of battery banks (i.e. charge/idle/discharge modes) to maintain energy requirements to end-users. Agents are able to share information and coordinate control actions through a parallel communications infrastructure, and are also capable of learning, from experience, how to improve their response strategies for different operational conditions. In this paper we describe our approach and address some of the challenges associated with the communications infrastructure for distributed coordination. We also present some preliminary results of our first simulations using the GridLAB-D simulation environment, created by the US Department of Energy (DoE) at Pacific Northwest National Laboratory (PNNL). (orig.)
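A tabular Q-learning toy in the spirit of the battery-coordination idea described above: an agent learns from experience when to charge, idle, or discharge a battery bank under unpredictable renewable output. The grid dynamics, rewards and parameters below are invented for illustration and are far simpler than the authors' MAS/GridLAB-D setup.

```python
import random

ACTIONS = ("charge", "idle", "discharge")

def step(surplus, battery, action):
    """Toy grid dynamics: returns (next_battery, reward).

    surplus = renewable output minus demand (+1 excess, -1 deficit);
    battery is 0 (empty) or 1 (charged)."""
    if surplus < 0:                        # deficit: demand must be covered
        if action == 2 and battery == 1:
            return 0, 1.0                  # battery discharge covers the deficit
        return battery, -1.0               # demand unmet
    if action == 0:                        # surplus: charging stores the excess
        return 1, 0.0
    return battery, 0.0                    # otherwise the excess is curtailed

random.seed(1)
Q = {}
def q(s, a):
    return Q.get((s, a), 0.0)

alpha, gamma, eps = 0.2, 0.9, 0.1
battery = 0
surplus = random.choice((-1, 1))
for t in range(20000):
    s = (surplus, battery)
    a = random.randrange(3) if random.random() < eps \
        else max(range(3), key=lambda x: q(s, x))
    battery, r = step(surplus, battery, a)
    surplus = random.choice((-1, 1))       # unpredictable renewable variation
    s2 = (surplus, battery)
    best_next = max(q(s2, x) for x in range(3))
    Q[(s, a)] = q(s, a) + alpha * (r + gamma * best_next - q(s, a))

policy = {s: max(range(3), key=lambda x: q(s, x))
          for s in [(-1, 0), (-1, 1), (1, 0), (1, 1)]}
```

The learned policy discharges during a deficit when the battery is charged, and charges during a surplus even though charging earns no immediate reward, because the bootstrapped value of a full battery propagates back through the Q-updates.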

  3. OSSA - An optimized approach to severe accident management: EPR application

    International Nuclear Information System (INIS)

    Sauvage, E. C.; Prior, R.; Coffey, K.; Mazurkiewicz, S. M.

    2006-01-01

    There is a recognized need to provide nuclear power plant technical staff with structured guidance for response to a potential severe accident condition involving core damage and potential release of fission products to the environment. Over the past ten years, many plants worldwide have implemented such guidance for their emergency technical support center teams, either by following one of the generic approaches or by developing fully independent approaches. There are many lessons to be learned from the experience of the past decade in developing, implementing, and validating severe accident management guidance. Also, though numerous basic approaches exist which share common principles, there are differences in the methodology and application of the guidelines. AREVA/Framatome-ANP is developing an optimized approach to severe accident management guidance in a project called OSSA ('Operating Strategies for Severe Accidents'). There are still numerous operating power plants which have yet to implement severe accident management programs. For these, the option to use an updated approach which makes full use of lessons learned and experience is seen as a major advantage. Very few of the current approaches cover all operating plant states, including shutdown states with the primary system closed and open. Although it is not necessary to develop an entirely new approach in order to add this capability, the opportunity has been taken to develop revised full-scope guidance covering all plant states, in addition to the fuel in the fuel building. The EPR includes, at the design phase, systems and measures to minimize the risk of a severe accident and to mitigate such potential scenarios. This is a difference in comparison with existing plants, for which severe accidents were not considered in the design. Though developed for all types of plants, OSSA will also be applied to the EPR, with adaptations designed to take into account its favourable situation in that field

  4. New approaches to optimization in aerospace conceptual design

    Science.gov (United States)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
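As an illustration of the genetic search discussed in this record, the following is a minimal real-coded GA with elitism, blend (arithmetic) crossover and Gaussian mutation. The objective, encoding and parameters are invented for the example; the record's quasi-procedural architecture and variable-complexity encoding are not reproduced.

```python
import random

def fitness(x):
    # Toy objective standing in for an aerodynamic/structural analysis:
    # minimize the distance of the design vector from (0.5, ..., 0.5).
    return sum((xi - 0.5) ** 2 for xi in x)

def evolve(n_gen=80, pop_size=40, n_genes=4, seed=2):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(pop, key=fitness)
        elite = scored[: pop_size // 2]             # truncation selection + elitism
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()                         # blend (arithmetic) crossover
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]
            if rng.random() < 0.3:                   # Gaussian mutation on one gene
                i = rng.randrange(n_genes)
                child[i] += rng.gauss(0.0, 0.05)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
```

Because the elite are carried over unchanged, the best fitness is monotonically non-increasing, one simple safeguard against losing good designs during exploration.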

  5. Commonalities and complementarities among approaches to conservation monitoring and evaluation

    DEFF Research Database (Denmark)

    Mascia, Michael B.; Pailler, Sharon; Thieme, Michele L.

    2014-01-01

    Commonalities and complementarities among approaches to conservation monitoring and evaluation (M&E) are not well articulated, creating the potential for confusion, misuse, and missed opportunities to inform conservation policy and practice. We examine the relationships among five approaches...... to conservation M&E, characterizing each approach in eight domains: the focal question driving each approach, when in the project cycle each approach is employed, scale of data collection, the methods of data collection and analysis, the implementers of data collection and analysis, the users of M&E outputs......, and the decisions informed by these outputs. Ambient monitoring measures status and change in ambient social and ecological conditions, independent of any conservation intervention. Management assessment measures management inputs, activities, and outputs, as the basis for investments to build management capacity...

  6. Compact approach to monitored retrievable storage of spent fuel

    International Nuclear Information System (INIS)

    Muir, D.W.

    1984-09-01

    Recent federal waste-management legislation has raised national interest in monitored retrievable storage (MRS) of unprocessed spent fuel from civilian nuclear power plants. We have reviewed the current MRS design approaches, and we have examined an alternative concept that is extremely compact in terms of total land use. This approach may offer substantial advantages in the areas of monitoring and in safeguards against theft, as well as in reducing the chances of groundwater contamination. Total facility costs are roughly estimated and found to be generally competitive with other MRS concepts. 4 references, 3 figures, 3 tables

  7. Methodological approach to strategic performance optimization

    OpenAIRE

    Hell, Marko; Vidačić, Stjepan; Garača, Željko

    2009-01-01

    This paper presents a matrix approach to the measuring and optimization of organizational strategic performance. The proposed model is based on the matrix presentation of strategic performance, which follows the theoretical notions of the balanced scorecard (BSC) and strategy map methodologies, initially developed by Kaplan and Norton. Development of a quantitative record of strategic objectives provides an arena for the application of linear programming (LP), which is a mathematical tech...

  8. Nonlinear Cointegration Approach for Condition Monitoring of Wind Turbines

    Directory of Open Access Journals (Sweden)

    Konrad Zolna

    2015-01-01

    Full Text Available Monitoring of trends and removal of undesired trends from operational/process parameters in wind turbines is important for their condition monitoring. This paper presents the homoscedastic nonlinear cointegration for the solution to this problem. The cointegration approach used leads to stable variances in cointegration residuals. The adapted Breusch-Pagan test procedure is developed to test for the presence of heteroscedasticity in cointegration residuals obtained from the nonlinear cointegration analysis. Examples using three different time series data sets—that is, one with a nonlinear quadratic deterministic trend, another with a nonlinear exponential deterministic trend, and experimental data from a wind turbine drivetrain—are used to illustrate the method and demonstrate possible practical applications. The results show that the proposed approach can be used for effective removal of nonlinear trends from various types of data, allowing for possible condition monitoring applications.
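The Breusch-Pagan idea used in this record, regressing squared residuals on an explanatory variable and forming the n·R² Lagrange-multiplier statistic, can be sketched as follows. The data are synthetic and the implementation is a generic version of the test, not the authors' adapted procedure.

```python
import numpy as np

def breusch_pagan(residuals, regressor):
    """Breusch-Pagan-style LM test: do squared residuals depend on the regressor?

    Returns n * R^2 from regressing residuals**2 on the regressor; under
    homoscedasticity this is asymptotically chi-square with 1 degree of
    freedom, so large values indicate heteroscedastic residuals."""
    y = residuals ** 2
    X = np.column_stack([np.ones_like(regressor), regressor])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    ss_res = np.sum((y - fitted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return len(y) * (1.0 - ss_res / ss_tot)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 500)
homo = rng.normal(0.0, 1.0, 500)                      # stable variance
hetero = rng.normal(0.0, 1.0, 500) * (0.2 + 2.0 * t)  # variance grows with t
lm_homo = breusch_pagan(homo, t)
lm_hetero = breusch_pagan(hetero, t)
```

Comparing the statistic against the chi-square(1) critical value (about 3.84 at the 5% level) flags the growing-variance series while leaving the homoscedastic one untouched.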

  9. A proposed approach to monitor private-sector policies and practices related to food environments, obesity and non-communicable disease prevention.

    Science.gov (United States)

    Sacks, G; Swinburn, B; Kraak, V; Downs, S; Walker, C; Barquera, S; Friel, S; Hawkes, C; Kelly, B; Kumanyika, S; L'Abbé, M; Lee, A; Lobstein, T; Ma, J; Macmullan, J; Mohan, S; Monteiro, C; Neal, B; Rayner, M; Sanders, D; Snowdon, W; Vandevijvere, S

    2013-10-01

    Private-sector organizations play a critical role in shaping the food environments of individuals and populations. However, there is currently very limited independent monitoring of private-sector actions related to food environments. This paper reviews previous efforts to monitor the private sector in this area, and outlines a proposed approach to monitor private-sector policies and practices related to food environments, and their influence on obesity and non-communicable disease (NCD) prevention. A step-wise approach to data collection is recommended, in which the first ('minimal') step is the collation of publicly available food and nutrition-related policies of selected private-sector organizations. The second ('expanded') step assesses the nutritional composition of each organization's products, their promotions to children, their labelling practices, and the accessibility, availability and affordability of their products. The third ('optimal') step includes data on other commercial activities that may influence food environments, such as political lobbying and corporate philanthropy. The proposed approach will be further developed and piloted in countries of varying size and income levels. There is potential for this approach to enable national and international benchmarking of private-sector policies and practices, and to inform efforts to hold the private sector to account for their role in obesity and NCD prevention. © 2013 The Authors. Obesity Reviews published by John Wiley & Sons Ltd on behalf of the International Association for the Study of Obesity.

  10. A PSO approach for preventive maintenance scheduling optimization

    International Nuclear Information System (INIS)

    Pereira, C.M.N.A.; Lapa, C.M.F.; Mol, A.C.A.; Luz, A.F. da

    2009-01-01

    This work presents a Particle Swarm Optimization (PSO) approach for preventive maintenance policy optimization, focused on reliability and cost. The probabilistic model for reliability and cost evaluation is developed in such a way that flexible intervals between maintenance are allowed. As PSO is skilled for real-coded continuous spaces, a non-conventional codification has been developed in order to allow PSO to solve scheduling problems (which are discrete) with a variable number of maintenance interventions. In order to evaluate the proposed methodology, the High Pressure Injection System (HPIS) of a typical 4-loop PWR has been considered. Results demonstrate the ability to find optimal solutions, for which expert knowledge had to be automatically discovered by PSO. (author)
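A minimal PSO sketch in the spirit of this record: particles search over maintenance intervals for a toy reliability/cost trade-off. The cost model, bounds and swarm coefficients are invented for illustration and are far simpler than the HPIS model of the paper.

```python
import random

def cost(intervals, c_maint=100.0, c_fail=1.0):
    """Toy maintenance cost: frequent maintenance is expensive, while long
    intervals raise the expected failure cost.  Per component the cost is
    c_maint/t + c_fail*t, minimized at t* = sqrt(c_maint/c_fail) = 10."""
    return sum(c_maint / t + c_fail * t for t in intervals)

def pso(n_dim=3, n_particles=20, n_iter=200, lo=1.0, hi=50.0, seed=5):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(n_dim)] for _ in range(n_particles)]
    vel = [[0.0] * n_dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    gbest = min(pos, key=cost)[:]                # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(n_dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()
```

With these settings the swarm settles near an interval of 10 for each component, the analytic optimum of the toy cost model.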

  11. An iterative approach for the optimization of pavement maintenance management at the network level.

    Science.gov (United States)

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand, but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.

  12. An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level

    Directory of Open Access Journals (Sweden)

    Cristina Torres-Machí

    2014-01-01

    Full Text Available Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand, but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.

  13. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    Science.gov (United States)

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.

  14. Groundwater detection monitoring system design under conditions of uncertainty

    NARCIS (Netherlands)

    Yenigül, N.B.

    2006-01-01

    Landfills represent a wide-spread and significant threat to groundwater quality. In this thesis a methodology was developed for optimal groundwater monitoring system design at landfill sites under conditions of uncertainty. First a decision analysis approach was presented for optimal

  15. A Novel Measurement Matrix Optimization Approach for Hyperspectral Unmixing

    Directory of Open Access Journals (Sweden)

    Su Xu

    2017-01-01

    Full Text Available Each pixel in the hyperspectral unmixing process is modeled as a linear combination of endmembers, which can be expressed in the form of linear combinations of a number of pure spectral signatures that are known in advance. However, the limitations of Gaussian random matrices in computational complexity and sparsity affect the efficiency and accuracy. This paper proposes a novel approach for the optimization of the measurement matrix in compressive sensing (CS) theory for hyperspectral unmixing. Firstly, a new Toeplitz-structured chaotic measurement matrix (TSCMM) is formed by pseudo-random chaotic elements, which can be implemented by simple hardware; secondly, rank revealing QR factorization with eigenvalue decomposition is presented to speed up the measurement time; finally, an orthogonal gradient descent method for measurement matrix optimization is used to achieve optimal incoherence. Experimental results demonstrate that the proposed approach can lead to better CS reconstruction performance with low extra computational cost in hyperspectral unmixing.
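A sketch of the Toeplitz-structured chaotic measurement matrix idea: a logistic-map sequence fills the diagonals of a Toeplitz matrix used for compressive measurements. The map, initial condition and scaling below are assumptions for illustration; the record's QR factorization and gradient-descent incoherence optimization steps are omitted.

```python
import numpy as np

def toeplitz_chaotic_matrix(m, n, x0=0.33, mu=4.0, skip=1000):
    """Build an m x n Toeplitz measurement matrix whose diagonals are drawn
    from a logistic-map chaotic sequence (a stand-in for the TSCMM idea)."""
    x, seq = x0, []
    for i in range(skip + m + n - 1):
        x = mu * x * (1.0 - x)              # logistic map, chaotic at mu = 4
        if i >= skip:                        # discard transient iterates
            seq.append(2.0 * x - 1.0)        # map (0, 1) -> (-1, 1)
    seq = np.array(seq)
    col = seq[:m]                            # first column
    row = seq[m - 1:]                        # remaining diagonals above the main one
    A = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            A[i, j] = col[i - j] if i >= j else row[j - i]
    return A / np.sqrt(m)                    # scale for energy preservation

A = toeplitz_chaotic_matrix(32, 128)
# Compressive measurement y = A x of a 4-sparse signal
rng = np.random.default_rng(4)
x = np.zeros(128)
x[rng.choice(128, 4, replace=False)] = rng.normal(size=4)
y = A @ x
```

Because every entry depends only on i - j, the matrix can be generated from a single m + n - 1 element sequence, which is what makes simple hardware implementations attractive.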

  16. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This chapter is devoted to the study of an extension of dynamic programming approach that allows sequential optimization of exact decision rules relative to the length and coverage. It contains also results of experiments with decision tables from

  17. OPTIMAL TRAFFIC MANAGEMENT FOR AIRCRAFT APPROACHING THE AERODROME LANDING AREA

    Directory of Open Access Journals (Sweden)

    Igor B. Ivenin

    2018-01-01

    Full Text Available The research proposes a mathematical optimization approach for arriving aircraft traffic in the aerodrome zone. An airfield with two parallel runways capable of operating independently of each other is modeled. The incoming traffic of aircraft is described by a Poisson flow of random events. The arriving aircraft are distributed by the air traffic controller between the two runways. There is one approach flight path for each runway. Both approach paths have a common starting point, each approach path has a different length, and the approach trajectories do not overlap. For each of the two approach procedures, the air traffic controller sets the average speed of the aircraft. The model of the airfield and aerodrome zone is treated as a two-channel queueing (mass service) system with refusals in service. Each of the two servicing units includes an approach trajectory, a glide path and a runway, and can be in one of two states: free or busy. The probabilities of the states of the servicing units are described by a Kolmogorov system of differential equations. The number of refusals in service over the simulated time interval, described by an integral functional, is used as the criterion for assessing the quality of functioning of the queueing system. The functions describing the distribution of aircraft flows between the runways, as well as the functions describing the average speed of the aircraft, are the control parameters. The optimization problem consists in finding the values of the control parameters for which the value of the criterion functional is minimal. To solve the formulated optimization problem, the L.S. Pontryagin maximum principle is applied. The form of the Hamiltonian function and the conjugate system of differential equations is given. The structure of the optimal control has been studied for two different cases of restrictions on the control of the distribution of incoming aircraft
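The Kolmogorov state equations of a two-runway loss system like the one described above can be integrated numerically to evaluate the refusal criterion for a fixed control. The routing rule (runway 1 preferred when free), the rates and the horizon below are invented for illustration; the record's optimal-control machinery is not reproduced.

```python
import numpy as np

def refusals(lam, mu1, mu2, T=200.0, dt=0.005):
    """Integrate the Kolmogorov equations of a two-runway loss system.

    States: (0,0) both free, (1,0) runway 1 busy, (0,1) runway 2 busy,
    (1,1) both busy.  Poisson arrivals (rate lam) go to runway 1 when it is
    free, otherwise to runway 2; when both are busy the aircraft is refused.
    Service (approach + landing) rates are mu1 and mu2.  Returns the
    expected number of refusals on [0, T] (the integral criterion)."""
    p = np.array([1.0, 0.0, 0.0, 0.0])       # p00, p10, p01, p11 at t = 0
    refused = 0.0
    for _ in range(int(T / dt)):
        p00, p10, p01, p11 = p
        dp = np.array([
            -lam * p00 + mu1 * p10 + mu2 * p01,
            lam * p00 - (lam + mu1) * p10 + mu2 * p11,
            -(lam + mu2) * p01 + mu1 * p11,
            lam * (p10 + p01) - (mu1 + mu2) * p11,
        ])
        refused += lam * p11 * dt             # refusal flow while both busy
        p = p + dp * dt                       # explicit Euler step
    return refused

# Faster average approach speeds (higher service rates) mean fewer refusals.
slow = refusals(lam=1.0, mu1=0.5, mu2=0.4)
fast = refusals(lam=1.0, mu1=1.0, mu2=0.8)
```

The controller's choices (routing split, approach speeds) enter through lam, mu1 and mu2, so an optimizer can search over them to minimize the returned refusal count.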

  18. Self-optimizing approach for automated laser resonator alignment

    Science.gov (United States)

    Brecher, C.; Schmitt, R.; Loosen, P.; Guerrero, V.; Pyschny, N.; Pavim, A.; Gatej, A.

    2012-02-01

    Nowadays, the assembly of laser systems is dominated by manual operations, involving elaborate alignment by means of adjustable mountings. From a competition perspective, the most challenging problem in laser source manufacturing is price pressure, a result of cost competition exerted mainly from Asia. From an economical point of view, an automated assembly of laser systems defines a better approach to produce more reliable units at lower cost. However, the step from today's manual solutions towards an automated assembly requires parallel developments regarding product design, automation equipment and assembly processes. This paper introduces briefly the idea of self-optimizing technical systems as a new approach towards highly flexible automation. Technically, the work focuses on the precision assembly of laser resonators, which is one of the final and most crucial assembly steps in terms of beam quality and laser power. The paper presents a new design approach for miniaturized laser systems and new automation concepts for a robot-based precision assembly, as well as passive and active alignment methods, which are based on a self-optimizing approach. Very promising results have already been achieved, considerably reducing the duration and complexity of the laser resonator assembly. These results as well as future development perspectives are discussed.

  19. A Hybrid Genetic Algorithm Approach for Optimal Power Flow

    Directory of Open Access Journals (Sweden)

    Sydulu Maheswarapu

    2011-08-01

    Full Text Available This paper puts forward a reformed hybrid genetic algorithm (GA) based approach to the optimal power flow. In the approach followed here, continuous variables are designed using a real-coded GA and discrete variables are processed as binary strings. The outcomes are compared with many other methods, such as the simple genetic algorithm (GA), adaptive genetic algorithm (AGA), differential evolution (DE), particle swarm optimization (PSO) and music based harmony search (MBHS), on an IEEE 30-bus test bed with a total load of 283.4 MW. The proposed algorithm is found to offer the lowest fuel cost, and to be computationally faster, robust, superior and promising from its convergence characteristics.
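
    The mixed encoding this abstract describes, real-coded genes for continuous variables alongside a bit string for discrete ones, can be sketched in a minimal GA. The operator choices below (truncation selection, blend crossover for reals, uniform crossover with bit-flip for the binary part) are illustrative assumptions, not the paper's actual hybrid.

    ```python
    import random

    def hybrid_ga(fitness, n_real, n_bits, bounds, pop_size=30, gens=60, seed=1):
        """Minimal hybrid GA: each individual is (real_genes, bit_genes).
        Minimizes fitness(reals, bits); purely a sketch of the encoding split."""
        rng = random.Random(seed)
        lo, hi = bounds
        def rand_ind():
            return ([rng.uniform(lo, hi) for _ in range(n_real)],
                    [rng.randrange(2) for _ in range(n_bits)])
        pop = [rand_ind() for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda ind: fitness(*ind))
            elite = pop[:pop_size // 2]                      # truncation selection
            children = []
            while len(elite) + len(children) < pop_size:
                (r1, b1), (r2, b2) = rng.sample(elite, 2)
                a = rng.random()
                child_r = [a * x + (1 - a) * y for x, y in zip(r1, r2)]  # blend crossover
                child_b = [int((x if rng.random() < 0.5 else y)         # uniform crossover
                               ^ (rng.random() < 0.05))                 # + 5% bit-flip mutation
                           for x, y in zip(b1, b2)]
                children.append((child_r, child_b))
            pop = elite + children
        return min(pop, key=lambda ind: fitness(*ind))
    ```

    With a toy objective (sphere function on the reals plus a count of set bits), the search drives both encodings toward their optima simultaneously, which is the point of treating the problem monolithically.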

  20. Monitoring and optimizing the co-composting of dewatered sludge: a mixture experimental design approach.

    Science.gov (United States)

    Komilis, Dimitrios; Evangelou, Alexandros; Voudrias, Evangelos

    2011-09-01

    The management of dewatered wastewater sludge is a major issue worldwide. Sludge disposal to landfills is not sustainable and thus alternative treatment techniques are being sought. The objective of this work was to determine optimal mixing ratios of dewatered sludge with other organic amendments in order to maximize the degradability of the mixtures during composting. This objective was achieved using mixture experimental design principles. An additional objective was to study the impact of the initial C/N ratio and moisture content on the co-composting process of dewatered sludge. The composting process was monitored through measurements of O2 uptake rates, CO2 evolution, temperature profile and solids reduction. Eight (8) runs were performed in 100 L insulated air-tight bioreactors under a dynamic air flow regime. The initial mixtures were prepared using dewatered wastewater sludge, mixed paper wastes, food wastes, tree branches and sawdust at various initial C/N ratios and moisture contents. According to empirical modeling, mixtures of sludge and food waste at a 1:1 ratio (w/w, wet weight) maximize degradability. Structural amendments should be maintained below 30% to reach thermophilic temperatures. The initial C/N ratio and initial moisture content of the mixture were not found to influence the decomposition process. The bio C/bio N ratio started from around 10 for all runs, decreased during the middle of the process and increased to up to 20 at the end of the process. The solid carbon reduction of the mixtures without the branches ranged from 28% to 62%, whilst solid N reductions ranged from 30% to 63%. Respiratory quotients had a decreasing trend throughout the composting process. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    Energy Technology Data Exchange (ETDEWEB)

    AlRashidi, M.R., E-mail: malrash2002@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait); AlHajri, M.F., E-mail: mfalhajri@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait)

    2011-10-15

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.

  2. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    International Nuclear Information System (INIS)

    AlRashidi, M.R.; AlHajri, M.F.

    2011-01-01

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.

  3. Optimization of the x-ray monitoring angle for creating a correlation model between internal and external respiratory signals

    Energy Technology Data Exchange (ETDEWEB)

    Akimoto, Mami; Nakamura, Mitsuhiro; Mukumoto, Nobutaka; Yamada, Masahiro; Ueki, Nami; Matsuo, Yukinori; Sawada, Akira; Mizowaki, Takashi; Kokubo, Masaki; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan); Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507, Japan and Department of Radiological Technology, Faculty of Medical Science, Kyoto College of Medical Science, Nantan, Kyoto 622-0041 (Japan); Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan); Department of Radiation Oncology, Kobe City Medical Center General Hospital, Kobe, Hyogo 650-0047, Japan and Division of Radiation Oncology, Institute of Biomedical Research and Innovation, Kobe, Hyogo 650-0047 (Japan); Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan)

    2012-10-15

    Purpose: To perform dynamic tumor tracking irradiation with the Vero4DRT (MHI-TM2000), a correlation model [four dimensional (4D) model] between the displacement of infrared markers on the abdominal wall and the three-dimensional position of a tumor indicated by a minimum of three implanted gold markers is required. However, the gold markers cannot be detected successfully on fluoroscopic images under the following situations: (1) overlapping of the gold markers; and (2) a low intensity ratio of the gold marker to its surroundings. In the present study, the authors proposed a method to readily determine the optimal x-ray monitoring angle for creating a 4D model utilizing computed tomography (CT) images. Methods: The Vero4DRT mounting two orthogonal kV x-ray imaging subsystems can separately rotate the gantry along an O-shaped guide-lane and the O-ring along its vertical axis. The optimal x-ray monitoring angle was determined on CT images by minimizing the root-sum-square of water equivalent path lengths (WEPLs) on the orthogonal lines passing all of the gold markers while rotating the O-ring and the gantry. The x-ray monitoring angles at which the distances between the gold markers were within 5 mm at the isocenter level were excluded to prevent false detection of the gold markers in consideration of respiratory motions. First, the relationship between the WEPLs (unit: mm) and the intensity ratios of the gold markers was examined to assess the validity of our proposed method. Second, our proposed method was applied to the 4D-CT images at the end-expiration phase for 11 lung cancer patients who had four to five gold markers. To prove the necessity of the x-ray monitoring angle optimization, the intensity ratios of the least visible markers (minimum intensity ratios) that were estimated from the WEPLs were compared under the following conditions: the optimal x-ray monitoring angle and the angles used for setup verification. Additionally, the intra- and

  4. A robust optimization based approach for microgrid operation in deregulated environment

    International Nuclear Information System (INIS)

    Gupta, R.A.; Gupta, Nand Kishor

    2015-01-01

    Highlights: • RO based approach developed for optimal MG operation in deregulated environment. • Wind uncertainty modeled by interval forecasting through ARIMA model. • Proposed approach evaluated using two realistic case studies. • Proposed approach evaluated the impact of degree of robustness. • Proposed approach gives a significant reduction in operation cost of microgrid. - Abstract: Micro Grids (MGs) are clusters of Distributed Energy Resource (DER) units and loads. MGs are self-sustainable and generally operated in two modes: (1) grid connected and (2) grid isolated. In a deregulated environment, the operation of the MG is managed by the Microgrid Operator (MO) with the objective of minimizing the total cost of operation. MG management is crucial in the deregulated power system due to (i) the integration of intermittent renewable sources such as wind and Photo Voltaic (PV) generation, and (ii) volatile grid prices. This paper presents a robust optimization based approach for optimal MG management considering wind power uncertainty. A time series based Autoregressive Integrated Moving Average (ARIMA) model is used to characterize the wind power uncertainty through interval forecasting. The proposed approach is illustrated through a case study having both dispatchable and non-dispatchable generators operating in different modes. Further, the impact of the degree of robustness on the total cost of operation of the MG is analyzed in both cases. A comparative analysis between the results obtained using the proposed approach and an existing approach shows the strength of the proposed approach in cost minimization in MG management.

  5. A risk-based approach to liquid effluent monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Hull, L.C.

    1995-10-01

    DOE Order 5400.1 identifies six objectives of a liquid effluent monitoring program. A strategy is proposed that meets these objectives in one of two ways: (1) by showing that effluent concentrations are below concentration limits set by permits or are below concentrations that could cause environmental problems or (2) by showing that concentrations in effluent have not changed from a period when treatment processes were in control and there were no unplanned releases. The intensity of liquid effluent monitoring should be graded to the importance of the source being monitored. This can be accomplished by determining the risk posed by the source. A definition of risk is presented that defines risk in terms of the statistical probability of exceeding a release limit and the time available to recover from an exceedance of a release limit. Three examples are presented that show this approach to grading an effluent monitoring program can be implemented at the Idaho National Engineering Laboratory and will reduce monitoring requirements.
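
    The risk definition in this abstract, the probability of exceeding a release limit combined with the recovery time, can be sketched numerically. The lognormal assumption for concentrations and the grading thresholds below are purely illustrative assumptions, not values from the report.

    ```python
    import math
    import statistics

    def exceedance_probability(samples, limit):
        """Estimate P(concentration > limit), assuming (for illustration only)
        that log-concentrations are roughly normal."""
        logs = [math.log(s) for s in samples]
        mu = statistics.mean(logs)
        sigma = statistics.stdev(logs)
        z = (math.log(limit) - mu) / sigma
        return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

    def monitoring_grade(p_exceed, recovery_days):
        """Toy grading rule: monitor more intensively when an exceedance is
        likely or when there is little time to recover from one."""
        score = p_exceed / max(recovery_days, 1)
        if score > 0.01:
            return "monthly"
        if score > 0.001:
            return "quarterly"
        return "annual"
    ```

    A source whose concentrations sit far below its limit gets a near-zero exceedance probability and a light monitoring grade, which is the graded-approach idea in the abstract.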

  6. A risk-based approach to liquid effluent monitoring

    International Nuclear Information System (INIS)

    Hull, L.C.

    1995-10-01

    DOE Order 5400.1 identifies six objectives of a liquid effluent monitoring program. A strategy is proposed that meets these objectives in one of two ways: (1) by showing that effluent concentrations are below concentration limits set by permits or are below concentrations that could cause environmental problems or (2) by showing that concentrations in effluent have not changed from a period when treatment processes were in control and there were no unplanned releases. The intensity of liquid effluent monitoring should be graded to the importance of the source being monitored. This can be accomplished by determining the risk posed by the source. A definition of risk is presented that defines risk in terms of the statistical probability of exceeding a release limit and the time available to recover from an exceedance of a release limit. Three examples are presented that show this approach to grading an effluent monitoring program can be implemented at the Idaho National Engineering Laboratory and will reduce monitoring requirements.

  7. Using Geoscience and Geostatistics to Optimize Groundwater Monitoring Networks at the Savannah River Site

    International Nuclear Information System (INIS)

    Tuckfield, R.C.

    2001-01-01

    A team of scientists, engineers, and statisticians was assembled to review the operation efficiency of groundwater monitoring networks at US Department of Energy Savannah River Site (SRS). Subsequent to a feasibility study, this team selected and conducted an analysis of the A/M area groundwater monitoring well network. The purpose was to optimize the number of groundwater wells requisite for monitoring the plumes of the principal constituent of concern, viz., trichloroethylene (TCE). The project gathered technical expertise from the Savannah River Technology Center (SRTC), the Environmental Restoration Division (ERD), and the Environmental Protection Department (EPD) of SRS

  8. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    Science.gov (United States)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.

  9. A Statistical Approach to Optimizing Concrete Mixture Design

    OpenAIRE

    Ahmad, Shamsad; Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3^3). A total of 27 concrete mixtures with three replicate...
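
    The 3^3 full factorial plan mentioned in this abstract (three factors, three levels each, 27 trial mixtures) is straightforward to enumerate. The level labels below are placeholders; the actual factors and levels are those of the experimental program.

    ```python
    from itertools import product

    def full_factorial(levels_per_factor):
        """Enumerate every combination of factor levels; a 3^3 design
        yields 27 runs, matching the 27 trial mixtures in the abstract."""
        return list(product(*levels_per_factor))

    # Hypothetical labels standing in for the three mixture factors.
    design = full_factorial([["low", "mid", "high"]] * 3)
    ```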

  10. Does intense monitoring matter? A quantile regression approach

    Directory of Open Access Journals (Sweden)

    Fekri Ali Shawtari

    2017-06-01

    Full Text Available Corporate governance has become a centre of attention in corporate management at both micro and macro levels due to the adverse consequences and repercussions of insufficient accountability. In this study, we include the Malaysian stock market as a sample to explore the impact of intense monitoring on the relationship between intellectual capital performance and market valuation. The objectives of the paper are threefold: (i) to investigate whether intense monitoring affects the intellectual capital performance of listed companies; (ii) to explore the impact of intense monitoring on firm value; (iii) to examine the extent to which directors serving on more than two board committees affect the linkage between intellectual capital performance and firms' value. We employ two approaches, namely, Ordinary Least Squares (OLS) and the quantile regression approach. The purpose of the latter is to estimate and generate inference about conditional quantile functions. This method is useful when the conditional distribution does not have a standard shape, such as an asymmetric, fat-tailed, or truncated distribution. In terms of variables, intellectual capital is measured using the value added intellectual coefficient (VAIC), while market valuation is proxied by the firm's market capitalization. The findings of the quantile regression show that some of the results do not coincide with those of OLS. We found that the intensity of monitoring does not influence the intellectual capital of all firms. It is also evident that the intensity of monitoring does not influence market valuation. However, to some extent, it moderates the relationship between intellectual capital performance and market valuation. This paper contributes to the existing literature as it presents new empirical evidence on the moderating effects of the intensity of monitoring by the board committees on the relationship between performance and intellectual capital.
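
    The reason quantile regression can disagree with OLS, as this abstract reports, is that it minimizes an asymmetric "pinball" loss rather than squared error, so it targets a conditional quantile instead of the conditional mean. A minimal sketch of that loss (not the paper's estimation code):

    ```python
    def pinball_loss(y_true, y_pred, tau):
        """Asymmetric loss underlying quantile regression: under-predictions
        are weighted tau, over-predictions are weighted (1 - tau)."""
        u = y_true - y_pred
        return tau * u if u >= 0 else (tau - 1) * u

    def best_constant(ys, tau):
        """The constant prediction minimizing total pinball loss over a sample
        is its empirical tau-quantile; a quantile regression estimates this
        conditionally on covariates."""
        return min(ys, key=lambda c: sum(pinball_loss(y, c, tau) for y in ys))
    ```

    For tau = 0.5 the minimizer is the median, so on a skewed or fat-tailed distribution it will differ from the mean that OLS recovers.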

  11. Reliability-redundancy optimization by means of a chaotic differential evolution approach

    International Nuclear Information System (INIS)

    Coelho, Leandro dos Santos

    2009-01-01

    The reliability design is related to the performance analysis of many engineering systems. Reliability-redundancy optimization problems involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, subject to cost, weight, and volume constraints. Classical mathematical methods have failed in handling nonconvexities and nonsmoothness in optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have been given much attention by many researchers due to their ability to find an almost global optimal solution in reliability-redundancy optimization problems. Evolutionary algorithms (EAs), paradigms of the evolutionary computation field, are stochastic and robust meta-heuristics useful for solving reliability-redundancy optimization problems. EAs such as genetic algorithms, evolutionary programming, evolution strategies and differential evolution are being used to find global or near-global optimal solutions. A differential evolution approach based on chaotic sequences using Lozi's map is proposed in this paper for reliability-redundancy optimization problems. The proposed method not only has a fast convergence rate but also maintains the diversity of the population so as to escape from local optima. An application example in reliability-redundancy optimization based on the overspeed protection system of a gas turbine is given to show its usefulness and efficiency. Simulation results show that the application of deterministic chaotic sequences instead of random sequences is a possible strategy to improve the performance of differential evolution.
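
    The core device in this abstract, substituting a deterministic chaotic sequence for the uniform random numbers inside differential evolution, can be sketched from the Lozi map. The parameter values and the min-max rescaling below are common textbook choices, assumed here for illustration rather than taken from the paper.

    ```python
    def lozi_sequence(n, a=1.7, b=0.5, x0=0.1, y0=0.1):
        """Chaotic sequence from the Lozi map
            x_{k+1} = 1 - a*|x_k| + y_k,   y_{k+1} = b * x_k,
        rescaled to [0, 1] so it can stand in for the uniform random draws
        used by differential evolution's mutation and crossover steps."""
        xs, x, y = [], x0, y0
        for _ in range(n):
            x, y = 1.0 - a * abs(x) + y, b * x
            xs.append(x)
        lo, hi = min(xs), max(xs)
        return [(v - lo) / (hi - lo) for v in xs]
    ```

    In a chaotic DE variant, successive values of such a sequence replace calls to the random number generator, e.g. when deciding crossover or scaling the difference vector; the appeal is that the sequence is deterministic yet ergodic.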

  12. Supplemental Assessment of the Y-12 Groundwater Protection Program Using Monitoring and Remediation Optimization System Software

    Energy Technology Data Exchange (ETDEWEB)

    Elvado Environmental LLC; GSI Environmental LLC

    2009-01-01

    A supplemental quantitative assessment of the Groundwater Protection Program (GWPP) at the Y-12 National Security Complex (Y-12) in Oak Ridge, TN was performed using the Monitoring and Remediation Optimization System (MAROS) software. This application was previously used as part of a similar quantitative assessment of the GWPP completed in December 2005, hereafter referenced as the 'baseline' MAROS assessment (BWXT Y-12 L.L.C. [BWXT] 2005). The MAROS software contains modules that apply statistical analysis techniques to an existing GWPP analytical database in conjunction with hydrogeologic factors, regulatory framework, and the location of potential receptors, to recommend an improved groundwater monitoring network and optimum sampling frequency for individual monitoring locations. The goal of this supplemental MAROS assessment of the Y-12 GWPP is to review and update monitoring network optimization recommendations resulting from the 2005 baseline report using data collected through December 2007. The supplemental MAROS assessment is based on the findings of the baseline MAROS assessment and includes only the groundwater sampling locations (wells and natural springs) currently granted 'Active' status in accordance with the Y-12 GWPP Monitoring Optimization Plan (MOP). The results of the baseline MAROS assessment provided technical rationale regarding the 'Active' status designations defined in the MOP (BWXT 2006). One objective of the current report is to provide a quantitative review of data collected from Active but infrequently sampled wells to confirm concentrations at these locations. This supplemental MAROS assessment does not include the extensive qualitative evaluations similar to those presented in the baseline report.

  13. Monitoring of the xrootd federations

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    Construction of the data federations and understanding the impact of the new approach to data management on user analysis requires complete and detailed monitoring. Monitoring functionality should cover the status of all components of the federated storage, measuring data traffic and data access performance, as well as being able to detect any kind of inefficiencies and to provide hints for resource optimization and effective data distribution policy. Data mining of the collected monito...

  14. An Optimization Approach to the Dynamic Allocation of Economic Capital

    NARCIS (Netherlands)

    Laeven, R.J.A.; Goovaerts, M.J.

    2004-01-01

    We propose an optimization approach to allocating economic capital, distinguishing between an allocation or raising principle and a measure for the risk residual. The approach is applied both at the aggregate (conglomerate) level and at the individual (subsidiary) level and yields an integrated

  15. Optimized Scheduling of Smart Meter Data Access for Real-time Voltage Quality Monitoring

    DEFF Research Database (Denmark)

    Kemal, Mohammed Seifu; Olsen, Rasmus Løvenstein; Schwefel, Hans-Peter

    2018-01-01

    Active low-voltage distribution grids that support high integration of distributed generation such as photovoltaics and wind turbines require real-time voltage monitoring. At the same time, countries in Europe such as Denmark have close to 100% rollout of smart metering infrastructure. The metering infrastructure has limitations in providing real-time measurements with small time granularity. This paper presents an algorithm for optimized scheduling of smart meter data access to provide real-time voltage quality monitoring. The algorithm is analyzed using a real distribution grid in Denmark...

  16. Revisiting support optimization at the Driskos tunnel using a quantitative risk approach

    Directory of Open Access Journals (Sweden)

    J. Connor Langford

    2016-04-01

    Full Text Available With the scale and cost of geotechnical engineering projects increasing rapidly over the past few decades, there is a clear need for the careful consideration of calculated risks in design. While risk is typically dealt with subjectively through the use of conservative design parameters, with the advent of reliability-based methods, this no longer needs to be the case. Instead, a quantitative risk approach can be considered that incorporates uncertainty in ground conditions directly into the design process to determine the variable ground response and support loads. This allows for the optimization of support on the basis of both worker safety and economic risk. This paper presents the application of such an approach to review the design of the initial lining system along a section of the Driskos twin tunnels as part of the Egnatia Odos highway in northern Greece. Along this section of tunnel, weak rock masses were encountered as well as high in situ stress conditions, which led to excessive deformations and failure of the as built temporary support. Monitoring data were used to validate the rock mass parameters selected in this area and a risk approach was used to determine, in hindsight, the most appropriate support category with respect to the cost of installation and expected cost of failure. Different construction sequences were also considered in the context of both convenience and risk cost.

  17. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    Science.gov (United States)

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the relevant differential equations in the generated nonlinear programming (NLP) problem, limits its wide application in engineering optimization for industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is proposed to improve optimization efficiency for industrial dynamic processes; it employs costate gradient formulae and a fast approximate scheme to solve the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are demonstrated as illustrations. The research results show that the proposed fast approach achieves fine performance: at least 90% of the computation time can be saved in contrast to the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
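
    The transcription step that CVP performs, reducing a control trajectory to a few piecewise-constant segment values and turning the dynamic problem into a finite-dimensional NLP, can be sketched on a toy system. The dynamics dx/dt = -x + u, the forward-Euler integrator, and the crude random-search "NLP solver" below are all assumptions for illustration; the paper's method uses costate gradients instead.

    ```python
    import random

    def simulate(u_segments, x0=0.0, T=1.0, steps_per_seg=20):
        """Integrate dx/dt = -x + u with forward Euler, where the control u(t)
        is piecewise constant: one decision value per time segment (CVP)."""
        n = len(u_segments)
        dt = T / (n * steps_per_seg)
        x = x0
        for u in u_segments:
            for _ in range(steps_per_seg):
                x += dt * (-x + u)
        return x

    def cvp_random_search(target, n_seg=4, iters=300, seed=0):
        """Crude solver over the segment values, minimizing (x(T) - target)^2.
        Real CVP codes solve this NLP with gradient information."""
        rng = random.Random(seed)
        best_u = [0.0] * n_seg
        best_cost = (simulate(best_u) - target) ** 2
        for _ in range(iters):
            cand = [u + rng.gauss(0.0, 0.3) for u in best_u]
            cost = (simulate(cand) - target) ** 2
            if cost < best_cost:
                best_u, best_cost = cand, cost
        return best_u, best_cost
    ```

    Every candidate evaluation requires a full ODE simulation, which is exactly the cost the abstract's fast approximate integration scheme targets.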

  18. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning.

    Science.gov (United States)

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-01

    In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. 
Results also indicated that pareto shows promise in optimizing the number
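
    The "database of Pareto nondominated solutions" this abstract describes rests on a simple dominance test: a plan is kept only if no other plan is at least as good in every objective and strictly better in one. A minimal sketch (minimization assumed in all objectives; the sample points are made up):

    ```python
    def dominates(a, b):
        """a dominates b if a is no worse in every objective
        and strictly better in at least one (all objectives minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        """Keep only the nondominated solutions: the trade-off set a
        multiobjective GA reports instead of a single optimum."""
        return [p for p in points if not any(dominates(q, p) for q in points)]
    ```

    For IMRT planning the objective vector would hold, e.g., a PTV conformity score and OAR dose scores; the front then exposes the necessary compromises between them.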

  19. A Swarm Optimization approach for clinical knowledge mining.

    Science.gov (United States)

    Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A

    2015-10-01

    Rule-based classification is a typical data mining task that is being used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal ruleset that satisfies the requirements of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA and Decision Tables, is analyzed. The efficiency of WSO is also compared with traditional Particle Swarm Optimization. Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for the accuracy of PSO and WSO are 0.5921 and 0.5846, respectively. WSO provides accurate and concise rulesets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule base optimization. The trade-off between prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and classification accuracy. Copyright

  20. Spatiotemporal radiotherapy planning using a global optimization approach

    Science.gov (United States)

    Adibi, Ali; Salari, Ehsan

    2018-02-01

    This paper aims at quantifying the extent of potential therapeutic gain, measured using biologically effective dose (BED), that can be achieved by altering the radiation dose distribution over treatment sessions in fractionated radiotherapy. To that end, a spatiotemporally integrated planning approach is developed, where the spatial and temporal dose modulations are optimized simultaneously. The concept of equivalent uniform BED (EUBED) is used to quantify and compare the clinical quality of spatiotemporally heterogeneous dose distributions in target and critical structures. This gives rise to a large-scale non-convex treatment-plan optimization problem, which is solved using global optimization techniques. The proposed spatiotemporal planning approach is tested on two stylized cancer cases resembling two different tumor sites and sensitivity analysis is performed for radio-biological and EUBED parameters. Numerical results validate that spatiotemporal plans are capable of delivering a larger BED to the target volume without increasing the BED in critical structures compared to conventional time-invariant plans. In particular, this additional gain is attributed to the irradiation of different regions of the target volume at different treatment sessions. Additionally, the trade-off between the potential therapeutic gain and the number of distinct dose distributions is quantified, which suggests a diminishing marginal gain as the number of dose distributions increases.
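
    The BED referred to above comes from the standard linear-quadratic model; a minimal sketch (standard radiobiology, not the paper's EUBED machinery) shows why redistributing the same physical dose across fractions changes the biological effect:

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose (Gy) under the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)
```

    For example, 60 Gy delivered as 30 fractions of 2 Gy gives a lower BED than the same 60 Gy as 15 fractions of 4 Gy for a typical alpha/beta of 10 Gy, which is the kind of temporal lever the spatiotemporal plans exploit by irradiating different target regions at different sessions.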

  1. Optimized Autonomous Space In-situ Sensor-Web for volcano monitoring

    Science.gov (United States)

    Song, W.-Z.; Shirazi, B.; Kedar, S.; Chien, S.; Webb, F.; Tran, D.; Davis, A.; Pieri, D.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.

    2008-01-01

    In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)) is developing a prototype dynamic and scalable hazard monitoring sensor-web and applying it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) will have two-way communication capability between ground and space assets, use both space and ground data for optimal allocation of limited power and bandwidth resources on the ground, and use smart management of competing demands for limited space assets. It will also enable scalability and seamless infusion of future space and in-situ assets into the sensor-web. The prototype will be focused on volcano hazard monitoring at Mount St. Helens, which has been active since October 2004. The system is designed to be flexible and easily configurable for many other applications as well. The primary goals of the project are: 1) integrating complementary space (i.e., Earth Observing One (EO-1) satellite) and in-situ (ground-based) elements into an interactive, autonomous sensor-web; 2) advancing sensor-web power and communication resource management technology; and 3) enabling scalability for seamless infusion of future space and in-situ assets into the sensor-web. To meet these goals, we are developing: 1) a test-bed in-situ array with smart sensor nodes capable of making autonomous data acquisition decisions; 2) efficient self-organization algorithms for sensor-web topology to support efficient data communication and command control; 3) smart bandwidth allocation algorithms in which sensor nodes autonomously determine packet priorities based on mission needs and local bandwidth information in real time; and 4) remote network management and reprogramming tools. The space and in-situ control components of the system will be
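
    Item 3) above, priority-based bandwidth allocation, might look like the following greedy sketch. The packet fields and the per-cycle budget are illustrative assumptions, not OASIS code:

```python
def allocate_bandwidth(packets, budget):
    """Greedy priority scheduler sketch: send the highest-priority packets
    first until the per-cycle bandwidth budget is exhausted; the rest stay
    queued. Packet fields (`id`, `priority`, `size`) are hypothetical."""
    sent, queued, used = [], [], 0
    for pkt in sorted(packets, key=lambda p: p["priority"], reverse=True):
        if used + pkt["size"] <= budget:
            sent.append(pkt)
            used += pkt["size"]
        else:
            queued.append(pkt)
    return sent, queued
```

    In a real sensor-web the priorities themselves would be set autonomously by each node from mission needs (e.g. an ongoing seismic event outranks routine status telemetry), which is the "smart" part the record describes.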

  2. A Multivariate Quality Loss Function Approach for Optimization of Spinning Processes

    Science.gov (United States)

    Chakraborty, Shankar; Mitra, Ankan

    2018-05-01

    Recent advancements in textile industry have given rise to several spinning techniques, such as ring spinning, rotor spinning etc., which can be used to produce a wide variety of textile apparels so as to fulfil the end requirements of the customers. To achieve the best out of these processes, they should be utilized at their optimal parametric settings. However, in presence of multiple yarn characteristics which are often conflicting in nature, it becomes a challenging task for the spinning industry personnel to identify the best parametric mix which would simultaneously optimize all the responses. Hence, in this paper, the applicability of a new systematic approach in the form of multivariate quality loss function technique is explored for optimizing multiple quality characteristics of yarns while identifying the ideal settings of two spinning processes. It is observed that this approach performs well against the other multi-objective optimization techniques, such as desirability function, distance function and mean squared error methods. With slight modifications in the upper and lower specification limits of the considered quality characteristics, and constraints of the non-linear optimization problem, it can be successfully applied to other processes in textile industry to determine their optimal parametric settings.
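
    A common way to formalize a multivariate quality loss is the quadratic form L(y) = (y - t)^T K (y - t), whose off-diagonal terms capture correlated quality characteristics. The sketch below is a generic textbook form, not the paper's exact formulation, and the weight matrix values are illustrative:

```python
def multivariate_quality_loss(y, target, K):
    """Quadratic multivariate quality loss L = (y - t)^T K (y - t).
    K's diagonal weights individual characteristics; off-diagonal entries
    model correlated losses between characteristics (values illustrative)."""
    d = [yi - ti for yi, ti in zip(y, target)]
    n = len(d)
    return sum(d[i] * K[i][j] * d[j] for i in range(n) for j in range(n))
```

    Candidate parametric settings of a spinning process can then be ranked by evaluating this loss on their predicted yarn-characteristic vectors and choosing the minimizer.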

  3. Application of probabilistic risk based optimization approaches in environmental restoration

    International Nuclear Information System (INIS)

    Goldammer, W.

    1995-01-01

    The paper presents a general approach to site-specific risk assessments and optimization procedures. In order to account for uncertainties in the assessment of the current situation and future developments, optimization parameters are treated as probabilistic distributions. The assessments are performed within the framework of a cost-benefit analysis. Radiation hazards and conventional risks are treated within an integrated approach. Special consideration is given to the consequences of low-probability events such as earthquakes or major floods. Risks and financial costs are combined into an overall figure of detriment, allowing one to distinguish between the benefits of available reclamation options. The probabilistic analysis uses a Monte Carlo simulation technique. The paper demonstrates the applicability of this approach in aiding reclamation planning, using an example from the German reclamation program for uranium mining and milling sites
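
    The Monte Carlo combination of financial cost and monetized risk into one detriment figure can be sketched as follows. The cost model, distributions and option names are illustrative assumptions, not the paper's data:

```python
import random

def expected_detriment(option, n_samples=20000, seed=1):
    """Monte Carlo estimate of an overall detriment figure: an uncertain
    remediation cost plus the monetized risk of a rare event (earthquake,
    major flood). All fields and distributions are illustrative."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        cost = rng.gauss(option["cost_mean"], option["cost_sd"])
        event = rng.random() < option["event_prob"]
        total += cost + (option["event_cost"] if event else 0.0)
    return total / n_samples

def best_option(options, **kwargs):
    """Rank reclamation options by expected detriment (lower is better)."""
    return min(options, key=lambda name: expected_detriment(options[name], **kwargs))
```

    Comparing options on one combined figure is what lets radiological and conventional risks be traded off against remediation cost within a single cost-benefit framework.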

  4. Optimal Sensor Selection for Health Monitoring Systems

    Science.gov (United States)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.
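
    The S4 merit metric itself is not reproduced in this record, but the down-selection step it drives behaves like a coverage problem; a hedged greedy sketch, with hypothetical sensor names and fault sets:

```python
def select_sensors(candidates, faults, budget):
    """Greedy stand-in for a merit-driven sensor down-select: each candidate
    sensor detects a set of fault modes; repeatedly pick the sensor that adds
    the most new coverage until the budget is reached or all faults covered.
    Sensor names and fault sets are hypothetical, not S4's merit metric."""
    chosen, covered = [], set()
    while len(chosen) < budget and covered != faults:
        remaining = [s for s in candidates if s not in chosen]
        if not remaining:
            break
        best = max(remaining, key=lambda s: len(candidates[s] - covered))
        if not candidates[best] - covered:
            break  # no remaining sensor adds diagnostic information
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered
```

    S4's actual metric also folds in risk-reduction potential rather than raw fault coverage, but the iterative "add the most valuable sensor next" structure is the same.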

  5. Development of a Multi-Event Trajectory Optimization Tool for Noise-Optimized Approach Route Design

    NARCIS (Netherlands)

    Braakenburg, M.L.; Hartjes, S.; Visser, H.G.; Hebly, S.J.

    2011-01-01

    This paper presents preliminary results from an ongoing research effort towards the development of a multi-event trajectory optimization methodology that allows to synthesize RNAV approach routes that minimize a cumulative measure of noise, taking into account the total noise effect aggregated for

  6. Land Degradation Monitoring in the Ordos Plateau of China Using an Expert Knowledge and BP-ANN-Based Approach

    Directory of Open Access Journals (Sweden)

    Yaojie Yue

    2016-11-01

    Land degradation monitoring is of vital importance for providing scientific information to promote sustainable land utilization. This paper presents an expert knowledge and BP-ANN-based approach to detect and monitor land degradation in an effort to overcome the deficiencies of image classification and vegetation index-based approaches. The proposed approach consists of three generic steps: (1) extraction of knowledge on the relationship between land degradation degree and predisposing factors, which are NDVI and albedo, from domain experts; (2) establishment of a land degradation detection model based on the BP-ANN algorithm; and (3) land degradation dynamic analysis. A comprehensive analysis was conducted on the development of land degradation in the Ordos Plateau of China in 1990, 2000 and 2010. The results indicate that the proposed approach is reliable for monitoring land degradation, with an overall accuracy of 91.2%. From 1990–2010, a reversal of land degradation is observed in the Ordos Plateau. Regions with relatively high land degradation dynamics were mostly located in the northeast of the Ordos Plateau. Additionally, most of these regions have shifted from being hot spots of land degradation to less changed areas. It is suggested that land utilization optimization plays a key role in effective land degradation control. However, it should be highlighted that the goals of such strategies should address the main negative factors causing land degradation, and the land use type and its quantity must meet the demand of the population and be reconciled with natural conditions. Results from this case study suggest that the expert knowledge and BP-ANN-based approach is effective in mapping land degradation.

  7. Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method

    Science.gov (United States)

    Zhang, Xiangnan

    2018-03-01

    Many uncertain factors arise in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions and boundary conditions. Reliability methods measure the structural safety condition and determine the optimal combination of design parameters based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory and optimization, is the most commonly used approach to minimize the structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach is utilized to incorporate this kind of incomplete information into its uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are introduced.
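
    In the notation commonly used for this class of problems (a standard textbook form, not quoted from the paper), RBDO minimizes a cost over the design variables $\mathbf{d}$ subject to probabilistic constraints on limit-state functions $g_i$ of $\mathbf{d}$ and the random vector $\mathbf{X}$:

```latex
\begin{aligned}
\min_{\mathbf{d}} \quad & C(\mathbf{d}) \\
\text{s.t.} \quad & \Pr\!\left[\, g_i(\mathbf{d}, \mathbf{X}) \le 0 \,\right] \le p_{f,i}, \qquad i = 1, \dots, m ,
\end{aligned}
```

    where failure of the $i$-th mode corresponds to $g_i \le 0$ and $p_{f,i}$ is its target failure probability. The Bayesian integration enters through the distribution of $\mathbf{X}$: with incomplete data $D$, the distribution parameters $\boldsymbol{\theta}$ are updated as $f(\boldsymbol{\theta} \mid D) \propto L(D \mid \boldsymbol{\theta})\, f(\boldsymbol{\theta})$, and the posterior propagates into the probabilistic constraints.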

  8. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    Science.gov (United States)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); it compares these to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs, ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement through parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications, and integrates it with FACET to facilitate the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing computational efficiencies and are based on the potential application of optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  9. Optimal trading strategies—a time series approach

    Science.gov (United States)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which, for a given return, is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second-order stationary, or to exhibit second-order stationary increments. Attention is paid to the consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate these effects are investigated. Finally, we apply our framework to real-world data.

  10. An Efficient Approach for Solving Mesh Optimization Problems Using Newton’s Method

    Directory of Open Access Journals (Sweden)

    Jibum Kim

    2014-01-01

    We present an efficient approach for solving various mesh optimization problems. Our approach is based on Newton’s method, which uses both first-order (gradient) and second-order (Hessian) derivatives of the nonlinear objective function. The volume and surface mesh optimization algorithms are developed such that mesh validity and surface constraints are satisfied. We also propose several Hessian modification methods for use when the Hessian matrix is not positive definite. We demonstrate our approach by comparing our method with nonlinear conjugate gradient and steepest descent methods in terms of both efficiency and mesh quality.
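
    One standard Hessian modification of the kind mentioned, adding a multiple of the identity until the matrix is positive definite, can be sketched for a 2-D objective. This is a generic illustration of the technique, not the authors' mesh-specific code:

```python
def newton_modified(grad, hess, x0, tol=1e-8, max_iter=100):
    """Newton's method for a 2-D objective. When the Hessian is not positive
    definite, add tau*I (doubling tau) until it is -- one of the standard
    Hessian-modification strategies. `grad` returns (g1, g2); `hess` returns
    a 2x2 matrix as ((a, b), (c, d))."""
    x, y = x0
    for _ in range(max_iter):
        g1, g2 = grad(x, y)
        if (g1 * g1 + g2 * g2) ** 0.5 < tol:
            break
        (a, b), (c, d) = hess(x, y)
        tau = 0.0
        # 2x2 positive-definiteness test: positive diagonal pivot and determinant.
        while not (a + tau > 0 and (a + tau) * (d + tau) - b * c > 0):
            tau = max(2 * tau, 1e-3)  # inflate the diagonal until PD
        a, d = a + tau, d + tau
        det = a * d - b * c
        # Solve H p = -g for the Newton step p via Cramer's rule.
        p1 = (-g1 * d + g2 * b) / det
        p2 = (-g2 * a + g1 * c) / det
        x, y = x + p1, y + p2
    return x, y
```

    On a nonconvex function such as f(x, y) = (x² - 1)² + y², the unmodified Hessian is indefinite near x = 0, so a plain Newton step could move uphill; the diagonal inflation restores a descent direction, after which ordinary quadratic convergence takes over.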

  11. Robust and optimal control a two-port framework approach

    CERN Document Server

    Tsai, Mi-Ching

    2014-01-01

    A Two-port Framework for Robust and Optimal Control introduces an alternative approach to robust and optimal controller synthesis procedures for linear, time-invariant systems, based on the two-port system widespread in electrical engineering. The novel use of the two-port system in this context allows straightforward engineering-oriented solution-finding procedures to be developed, requiring no mathematics beyond linear algebra. A chain-scattering description provides a unified framework for constructing the stabilizing controller set and for synthesizing H2 optimal and H∞ sub-optimal controllers. Simple yet illustrative examples explain each step. A Two-port Framework for Robust and Optimal Control features: a hands-on, tutorial-style presentation giving the reader the opportunity to repeat the designs presented and easily modify them for their own programs; an abundance of examples illustrating the most important steps in robust and optimal design; and …

  12. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha

    2013-01-01

    This chapter is devoted to the study of an extension of the dynamic programming approach that allows sequential optimization of exact decision rules relative to length and coverage. It also contains results of experiments with decision tables from the UCI Machine Learning Repository. © Springer-Verlag Berlin Heidelberg 2013.

  13. A complex systems approach to planning, optimization and decision making for energy networks

    International Nuclear Information System (INIS)

    Beck, Jessica; Kempener, Ruud; Cohen, Brett; Petrie, Jim

    2008-01-01

    This paper explores a new approach to planning and optimization of energy networks, using a mix of global optimization and agent-based modeling tools. This approach takes account of techno-economic, environmental and social criteria, and engages explicitly with inherent network complexity in terms of the autonomous decision-making capability of individual agents within the network, who may choose not to act as economic rationalists. This is an important consideration from the standpoint of meeting sustainable development goals. The approach attempts to set targets for energy planning, by determining preferred network development pathways through multi-objective optimization. The viability of such plans is then explored through agent-based models. The combined approach is demonstrated for a case study of regional electricity generation in South Africa, with biomass as feedstock

  14. Process monitoring for intelligent manufacturing processes - Methodology and application to Robot Assisted Polishing

    DEFF Research Database (Denmark)

    Pilny, Lukas

    Process monitoring provides important information on the product, process and manufacturing system during part manufacturing. Such information can be used for process optimization and detection of undesired processing conditions to initiate timely actions for avoidance of defects, thereby improving quality assurance. This thesis is aimed at a systematic development of process monitoring solutions, constituting a key element of intelligent manufacturing systems towards zero defect manufacturing. A methodological approach of general applicability is presented in this concern. The approach consists of six consecutive steps for identification of product Vital Quality Characteristics (VQCs) and Key Process Variables (KPVs), selection and characterization of sensors, optimization of sensor placement, validation of the monitoring solutions, definition of the reference manufacturing performance...

  15. Optimal Sensor Placement for Latticed Shell Structure Based on an Improved Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Xun Zhang

    2014-01-01

    Optimal sensor placement is a key issue in the structural health monitoring of large-scale structures. However, some aspects of existing approaches require improvement, such as the empirical and unreliable selection of mode and sensor numbers and time-consuming computation. A novel improved particle swarm optimization (IPSO) algorithm is proposed to address these problems. The approach first employs the cumulative effective modal mass participation ratio to select the mode number. Three strategies are then adopted to improve the PSO algorithm. Finally, the IPSO algorithm is utilized to determine the optimal sensor number and configuration. A case study of a latticed shell model is implemented to verify the feasibility of the proposed algorithm and four different PSO algorithms. The effective independence method is also included as a baseline for comparison. The comparison results show that the optimal placement schemes obtained by the PSO algorithms are valid, and that the proposed IPSO algorithm offers improved convergence speed and precision.
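
    A plain binary PSO for sensor selection, the baseline that the IPSO improves on, can be sketched as follows. The three improvement strategies and the modal-mass criterion are not reproduced here, and the fitness used below is a hypothetical coverage-minus-cost score rather than a structural objective:

```python
import math
import random

def binary_pso(fitness, n_bits, n_particles=20, iters=100, seed=0):
    """Plain binary PSO: velocities pass through a sigmoid giving the
    probability of each bit (sensor on/off) being 1. The paper's IPSO adds
    further strategies (e.g. adaptive parameters) not reproduced here."""
    rng = random.Random(seed)
    X = [[rng.random() < 0.5 for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pval = [fitness(x) for x in X]
    g = max(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(n_bits):
                V[i][j] = (w * V[i][j]
                           + c1 * rng.random() * (pbest[i][j] - X[i][j])
                           + c2 * rng.random() * (gbest[j] - X[i][j]))
                X[i][j] = rng.random() < 1.0 / (1.0 + math.exp(-V[i][j]))
            f = fitness(X[i])
            if f > pval[i]:
                pbest[i], pval[i] = X[i][:], f
                if f > gval:
                    gbest, gval = X[i][:], f
    return gbest, gval
```

    With a fitness that rewards modal coverage and penalizes sensor count, the swarm converges toward small sensor sets that still observe all target modes, which is the trade-off the record describes.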

  16. Multi-objective approach in thermoenvironomic optimization of a benchmark cogeneration system

    International Nuclear Information System (INIS)

    Sayyaadi, Hoseyn

    2009-01-01

    Multi-objective optimization for the design of a benchmark cogeneration system known as the CGAM cogeneration system has been performed. In the optimization approach, the exergetic, economic and environmental aspects have been considered simultaneously. The thermodynamic modeling has been implemented comprehensively, while the economic analysis was conducted in accordance with the total revenue requirement (TRR) method. The results for the single-objective thermoeconomic optimization have been compared with previous studies of the CGAM problem. In the multi-objective optimization of the CGAM problem, three objective functions have been considered: the exergetic efficiency, the total levelized cost rate of the system product and the cost rate of environmental impact. The environmental impact objective function has been defined and expressed in cost terms. This objective has been integrated with the thermoeconomic objective to form a new unique objective function known as a thermoenvironomic objective function. The thermoenvironomic objective has been minimized while the exergetic objective has been maximized. One of the most suitable optimization techniques, developed using a particular class of search algorithms known as multi-objective evolutionary algorithms (MOEAs), has been considered here. This approach, which is developed based on the genetic algorithm, has been applied to find the set of Pareto optimal solutions with respect to the aforementioned objective functions. An example of decision-making has been presented and a final optimal solution has been introduced. The sensitivity of the solutions to the interest rate and the fuel cost has been studied

  17. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning

    International Nuclear Information System (INIS)

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-01-01

    Purpose: In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (PARETO), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. Methods: PARETO is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate the feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. Results: PARETO produces a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD)-based fitness function for the OARs produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of the resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that PARETO shows

  18. Comparative Analysis of GF-1 and HJ-1 Data to Derive the Optimal Scale for Monitoring Heavy Metal Stress in Rice.

    Science.gov (United States)

    Wang, Dongmin; Liu, Xiangnan

    2018-03-06

    Remote sensing can actively monitor heavy metal contamination in crops, but with the increase of satellite sensors, the optimal scale for monitoring heavy metal stress in rice is still unknown. This study focused on identifying the optimal scale by comparing the ability to detect heavy metal stress in rice at various spatial scales. The 2 m, 8 m, and 16 m resolution GF-1 (China) data and the 30 m resolution HJ-1 (China) data were used to invert leaf area index (LAI). The LAI was the input parameter of the World Food Studies (WOFOST) model, and we obtained the dry weight of storage organs (WSO) and dry weight of roots (WRT) through the assimilation method; then, the mass ratio of rice storage organs and roots (SORMR) was calculated. Through the comparative analysis of SORMR at each spatial scale of data, we determined the optimal scale to monitor heavy metal stress in rice. The following conclusions were drawn: (1) SORMR could accurately and effectively monitor heavy metal stress; (2) the 8 m and 16 m images from GF-1 were suitable for monitoring heavy metal stress in rice; (3) 16 m was considered the optimal scale to assess heavy metal stress in rice.

  19. Deterministic network interdiction optimization via an evolutionary approach

    International Nuclear Information System (INIS)

    Rocco S, Claudio M.; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    This paper introduces an evolutionary optimization approach that can be readily applied to solve deterministic network interdiction problems. The network interdiction problem solved considers the minimization of the maximum flow that can be transmitted between a source node and a sink node for a fixed network design when there is a limited amount of resources available to interdict network links. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with its interdiction can change from link to link. For this problem, the solution approach developed is based on three steps that use: (1) Monte Carlo simulation, to generate potential network interdiction strategies, (2) the Ford-Fulkerson algorithm for maximum s-t flow, to analyze each strategy's maximum source-sink flow and, (3) an evolutionary optimization technique to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different network sizes and behaviors are used throughout the paper to illustrate the approach. In terms of computational effort, the results illustrate that solutions are obtained from a significantly restricted solution search space. Finally, the authors discuss the need for a reliability perspective to network interdiction, so that solutions developed address more realistic scenarios of such problems
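
    Step (2), evaluating a candidate interdiction strategy by recomputing the maximum s-t flow with the interdicted links removed, can be sketched with the BFS variant of Ford-Fulkerson (Edmonds-Karp). The graph used in the usage example is illustrative, not from the paper:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp (BFS-based Ford-Fulkerson) maximum s-t flow.
    `capacity` maps node -> {neighbor: capacity}."""
    flow = 0
    cap = {u: dict(vs) for u, vs in capacity.items()}
    for u in list(cap):                      # ensure residual (reverse) edges
        for v in list(cap[u]):
            cap.setdefault(v, {}).setdefault(u, 0)
    while True:
        parent = {s: None}                   # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v, bottleneck = t, float("inf")      # find the path bottleneck
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        v = t                                # push flow along the path
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

def interdicted_flow(capacity, s, t, removed):
    """Evaluate one interdiction strategy: zero out the interdicted links."""
    reduced = {u: {v: (0 if (u, v) in removed else c) for v, c in vs.items()}
               for u, vs in capacity.items()}
    return max_flow(reduced, s, t)
```

    An outer evolutionary loop would then generate candidate `removed` sets under the interdiction budget and score each one by the residual flow this routine returns.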

  20. On the equivalent static loads approach for dynamic response structural optimization

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    2014-01-01

    The equivalent static loads algorithm is an increasingly popular approach to solve dynamic response structural optimization problems. The algorithm is based on solving a sequence of related static response structural optimization problems with the same objective and constraint functions as the original problem. The optimization-theoretical foundation of the algorithm is mainly developed in Park and Kang (J Optim Theory Appl 118(1):191–200, 2003). In that article it is shown, for a certain class of problems, that if the equivalent static loads algorithm terminates then the KKT conditions...

  1. A "Reverse-Schur" Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design.

    Science.gov (United States)

    Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K

    2009-01-01

    We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts: in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.

  2. A "Hybrid" Approach for Synthesizing Optimal Controllers of Hybrid Systems

    DEFF Research Database (Denmark)

    Zhao, Hengjun; Zhan, Naijun; Kapur, Deepak

    2012-01-01

    to discretization manageable and within bounds. A major advantage of our approach is not only that it avoids errors due to numerical computation, but it also gives a better optimal controller. In order to illustrate our approach, we use the real industrial example of an oil pump provided by the German company HYDAC...

  3. A design approach for integrating thermoelectric devices using topology optimization

    DEFF Research Database (Denmark)

    Soprani, Stefano; Haertel, Jan Hendrik Klaas; Lazarov, Boyan Stefanov

    2016-01-01

    Efficient operation of thermoelectric devices strongly relies on their thermal integration into the energy conversion system in which they operate. Effective thermal integration reduces the temperature differences between the thermoelectric module and its thermal reservoirs, allowing the system to operate more efficiently. This work proposes and experimentally demonstrates a topology optimization approach as a design tool for efficient integration of thermoelectric modules into systems with specific design constraints. The approach allows thermal layout optimization of thermoelectric systems for different operating conditions and objective functions, such as temperature span, efficiency, and power recovery rate. As a specific application, the integration of a thermoelectric cooler into the electronics section of a downhole oil well intervention tool is investigated, with the objective of minimizing...

  4. Y-12 Groundwater Protection Program Monitoring Optimization Plan for Groundwater Monitoring Wells at the U.S. Department of Energy Y-12 National Security Complex, Oak Ridge, Tennessee

    Energy Technology Data Exchange (ETDEWEB)

    None

    2003-09-30

    This document is the monitoring optimization plan for groundwater monitoring wells associated with the U.S. Department of Energy (DOE) Y-12 National Security Complex (Y-12) in Oak Ridge, Tennessee (Figure 1). The plan describes the technical approach that will be implemented under the Y-12 Groundwater Protection Program (GWPP) to focus available resources on the monitoring wells at Y-12 which provide the most useful hydrologic and water-quality monitoring data. The technical approach is based on the GWPP status designation for each well (Section 2.0). Under this approach, wells granted "active" status are used by the GWPP for hydrologic monitoring and/or groundwater sampling (Section 3.0), whereas wells granted "inactive" status are not used for either purpose. The status designation also determines the frequency at which the GWPP will inspect applicable wells, the scope of these well inspections, and the extent of any maintenance actions initiated by the GWPP (Section 4.0). Details regarding the ancillary activities associated with implementation of this plan (e.g., well inspection) are deferred to the referenced GWPP plans and procedures (Section 5.0). This plan applies to groundwater monitoring wells associated with Y-12 and related waste management facilities located within three hydrogeologic regimes (Figure 1): the Bear Creek Hydrogeologic Regime (Bear Creek Regime), the Upper East Fork Poplar Creek Hydrogeologic Regime (East Fork Regime), and the Chestnut Ridge Hydrogeologic Regime (Chestnut Ridge Regime). The Bear Creek Regime encompasses a section of Bear Creek Valley (BCV) immediately west of Y-12. The East Fork Regime encompasses most of the Y-12 process, operations, and support facilities in BCV and, for the purposes of this plan, includes a section of Union Valley east of the DOE Oak Ridge Reservation (ORR) boundary along Scarboro Road. 
The Chestnut Ridge Regime is directly south of Y-12 and encompasses a section of Chestnut Ridge that is bound to the

  5. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This paper is devoted to the study of an extension of dynamic programming approach which allows sequential optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure R(T) which is the number

  6. Designing optimal greenhouse gas monitoring networks for Australia

    Science.gov (United States)

    Ziehn, T.; Law, R. M.; Rayner, P. J.; Roff, G.

    2016-01-01

    Atmospheric transport inversion is commonly used to infer greenhouse gas (GHG) flux estimates from concentration measurements. The optimal location of ground-based observing stations that supply these measurements can be determined by network design. Here, we use a Lagrangian particle dispersion model (LPDM) in reverse mode together with a Bayesian inverse modelling framework to derive optimal GHG observing networks for Australia. This extends the network design for carbon dioxide (CO2) performed by Ziehn et al. (2014) to also minimise the uncertainty on the flux estimates for methane (CH4) and nitrous oxide (N2O), both individually and in a combined network using multiple objectives. Optimal networks are generated by adding up to five new stations to the base network, which is defined as two existing stations, Cape Grim and Gunn Point, in southern and northern Australia respectively. The individual networks for CO2, CH4 and N2O and the combined observing network show large similarities because the flux uncertainties for each GHG are dominated by regions of biologically productive land. There is little penalty, in terms of flux uncertainty reduction, for the combined network compared to individually designed networks. The location of the stations in the combined network is sensitive to variations in the assumed data uncertainty across locations. A simple assessment of economic costs has been included in our network design approach, considering both establishment and maintenance costs. Our results suggest that, while site logistics change the optimal network, there is only a small impact on the flux uncertainty reductions achieved with increasing network size.
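The network-design calculation described above reduces, in a linear-Gaussian setting, to choosing the station set whose observations most shrink the posterior covariance of the flux estimates. A minimal greedy sketch is given below; the sensitivity matrix, prior covariance and data uncertainty are invented stand-ins for the LPDM-derived quantities used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_flux, n_candidates = 8, 12                 # hypothetical flux regions / candidate sites
B = np.eye(n_flux)                           # assumed prior flux-error covariance
H = rng.normal(size=(n_candidates, n_flux))  # stand-in station-to-flux sensitivities
obs_var = 0.5                                # assumed data uncertainty, same at all sites

def posterior_trace(selected):
    """Total posterior flux uncertainty for a set of station indices."""
    if not selected:
        return np.trace(B)
    Hs = H[list(selected)]
    P = np.linalg.inv(np.linalg.inv(B) + Hs.T @ Hs / obs_var)
    return np.trace(P)

# Greedy incremental design: repeatedly add the station that most
# reduces the total posterior uncertainty, up to five new stations.
network = []
for _ in range(5):
    remaining = set(range(n_candidates)) - set(network)
    best = min(remaining, key=lambda j: posterior_trace(network + [j]))
    network.append(best)
```

Station-dependent data uncertainties, to which the paper finds the combined network sensitive, would enter by replacing `obs_var` with a per-station variance vector.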

  7. Research on the optimization of air quality monitoring station layout based on spatial grid statistical analysis method.

    Science.gov (United States)

    Li, Tianxin; Zhou, Xing Chen; Ikhumhen, Harrison Odion; Difei, An

    2018-05-01

    In recent years, with the significant increase in urban development, it has become necessary to optimize the current air monitoring stations so that they reflect the quality of air in the environment. To assess the spatial representativeness of the existing air monitoring stations, Beijing's regional air monitoring station data from 2012 to 2014 were used to calculate the monthly mean particulate matter (PM10) concentration in the region, and the spatial distribution of PM10 concentration across the whole region was derived through the IDW interpolation method and a spatial grid statistical method in GIS. The spatial distribution variation across Beijing's districts was analysed using a gridding model (1.5 km × 1.5 km cell resolution), and the 3-year spatial analysis of PM10 concentration data, including variation and spatial overlay, showed where the PM10 concentration frequency exceeded the standard. It is therefore important to optimize the layout of the existing air monitoring stations by combining the concentration distribution of air pollutants with the spatial region using GIS.
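The interpolation step in this record (IDW from station measurements onto a regular grid, followed by a grid statistic) can be sketched as follows; the station locations, PM10 values, grid extent and exceedance threshold are all invented for illustration.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_grid, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of station values onto grid points."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # eps avoids division by zero at a station
    return (w @ z_obs) / w.sum(axis=1)

# Hypothetical stations (x, y in km) and monthly mean PM10 (ug/m3)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
pm10 = np.array([60.0, 90.0, 70.0, 120.0])

# Regular grid with 1.5 km cells, mirroring the paper's cell resolution
xs, ys = np.meshgrid(np.arange(0, 10.5, 1.5), np.arange(0, 10.5, 1.5))
grid = np.column_stack([xs.ravel(), ys.ravel()])

field = idw(stations, pm10, grid)
exceed_frac = (field > 100.0).mean()      # grid statistic: fraction of cells above a threshold
```

Because the IDW weights are positive and normalized, every interpolated cell value stays within the range of the observed station values.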

  8. Terminal Control Area Aircraft Scheduling and Trajectory Optimization Approaches

    Directory of Open Access Journals (Sweden)

    Samà Marcella

    2017-01-01

    Full Text Available Aviation authorities are seeking optimization methods to better use the available infrastructure and better manage aircraft movements. This paper deals with the real-time scheduling of take-off and landing aircraft at a busy terminal control area and with the optimization of aircraft trajectories during the landing procedures. The first problem aims to reduce the propagation of delays, while the second problem aims to either minimize the travel time or reduce the fuel consumption. Both problems are particularly complex, since the first one is NP-hard while the second one is nonlinear, and a combined solution needs to be computed in a short time during operations. This paper proposes a framework for the lexicographic optimization of the two problems. Computational experiments are performed for the Milano Malpensa airport and show the existing gaps between the performance indicators of the two problems when different lexicographic optimization approaches are considered.

  9. Mathematical models and lymphatic filariasis control: monitoring and evaluating interventions.

    Science.gov (United States)

    Michael, Edwin; Malecela-Lazaro, Mwele N; Maegga, Bertha T A; Fischer, Peter; Kazura, James W

    2006-11-01

    Monitoring and evaluation are crucially important to the scientific management of any mass parasite control programme. Monitoring enables the effectiveness of implemented actions to be assessed and necessary adaptations to be identified; it also determines when management objectives are achieved. Parasite transmission models can provide a scientific template for informing the optimal design of such monitoring programmes. Here, we illustrate the usefulness of using a model-based approach for monitoring and evaluating anti-parasite interventions and discuss issues that need addressing. We focus on the use of such an approach for the control and/or elimination of the vector-borne parasitic disease, lymphatic filariasis.

  10. Optimizing computed tomography pulmonary angiography using right atrium bolus monitoring combined with spontaneous respiration

    Energy Technology Data Exchange (ETDEWEB)

    Min, Wang; Jian, Li; Rui, Zhai [Jining No. 1 People's Hospital, Department of Computed Tomography, Jining City, ShanDong Province (China); Wen, Li [Jining No. 1 People's Hospital, Department of Gastroenterology, Jining, ShanDong (China); Dai, Lun-Hou [Shandong Chest Hospital, Department of Radiology, Jinan, ShanDong (China)

    2015-09-15

    CT pulmonary angiography (CTPA) aims to provide pulmonary arterial opacification in the absence of significant pulmonary venous filling. This requires accurate timing of the imaging acquisition to ensure synchronization with the peak pulmonary artery contrast concentration. This study was designed to test the utility of right atrium (RA) monitoring in ensuring optimal timing of CTPA acquisition. Sixty patients referred for CTPA were divided into two groups. Group A (n = 30): CTPA was performed using bolus triggering from the pulmonary trunk, suspended respiration and 70 ml of contrast agent (CA). Group B (n = 30): CTPA image acquisition was triggered using RA monitoring with spontaneous respiration and 40 ml of CA. Image quality was compared. Subjective image quality, average CT values of pulmonary arteries and density difference between artery and vein pairs were significantly higher whereas CT values of pulmonary veins were significantly lower in group B (all P < 0.05). There was no significant difference between the groups in the proportion of subjects where sixth grade pulmonary arteries were opacified (P > 0.05). RA monitoring combined with spontaneous respiration to trigger image acquisition in CTPA produces optimal contrast enhancement in pulmonary arterial structures with minimal venous filling even with reduced doses of CA. (orig.)

  11. A statistical approach to optimizing concrete mixture design.

    Science.gov (United States)

    Ahmad, Shamsad; Alghamdi, Saeid A

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.
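The workflow (3³ full factorial design, polynomial regression, optimization over the fitted surface) can be sketched compactly. The strength values below are synthetic stand-ins generated from an assumed trend, not the study's data, and the quadratic model omits the interaction terms a full response-surface analysis would include.

```python
import itertools
import numpy as np

# 3^3 full factorial design over the three factors from the study
wc = [0.38, 0.43, 0.48]        # water/cementitious ratio
cm = [350, 375, 400]           # cementitious materials content, kg/m^3
fa = [0.35, 0.40, 0.45]        # fine/total aggregate ratio
design = np.array(list(itertools.product(wc, cm, fa)))   # 27 mixtures

# Hypothetical measured strengths (MPa), standing in for the 81-specimen data
rng = np.random.default_rng(1)
strength = (120 - 150 * design[:, 0] + 0.05 * design[:, 1]
            + 10 * design[:, 2] + rng.normal(0, 1, 27))

# Quadratic response-surface model: intercept, linear, and squared terms
X = np.column_stack([np.ones(27), design, design**2])
coef, *_ = np.linalg.lstsq(X, strength, rcond=None)

def predict(w, c, f):
    """Predicted compressive strength for a candidate mixture."""
    return np.array([1, w, c, f, w * w, c * c, f * f]) @ coef

# Optimize by exhaustive search over the fitted surface
best = max(itertools.product(wc, cm, fa), key=lambda m: predict(*m))
```

With only 27 candidate mixtures, exhaustive search over the fitted model is trivial; the same fitted polynomial could equally be handed to a continuous optimizer to explore between the factor levels.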

  12. A Statistical Approach to Optimizing Concrete Mixture Design

    Directory of Open Access Journals (Sweden)

    Shamsad Ahmad

    2014-01-01

    Full Text Available A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.

  13. A New Approach to Site Demand-Based Level Inventory Optimization

    Science.gov (United States)

    2016-06-01

    Master's thesis by Tacettin Ersoz, June 2016; thesis advisor: Javier Salmeron; second reader: Emily... The indexed text for this record consists of fragmentary front matter and notation notes from the thesis (e.g., generating site demand quantities from probability distributions estimated by their mean and variance); no clean abstract is available.

  14. Monitoring active volcanoes: The geochemical approach

    Directory of Open Access Journals (Sweden)

    Takeshi Ohba

    2011-06-01

    Full Text Available

    The geochemical surveillance of an active volcano aims to recognize possible signals that are related to changes in volcanic activity. Indeed, as a consequence of the magma rising inside the volcanic "plumbing system" and/or the refilling with new batches of magma, the dissolved volatiles in the magma are progressively released as a function of their relative solubilities. When approaching the surface, these fluids that are discharged during magma degassing can interact with shallow aquifers and/or can be released along the main volcano-tectonic structures. Under these conditions, the sites where these main degassing processes occur represent strategic locations to be monitored.

    The main purpose of this special volume is to collect papers that cover a wide range of topics in volcanic fluid geochemistry, which include geochemical characterization and geochemical monitoring of active volcanoes using different techniques and at different sites. Moreover, part of this volume has been dedicated to the new geochemistry tools.

  15. An optimization approach for black-and-white and hinge-removal topology designs

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Yongqing; Zhang, Xianmin [South China University of Technology, Guangzhou (China)

    2014-02-15

    An optimization approach for black-and-white and hinge-removal topology designs is studied. To this end, an optimal topology allowing grey boundaries is found first. When a suitable design has been obtained, this solution is then used as a starting point for the follow-up optimization, with the goal of freeing unfavorable intermediate elements. For this purpose, an updated optimality criterion in which a threshold factor is introduced to gradually suppress elements with low density is proposed. The typical optimality method and the new technique proposed are applied to the design procedure sequentially. In addition, to circumvent the one-point hinge connection problem produced in the process of freeing intermediate elements, a hinge-removal strategy is also proposed. During the optimization, the binary constraints on design variables are relaxed based on the scheme of solid isotropic material with penalization. Meanwhile, the mesh independency filter is employed to ensure the existence of a solution and remove well-known checkerboards. In this way, a solution that has few intermediate elements and is free of one-point hinge connections is obtained. Finally, different numerical examples including compliance minimization, compliant mechanisms and vibration problems demonstrate the validity of the proposed approach.
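The threshold-factor idea (gradually suppressing low-density elements to drive the design toward black-and-white) can be illustrated in isolation. The densities and threshold schedule below are arbitrary; a real implementation would interleave this step with the optimality-criterion update, mesh-independency filtering and hinge-removal strategy described above.

```python
import numpy as np

def suppress_intermediate(x, threshold):
    """Push element densities below the threshold to void (0), leaving the rest untouched."""
    x = x.copy()
    x[x < threshold] = 0.0
    return x

rho = np.array([0.05, 0.20, 0.45, 0.80, 1.00])   # element densities after an OC update
for t in (0.1, 0.3, 0.5):                        # gradually raised threshold factor
    rho = suppress_intermediate(rho, t)
# Densities below the final threshold have been driven to void;
# near-solid elements survive, yielding a black-and-white layout.
```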

  16. Monitoring with Data Automata

    Science.gov (United States)

    Havelund, Klaus

    2014-01-01

    We present a form of automaton, referred to as data automata, suited for monitoring sequences of data-carrying events, for example emitted by an executing software system. This form of automata allows states to be parameterized with data, forming named records, which are stored in an efficiently indexed data structure, a form of database. This very explicit approach differs from other automaton-based monitoring approaches. Data automata are also characterized by allowing transition conditions to refer to other parameterized states, and by allowing transition sequences. The presented automaton concept is inspired by rule-based systems, especially the Rete algorithm, which is one of the well-established algorithms for executing rule-based systems. We present an optimized external DSL for data automata, as well as a comparable unoptimized internal DSL (API) in the Scala programming language, in order to compare the two solutions. An evaluation compares these two solutions to several other monitoring systems.
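The flavor of states parameterized with data and kept in an indexed fact store can be conveyed with a small hand-rolled monitor. This is not Havelund's DSL; the property (locks must be acquired before release and not re-acquired) and the event format are invented for illustration.

```python
# Minimal sketch of a data-automaton-style monitor: each active state is
# parameterized by data (here a lock id) and indexed by that parameter,
# so an incoming event only touches the facts it can match.

class LockMonitor:
    """Checks that acquire(l) is not re-entered and release(l) follows acquire(l)."""
    def __init__(self):
        self.held = {}            # fact store indexed by the lock parameter
        self.errors = []

    def event(self, name, lock, task):
        if name == "acquire":
            if lock in self.held:
                self.errors.append(f"{lock} acquired twice")
            else:
                self.held[lock] = task            # parameterized state Held(lock, task)
        elif name == "release":
            if self.held.pop(lock, None) is None:
                self.errors.append(f"{lock} released while free")

    def end(self):
        self.errors += [f"{l} never released" for l in self.held]
        return self.errors

m = LockMonitor()
for e in [("acquire", "A", 1), ("acquire", "B", 2),
          ("release", "A", 1), ("release", "B", 2), ("release", "A", 1)]:
    m.event(*e)
print(m.end())   # -> ['A released while free']
```

Indexing facts by their data parameter is the same idea that makes Rete-style matching efficient: the monitor never scans all facts per event.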

  17. Y-12 Groundwater Protection Program Monitoring Optimization Plan for Groundwater Monitoring Wells at the U.S. Department of Energy Y-12 National Security Complex

    International Nuclear Information System (INIS)

    2006-01-01

    This document is the monitoring optimization plan for groundwater monitoring wells associated with the U.S. Department of Energy (DOE) Y-12 National Security Complex (Y-12) in Oak Ridge, Tennessee (Figure A.1). The plan describes the technical approach that will be implemented under the Y-12 Groundwater Protection Program (GWPP) to focus available resources on the monitoring wells at Y-12 that provide the most useful hydrologic and water-quality monitoring data. The technical approach is based on the GWPP status designation for each well (Section 2.0). Under this approach, wells granted "active" status are used by the GWPP for hydrologic monitoring and/or groundwater quality sampling (Section 3.0), whereas wells granted "inactive" status are not used for either purpose. The status designation also defines the frequency at which the GWPP will inspect applicable wells, the scope of these well inspections, and extent of any maintenance actions initiated by the GWPP (Section 3.0). Details regarding the ancillary activities associated with implementation of this plan (e.g., well inspection) are deferred to the referenced GWPP plans and procedures (Section 4.0). This plan applies to groundwater wells associated with Y-12 and related waste management areas and facilities located within three hydrogeologic regimes (Figure A.1): the Bear Creek Hydrogeologic Regime (Bear Creek Regime), the Upper East Fork Poplar Creek Hydrogeologic Regime (East Fork Regime), and the Chestnut Ridge Hydrogeologic Regime (Chestnut Ridge Regime). The Bear Creek Regime encompasses a section of Bear Creek Valley (BCV) immediately west of Y-12. The East Fork Regime encompasses most of the Y-12 process, operations, and support facilities in BCV and, for the purposes of this plan, includes a section of Union Valley east of the DOE Oak Ridge Reservation (ORR) boundary along Scarboro Road. 
The Chestnut Ridge Regime encompasses a section of Chestnut Ridge directly south of Y-12 that is bound on the

  18. Designing monitoring for conservation impact assessment in water funds in Latin America: an approach to address water-data scarcity (Invited)

    Science.gov (United States)

    Nelson, J. L.; Chaplin-Kramer, R.; Ziv, G.; Wolny, S.; Vogl, A. L.; Tallis, H.; Bremer, L.

    2013-12-01

    The risk of water scarcity is a rising threat in a rapidly changing world. Communities and investors are using the new institution of water funds to enact conservation practices in watersheds to bolster a clean, predictable water supply for multiple stakeholders. Water funds finance conservation activities to support water-related ecosystem services, and here we relate our work to develop innovative approaches to experimental design of monitoring programs to track the effectiveness of water funds throughout Latin America. We highlight two examples: the Fund for the Protection of Water (FONAG), in Quito, Ecuador, and Water for Life, Agua por la Vida, in Cali, Colombia. Our approach is meant to test whether a) water funds' restoration and protection actions result in changes in water quality and/or quantity at the site scale and the subwatershed scale, and b) the suite of investments for the whole water fund reach established goals for improving water quality and/or quantity at the basin scale or point of use. Our goal is to create monitoring standards for ecosystem-service assessment and clearly demonstrate translating those standards to field implementation in a statistically robust and cost-effective way. In the gap between data-intensive methods requiring historic, long-term water sampling and more subjective, ad hoc assessments, we have created a quantitative, land-cover-based approach to pairing conservation activity with appropriate controls in order to determine the impact of water-fund actions. To do so, we use a statistical approach in combination with open-source tools developed by the Natural Capital Project to optimize water funds' investments in nature and assess ecosystem-service provision (Resource Investment Optimization System, RIOS, and InVEST). 
We report on the process of identifying micro-, subwatershed or watershed matches to serve as controls for conservation 'impact' sites, based on globally-available land cover, precipitation, and soil data

  19. An intuitionistic fuzzy optimization approach to vendor selection problem

    Directory of Open Access Journals (Sweden)

    Prabjot Kaur

    2016-09-01

    Full Text Available Selecting the right vendor is an important business decision made by any organization. The decision involves multiple criteria, and if the objectives vary in preference and scope, then the nature of the decision becomes multiobjective. In this paper, a vendor selection problem has been formulated as an intuitionistic fuzzy multiobjective optimization in which an appropriate number of vendors is to be selected and order allocated to them. The multiobjective problem includes three objectives: minimizing the net price, maximizing the quality, and maximizing the on-time deliveries, subject to the suppliers' constraints. The objective functions and the demand are treated as intuitionistic fuzzy sets. An intuitionistic fuzzy set has the ability to handle uncertainty with additional degrees of freedom. The intuitionistic fuzzy optimization (IFO) problem is converted into a crisp linear form and solved using the optimization software TORA. The advantage of IFO is that it gives better results than fuzzy/crisp optimization. The proposed approach is explained by a numerical example.
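Once the intuitionistic fuzzy machinery reduces the problem to a crisp linear program, the core is an order-allocation LP over the three objectives. A minimal crisp sketch using a weighted-sum scalarization is given below; the vendor data, weights and demand are invented, and the record's actual crisp form additionally carries the membership/non-membership degrees rather than fixed weights.

```python
import numpy as np
from scipy.optimize import linprog

# Three hypothetical vendors: unit price, quality rate, on-time delivery rate
price    = np.array([3.0, 2.5, 2.8])
quality  = np.array([0.95, 0.85, 0.90])
ontime   = np.array([0.90, 0.80, 0.95])
capacity = np.array([600, 500, 700])
demand   = 1000

# Weighted-sum scalarization of the three objectives (weights are assumptions):
# minimize price, maximize quality and on-time rate (hence negative weights).
w_price, w_quality, w_ontime = 1.0, 2.0, 1.5
c = w_price * price - w_quality * quality - w_ontime * ontime

res = linprog(c,
              A_eq=[np.ones(3)], b_eq=[demand],          # meet demand exactly
              bounds=list(zip(np.zeros(3), capacity)))   # per-vendor capacities
order = res.x                                            # units allocated per vendor
```

With these numbers the LP fills the best-scoring vendor to capacity and covers the remainder from the next best, the typical split-order outcome of such allocation models.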

  20. Portfolio optimization in enhanced index tracking with goal programming approach

    Science.gov (United States)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. Enhanced index tracking aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio to maximize the mean return and minimize the risk. The objective of this paper is to determine the portfolio composition and performance using a goal programming approach in enhanced index tracking and to compare it to the market index. Goal programming is a branch of multi-objective optimization which can handle decision problems that involve two different goals in enhanced index tracking, a trade-off between maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio with the goal programming approach is able to outperform the Malaysia market index, which is the FTSE Bursa Malaysia Kuala Lumpur Composite Index, because of its higher mean return and lower risk without purchasing all the stocks in the market index.
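A goal-programming formulation along these lines can be sketched as a linear program: deviation variables absorb the per-period tracking error, and an under-achievement variable absorbs any shortfall against the excess-return goal. All returns, the target excess and the penalty weight below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical period returns of 3 candidate stocks and of the market index
R = np.array([[ 0.02, 0.01,  0.00],
              [-0.01, 0.03,  0.01],
              [ 0.01, 0.02, -0.01],
              [ 0.03, 0.00,  0.02]])
index = np.array([0.010, 0.012, 0.008, 0.015])
excess = 0.002                       # target excess return over the index

T, n = R.shape
# Variables: [w (n), d+ (T), d- (T), g] where d+/d- are per-period tracking
# deviations and g is the under-achievement of the excess-return goal.
c = np.concatenate([np.zeros(n), np.ones(T), np.ones(T), [10.0]])  # penalize g more

A_eq = np.zeros((T + 1, n + 2 * T + 1))
A_eq[:T, :n] = R                     # w . R_t - d+_t + d-_t = index_t
A_eq[:T, n:n + T] = -np.eye(T)
A_eq[:T, n + T:n + 2 * T] = np.eye(T)
A_eq[T, :n] = 1.0                    # fully invested: sum(w) = 1
b_eq = np.concatenate([index, [1.0]])

A_ub = np.zeros((1, n + 2 * T + 1))
A_ub[0, :n] = -R.mean(axis=0)        # mean return + g >= index mean + excess
A_ub[0, -1] = -1.0
b_ub = [-(index.mean() + excess)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
w = res.x[:n]                        # optimal portfolio weights
```

Raising the weight on `g` in the objective prioritizes the return goal over tracking fidelity; this is where the goal-programming trade-off between the two goals is expressed.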

  1. An approach for multi-objective optimization of vehicle suspension system

    Science.gov (United States)

    Koulocheris, D.; Papaioannou, G.; Christodoulou, D.

    2017-10-01

    In this paper, a half-car model with nonlinear suspension systems is selected in order to study the vertical vibrations and optimize its suspension system with respect to ride comfort and road holding. A road bump was used as road profile. At first, the optimization problem is solved with the use of Genetic Algorithms with respect to 6 optimization targets. Then the k - ɛ optimization method was implemented to locate one optimum solution. Furthermore, an alternative approach is presented in this work: the previous optimization targets are separated into main and supplementary ones, depending on their importance in the analysis. The supplementary targets are not crucial to the optimization but they could enhance the main objectives. Thus, the problem was solved again using Genetic Algorithms with respect to the 3 main targets of the optimization. Having obtained the Pareto set of solutions, the k - ɛ optimality method was implemented for the 3 main targets and the supplementary ones, evaluated by the simulation of the vehicle model. The results of both cases are presented and discussed in terms of convergence of the optimization and computational time. The optimum solutions acquired from both cases are compared based on performance metrics as well.
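The Pareto-set step in such a workflow is plain non-dominated filtering of the objective vectors returned by the genetic algorithm. A minimal sketch (all objectives minimized; the objective values are invented) is:

```python
import numpy as np

def pareto_front(F):
    """Return indices of non-dominated points; all objectives are minimized."""
    F = np.asarray(F, dtype=float)
    keep = []
    for i, f in enumerate(F):
        # f is dominated if some g is no worse everywhere and strictly better somewhere
        dominated = any(np.all(g <= f) and np.any(g < f) for g in F)
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (ride-comfort, road-holding, working-space) scores per design
F = [(1.0, 3.0, 2.0), (2.0, 2.0, 2.0), (1.5, 3.5, 3.0), (1.0, 3.0, 2.5)]
print(pareto_front(F))   # -> [0, 1]
```

A k - ɛ style post-selection would then pick one solution from this front according to how much degradation (ɛ) is tolerated in each objective.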

  2. A design approach for integrating thermoelectric devices using topology optimization

    International Nuclear Information System (INIS)

    Soprani, S.; Haertel, J.H.K.; Lazarov, B.S.; Sigmund, O.; Engelbrecht, K.

    2016-01-01

    Highlights: • The integration of a thermoelectric (TE) cooler into a robotic tool is optimized. • Topology optimization is suggested as design tool for TE integrated systems. • A 3D optimization technique using temperature dependent TE properties is presented. • The sensitivity of the optimization process to the boundary conditions is studied. • A working prototype is constructed and compared to the model results. - Abstract: Efficient operation of thermoelectric devices strongly relies on the thermal integration into the energy conversion system in which they operate. Effective thermal integration reduces the temperature differences between the thermoelectric module and its thermal reservoirs, allowing the system to operate more efficiently. This work proposes and experimentally demonstrates a topology optimization approach as a design tool for efficient integration of thermoelectric modules into systems with specific design constraints. The approach allows thermal layout optimization of thermoelectric systems for different operating conditions and objective functions, such as temperature span, efficiency, and power recovery rate. As a specific application, the integration of a thermoelectric cooler into the electronics section of a downhole oil well intervention tool is investigated, with the objective of minimizing the temperature of the cooled electronics. Several challenges are addressed: ensuring effective heat transfer from the load, minimizing the thermal resistances within the integrated system, maximizing the thermal protection of the cooled zone, and enhancing the conduction of the rejected heat to the oil well. The design method incorporates temperature dependent properties of the thermoelectric device and other materials. The 3D topology optimization model developed in this work was used to design a thermoelectric system, complete with insulation and heat sink, that was produced and tested. Good agreement between experimental results and

  3. An approach of optimal sensitivity applied in the tertiary loop of the automatic generation control

    Energy Technology Data Exchange (ETDEWEB)

    Belati, Edmarcio A. [CIMATEC - SENAI, Salvador, BA (Brazil); Alves, Dilson A. [Electrical Engineering Department, FEIS, UNESP - Sao Paulo State University (Brazil); da Costa, Geraldo R.M. [Electrical Engineering Department, EESC, USP - Sao Paulo University (Brazil)

    2008-09-15

    This paper proposes an approach of optimal sensitivity applied in the tertiary loop of the automatic generation control. The approach is based on the theorem of non-linear perturbation. From an optimal operation point obtained by an optimal power flow, a new optimal operation point is directly determined after a perturbation, i.e., without the necessity of an iterative process. This new optimal operation point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set point of the automatic voltage regulators (AVR) of the generators are determined by the technique of optimal sensitivity, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of the automatic generation control, named the power sensitivity mode. Test results are presented to show the good performance of this approach. (author)

  4. APPROACH ON INTELLIGENT OPTIMIZATION DESIGN BASED ON COMPOUND KNOWLEDGE

    Institute of Scientific and Technical Information of China (English)

    Yao Jianchu; Zhou Ji; Yu Jun

    2003-01-01

    A concept of an intelligent optimal design approach is proposed, which is organized by a kind of compound knowledge model. The compound knowledge consists of modularized quantitative knowledge, inclusive experience knowledge and case-based sample knowledge. By using this compound knowledge model, the abundant quantitative information of mathematical programming and the symbolic knowledge of artificial intelligence can be united together in this model. The intelligent optimal design model based on such compound knowledge and the automatically generated decomposition principles based on it are also presented. In practice, the approach has been applied to the production planning, process scheduling and production-process optimization of a refining & chemical works, and a great profit has been achieved. Notably, the methods and principles are adaptable not only to the continuous process industry, but also to discrete manufacturing.

  5. A “Reverse-Schur” Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design

    Science.gov (United States)

    Bardhan, Jaydeep P.; Altman, Michael D.

    2009-01-01

    We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule’s electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts–in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method. PMID:23055839

  6. Gender approaches to evolutionary multi-objective optimization using pre-selection of criteria

    Science.gov (United States)

    Kowalczuk, Zdzisław; Białaszewski, Tomasz

    2018-01-01

    A novel idea to perform evolutionary computations (ECs) for solving highly dimensional multi-objective optimization (MOO) problems is proposed. Following the general idea of evolution, it is proposed that information about gender is used to distinguish between various groups of objectives and identify the (aggregate) nature of optimality of individuals (solutions). This identification is drawn out of the fitness of individuals and applied during parental crossover in the processes of evolutionary multi-objective optimization (EMOO). The article introduces the principles of the genetic-gender approach (GGA) and virtual gender approach (VGA), which are not just evolutionary techniques, but constitute a completely new rule (philosophy) for use in solving MOO tasks. The proposed approaches are validated against principal representatives of the EMOO algorithms of the state of the art in solving benchmark problems in the light of recognized EC performance criteria. The research shows the superiority of the gender approach in terms of effectiveness, reliability, transparency, intelligibility and MOO problem simplification, resulting in the great usefulness and practicability of GGA and VGA. Moreover, an important feature of GGA and VGA is that they alleviate the 'curse' of dimensionality typical of many engineering designs.

  7. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications.

    Science.gov (United States)

    Costa, Daniel G; Duran-Faundez, Cristian; Andrade, Daniel C; Rocha-Junior, João B; Peixoto, João Paulo Just

    2018-04-03

Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events.

  8. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications

    Directory of Open Access Journals (Sweden)

    Daniel G. Costa

    2018-04-01

    Full Text Available Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events.

  9. Hybrid Optimization-Based Approach for Multiple Intelligent Vehicles Requests Allocation

    Directory of Open Access Journals (Sweden)

    Ahmed Hussein

    2018-01-01

Full Text Available Self-driving cars have attracted significant attention during the last few years, and the technology has advanced fast enough that a number of automated vehicles are now on the roads. Therefore, the necessity of cooperative driving for these automated vehicles is increasing rapidly. One of the main issues in the cooperative driving world is the Multirobot Task Allocation (MRTA) problem. This paper addresses the MRTA problem, specifically the problem of vehicle and request allocation. The objective is to introduce a hybrid optimization-based approach to solve the allocation of requests to multiple intelligent vehicles as an instance of the MRTA problem, finding not only a feasible solution but also one optimized as per the objective function. Several test scenarios were implemented in order to evaluate the efficiency of the proposed approach. These scenarios are based on well-known benchmarks; thus a comparative study is conducted between the obtained results and the suboptimal results. The analysis of the experimental results shows that the proposed approach was successful in handling various scenarios, especially with an increasing number of vehicles and requests, which demonstrates its efficiency and performance.
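
    The vehicle-request allocation described above can be sketched as a small cost-minimization assignment problem. The snippet below is a minimal, illustrative brute-force solver; the cost matrix and exhaustive search are assumptions for illustration and do not reproduce the paper's hybrid optimization approach:

```python
from itertools import permutations

def optimal_assignment(cost):
    """Brute-force optimal one-to-one assignment of vehicles (rows)
    to requests (columns), minimizing total cost."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[v][perm[v]] for v in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# 3 vehicles x 3 requests: cost[v][r] = travel cost of vehicle v to request r
cost = [
    [4, 2, 8],
    [4, 3, 7],
    [3, 1, 6],
]
assignment, total = optimal_assignment(cost)  # assignment[v] = request for vehicle v
```

    Exhaustive search is only tractable for tiny instances; for a growing number of vehicles and requests, the heuristic or hybrid methods the paper discusses become necessary.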

  10. Monitoring post-fire vegetation rehabilitation projects: A common approach for non-forested ecosystems

    Science.gov (United States)

    Wirth, Troy A.; Pyke, David A.

    2007-01-01

Emergency Stabilization and Rehabilitation (ES&R) and Burned Area Emergency Response (BAER) treatments are short-term, high-intensity treatments designed to mitigate the adverse effects of wildfire on public lands. The federal government expends significant resources implementing ES&R and BAER treatments after wildfires; however, recent reviews have found that existing data from monitoring and research are insufficient to evaluate the effects of these activities. The purpose of this report is to: (1) document what monitoring methods are generally used by personnel in the field; (2) describe approaches and methods for post-fire vegetation and soil monitoring documented in agency manuals; (3) determine the common elements of monitoring programs recommended in these manuals; and (4) describe a common monitoring approach to determine the effectiveness of future ES&R and BAER treatments in non-forested regions. Both qualitative and quantitative methods to measure effectiveness of ES&R treatments are used by federal land management agencies. Quantitative methods are used in the field depending on factors such as funding, personnel, and time constraints. There are seven vegetation monitoring manuals produced by the federal government that address monitoring methods for (primarily) vegetation and soil attributes. These methods vary in their objectivity and repeatability. The most repeatable methods are point-intercept, quadrat-based density measurements, gap intercepts, and direct measurement of soil erosion. Additionally, these manuals recommend approaches for designing monitoring programs for the state of ecosystems or the effect of management actions. The elements of a defensible monitoring program applicable to ES&R and BAER projects that most of these manuals have in common are objectives, stratification, control areas, random sampling, data quality, and statistical analysis. The effectiveness of treatments can be determined more accurately if data are gathered using

  11. Near-Port Air Quality Assessment Utilizing a Mobile Monitoring Approach

    Data.gov (United States)

    U.S. Environmental Protection Agency — Near-Port Air Quality Assessment Utilizing a Mobile Monitoring Approach. This dataset is associated with the following publication: Steffens, J., S. Kimbrough, R....

  12. The future of monitoring in clinical research - a holistic approach: linking risk-based monitoring with quality management principles.

    Science.gov (United States)

    Ansmann, Eva B; Hecht, Arthur; Henn, Doris K; Leptien, Sabine; Stelzer, Hans Günther

    2013-01-01

For several years, risk-based monitoring has been the new "magic bullet" for improvement in clinical research. Many authors in clinical research, ranging from industry and academia to authorities, are keen on demonstrating better monitoring efficiency by reducing monitoring visits, monitoring time on site, monitoring costs and so on, always arguing from risk-based monitoring principles. Mostly forgotten is the fact that the use of risk-based monitoring is only adequate if all mandatory prerequisites at the site, for the monitor and for the sponsor are fulfilled. Based on the relevant chapter in ICH GCP (International Conference on Harmonisation of technical requirements for registration of pharmaceuticals for human use - Good Clinical Practice), this publication takes a holistic approach by identifying and describing the requirements for future monitoring and the use of risk-based monitoring. As the authors are operational managers as well as QA (Quality Assurance) experts, both aspects are represented to arrive at efficient and high-quality ways of future monitoring according to ICH GCP.

  13. Generating evidence on a risk-based monitoring approach in the academic setting – lessons learned

    Directory of Open Access Journals (Sweden)

    Belinda von Niederhäusern

    2017-02-01

Full Text Available Abstract Background In spite of efforts to employ risk-based strategies to increase monitoring efficiency in the academic setting, empirical evidence on their effectiveness remains sparse. This mixed-methods study aimed to evaluate the risk-based on-site monitoring approach currently followed at our academic institution. Methods We selected all studies monitored by the Clinical Trial Unit (CTU) according to Risk ADApted MONitoring (ADAMON) at the University Hospital Basel, Switzerland, between 01.01.2012 and 31.12.2014. We extracted study characteristics and monitoring information from the CTU Enterprise Resource Management system and from monitoring reports of all selected studies. We summarized the data descriptively. Additionally, we conducted semi-structured interviews with the three current CTU monitors. Results During the observation period, a total of 214 monitoring visits were conducted in 43 studies, resulting in 2961 documented monitoring findings. Our risk-based approach predominantly identified administrative (46.2%) and patient rights findings (49.1%). We identified observational study design, high ADAMON risk category, industry sponsorship, the presence of an electronic database, experienced site staff, and inclusion of a vulnerable study population to be factors associated with lower numbers of findings. The monitors understand the positive aspects of a risk-based approach but fear missing systematic errors due to the low frequency of visits. Conclusions We show that the factors most strongly increasing the risk of on-site monitoring findings are underrepresented in the current risk analysis scheme. Our risk-based on-site approach should further be complemented by centralized data checks, allowing monitors to transform their role towards partners for overall trial quality and success.

  14. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    Directory of Open Access Journals (Sweden)

    Akemi Gálvez

    2013-01-01

    Full Text Available Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor’s method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
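
    The structural idea in the abstract above, that once the nonlinear variables (knots, data parameterization) are held fixed the remaining fit is a linear least-squares problem, can be sketched as follows. This is a minimal polynomial stand-in solved via the normal equations, not the paper's B-spline/SVD implementation; the data points and degree are illustrative assumptions:

```python
def lstsq_fit(ts, ys, degree=2):
    """With the parameter values ts (the nonlinear variables) held fixed,
    fitting polynomial coefficients to ys is linear least squares:
    solve the normal equations (A^T A) c = A^T y."""
    A = [[t ** j for j in range(degree + 1)] for t in ts]
    n = degree + 1
    # Build the normal equations
    M = [[sum(A[i][r] * A[i][c] for i in range(len(ts))) for c in range(n)]
         for r in range(n)]
    b = [sum(A[i][r] * ys[i] for i in range(len(ts))) for r in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        coeffs[r] = (b[r] - sum(M[r][c] * coeffs[c]
                                for c in range(r + 1, n))) / M[r][r]
    return coeffs

ts = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [t * t for t in ts]            # data lying exactly on y = t^2
coeffs = lstsq_fit(ts, ys, degree=2)  # recovers [0, 0, 1] up to rounding
```

    In the paper's setting, a metaheuristic (the firefly algorithm) searches over the nonlinear variables while each candidate is scored by solving a linear subproblem like this one, there via singular value decomposition rather than normal equations.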

  15. Real-time risk monitoring in business processes : a sensor-based approach

    NARCIS (Netherlands)

    Conforti, R.; La Rosa, M.; Fortino, G.; Hofstede, ter A.H.M.; Recker, J.; Adams, M.

    2013-01-01

    This article proposes an approach for real-time monitoring of risks in executable business process models. The approach considers risks in all phases of the business process management lifecycle, from process design, where risks are defined on top of process models, through to process diagnosis,

  16. [Research and design for optimal position of electrocardio-electrodes in monitoring clothing for men].

    Science.gov (United States)

    Liang, Lijun; Hu, Yao; Liu, Hao; Li, Xiaojiu; Li, Jin; He, Yin

    2017-04-01

In order to reduce the mortality rate of cardiovascular disease patients effectively, improve the accuracy of electrocardiogram (ECG) signal acquisition, and reduce the motion artifacts caused by electrodes placed at inappropriate locations in clothing for ECG measurement, this article presents research on the optimal placement of ECG electrodes in male clothing using a three-lead monitoring method. We selected test points in the three-lead ECG monitoring clothing for men. By comparing the ECG traces and the power spectra of the ECG signals acquired at each group of points, we determined the best locations for ECG electrodes in the male monitoring clothing. Motion artifacts caused by improper electrode location were significantly reduced when the electrodes were placed at the best positions in the clothing. Electrode position is crucial for ECG monitoring clothing: the stability of the acquired ECG signal improves significantly when electrodes are placed at optimal locations.

  17. A Bayesian maximum entropy-based methodology for optimal spatiotemporal design of groundwater monitoring networks.

    Science.gov (United States)

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern and a new method, which is proposed to assign a removal priority number to each pre-existing station. To design the temporal sampling, a new approach is also applied to account for uncertainty caused by lack of information. In this approach, different time lag values are tested against another source of information: the simulation results of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in available monitoring data, the flexibility of the BME interpolation technique is exploited by applying soft data, improving the accuracy of the calculations. To examine the methodology, it is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations on a regular hexagonal grid of side length 3600 m is proposed, with a time lag between samples of 5 weeks. Since the variance estimation errors of the BME method are almost identical for the redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing monitoring network with 52 stations and monthly sampling frequency.

  18. A general approach for optimal kinematic design of 6-DOF parallel ...

    Indian Academy of Sciences (India)

    Optimal kinematic design of parallel manipulators is a challenging problem. In this work, an attempt has been made to present a generalized approach of kinematic design for a 6-legged parallel manipulator, by considering only the minimally required design parameters. The same approach has been used to design a ...

  19. Method of transient identification based on a possibilistic approach, optimized by genetic algorithm

    International Nuclear Information System (INIS)

    Almeida, Jose Carlos Soares de

    2001-02-01

This work develops a method for transient identification based on a possibilistic approach, optimized by a genetic algorithm that optimizes the number of centroids of the classes that represent the transients. The basic idea of the proposed method is to optimize the partition of the search space, generating subsets within the classes of a partition, defined as subclasses, whose centroids are able to distinguish the classes with the maximum number of correct classifications. The interpretation of the subclasses as fuzzy sets and the possibilistic approach provide a heuristic to establish influence zones for the centroids, allowing the 'don't know' answer to be given for unknown transients, that is, those outside the training set. (author)

  20. Parameterization of Fuel-Optimal Synchronous Approach Trajectories to Tumbling Targets

    Directory of Open Access Journals (Sweden)

    David Charles Sternberg

    2018-04-01

    Full Text Available Docking with potentially tumbling Targets is a common element of many mission architectures, including on-orbit servicing and active debris removal. This paper studies synchronized docking trajectories as a way to ensure the Chaser satellite remains on the docking axis of the tumbling Target, thereby reducing collision risks and enabling persistent onboard sensing of the docking location. Chaser satellites have limited computational power available to them and the time allowed for the determination of a fuel optimal trajectory may be limited. Consequently, parameterized trajectories that approximate the fuel optimal trajectory while following synchronous approaches may be used to provide a computationally efficient means of determining near optimal trajectories to a tumbling Target. This paper presents a method of balancing the computation cost with the added fuel expenditure required for parameterization, including the selection of a parameterization scheme, the number of parameters in the parameterization, and a means of incorporating the dynamics of a tumbling satellite into the parameterization process. Comparisons of the parameterized trajectories are made with the fuel optimal trajectory, which is computed through the numerical propagation of Euler’s equations. Additionally, various tumble types are considered to demonstrate the efficacy of the presented computation scheme. With this parameterized trajectory determination method, Chaser satellites may perform terminal approach and docking maneuvers with both fuel and computational efficiency.

  1. TH-E-209-02: Dose Monitoring and Protocol Optimization: The Pediatric Perspective

    International Nuclear Information System (INIS)

    MacDougall, R.

    2016-01-01

Radiation dose monitoring solutions have opened up new opportunities for medical physicists to be more involved in modern clinical radiology practices. In particular, with the help of comprehensive radiation dose data, data-driven protocol management and informed case follow-up are now feasible. Significant challenges remain, however, and the problems faced by medical physicists are highly heterogeneous. Imaging systems from multiple vendors and a wide range of vintages co-exist in the same department and employ data communication protocols that are not fully standardized or implemented, making harmonization complex. Many different solutions for radiation dose monitoring have been implemented by imaging facilities over the past few years. Such systems are based on commercial software, home-grown IT solutions, manual PACS data dumping, etc., and diverse pathways can be used to bring the data to impact clinical practice. The speakers will share their experiences with creating or tailoring radiation dose monitoring/management systems and procedures over the past few years, which vary significantly in design and scope. Topics to cover: (1) fluoroscopic dose monitoring and high radiation event handling from a large academic hospital; (2) dose monitoring and protocol optimization in pediatric radiology; and (3) development of a home-grown IT solution and dose data analysis framework. Learning Objectives: Describe the scope and range of radiation dose monitoring and protocol management in a modern radiology practice. Review examples of data available from a variety of systems and how it is managed and conveyed. Reflect on the role of the physicist in radiation dose awareness.

  2. TH-E-209-02: Dose Monitoring and Protocol Optimization: The Pediatric Perspective

    Energy Technology Data Exchange (ETDEWEB)

    MacDougall, R. [Boston Children’s Hospital (United States)

    2016-06-15

Radiation dose monitoring solutions have opened up new opportunities for medical physicists to be more involved in modern clinical radiology practices. In particular, with the help of comprehensive radiation dose data, data-driven protocol management and informed case follow-up are now feasible. Significant challenges remain, however, and the problems faced by medical physicists are highly heterogeneous. Imaging systems from multiple vendors and a wide range of vintages co-exist in the same department and employ data communication protocols that are not fully standardized or implemented, making harmonization complex. Many different solutions for radiation dose monitoring have been implemented by imaging facilities over the past few years. Such systems are based on commercial software, home-grown IT solutions, manual PACS data dumping, etc., and diverse pathways can be used to bring the data to impact clinical practice. The speakers will share their experiences with creating or tailoring radiation dose monitoring/management systems and procedures over the past few years, which vary significantly in design and scope. Topics to cover: (1) fluoroscopic dose monitoring and high radiation event handling from a large academic hospital; (2) dose monitoring and protocol optimization in pediatric radiology; and (3) development of a home-grown IT solution and dose data analysis framework. Learning Objectives: Describe the scope and range of radiation dose monitoring and protocol management in a modern radiology practice. Review examples of data available from a variety of systems and how it is managed and conveyed. Reflect on the role of the physicist in radiation dose awareness.

  3. Geometry Optimization Approaches of Inductively Coupled Printed Spiral Coils for Remote Powering of Implantable Biomedical Sensors

    Directory of Open Access Journals (Sweden)

    Sondos Mehri

    2016-01-01

Full Text Available Electronic biomedical implantable sensors need power to perform. Among the main reported approaches, the inductive link is the most commonly used method for remote powering of such devices. Power efficiency is the most important characteristic to be considered when designing inductive links to transfer energy to implantable biomedical sensors. The maximum power efficiency is obtained for maximum coupling and quality factors of the coils and is generally limited, as the coupling between the inductors is usually very small. This paper deals with the geometry optimization of inductively coupled printed spiral coils for powering a given implantable sensor system. To this end, Iterative Procedure (IP) and Genetic Algorithm (GA) analytic optimization approaches are proposed. Both of these approaches implement simple mathematical models that approximate the coil parameters and the link efficiency values. Using numerical simulations based on the Finite Element Method (FEM) and with experimental validation, the proposed analytic approaches are shown to improve on the performance of a reference design case. The analytical GA and IP optimization methods are also compared to a purely numerical optimization approach based on the Finite Element Method (GA-FEM). Numerical and experimental validations confirmed the accuracy and the effectiveness of the analytical optimization approaches in designing optimal coil geometries for the best efficiency values.

  4. PID control design for chaotic synchronization using a tribes optimization approach

    Energy Technology Data Exchange (ETDEWEB)

    Santos Coelho, Leandro dos [Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Pontifical Catholic University of Parana, PUCPR, Imaculada Conceicao, 1155, 80215-901 Curitiba, Parana (Brazil)], E-mail: leandro.coelho@pucpr.br; Andrade Bernert, Diego Luis de [Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Pontifical Catholic University of Parana, PUCPR, Imaculada Conceicao, 1155, 80215-901 Curitiba, Parana (Brazil)], E-mail: dbernert@gmail.com

    2009-10-15

Recently, the investigation of synchronization and control problems for discrete chaotic systems has stimulated a wide range of research activity, including both theoretical studies and practical applications. This paper deals with the tuning of a proportional-integral-derivative (PID) controller using a modified Tribes optimization algorithm based on a truncated chaotic Zaslavskii map (MTribes) for synchronization of two identical discrete chaotic systems subject to different initial conditions. The Tribes algorithm is inspired by the social behavior of bird flocking and is an adaptive optimization procedure that does not require sociometric or swarm size parameter tuning. Numerical simulations are given to show the effectiveness of the proposed synchronization method. In addition, some comparisons of the MTribes optimization algorithm with other continuous optimization methods, including the classical Tribes algorithm and particle swarm optimization approaches, are presented.

  5. PID control design for chaotic synchronization using a tribes optimization approach

    International Nuclear Information System (INIS)

    Santos Coelho, Leandro dos; Andrade Bernert, Diego Luis de

    2009-01-01

Recently, the investigation of synchronization and control problems for discrete chaotic systems has stimulated a wide range of research activity, including both theoretical studies and practical applications. This paper deals with the tuning of a proportional-integral-derivative (PID) controller using a modified Tribes optimization algorithm based on a truncated chaotic Zaslavskii map (MTribes) for synchronization of two identical discrete chaotic systems subject to different initial conditions. The Tribes algorithm is inspired by the social behavior of bird flocking and is an adaptive optimization procedure that does not require sociometric or swarm size parameter tuning. Numerical simulations are given to show the effectiveness of the proposed synchronization method. In addition, some comparisons of the MTribes optimization algorithm with other continuous optimization methods, including the classical Tribes algorithm and particle swarm optimization approaches, are presented.
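
    As a rough sketch of the setup in these two records, the fragment below synchronizes two identical logistic maps started from different initial conditions using a discrete PID controller on the slave system, and tunes the gains by plain random search (a stand-in assumption for the MTribes algorithm; the map, gain ranges, and cost function are all illustrative):

```python
import random

def sync_cost(gains, steps=60):
    """Run master/slave logistic maps with a PID controller on the slave
    and return the accumulated absolute synchronization error."""
    kp, ki, kd = gains
    x, y = 0.3, 0.7                 # different initial conditions
    integ, prev_e, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = x - y                   # synchronization error
        integ += e
        u = kp * e + ki * integ + kd * (e - prev_e)   # discrete PID law
        prev_e = e
        x = 4.0 * x * (1.0 - x)                 # master (uncontrolled)
        y = 4.0 * y * (1.0 - y) + u             # slave (controlled)
        y = min(max(y, 0.0), 1.0)               # keep slave state in [0, 1]
        cost += abs(e)
    return cost

random.seed(1)
baseline = sync_cost((0.0, 0.0, 0.0))           # uncontrolled error
candidates = [(random.uniform(0, 2), random.uniform(0, 0.5), random.uniform(0, 1))
              for _ in range(200)]
best = min(candidates + [(0.0, 0.0, 0.0)], key=sync_cost)
```

    Any population-based optimizer (Tribes, PSO, a GA) can replace the random search; only the cost function, which scores a full simulated trajectory per candidate gain set, is specific to the synchronization task.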

  6. Systems approach for design control at Monitored Retrievable Storage Project

    International Nuclear Information System (INIS)

    Kumar, P.N.; Williams, J.R.

    1994-01-01

    This paper describes the systems approach in establishing design control for the Monitored Retrievable Storage Project design development. Key elements in design control are enumerated and systems engineering aspects are detailed. Application of lessons learned from the Yucca Mountain Project experience is addressed. An integrated approach combining quality assurance and systems engineering requirements is suggested to practice effective design control

  7. Optimizing Maintenance Planning in the Production Industry Using the Markovian Approach

    Directory of Open Access Journals (Sweden)

    B Kareem

    2012-12-01

Full Text Available Maintenance is an essential activity in every manufacturing establishment, as manufacturing effectiveness depends on the functionality of production equipment and machinery in terms of productivity and operational life. Maintenance cost minimization can be achieved by adopting an appropriate maintenance planning policy. This paper applies the Markovian approach to maintenance planning decisions, generating an optimal maintenance policy from the identified alternatives over a specified period of time. Markov chains, transition matrices, decision processes, and dynamic programming models were formulated for a decision problem related to the maintenance operations of a cable production company. Preventive and corrective maintenance data based on workloads and costs were collected from the company and utilized in this study. The results showed variability in the choice of the optimal maintenance policy adopted in the case study; post-optimality analysis of the process buttressed this claim. The proposed approach is promising for solving the maintenance scheduling decision problems of the company.
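
    The Markovian planning idea described above can be illustrated with a tiny two-state Markov decision process solved by value iteration; the states, actions, costs, and transition probabilities below are illustrative assumptions, not the company's data:

```python
# States: machine "good" or "failed"; one action chosen per period.
# transitions[state][action] = (immediate cost, {next_state: probability})
transitions = {
    "good": {
        "run":        (0.0,  {"good": 0.7, "failed": 0.3}),
        "preventive": (2.0,  {"good": 1.0}),
    },
    "failed": {
        "corrective": (10.0, {"good": 1.0}),
    },
}

def value_iteration(gamma=0.9, sweeps=500):
    """Minimize expected discounted maintenance cost by value iteration."""
    V = {s: 0.0 for s in transitions}
    for _ in range(sweeps):
        V = {s: min(c + gamma * sum(p * V[t] for t, p in nxt.items())
                    for c, nxt in transitions[s].values())
             for s in transitions}
    # Greedy policy with respect to the converged values
    policy = {s: min(transitions[s],
                     key=lambda a: transitions[s][a][0]
                     + gamma * sum(p * V[t]
                                   for t, p in transitions[s][a][1].items()))
              for s in transitions}
    return V, policy

V, policy = value_iteration()
```

    With these numbers, a cheap preventive action each period beats running to failure; changing the cost ratio or failure probability flips the optimal policy, which is the kind of sensitivity the post-optimality analysis in the paper examines.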

  8. System Approach of Logistic Costs Optimization Solution in Supply Chain

    OpenAIRE

    Majerčák, Peter; Masárová, Gabriela; Buc, Daniel; Majerčáková, Eva

    2013-01-01

This paper is focused on the possibility of using cost simulation in the supply chain, where costs are at a relatively high level. Our goal is to determine the costs using logistic costs optimization, which must necessarily be used in business activities in supply chain management. The paper emphasizes the need to perform optimization across the whole supply chain rather than in isolation. Our goal is to compare the classic approach, in which every part tracks its costs in isolation and tries to minimize them, with the system (l...

  9. Inverse Reliability Task: Artificial Neural Networks and Reliability-Based Optimization Approaches

    OpenAIRE

    Lehký , David; Slowik , Ondřej; Novák , Drahomír

    2014-01-01

Part 7: Genetic Algorithms; International audience; The paper presents two alternative approaches to solving the inverse reliability task: determining the design parameters that achieve desired target reliabilities. The first approach is based on the utilization of artificial neural networks and small-sample simulation by Latin hypercube sampling. The second approach treats the inverse reliability task as a reliability-based optimization task using the double-loop method, again with small-sample simulation. Efficie...

  10. Establishment of a hydrological monitoring network in a tropical African catchment: An integrated participatory approach

    Science.gov (United States)

    Gomani, M. C.; Dietrich, O.; Lischeid, G.; Mahoo, H.; Mahay, F.; Mbilinyi, B.; Sarmett, J.

    Sound decision making for water resources management has to be based on good knowledge of the dominant hydrological processes of a catchment. This information can only be obtained through establishing suitable hydrological monitoring networks. Research catchments are typically established without involving the key stakeholders, which results in instruments being installed at inappropriate places as well as at high risk of theft and vandalism. This paper presents an integrated participatory approach for establishing a hydrological monitoring network. We propose a framework with six steps beginning with (i) inception of idea; (ii) stakeholder identification; (iii) defining the scope of the network; (iv) installation; (v) monitoring; and (vi) feedback mechanism integrated within the participatory framework. The approach is illustrated using an example of the Ngerengere catchment in Tanzania. In applying the approach, the concept of establishing the Ngerengere catchment monitoring network was initiated in 2008 within the Resilient Agro-landscapes to Climate Change in Tanzania (ReACCT) research program. The main stakeholders included: local communities; Sokoine University of Agriculture; Wami Ruvu Basin Water Office and the ReACCT Research team. The scope of the network was based on expert experience in similar projects and lessons learnt from literature review of similar projects from elsewhere integrated with local expert knowledge. The installations involved reconnaissance surveys, detailed surveys, and expert consultations to identify best sites. First, a Digital Elevation Model, land use, and soil maps were used to identify potential monitoring sites. Local and expert knowledge was collected on flow regimes, indicators of shallow groundwater plant species, precipitation pattern, vegetation, and soil types. This information was integrated and used to select sites for installation of an automatic weather station, automatic rain gauges, river flow gauging stations

  11. Individual aircraft life monitoring: An engineering approach for fatigue damage evaluation

    Directory of Open Access Journals (Sweden)

    Rui JIAO

    2018-04-01

    Full Text Available Individual aircraft life monitoring is required to ensure the safety and economy of aircraft structures, and fatigue damage evaluation based on collected operational data of the aircraft is an integral part of it. To improve accuracy and facilitate application, this paper proposes an engineering approach to evaluate fatigue damage and predict fatigue life for critical structures in fatigue monitoring. In this approach, the traditional nominal stress method is applied to back-calculate the S-N curve parameters of realistic structural details from full-scale fatigue test data. The S-N curve and Miner's rule are then adopted for damage estimation and fatigue life analysis of critical locations under individual load spectra. The relationship between small crack length and fatigue life can also be predicted with this approach. Specimens of 7B04-T74 aluminum alloy and TA15M titanium alloy were fatigue tested under two types of load spectra, and there is good agreement between the experimental and analysis results. Furthermore, the issue of the scatter factor in individual aircraft damage estimation is also discussed. Keywords: Fatigue damage, Fatigue monitoring, Fatigue test, Scatter factor, S-N curve
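The damage-accumulation scheme described in this abstract (an S-N curve feeding Miner's rule) can be sketched in a few lines. This is a generic illustration only: the Basquin-type curve and the parameters `C` and `m` below are invented stand-ins, not the calibrated values back-calculated from the full-scale test data in the paper.

```python
# Sketch of fatigue damage accumulation with an S-N curve and Miner's rule.
# The S-N parameters C and m are illustrative, not calibrated values.

def cycles_to_failure(stress, C=1e12, m=3.0):
    """Basquin-type S-N curve: N = C * S**(-m)."""
    return C * stress ** (-m)

def miner_damage(spectrum):
    """spectrum: list of (stress_amplitude, applied_cycles) pairs.
    Returns accumulated damage D; failure is predicted at D >= 1."""
    return sum(n / cycles_to_failure(s) for s, n in spectrum)

# One block of an individual load spectrum: (stress in MPa, cycles per flight)
block = [(120.0, 200), (180.0, 50), (250.0, 5)]
damage_per_flight = miner_damage(block)
predicted_flights = 1.0 / damage_per_flight  # life until D = 1
```

In a real fatigue-monitoring workflow, `block` would come from rainflow-counted operational data and the S-N parameters from test calibration.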

  12. On Generating Optimal Signal Probabilities for Random Tests: A Genetic Approach

    Directory of Open Access Journals (Sweden)

    M. Srinivas

    1996-01-01

    Full Text Available Genetic Algorithms are robust search and optimization techniques. A Genetic Algorithm based approach for determining the optimal input distributions for generating random test vectors is proposed in the paper. A cost function based on the COP testability measure for determining the efficacy of the input distributions is discussed. A brief overview of Genetic Algorithms (GAs) and the specific details of our implementation are described. Experimental results based on ISCAS-85 benchmark circuits are presented. The performance of our GA-based approach is compared with previous results. While the GA generates more efficient input distributions than the previous methods, which are based on gradient descent search, the overheads of the GA in computing the input distributions are larger.
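The GA machinery the abstract describes (a population of input-probability vectors evolved against a testability cost function) can be sketched minimally. The fitness below is an invented "switching activity" proxy, not the COP-based cost function from the paper; population size, generations, and operators are likewise illustrative.

```python
import random

# Minimal genetic algorithm evolving input signal probabilities for random
# test generation. The fitness is a stand-in activity proxy (maximal when
# every probability is 0.5), not the paper's COP testability cost function.

N_INPUTS, POP, GENS = 8, 30, 60

def fitness(p):
    return sum(x * (1 - x) for x in p)   # toy proxy for fault detectability

def mutate(p, rate=0.1):
    return [min(1.0, max(0.0, x + random.gauss(0, 0.1)))
            if random.random() < rate else x for x in p]

def crossover(a, b):
    cut = random.randrange(1, N_INPUTS)  # single-point crossover
    return a[:cut] + b[cut:]

random.seed(0)
pop = [[random.random() for _ in range(N_INPUTS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 3]               # keep the best third unchanged
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]
best = max(pop, key=fitness)
```

A real implementation would replace `fitness` with a circuit-level testability estimate evaluated on the ISCAS-85 netlists.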

  13. An optimization approach for fitting canonical tensor decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M. (Sandia National Laboratories, Albuquerque, NM); Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
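The ALS baseline that the report compares its gradient-based methods against can be sketched with NumPy for a dense 3-way tensor. The unfolding conventions, rank, and iteration count below are illustrative choices, not the report's implementation.

```python
import numpy as np

# Sketch of alternating least squares (ALS) for a rank-R CP decomposition
# of a 3-way tensor X ~ sum_r A[:,r] o B[:,r] o C[:,r]. Gradient-based
# variants (as the report advocates) would update A, B, C jointly instead.

def khatri_rao(B, C):
    # column-wise Kronecker product, shape (J*K, R)
    R = B.shape[1]
    return np.einsum('ir,jr->ijr', B, C).reshape(-1, R)

def cp_als(X, R, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    X1 = X.reshape(I, -1)                     # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, -1)  # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, -1)  # mode-3 unfolding
    for _ in range(iters):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C
```

Each inner update is a linear least-squares solve, which is what makes ALS fast per iteration; the accuracy issues the abstract mentions arise because the three factor updates are decoupled.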

  14. Optimal design of permanent magnet flux switching generator for wind applications via artificial neural network and multi-objective particle swarm optimization hybrid approach

    International Nuclear Information System (INIS)

    Meo, Santolo; Zohoori, Alireza; Vahedi, Abolfazl

    2016-01-01

    Highlights: • A new optimal design of a flux switching permanent magnet generator is developed. • A prototype is employed to validate the numerical data used for optimization. • A novel hybrid multi-objective particle swarm optimization approach is proposed. • Optimization targets are weight, cost, voltage and its total harmonic distortion. • The superiority of the hybrid approach is demonstrated in comparison with other optimization methods. - Abstract: In this paper a new hybrid approach, obtained by combining multi-objective particle swarm optimization and artificial neural networks, is proposed for the design optimization of a direct-drive permanent magnet flux switching generator for low power wind applications. The targets of the proposed multi-objective optimization are to reduce the cost and weight of the machine while maximizing the amplitude of the induced voltage as well as minimizing its total harmonic distortion. The permanent magnet width, the stator and rotor tooth width, the rotor teeth number and stator pole number of the machine define the search space for the optimization problem. Four supervised artificial neural networks are designed for modeling the complex relationships of the weight, the cost, the amplitude and the total harmonic distortion of the output voltage with respect to the quantities of the search space. Finite element analysis is adopted to generate the training dataset for the artificial neural networks. The finite element analysis based model is verified by experimental results with a 1.5 kW permanent magnet flux switching generator prototype suitable for renewable energy applications, having 6/19 stator poles/rotor teeth. Finally the effectiveness of the proposed hybrid procedure is compared with the results given by conventional multi-objective optimization algorithms. The obtained results show the soundness of the proposed multi objective optimization technique and its feasibility to be adopted as suitable methodology for optimal design of permanent

  15. Monitoring the shorebirds of North America: Towards a unified approach

    Science.gov (United States)

    Skagen, S.K.; Bart, J.; Andres, B.; Brown, S.; Donaldson, G.; Harrington, B.; Johnston, V.; Jones, S.L.; Morrison, R.I.G.

    2003-01-01

    The Program for Regional and International Shorebird Monitoring (PRISM) has recently developed a single blueprint for monitoring shorebirds in Canada and the United States in response to needs identified by recent shorebird conservation plans. The goals of PRISM are to: (1) estimate the size of breeding populations of 74 shorebird taxa in North America; (2) describe the distribution, abundance, and habitat relationships for these taxa; (3) monitor trends in shorebird population size; (4) monitor shorebird numbers at stopover locations; and (5) assist local managers in meeting their shorebird conservation goals. The initial focus has been on developing methods to estimate trends in population size. A three-part approach for estimating trends includes: (1) breeding surveys in arctic, boreal, and temperate regions; (2) migration surveys; and (3) wintering surveys.

  16. Minimization of the LCA impact of thermodynamic cycles using a combined simulation-optimization approach

    International Nuclear Information System (INIS)

    Brunet, Robert; Cortés, Daniel; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Boer, Dieter

    2012-01-01

    This work presents a computational approach for the simultaneous minimization of the total cost and environmental impact of thermodynamic cycles. Our method combines process simulation, multi-objective optimization and life cycle assessment (LCA) within a unified framework that identifies in a systematic manner optimal design and operating conditions according to several economic and LCA impacts. Our approach takes advantage of the complementary strengths of process simulation (in which mass, energy balances and thermodynamic calculations are implemented in an easy manner) and rigorous deterministic optimization tools. We demonstrate the capabilities of this strategy by means of two case studies in which we address the design of a 10 MW Rankine cycle modeled in Aspen Hysys, and a 90 kW ammonia-water absorption cooling cycle implemented in Aspen Plus. Numerical results show that it is possible to achieve environmental and cost savings using our rigorous approach. - Highlights: ► Novel framework for the optimal design of thermodynamic cycles. ► Combined use of simulation and optimization tools. ► Optimal design and operating conditions according to several economic and LCA impacts. ► Design of a 10 MW Rankine cycle in Aspen Hysys, and a 90 kW absorption cycle in Aspen Plus.

  17. Minimization of the LCA impact of thermodynamic cycles using a combined simulation-optimization approach

    Energy Technology Data Exchange (ETDEWEB)

    Brunet, Robert; Cortes, Daniel [Departament d' Enginyeria Quimica, Escola Tecnica Superior d' Enginyeria Quimica, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007 Tarragona (Spain); Guillen-Gosalbez, Gonzalo [Departament d' Enginyeria Quimica, Escola Tecnica Superior d' Enginyeria Quimica, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007 Tarragona (Spain); Jimenez, Laureano [Departament d' Enginyeria Quimica, Escola Tecnica Superior d' Enginyeria Quimica, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007 Tarragona (Spain); Boer, Dieter [Departament d' Enginyeria Mecanica, Escola Tecnica Superior d' Enginyeria, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007, Tarragona (Spain)

    2012-12-15

    This work presents a computational approach for the simultaneous minimization of the total cost and environmental impact of thermodynamic cycles. Our method combines process simulation, multi-objective optimization and life cycle assessment (LCA) within a unified framework that identifies in a systematic manner optimal design and operating conditions according to several economic and LCA impacts. Our approach takes advantage of the complementary strengths of process simulation (in which mass, energy balances and thermodynamic calculations are implemented in an easy manner) and rigorous deterministic optimization tools. We demonstrate the capabilities of this strategy by means of two case studies in which we address the design of a 10 MW Rankine cycle modeled in Aspen Hysys, and a 90 kW ammonia-water absorption cooling cycle implemented in Aspen Plus. Numerical results show that it is possible to achieve environmental and cost savings using our rigorous approach. - Highlights: ► Novel framework for the optimal design of thermodynamic cycles. ► Combined use of simulation and optimization tools. ► Optimal design and operating conditions according to several economic and LCA impacts. ► Design of a 10 MW Rankine cycle in Aspen Hysys, and a 90 kW absorption cycle in Aspen Plus.

  18. Bifurcation-based approach reveals synergism and optimal combinatorial perturbation.

    Science.gov (United States)

    Liu, Yanwei; Li, Shanshan; Liu, Zengrong; Wang, Ruiqi

    2016-06-01

    Cells accomplish the process of fate decisions and form terminal lineages through a series of binary choices in which cells switch stable states from one branch to another as the interacting strengths of regulatory factors continuously vary. Various combinatorial effects may occur because almost all regulatory processes are managed in a combinatorial fashion. Combinatorial regulation is crucial for cell fate decisions because it may effectively integrate many different signaling pathways to meet the higher regulation demand during cell development. However, whether the contribution of combinatorial regulation to the state transition is better than that of a single regulation and, if so, what the optimal combination strategy is, are significant issues from the point of view of both biology and mathematics. Using the approaches of combinatorial perturbations and bifurcation analysis, we provide a general framework for the quantitative analysis of synergism in molecular networks. Different from known methods, the bifurcation-based approach depends only on stable state responses to stimuli, because the state transition induced by combinatorial perturbations occurs between stable states. More importantly, an optimal combinatorial perturbation strategy can be determined by investigating the relationship between the bifurcation curve of a synergistic perturbation pair and the level set of a specific objective function. The approach is applied to two models, i.e., a theoretical multistable decision model and a biologically realistic CREB model, to show its validity, although the approach holds for a general class of biological systems.

  19. In-well time-of-travel approach to evaluate optimal purge duration during low-flow sampling of monitoring wells

    Science.gov (United States)

    Harte, Philip T.

    2017-01-01

    A common assumption with groundwater sampling is that low (time until inflow from the high hydraulic conductivity part of the screened formation can travel vertically in the well to the pump intake. Therefore, the length of the time needed for adequate purging prior to sample collection (called optimal purge duration) is controlled by the in-well, vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time–volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), which were based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of arrival time of formation water, which has been postulated, then in-well, vertical flow may be an important factor at wells where low-flow sampling is the sample method of choice.
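The in-well travel-time idea behind this abstract can be illustrated with a back-of-envelope calculation: the time for formation water entering the screen to reach the pump intake is roughly the in-well distance divided by the vertical velocity Q/A. This assumes plug flow in the casing; the well geometry and pumping rate below are illustrative, not values from the study's wells.

```python
import math

# Back-of-envelope in-well vertical travel time during low-flow purging.
# Assumes plug flow: vertical velocity = pump rate / casing cross-section.

def purge_time_minutes(distance_m, pump_rate_lpm, well_diameter_m):
    area = math.pi * (well_diameter_m / 2.0) ** 2      # casing cross-section, m^2
    velocity = (pump_rate_lpm / 1000.0 / 60.0) / area  # m/s from L/min
    return distance_m / velocity / 60.0

# 5 cm (2-inch) well, 0.2 L/min low-flow rate, intake 1.5 m below the inflow zone
t_min = purge_time_minutes(1.5, 0.2, 0.05)
```

Numbers of this order (tens of minutes) are consistent with the abstract's point that in-well vertical flow, not just stabilization of field parameters, can control the optimal purge duration.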

  20. A data fusion approach for track monitoring from multiple in-service trains

    Science.gov (United States)

    Lederman, George; Chen, Siheng; Garrett, James H.; Kovačević, Jelena; Noh, Hae Young; Bielak, Jacobo

    2017-10-01

    We present a data fusion approach for enabling data-driven rail-infrastructure monitoring from multiple in-service trains. A number of researchers have proposed using vibration data collected from in-service trains as a low-cost method to monitor track geometry. The majority of this work has focused on developing novel features to extract information about the tracks from data produced by individual sensors on individual trains. We extend this work by presenting a technique to combine extracted features from multiple passes over the tracks from multiple sensors aboard multiple vehicles. There are a number of challenges in combining multiple data sources, like different relative position coordinates depending on the location of the sensor within the train. Furthermore, as the number of sensors increases, the likelihood that some will malfunction also increases. We use a two-step approach that first minimizes position offset errors through data alignment, then fuses the data with a novel adaptive Kalman filter that weights data according to its estimated reliability. We show the efficacy of this approach both through simulations and on a data-set collected from two instrumented trains operating over a one-year period. Combining data from numerous in-service trains allows for more continuous and more reliable data-driven monitoring than analyzing data from any one train alone; as the number of instrumented trains increases, the proposed fusion approach could facilitate track monitoring of entire rail-networks.
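The reliability-weighted fusion step can be sketched as a scalar Kalman update applied sequentially over passes, where each sensor's noise variance sets its weight. In the paper the reliability is estimated adaptively; here the variances are assumed known, and the numbers are invented.

```python
# Sketch of reliability-weighted fusion of repeated track measurements.
# Each pass contributes a scalar Kalman update; a sensor's noise variance
# controls its weight, so unreliable sensors contribute less.

def fuse(passes):
    """passes: list of (measurement, noise_variance) from many train passes.
    Returns (estimate, estimate_variance) after sequential Kalman updates."""
    est, var = passes[0]
    for z, r in passes[1:]:
        k = var / (var + r)          # Kalman gain: small when r is large
        est = est + k * (z - est)
        var = (1 - k) * var
    return est, var

# Three reliable passes and one from a noisy (possibly malfunctioning) sensor
passes = [(5.1, 0.04), (4.9, 0.04), (7.0, 4.0), (5.0, 0.04)]
estimate, variance = fuse(passes)
```

The outlier-like third pass barely shifts the estimate, which mirrors the paper's rationale for down-weighting data of low estimated reliability as the number of instrumented trains grows.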

  1. A Robot Trajectory Optimization Approach for Thermal Barrier Coatings Used for Free-Form Components

    Science.gov (United States)

    Cai, Zhenhua; Qi, Beichun; Tao, Chongyuan; Luo, Jie; Chen, Yuepeng; Xie, Changjun

    2017-10-01

    This paper is concerned with a robot trajectory optimization approach for thermal barrier coatings. As the requirements for high reproducibility of complex workpieces increase, an optimal thermal spraying trajectory should not only guarantee accurate control of the spray parameters defined by users (e.g., scanning speed, spray distance, scanning step, etc.) to achieve coating thickness homogeneity, but also help to homogenize the heat transfer distribution on the coating surface. A mesh-based trajectory generation approach is introduced in this work to generate path curves on a free-form component. Then, two types of meander trajectories are generated by applying different connection methods. Additionally, this paper presents a research approach for introducing heat transfer analysis into the trajectory planning process. Combining heat transfer analysis with trajectory planning overcomes the defects of traditional trajectory planning methods (e.g., local over-heating) and helps form a uniform temperature field by optimizing the time sequence of path curves. The influence of two different robot trajectories on the process of heat transfer is estimated by coupled FEM models, which demonstrates the effectiveness of the presented optimization approach.

  2. An Optimization-Based Impedance Approach for Robot Force Regulation with Prescribed Force Limits

    Directory of Open Access Journals (Sweden)

    R. de J. Portillo-Vélez

    2015-01-01

    Full Text Available An optimization-based approach for the regulation of excessive or insufficient forces at the end-effector level is introduced. The objective is to minimize the interaction force error at the robot end effector, while constraining undesired interaction forces. To that end, a dynamic optimization problem (DOP) is formulated considering a dynamic robot impedance model. Penalty functions are considered in the DOP to handle the constraints on the interaction force. The optimization problem is solved online through the gradient flow approach. Convergence properties are presented and stability is shown when the force limits are considered in the analysis. The effectiveness of our proposal is validated via experimental results for a robotic grasping task.

  3. Object-oriented Approach to High-level Network Monitoring and Management

    Science.gov (United States)

    Mukkamala, Ravi

    2000-01-01

    An absolute prerequisite for the management of large computer networks is the ability to measure their performance. Unless we monitor a system, we cannot hope to manage and control its performance. In this paper, we describe a network monitoring system that we are currently designing and implementing. Keeping in mind the complexity of the task and the required flexibility for future changes, we use an object-oriented design methodology. We are investigating methods to build high-level monitoring systems on top of existing monitoring tools. Due to the heterogeneous nature of the underlying systems at NASA Langley Research Center, we use an object-oriented approach for the design: first, we use UML (Unified Modeling Language) to model users' requirements; second, we identify the existing capabilities of the underlying monitoring system; third, we try to map the former with the latter. The system is built using the APIs offered by the HP OpenView system.

  4. Probability approaching method (PAM) and its application on fuel management optimization

    International Nuclear Information System (INIS)

    Liu, Z.; Hu, Y.; Shi, G.

    2004-01-01

    For the multi-cycle reloading optimization problem, a new solution scheme is presented. The multi-cycle problem is de-coupled into a number of relatively independent mono-cycle problems; this non-linear programming problem with complex constraints is then solved by a new algorithm, the probability approaching method (PAM), which is based on probability theory. Results on a simplified core model show the effectiveness of this new multi-cycle optimization scheme. (authors)

  5. Tool Wear Monitoring Using Time Series Analysis

    Science.gov (United States)

    Song, Dong Yeul; Ohara, Yasuhiro; Tamaki, Haruo; Suga, Masanobu

    A tool wear monitoring approach considering the nonlinear behavior of the cutting mechanism caused by tool wear and/or localized chipping is proposed, and its effectiveness is verified through cutting experiments and actual turning machining. Moreover, the variation in the surface roughness of the machined workpiece is also discussed using this approach. In this approach, the residual error between the actually measured vibration signal and the signal estimated from a time series model corresponding to the dynamic model of cutting is introduced as the diagnostic feature. Consequently, it is found that the early tool wear state (i.e., flank wear under 40 µm) can be monitored, and the optimal tool exchange time and the tool wear state in actual turning machining can be judged from this change in the residual error. Moreover, the variation of surface roughness Pz in the range of 3 to 8 µm can be estimated by monitoring the residual error.
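The residual-error idea in this record can be sketched as follows: fit a time-series (AR) model to vibration from a sharp tool, then watch the one-step prediction residual grow on new data. The signals below are synthetic stand-ins, not machining data, and the AR order is an arbitrary choice.

```python
import numpy as np

# Sketch of residual-error monitoring with an autoregressive (AR) model:
# fit on "healthy" vibration, then flag wear when the one-step prediction
# residual on new data grows. Signals are synthetic stand-ins.

def fit_ar(x, order=4):
    # least-squares AR coefficients: x[t] ~ sum_i a[i] * x[t-1-i]
    X = np.column_stack([x[order - 1 - i:len(x) - 1 - i] for i in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def residual_rms(x, a):
    order = len(a)
    X = np.column_stack([x[order - 1 - i:len(x) - 1 - i] for i in range(order)])
    return float(np.sqrt(np.mean((x[order:] - X @ a) ** 2)))

rng = np.random.default_rng(1)
t = np.arange(2000)
healthy = np.sin(0.3 * t) + 0.05 * rng.standard_normal(t.size)
# "worn" signal: extra frequency content and noise the healthy model cannot predict
worn = np.sin(0.3 * t) + 0.4 * np.sin(1.1 * t) + 0.2 * rng.standard_normal(t.size)
a = fit_ar(healthy)
```

In practice a threshold on the residual RMS (tracked over successive cuts) would serve as the tool-exchange criterion the abstract describes.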

  6. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    Science.gov (United States)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: the infiltration parameters are obtained in the first stage and the unit hydrograph ordinates are estimated in the second. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during the subsequent generations of the genetic algorithm required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, simple in concept, and has potential for field application.
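The core penalty-parameter idea (turning a constrained problem into an unconstrained one) can be sketched on a toy problem. The quadratic functions and plain gradient descent below are illustrative only; the paper couples the penalty with a genetic algorithm and a reduction factor on the penalty weight for the UH/infiltration problem.

```python
# Sketch of the quadratic-penalty idea: the constrained problem
#   minimize (x-3)^2 + (y-2)^2   subject to   x + y = 4
# becomes the unconstrained objective f + mu * g^2, solved here by
# plain gradient descent. Toy functions and parameters only.

def solve(mu=100.0, lr=0.004, steps=5000):
    x, y = 0.0, 0.0
    for _ in range(steps):
        g = x + y - 4.0                      # constraint violation
        dx = 2 * (x - 3.0) + 2 * mu * g      # gradient of penalized objective
        dy = 2 * (y - 2.0) + 2 * mu * g
        x, y = x - lr * dx, y - lr * dy
    return x, y
```

For this problem the true constrained optimum is (2.5, 1.5); with a large `mu` the penalized solution lands close to it while the constraint is nearly satisfied, which is the behavior the modified penalty approach exploits.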

  7. Integrated Optimization of Long-Range Underwater Signal Detection, Feature Extraction, and Classification for Nuclear Treaty Monitoring

    NARCIS (Netherlands)

    Tuma, M.; Rorbech, V.; Prior, M.; Igel, C.

    2016-01-01

    We designed and jointly optimized an integrated signal processing chain for detection and classification of long-range passive-acoustic underwater signals recorded by the global geophysical monitoring network of the Comprehensive Nuclear-Test-Ban Treaty Organization. Starting at the level of raw

  8. Tomographic Reconstruction from a Few Views: A Multi-Marginal Optimal Transport Approach

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, I., E-mail: isabelle.abraham@cea.fr [CEA Ile de France (France); Abraham, R., E-mail: romain.abraham@univ-orleans.fr; Bergounioux, M., E-mail: maitine.bergounioux@univ-orleans.fr [Université d’Orléans, UFR Sciences, MAPMO, UMR 7349 (France); Carlier, G., E-mail: carlier@ceremade.dauphine.fr [CEREMADE, UMR CNRS 7534, Université Paris IX Dauphine, Pl. de Lattre de Tassigny (France)

    2017-02-15

    In this article, we focus on tomographic reconstruction. The problem is to determine the shape of the interior interface using a tomographic approach while very few X-ray radiographs are performed. We use a multi-marginal optimal transport approach. Preliminary numerical results are presented.

  9. Taxes, subsidies and unemployment - a unified optimization approach

    Directory of Open Access Journals (Sweden)

    Erik Bajalinov

    2010-12-01

    Full Text Available Like a linear programming (LP) problem, a linear-fractional programming (LFP) problem can be usefully applied in a wide range of real-world applications. In the last few decades many research papers and monographs were published throughout the world in which authors (mainly mathematicians) investigated different theoretical and algorithmic aspects of LFP problems in various forms. In this paper we consider these two approaches to optimization (based on linear and linear-fractional objective functions) on the same feasible set, compare the results they lead to, and give an interpretation in terms of taxes, subsidies and manpower requirements. We show that in certain cases both approaches are closely connected with one another and may be fruitfully utilized simultaneously.
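The LP-versus-LFP comparison on a shared feasible set can be illustrated with a tiny example. Both a linear objective and a linear-fractional objective with positive denominator attain their optimum at a vertex of the feasible polytope, so for a small 2-D region it suffices to score the vertices. The polygon and coefficients below are invented for illustration.

```python
# Toy comparison of LP and LFP objectives over the same feasible set.
# Both objective types attain their optimum at a vertex of the polytope
# (denominator kept positive), so scoring the vertices suffices here.

vertices = [(0, 0), (4, 0), (4, 2), (2, 4), (0, 4)]  # feasible polygon

def lp_obj(x, y):
    return 3 * x + 2 * y                          # linear objective

def lfp_obj(x, y):
    return (3 * x + 2 * y + 1) / (x + y + 2)      # linear-fractional, denom > 0

best_lp = max(vertices, key=lambda v: lp_obj(*v))
best_lfp = max(vertices, key=lambda v: lfp_obj(*v))
```

Note the two objectives pick different vertices here, which is the kind of divergence the paper interprets economically (e.g., maximizing output versus maximizing output per unit of manpower).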

  10. Thematic Network on The role of Monitoring in a Phased Approach to Disposal - EUR 21025, 2004. Conclusions of the EC Thematic Network on The role of Monitoring in a Phased Approach to Geological Disposal of Radioactive Waste

    International Nuclear Information System (INIS)

    Barlow, Stephen

    2005-01-01

    The EC thematic network on the role of monitoring in a phased approach to the geological disposal of radioactive waste brought together expertise from twelve organisations from ten countries. It was started in 2001, following on from an earlier EC study of retrievability and reversibility (EUR 19145 EN), and completed in 2004 with publication of the final report (EUR 21025 EN). The project mainly aimed to: - Understand the approaches to monitoring in each national programme and their dependency on concepts and approaches. - Distil consensus views and recognise alternative approaches to monitoring. - Share technical knowledge and experience. - Communicate views and experiences. Participants in the project looked at various definitions of monitoring in relation to a phased approach to disposal, and achieved a consensus on the following: 'Continuous or periodic observations and measurements of engineering, environmental, radiological or other parameters and indicators/characteristics, to help evaluate the behaviour of components of the repository system, or the impacts of the repository and its operation on the environment, and to help in making decisions on the implementation of successive phases of the disposal concept'. That definition is mainly based on an IAEA definition with a few modifications, in particular the addition that monitoring has a role in making decisions. Various alternative approaches to making decisions and achieving goals were analysed, and the need was stressed for a flexible schedule with a degree of concept flexibility. The project achieved a consensus on the following principles: (i) monitoring has a role in underpinning and verifying operational safety (compliance monitoring); (ii) long-term (post-closure) safety must be assured by design - it cannot rely on monitoring, although monitoring may be implemented for other reasons; monitoring must not be detrimental to long-term (post-closure) safety; (iii) monitoring

  11. Topology Optimization of Constrained Layer Damping on Plates Using Method of Moving Asymptote (MMA) Approach

    Directory of Open Access Journals (Sweden)

    Zheng Ling

    2011-01-01

    Full Text Available Damping treatments have been extensively used as a powerful means to damp out structural resonant vibrations. Usually, damping materials fully cover the surface of plates. The drawbacks of this conventional treatment are obvious due to the added mass and excess material consumption, so it is not always economical and effective from an optimization design view. In this paper, a topology optimization approach is presented to maximize the modal damping ratio of a plate with constrained layer damping treatment. The governing equation of motion of the plate is derived on the basis of an energy approach. A finite element model to describe the dynamic performance of the plate is developed and used along with an optimization algorithm in order to determine the optimal topologies of the constrained layer damping layout on the plate. The damping of the visco-elastic layer is modeled by the complex modulus formula. Considering the vibration and energy dissipation mode of the plate with constrained layer damping treatment, damping material density and volume factor are considered as the design variable and constraint, respectively. Meanwhile, the modal damping ratio of the plate is assigned as the objective function in the topology optimization approach. The sensitivity of the modal damping ratio to the design variable is further derived and the Method of Moving Asymptote (MMA) is adopted to search for the optimized topologies of the constrained layer damping layout on the plate. Numerical examples are used to demonstrate the effectiveness of the proposed topology optimization approach. The results show that vibration energy dissipation of the plates can be enhanced by the optimal constrained layer damping layout. 
This optimal technology can be further extended to vibration attenuation of sandwich cylindrical shells which constitute the major building block of many critical structures such as cabins of aircrafts, hulls of submarines and bodies of rockets and missiles as an

  12. Approaches to monitoring biological outcomes for HPV vaccination: challenges of early adopter countries

    DEFF Research Database (Denmark)

    Wong, Charlene A; Saraiya, Mona; Hariri, Susan

    2011-01-01

    In this review, we describe plans to monitor the impact of human papillomavirus (HPV) vaccine on biologic outcomes in selected international areas (Australia, Canada, Mexico, the Nordic countries, Scotland, and the United States) that have adopted this vaccine. This summary of monitoring plans provides a background for discussing the challenges of vaccine monitoring in settings where resources and capacity may vary. A variety of approaches that depend on existing infrastructure and resources are planned or underway for monitoring HPV vaccine impact. Monitoring HPV vaccine impact on biologic...

  13. Optimizing communication satellites payload configuration with exact approaches

    Science.gov (United States)

    Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi

    2015-12-01

    The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.
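The bi-objective structure of the payload problem (minimize the longest channel path and the number of switch changes) can be shown on a toy instance small enough to brute-force. The switch-to-path mapping below is entirely invented; the paper solves realistic instances with ILP models and a solver rather than enumeration.

```python
from itertools import product

# Toy brute-force version of the bi-objective payload problem: enumerate all
# switch vectors, score (longest channel path, switch changes from the
# current configuration), and keep the Pareto-optimal points. The path
# function is an invented stand-in for the real payload routing.

N_SWITCHES = 6
CURRENT = (0, 1, 0, 0, 1, 1)           # configuration currently in use

def longest_path(cfg):
    ch1 = 2 + cfg[0] + cfg[1] + cfg[2]        # invented channel hop counts
    ch2 = 3 + (1 - cfg[3]) + cfg[4]
    ch3 = 2 + cfg[5] + (1 - cfg[0])
    return max(ch1, ch2, ch3)

def changes(cfg):
    return sum(a != b for a, b in zip(cfg, CURRENT))

points = [(longest_path(c), changes(c), c) for c in product((0, 1), repeat=N_SWITCHES)]
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and q[:2] != p[:2] for q in points)]
```

With 2^6 configurations, exhaustive enumeration is trivial; the point of the paper's ILP formulation is that realistic payloads make this search space far too large for brute force.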

  14. Micro-scale NMR Experiments for Monitoring the Optimization of Membrane Protein Solutions for Structural Biology.

    Science.gov (United States)

    Horst, Reto; Wüthrich, Kurt

    2015-07-20

    Reconstitution of integral membrane proteins (IMP) in aqueous solutions of detergent micelles has been extensively used in structural biology, using either X-ray crystallography or NMR in solution. Further progress could be achieved by establishing a rational basis for the selection of detergent and buffer conditions, since the stringent bottleneck that slows down the structural biology of IMPs is the preparation of diffracting crystals or concentrated solutions of stable isotope labeled IMPs. Here, we describe procedures to monitor the quality of aqueous solutions of [2H,15N]-labeled IMPs reconstituted in detergent micelles. This approach has been developed for studies of β-barrel IMPs, where it was successfully applied for numerous NMR structure determinations, and it has also been adapted for use with α-helical IMPs, in particular GPCRs, in guiding crystallization trials and optimizing samples for NMR studies (Horst et al., 2013). 2D [15N,1H]-correlation maps are used as "fingerprints" to assess the foldedness of the IMP in solution. For promising samples, these "inexpensive" data are then supplemented with measurements of the translational and rotational diffusion coefficients, which give information on the shape and size of the IMP/detergent mixed micelles. Using microcoil equipment for these NMR experiments enables data collection with only micrograms of protein and detergent. This makes serial screens of variable solution conditions viable, enabling the optimization of parameters such as the detergent concentration, sample temperature, pH and the composition of the buffer.

  15. Approaches to the Optimal Nonlinear Analysis of Microcalorimeter Pulses

    Science.gov (United States)

    Fowler, J. W.; Pappas, C. G.; Alpert, B. K.; Doriese, W. B.; O'Neil, G. C.; Ullom, J. N.; Swetz, D. S.

    2018-03-01

    We consider how to analyze microcalorimeter pulses for quantities that are nonlinear in the data, while preserving the signal-to-noise advantages of linear optimal filtering. We successfully apply our chosen approach to compute the electrothermal feedback energy deficit (the "Joule energy") of a pulse, which has been proposed as a linear estimator of the deposited photon energy.
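The linear optimal filtering that the authors build on can be sketched, for the simple case of independent noise with known per-sample variances, as a weighted least-squares amplitude estimate. The toy pulse template and variance values below are illustrative, not from the paper:

```python
# Toy optimal (matched) filter: estimate pulse amplitude from noisy samples.
# For independent noise with variance sigma2[i], the minimum-variance linear
# estimate of amplitude A in d[i] = A*s[i] + noise[i] is a weighted projection
# of the data onto the template.

def optimal_filter_amplitude(data, template, sigma2):
    """Weighted least-squares amplitude estimate (independent noise)."""
    num = sum(d * s / v for d, s, v in zip(data, template, sigma2))
    den = sum(s * s / v for s, v in zip(template, sigma2))
    return num / den

# Example: a noiseless pulse of amplitude 2.5 is recovered exactly.
template = [0.0, 0.4, 1.0, 0.7, 0.3, 0.1]
sigma2 = [1.0, 1.2, 0.8, 1.0, 1.1, 0.9]
data = [2.5 * s for s in template]
amplitude = optimal_filter_amplitude(data, template, sigma2)
```

Quantities that are nonlinear in the data, such as the Joule energy, are then computed from filtered estimates like this one rather than from the raw samples.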

  16. A Wireless Sensor Network-Based Approach with Decision Support for Monitoring Lake Water Quality.

    Science.gov (United States)

    Huang, Xiaoci; Yi, Jianjun; Chen, Shaoli; Zhu, Xiaomin

    2015-11-19

    Online monitoring and water quality analysis of lakes are urgently needed. A feasible and effective approach is to use a Wireless Sensor Network (WSN). Lake water environments, like other real world environments, present many changing and unpredictable situations. To ensure flexibility in such an environment, the WSN node has to be prepared to deal with varying situations. This paper presents a WSN self-configuration approach for lake water quality monitoring. The approach is based on the integration of a semantic framework, where a reasoner can make decisions on the configuration of WSN services. We present a WSN ontology and the relevant water quality monitoring context information, which considers its suitability in a pervasive computing environment. We also propose a rule-based reasoning engine that is used to conduct decision support through reasoning techniques and context-awareness. To evaluate the approach, we conduct usability experiments and performance benchmarks.
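The rule-based reasoning over sensing context can be sketched as condition-action pairs evaluated against the current readings; the thresholds, parameter names and actions below are illustrative stand-ins for the paper's ontology-driven rules:

```python
# Minimal sketch of a rule-based decision engine for WSN reconfiguration.
# Each rule pairs a condition over the sensing context with a configuration
# action; all names and thresholds here are hypothetical.

def decide(context, rules):
    """Return the actions of all rules whose condition holds for the context."""
    return [action for condition, action in rules if condition(context)]

rules = [
    (lambda c: c["turbidity"] > 50, "increase_sampling_rate"),
    (lambda c: c["battery"] < 0.2, "enter_low_power_mode"),
    (lambda c: c["dissolved_oxygen"] < 4.0, "raise_alarm"),
]

context = {"turbidity": 72, "battery": 0.65, "dissolved_oxygen": 3.1}
actions = decide(context, rules)
```

A semantic reasoner generalizes this pattern by deriving the conditions and actions from an ontology instead of hard-coding them.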

  17. NLP model and stochastic multi-start optimization approach for heat exchanger networks

    International Nuclear Information System (INIS)

    Núñez-Serna, Rosa I.; Zamora, Juan M.

    2016-01-01

    Highlights: • An NLP model for the optimal design of heat exchanger networks is proposed. • The NLP model is developed from a stage-wise grid diagram representation. • A two-phase stochastic multi-start optimization methodology is utilized. • Improved network designs are obtained with different heat load distributions. • Structural changes and reductions in the number of heat exchangers are produced. - Abstract: Heat exchanger network synthesis methodologies frequently identify good network structures, which nevertheless, might be accompanied by suboptimal values of design variables. The objective of this work is to develop a nonlinear programming (NLP) model and an optimization approach that aim at identifying the best values for intermediate temperatures, sub-stream flow rate fractions, heat loads and areas for a given heat exchanger network topology. The NLP model that minimizes the total annual cost of the network is constructed based on a stage-wise grid diagram representation. To improve the possibilities of obtaining global optimal designs, a two-phase stochastic multi-start optimization algorithm is utilized for the solution of the developed model. The effectiveness of the proposed optimization approach is illustrated with the optimization of two network designs proposed in the literature for two well-known benchmark problems. Results show that from the addressed base network topologies it is possible to achieve improved network designs, with redistributions in exchanger heat loads that lead to reductions in total annual costs. The results also show that the optimization of a given network design sometimes leads to structural simplifications and reductions in the total number of heat exchangers of the network, thereby exposing alternative viable network topologies initially not anticipated.
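The two-phase multi-start idea can be sketched in miniature: phase one scatters random starting points, phase two refines each with a local descent, and the best refined point wins. The toy one-dimensional objective below stands in for the network's total-annual-cost NLP; the paper's actual model and solver are not reproduced:

```python
import math
import random

# Two-phase stochastic multi-start sketch on a nonconvex toy objective.

def local_descent(f, x, step=0.1, iters=200):
    """Shrinking-step pattern search around x (phase two)."""
    for _ in range(iters):
        for candidate in (x - step, x + step):
            if f(candidate) < f(x):
                x = candidate
        step *= 0.95
    return x

def multi_start(f, bounds, n_starts=25, seed=1):
    """Phase one: random starts; keep the best locally refined point."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_starts):
        x = local_descent(f, rng.uniform(*bounds))
        if best is None or f(x) < f(best):
            best = x
    return best

# Nonconvex objective: global minimum f(0) = 0, surrounded by local minima.
def f(x):
    return x * x + 2.0 * (1.0 - math.cos(5.0 * x))

x_best = multi_start(f, (-5.0, 5.0))
```

Restarting from many points is exactly what protects the method from the suboptimal design-variable values that a single local solve can leave behind.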

  18. A mixed integer linear programming approach for optimal DER portfolio, sizing, and placement in multi-energy microgrids

    International Nuclear Information System (INIS)

    Mashayekh, Salman; Stadler, Michael; Cardoso, Gonçalo; Heleno, Miguel

    2017-01-01

    Highlights: • This paper presents a MILP model for optimal design of multi-energy microgrids. • Our microgrid design includes optimal technology portfolio, placement, and operation. • Our model includes microgrid electrical power flow and heat transfer equations. • The case study shows advantages of our model over aggregate single-node approaches. • The case study shows the accuracy of the integrated linearized power flow model. - Abstract: Optimal microgrid design is a challenging problem, especially for multi-energy microgrids with electricity, heating, and cooling loads as well as sources, and multiple energy carriers. To address this problem, this paper presents an optimization model formulated as a mixed-integer linear program, which determines the optimal technology portfolio, the optimal technology placement, and the associated optimal dispatch, in a microgrid with multiple energy types. The developed model uses a multi-node modeling approach (as opposed to an aggregate single-node approach) that includes electrical power flow and heat flow equations, and hence, offers the ability to perform optimal siting considering physical and operational constraints of electrical and heating/cooling networks. The new model is founded on the existing optimization model DER-CAM, a state-of-the-art decision support tool for microgrid planning and design. The results of a case study that compares single-node vs. multi-node optimal design for an example microgrid show the importance of multi-node modeling. It has been shown that single-node approaches are not only incapable of optimal DER placement, but may also result in sub-optimal DER portfolio, as well as underestimation of investment costs.
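The portfolio part of the decision the MILP makes can be shrunk to a toy: pick the cheapest subset of candidate DER technologies whose combined capacity covers the peak load. Names, capacities and costs below are illustrative, and brute-force enumeration stands in for the branch-and-bound of a real MILP solver (placement and dispatch are omitted entirely):

```python
from itertools import combinations

# Toy DER portfolio selection: cheapest technology subset covering peak load.
# All technology data are hypothetical.

techs = {
    "PV":           {"capacity_kw": 150, "cost": 60},
    "CHP":          {"capacity_kw": 300, "cost": 140},
    "battery":      {"capacity_kw": 100, "cost": 80},
    "microturbine": {"capacity_kw": 200, "cost": 110},
}
peak_load_kw = 400

def best_portfolio(techs, demand):
    names = list(techs)
    best = None
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            cap = sum(techs[n]["capacity_kw"] for n in subset)
            cost = sum(techs[n]["cost"] for n in subset)
            if cap >= demand and (best is None or cost < best[0]):
                best = (cost, set(subset))
    return best

cost, portfolio = best_portfolio(techs, peak_load_kw)
```

The multi-node formulation in the paper additionally constrains where each selected unit may sit, via power flow and heat transfer equations.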

  19. Optimization approaches to mpi and area merging-based parallel buffer algorithm

    Directory of Open Access Journals (Sweden)

    Junfu Fan

    Full Text Available On buffer zone construction, the rasterization-based dilation method inevitably introduces errors, and the double-sided parallel line method involves a series of complex operations. In this paper, we propose a parallel buffer algorithm based on area merging and MPI (Message Passing Interface) to improve the performance of buffer analyses on large datasets. Experimental results reveal three major performance bottlenecks that significantly impact serial and parallel buffer construction efficiency: the area merging strategy, the task load balancing method and the MPI inter-process results merging strategy. Corresponding optimization approaches, involving a tree-like area merging strategy, a vertex-number-oriented parallel task partition method and the inter-process results merging strategy, are suggested to overcome these bottlenecks. Experiments were carried out to examine the performance of the optimized parallel algorithm. The results suggest that these optimization approaches can provide high performance and processing capability for buffer construction in a cluster parallel environment. Our method could provide insights into the parallelization of spatial analysis algorithms.
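The tree-like merging strategy can be sketched as follows: instead of folding partial results into one accumulator sequentially, pieces are merged pairwise in rounds, halving their number each time. Plain set union stands in for the geometric union of buffer polygons, and the function names are illustrative:

```python
# Sketch of tree-like (pairwise) merging, the pattern used to combine buffer
# polygons across MPI processes. Set union stands in for polygon union.

def tree_merge(parts, merge):
    """Merge a list of partial results pairwise until one remains."""
    while len(parts) > 1:
        nxt = []
        for i in range(0, len(parts) - 1, 2):
            nxt.append(merge(parts[i], parts[i + 1]))
        if len(parts) % 2:          # odd piece carries over to the next round
            nxt.append(parts[-1])
        parts = nxt
    return parts[0]

process_results = [{1, 2}, {2, 3}, {4}, {5, 6}, {7}]
merged = tree_merge(process_results, lambda a, b: a | b)
```

With expensive merges, the tree shape reduces the depth of the merge chain from N-1 dependent steps to about log2(N) rounds, and the rounds parallelize naturally.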

  20. The application of entropy weight TOPSIS method to optimal points in monitoring the Xinjiang radiation environment

    International Nuclear Information System (INIS)

    Feng Guangwen; Hu Youhua; Liu Qian

    2009-01-01

    In this paper, the application of the entropy weight TOPSIS method to the optimal layout of points for monitoring the Xinjiang radiation environment is introduced. With the help of SAS software, the method has been found to be ideal and feasible. It can provide a reference for monitoring the radiation environment in similar regions. As the method brings great convenience and greatly reduces the inspection workload, it is simple, flexible and effective for a comprehensive evaluation. (authors)
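The entropy weight TOPSIS procedure can be sketched in full: criterion weights come from the information entropy of the decision matrix, and alternatives are ranked by relative closeness to the ideal solution. The small decision matrix below is illustrative (rows as candidate monitoring points, columns as benefit-type criteria), not the Xinjiang data:

```python
import math

# Entropy-weight TOPSIS sketch on a hypothetical 3x3 decision matrix.

X = [
    [0.8, 120, 3.2],
    [0.6, 180, 2.9],
    [0.9,  90, 3.8],
]

def entropy_weights(X):
    """Weights from information entropy of each criterion column."""
    m, n = len(X), len(X[0])
    k = 1.0 / math.log(m)
    diversity = []
    for j in range(n):
        col_sum = sum(row[j] for row in X)
        p = [row[j] / col_sum for row in X]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        diversity.append(1.0 - e)
    total = sum(diversity)
    return [d / total for d in diversity]

def topsis_closeness(X, w):
    """Relative closeness of each row to the ideal solution (benefit criteria)."""
    n = len(X[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in X)) for j in range(n)]
    V = [[w[j] * row[j] / norms[j] for j in range(n)] for row in X]
    best = [max(col) for col in zip(*V)]
    worst = [min(col) for col in zip(*V)]
    scores = []
    for v in V:
        d_plus = math.sqrt(sum((vi - bi) ** 2 for vi, bi in zip(v, best)))
        d_minus = math.sqrt(sum((vi - wi) ** 2 for vi, wi in zip(v, worst)))
        scores.append(d_minus / (d_plus + d_minus))
    return scores

w = entropy_weights(X)
scores = topsis_closeness(X, w)
```

Monitoring points would then be ranked by descending closeness score; cost-type criteria would need their columns inverted before normalization.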

  1. Integrated Systems-Based Approach for Reaching Acceptable End Points for Groundwater - 13629

    International Nuclear Information System (INIS)

    Lee, M. Hope; Wellman, Dawn; Truex, Mike; Freshley, Mark D.; Sorenson, Kent S. Jr.; Wymore, Ryan

    2013-01-01

    The sheer mass and nature of contaminated materials at DOE and DoD sites make it impractical to completely restore these sites to pre-disposal conditions. DOE faces long-term challenges, particularly with developing monitoring and end state approaches for clean-up that are protective of the environment, technically based and documented, sustainable, and, most importantly, cost effective. Integrated systems-based monitoring approaches (e.g., tools for characterization and monitoring, multi-component strategies, geophysical modeling) could provide novel approaches and a framework to (a) define risk-informed endpoints and/or conditions that constitute completion of cleanup and (b) provide the understanding for implementation of advanced scientific approaches to meet cleanup goals. Multi-component strategies, which combine site conceptual models, biological, chemical, and physical remediation strategies, as well as iterative review and optimization, have proven successful at several DOE sites. Novel tools such as enzyme probes and quantitative PCR for DNA and RNA, and innovative modeling approaches for complex subsurface environments, have been successful at facilitating the reduced operation or shutdown of pump-and-treat facilities and the transition of clean-up activities into monitored natural attenuation remedies. Integrating novel tools with site conceptual models and other lines of evidence to characterize, optimize, and monitor long-term remedial approaches for complex contaminant plumes is critical for transitioning active remediation into cost-effective, yet technically defensible, endpoint strategies. (authors)

  2. Implementing and Innovating Marine Monitoring Approaches for Assessing Marine Environmental Status

    KAUST Repository

    Danovaro, Roberto; Carugati, Laura; Berzano, Marco; Cahill, Abigail E.; Carvalho, Susana; Chenuil, Anne; Corinaldesi, Cinzia; Cristina, Sonia; David, Romain; Dell'Anno, Antonio; Dzhembekova, Nina; Garcés, Esther; Gasol, Joseph M.; Goela, Priscila; Féral, Jean-Pierre; Ferrera, Isabel; Forster, Rodney M.; Kurekin, Andrey A.; Rastelli, Eugenio; Marinova, Veselka; Miller, Peter I.; Moncheva, Snejana; Newton, Alice; Pearman, John K.; Pitois, Sophie G.; Reñé, Albert; Rodríguez-Ezpeleta, Naiara; Saggiomo, Vincenzo; Simis, Stefan G. H.; Stefanova, Kremena; Wilson, Christian; Lo Martire, Marco; Greco, Silvestro; Cochrane, Sabine K. J.; Mangoni, Olga; Borja, Angel

    2016-01-01

    Marine environmental monitoring has tended to focus on site-specific methods of investigation. These traditional methods have low spatial and temporal resolution and are relatively labor intensive per unit area/time that they cover. To implement the Marine Strategy Framework Directive (MSFD), European Member States are required to improve marine monitoring and design monitoring networks. This can be achieved by developing and testing innovative and cost-effective monitoring systems, as well as indicators of environmental status. Here, we present several recently developed methodologies and technologies to improve marine biodiversity indicators and monitoring methods. The innovative tools are discussed concerning the technologies presently utilized as well as the advantages and disadvantages of their use in routine monitoring. In particular, the present analysis focuses on: (i) molecular approaches, including microarray, Real Time quantitative PCR (qPCR), and metagenetic (metabarcoding) tools; (ii) optical (remote) sensing and acoustic methods; and (iii) in situ monitoring instruments. We also discuss their applications in marine monitoring within the MSFD through the analysis of case studies in order to evaluate their potential utilization in future routine marine monitoring. We show that these recently-developed technologies can present clear advantages in accuracy, efficiency and cost.

  4. Radiation monitoring for uranium miners: evaluation and optimization. Final report 9 Sep 79-9 Oct 81

    International Nuclear Information System (INIS)

    Schiager, K.J.; Borak, T.B.; Johnson, J.A.

    1981-01-01

    Radiological health risks to uranium miners are reviewed. Radiation measurement methods and monitoring systems that are now, or soon could be, available are reviewed with respect to their reliability and cost for determining annual exposures. Criteria for optimization of radiation monitoring programs are presented and applied to the current exposure conditions and available monitoring methods. The following recommendations are offered: (1) personal thermoluminescent dosimeters for gamma exposures should be provided to all underground employees in uranium mines; (2) exposures to long-lived radionuclides in respirable dust and to airborne radon progeny should be measured by randomized grab sampling; (3) regulations of the Mine Safety and Health Administration should place greater emphasis on exposure reduction, as opposed to documentation.

  5. A risk management process for reinforced concrete structures by coupling modelling, monitoring and Bayesian approaches

    International Nuclear Information System (INIS)

    Capra, Bruno; Li, Kefei; Wolff, Valentin; Bernard, Olivier; Gerard, Bruno

    2004-01-01

    The impact of steel corrosion on the durability of reinforced concrete structures has long been a major concern in civil engineering. The main electrochemical mechanisms of steel corrosion are now well known. Material and structural degradation is attributed to the progressive formation of an expansive corrosion product at the steel-concrete interface. To assess structure lifetime quantitatively, a two-stage service life model has been widely accepted. So far, research attention has mainly been given to corrosion in un-cracked concrete. In practice, however, one is often confronted with reinforcement corrosion in already cracked concrete. How to quantify the corrosion risk is of great interest for the long-term durability of these cracked structures. To this end, this paper proposes a service life model for the corrosion process by carbonation in cracked or un-cracked concrete, depending on the observation or monitoring data available. Recent experimental investigations are used to calibrate the models. The models are then applied to a shell structure to quantify the corrosion process and determine the optimal maintenance strategy. As corrosion processes are very difficult to model and are subject to material and environmental random variations, an example of structure reassessment is presented that takes into account in situ information by means of Bayesian approaches. The coupling of monitoring, modelling and updating leads to a new global maintenance strategy for infrastructure. In conclusion: this paper presents a unified methodology coupling predictive models, observations and Bayesian approaches in order to assess the degradation degree of an ageing structure. The particular case of corrosion is treated in an innovative way through the development of a service life model taking into account cracking effects on the kinetics of the phenomena. At the material level, the dominant factors are the crack opening and the crack nature.

  6. Managing environmental radioactivity monitoring data: a geographic information system approach

    International Nuclear Information System (INIS)

    Heywood, I.; Cornelius, S.

    1993-01-01

    An overview of the current British approach to environmental radiation monitoring is presented here, followed by a discussion of the major issues which would have to be considered in formulating a geographical information system (GIS) for the management of radiation monitoring data. Finally, examples illustrating the use of spatial data handling and automated cartographic techniques are provided from work undertaken by the authors. These examples are discussed in the context of developing a National Radiological Spatial Information System (NRSIS) demonstrator utilising GIS technology. (Author)

  7. Multimethod, multistate Bayesian hierarchical modeling approach for use in regional monitoring of wolves.

    Science.gov (United States)

    Jiménez, José; García, Emilio J; Llaneza, Luis; Palacios, Vicente; González, Luis Mariano; García-Domínguez, Francisco; Múñoz-Igualada, Jaime; López-Bao, José Vicente

    2016-08-01

    In many cases, the first step in large-carnivore management is to obtain objective, reliable, and cost-effective estimates of population parameters through procedures that are reproducible over time. However, monitoring predators over large areas is difficult, and the data have a high level of uncertainty. We devised a practical multimethod and multistate modeling approach based on Bayesian hierarchical-site-occupancy models that combined multiple survey methods to estimate different population states for use in monitoring large predators at a regional scale. We used wolves (Canis lupus) as our model species and generated reliable estimates of the number of sites with wolf reproduction (presence of pups). We used 2 wolf data sets from Spain (Western Galicia in 2013 and Asturias in 2004) to test the approach. Based on howling surveys, the naïve estimation (i.e., estimate based only on observations) of the number of sites with reproduction was 9 and 25 sites in Western Galicia and Asturias, respectively. Our model showed 33.4 (SD 9.6) and 34.4 (3.9) sites with wolf reproduction, respectively. The number of occupied sites with wolf reproduction was 0.67 (SD 0.19) and 0.76 (0.11), respectively. This approach can be used to design more cost-effective monitoring programs (i.e., to define the sampling effort needed per site). Our approach should inspire well-coordinated surveys across multiple administrative borders and populations and lead to improved decision making for management of large carnivores on a landscape level. The use of this Bayesian framework provides a simple way to visualize the degree of uncertainty around population-parameter estimates and thus provides managers and stakeholders an intuitive approach to interpreting monitoring results. 
Our approach can be widely applied to large spatial scales in wildlife monitoring where detection probabilities differ between population states and where several methods are being used to estimate different population

  8. An alternative approach to continuous compliance monitoring and turbine plant optimization using a PEMS (predictive emission monitoring system)

    International Nuclear Information System (INIS)

    Swanson, B.G.; Lawrence, P.

    2009-01-01

    This paper reviewed the use of a predictive emissions monitoring system (PEMS) at 3 different gas turbine facilities in the United States and highlighted the costs and benefits of using a PEMS for documenting emissions of priority pollutants and greenhouse gases (GHG). The PEMS interfaces directly to the turbine control system and represents a lower cost alternative to the traditional continuous emission monitoring system (CEMS). The PEMS can track combustion efficiency through modeling of the turbine's operation and emissions. Excess emissions can be tracked and the causes of pollution can be determined and mitigated. The PEMS installed at the 3 turbine plants must meet rigorous performance specification criteria and the sites perform ongoing quality assurance tasks such as periodic audits with portable analyzers. The PEMS is much less expensive to install, operate, and maintain compared to the standard CEMS gas analyzer. Empirical PEMS achieves very high accuracy levels and has demonstrated superior reliability over CEMS for various types of continuous process applications under existing air compliance regulations in the United States. Annual accuracy testing at the gas turbine sites have shown that the PEMS predictions are usually within 5 per cent of the reference method. PEMS can be certified as an alternative to gas analyzer based CEMS for nitrogen oxides and carbon dioxide compliance and for GHG trading purposes. 5 refs., 8 figs.
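At its core, a PEMS replaces the gas analyzer with an empirical model mapping process parameters to emissions. A minimal stand-in is ordinary least squares fitting NOx to a single turbine parameter on synthetic data; a real PEMS uses many parameters and a much richer model, and all numbers below are invented for illustration:

```python
# Toy predictive emission model: least-squares line NOx = a*load + b,
# trained on synthetic "historical" analyzer data.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Synthetic data: NOx (ppm) exactly 0.3*load + 6 here, for a clean check.
loads = [40, 50, 60, 70, 80]
nox = [18.0, 21.0, 24.0, 27.0, 30.0]
a, b = fit_line(loads, nox)
predicted_nox = a * 65 + b
```

The periodic portable-analyzer audits mentioned above play the role of a held-out test set: predictions are compared against the reference method and the model is retrained if it drifts.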

  9. Aerodynamic Shape Optimization Design of Wing-Body Configuration Using a Hybrid FFD-RBF Parameterization Approach

    Science.gov (United States)

    Liu, Yuefeng; Duan, Zhuoyi; Chen, Song

    2017-10-01

    Aerodynamic shape optimization design aiming at improving the efficiency of an aircraft has always been a challenging task, especially when the configuration is complex. In this paper, a hybrid FFD-RBF surface parameterization approach has been proposed for designing a civil transport wing-body configuration. This approach is simple and efficient, with the FFD technique used for parameterizing the wing shape and the RBF interpolation approach used for handling the wing body junction part updating. Furthermore, combined with Cuckoo Search algorithm and Kriging surrogate model with expected improvement adaptive sampling criterion, an aerodynamic shape optimization design system has been established. Finally, the aerodynamic shape optimization design on DLR F4 wing-body configuration has been carried out as a study case, and the result has shown that the approach proposed in this paper is of good effectiveness.
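The RBF interpolation step that propagates shape changes through the wing-body junction can be sketched in one dimension: build the kernel matrix from known displacements at control points, solve for the coefficients, then evaluate the displacement field anywhere. The Gaussian kernel, sample points and prescribed values below are illustrative assumptions:

```python
import math

# 1-D radial basis function interpolation with a Gaussian kernel.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        pivot = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[pivot] = M[pivot], M[c]
        for r in range(c + 1, n):
            factor = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= factor * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_interpolant(centers, values, eps=1.0):
    """Interpolant passing exactly through (centers, values)."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(abs(ci - cj)) for cj in centers] for ci in centers]
    coeffs = solve(A, values)
    return lambda x: sum(c * phi(abs(x - ci)) for c, ci in zip(coeffs, centers))

centers = [0.0, 0.5, 1.0]   # control points along the junction (hypothetical)
values = [0.0, 0.2, 0.05]   # prescribed displacements (hypothetical)
f = rbf_interpolant(centers, values)
```

In the hybrid scheme, the FFD lattice deforms the wing directly, while an interpolant of this kind smoothly extends the boundary displacements onto the junction region.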

  10. Reliability optimization using multiobjective ant colony system approaches

    International Nuclear Information System (INIS)

    Zhao Jianhua; Liu Zhaoheng; Dao, M.-T.

    2007-01-01

    The multiobjective ant colony system (ACS) meta-heuristic has been developed to provide solutions for the reliability optimization problem of series-parallel systems. This type of problem involves selection of components with multiple choices and redundancy levels that produce maximum benefits, subject to cost and weight constraints at the system level. These are very common and realistic problems encountered in the conceptual design of many engineering systems. It is becoming increasingly important to develop efficient solutions to these problems because many mechanical and electrical systems are becoming more complex, even as development schedules get shorter and reliability requirements become very stringent. The multiobjective ACS algorithm offers distinct advantages for these problems compared with alternative optimization methods, and can be applied to a more diverse problem domain with respect to the type or size of the problems. Through the combination of probabilistic search, multiobjective formulation of local moves and the dynamic penalty method, the multiobjective ACSRAP allows us to obtain an optimal design solution very frequently and more quickly than with some other heuristic approaches. The proposed algorithm was successfully applied to an engineering design problem of a gearbox with multiple stages.

  11. A Reliable, Non-Invasive Approach to Data Center Monitoring and Management

    Directory of Open Access Journals (Sweden)

    Moises Levy

    2017-08-01

    Full Text Available Recent standards, legislation, and best practices point to data center infrastructure management systems to control and monitor data center performance. This work presents an innovative approach to address some of the challenges that currently hinder data center management. It explains how monitoring and management systems should be envisioned and implemented. Key parameters associated with data center infrastructure and information technology equipment can be monitored in real-time across an entire facility using low-cost, low-power wireless sensors. Given the data centers’ mission critical nature, the system must be reliable and deployable through a non-invasive process. The need for the monitoring system is also presented through a feedback control systems perspective, which allows higher levels of automation. The data center monitoring and management system enables data gathering, analysis, and decision-making to improve performance, and to enhance asset utilization.

  12. Tuning of PID controller for an automatic regulator voltage system using chaotic optimization approach

    International Nuclear Information System (INIS)

    Santos Coelho, Leandro dos

    2009-01-01

    Despite their popularity, tuning of proportional-integral-derivative (PID) controllers remains a challenge for researchers and plant operators. Various controller tuning methodologies have been proposed in the literature, such as auto-tuning, self-tuning, pattern recognition, artificial intelligence, and optimization methods. Chaotic optimization algorithms, an emergent class of global optimization methods, have attracted much attention in engineering applications. Featuring easy implementation, short execution time and robust mechanisms for escaping from local optima, they are a promising tool for engineering applications. In this paper, a tuning method for determining the parameters of PID control for an automatic regulator voltage (AVR) system using a chaotic optimization approach based on the Lozi map is proposed. Since chaotic mapping enjoys certainty, ergodicity and the stochastic property, the proposed chaotic optimization uses Lozi map chaotic sequences, which increases its convergence rate and resulting precision. Simulation results are promising and show the effectiveness of the proposed approach. Numerical simulations of the proposed PID control of an AVR system, for nominal system parameters and a step reference voltage input, demonstrate the good performance of chaotic optimization.
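The use of a Lozi-map sequence as the search driver can be sketched as a toy: chaotic iterates, rescaled into the search box, sample candidate gains and the best sample is kept. The quadratic cost surface stands in for the AVR/PID tuning objective, a = 1.7 and b = 0.5 are the classic Lozi constants, and the box mapping is an illustrative assumption rather than the paper's scheme:

```python
# Toy chaotic optimization driven by Lozi-map sequences.

def lozi_sequence(n, x=0.1, y=0.1, a=1.7, b=0.5):
    """First n x-values of the Lozi map x' = 1 - a|x| + y, y' = b*x."""
    seq = []
    for _ in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        seq.append(x)
    return seq

def chaotic_search(f, bounds, n_samples=2000):
    lo, hi = bounds
    s1 = lozi_sequence(n_samples, x=0.1, y=0.1)     # one stream per variable
    s2 = lozi_sequence(n_samples, x=-0.3, y=0.2)
    # rescale from the attractor's rough range [-2, 2] into [lo, hi]
    scale = lambda t: lo + (hi - lo) * (min(max(t, -2.0), 2.0) + 2.0) / 4.0
    best_p, best_v = None, float("inf")
    for u_raw, v_raw in zip(s1, s2):
        u, v = scale(u_raw), scale(v_raw)
        val = f(u, v)
        if val < best_v:
            best_p, best_v = (u, v), val
    return best_p, best_v

# Stand-in cost surface with its optimum at gains (0.6, 0.4).
def cost(kp, ki):
    return (kp - 0.6) ** 2 + (ki - 0.4) ** 2

(kp, ki), best = chaotic_search(cost, (0.0, 1.0))
```

The ergodic, non-repeating nature of the chaotic iterates is what this substitutes for pseudo-random sampling; practical schemes add a second, finer chaotic search around the incumbent best point.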

  13. Optimization of mass of plastic scintillator film for flow-cell based tritium monitoring: a Monte Carlo study

    International Nuclear Information System (INIS)

    Roy, Arup Singha; Palani Selvam, T.; Raman, Anand; Raja, V.; Chaudhury, Probal

    2014-01-01

    Over the years, various types of tritium-in-air monitors have been designed and developed based on different principles. Ionization chambers, proportional counters and scintillation detector systems are a few among them. A plastic scintillator based, flow-cell type system was developed for online monitoring of tritium in air. The scintillator mass inside the cell volume that maximizes the response of the detector system should be determined to achieve maximum efficiency. The present study aims to optimize the mass of the plastic scintillator film for the flow-cell based tritium monitoring instrument so that maximum efficiency is achieved. The Monte Carlo based EGSnrc code system has been used for this purpose.

  14. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    Science.gov (United States)

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

    Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that continue until the difference between subsequent solutions satisfies pre-determined termination criteria. The effectiveness of the proposed approach is illustrated by an example, which shows near-optimal results obtained with much shorter solving time than the conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing real supply chain networks. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
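The iterate-until-agreement skeleton of such hybrid schemes can be sketched with toy stand-ins: an "analytical model" proposes a design from its current parameter estimate, a stochastic "simulation" evaluates it, and the estimate is corrected until successive solutions agree within a tolerance. The models, numbers and damping rule below are all hypothetical:

```python
import random

# Skeleton of a hybrid analytical/simulation optimization loop.

def analytical_design(demand_estimate):
    """Toy stand-in for the optimization model: capacity with a 10% margin."""
    return 1.1 * demand_estimate

def simulate_observed_demand(capacity, rng):
    """Toy stand-in for discrete-event simulation: noisy view of true demand 100."""
    return 100.0 + rng.gauss(0.0, 1.0)

def hybrid_optimize(initial_estimate, tol=0.5, max_iter=50, seed=7):
    rng = random.Random(seed)
    estimate = initial_estimate
    for _ in range(max_iter):
        capacity = analytical_design(estimate)
        observed = simulate_observed_demand(capacity, rng)
        if abs(observed - estimate) < tol:       # termination criterion
            break
        estimate = 0.5 * (estimate + observed)   # damped correction
    return analytical_design(estimate), estimate

capacity, estimate = hybrid_optimize(initial_estimate=60.0)
```

The speed advantage reported above comes from the analytical model doing the optimization while the expensive simulation is called only once per outer iteration.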

  15. An integrated approach for optimal frequency regulation service procurement in India

    International Nuclear Information System (INIS)

    Parida, S.K.; Singh, S.N.; Srivastava, S.C.

    2009-01-01

    Ancillary services (AS) management has become an important issue to be addressed in the Indian power system after adoption of the restructuring and unbundling processes following the enactment of the Indian Electricity Act 2003. In an electricity market, frequency regulation is one of the ancillary services, which must be procured by the system operator (SO) from the market participants through some regulatory mechanism or using market-based approaches. It is important for the SO to optimally procure this service from the AS market. In this paper, an approach for determining the optimal frequency regulation service procurement has been proposed for equitable payment to generators and recovery from the customers. The effectiveness of the proposed method has been demonstrated on a practical Northern Regional Electricity Board (NREB) system of India. (author)

  16. Integrated Monitoring System for Durability Assessment of Concrete Bridges

    Directory of Open Access Journals (Sweden)

    Cristian-Claudiu Comisu

    2005-01-01

    Full Text Available An ageing and deteriorating bridge stock presents bridge owners with the growing challenge of maintaining the structures at a satisfactory level of safety, performance and aesthetic appearance within the allocated budgets. This task calls for optimized bridge management based on efficient methods of selecting technically and economically optimal maintenance and rehabilitation strategies. Selecting the optimal maintenance and rehabilitation strategy within the actual budget is a key point in bridge management, for which an accurate assessment of the current condition, future development, performance and deterioration rate is necessary. For this assessment, the use of an integrated monitoring system has several advantages compared to the traditional approach of scattered visual inspections combined with occasional on-site testing with portable equipment and laboratory testing of collected samples. For this reason, attention is increasingly focused on the development of permanent integrated monitoring systems for durability assessment of concrete bridges. It is estimated that with the implementation of such integrated monitoring systems it should be possible to reduce the operating costs of inspections and maintenance by 25%, and the operator of the structures will be able to take protective actions before damaging processes start. This paper identifies the main bridge-owner requirements for integrated monitoring systems and outlines how monitoring systems may be used for assessing performance and deterioration rate, to establish a better basis for selecting the optimal maintenance and rehabilitation strategy.

  17. Optimal Subinterval Selection Approach for Power System Transient Stability Simulation

    Directory of Open Access Journals (Sweden)

    Soobae Kim

    2015-10-01

    Full Text Available Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. The performance of the proposed method is demonstrated with the GSO 37-bus system.
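The selection rule described (fast modes found by modal analysis, subinterval sized to resolve them) can be illustrated on a linearized single machine infinite bus model. The system matrix, coefficients and the 20-points-per-period rule below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Linearized SMIB swing dynamics in first-order form, x = [delta, omega]:
# M*delta'' = -D*delta' - K*delta  ->  x' = A x
M, D, K = 0.1, 0.02, 1.5            # inertia, damping, synchronizing coefficient
A = np.array([[0.0, 1.0],
              [-K / M, -D / M]])

eigvals = np.linalg.eigvals(A)
f_fast = np.max(np.abs(eigvals.imag)) / (2 * np.pi)   # fastest oscillation (Hz)
dt_sub = 1.0 / (f_fast * 20.0)       # ~20 points per period of the fastest mode

print(round(f_fast, 3), round(dt_sub, 4))
```

For a multi-machine system, `A` would come from linearizing the full model and the subinterval from the fastest local mode identified this way.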

  18. Optimization of environmental monitoring

    International Nuclear Information System (INIS)

    Winter, M.

    1986-01-01

    The routine work and tasks related to prevention in the environmental monitoring of nuclear facilities, ranging from low-level methodology to the need to be equally prepared to perform environmental impact measurements after nuclear incidents and accidents, are presented [pt

  19. PARAMETER COORDINATION AND ROBUST OPTIMIZATION FOR MULTIDISCIPLINARY DESIGN

    Institute of Scientific and Technical Information of China (English)

    HU Jie; PENG Yinghong; XIONG Guangleng

    2006-01-01

    A new parameter coordination and robust optimization approach for multidisciplinary design is presented. Firstly, a constraints network model is established to support engineering change, coordination and optimization. In this model, interval boxes are adopted to describe the uncertainty of design parameters quantitatively and so enhance design robustness. Secondly, a parameter coordination method is presented to solve the constraints network model, monitor potential conflicts due to engineering changes, and obtain the consistent solution space corresponding to the given product specifications. Finally, the robust parameter optimization model is established, and a genetic algorithm is used to obtain the robust optimized parameters. An example of bogie design is analyzed to show that the scheme is effective.

  20. An Augmented Incomplete Factorization Approach for Computing the Schur Complement in Stochastic Optimization

    KAUST Repository

    Petra, Cosmin G.; Schenk, Olaf; Lubin, Miles; Gärtner, Klaus

    2014-01-01

    We present a scalable approach and implementation for solving stochastic optimization problems on high-performance computers. In this work we revisit the sparse linear algebra computations of the parallel solver PIPS with the goal of improving the shared-memory performance and decreasing the time to solution. These computations consist of solving sparse linear systems with multiple sparse right-hand sides and are needed in our Schur-complement decomposition approach to compute the contribution of each scenario to the Schur matrix. Our novel approach uses an incomplete augmented factorization implemented within the PARDISO linear solver and an outer BiCGStab iteration to efficiently absorb pivot perturbations occurring during factorization. This approach is capable of both efficiently using the cores inside a computational node and exploiting the sparsity of the right-hand sides. We report on the performance of the approach on high-performance computers when solving stochastic unit commitment problems of unprecedented size (billions of variables and constraints) that arise in the optimization and control of electrical power grids. Our numerical experiments suggest that supercomputers can be efficiently used to solve power grid stochastic optimization problems with thousands of scenarios under the strict "real-time" requirements of power grid operators. To our knowledge, this has not been possible prior to the present work. © 2014 Society for Industrial and Applied Mathematics.
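A miniature version of the per-scenario Schur-complement computation can be sketched with SciPy: solve K X = B column by column with BiCGStab, preconditioned by an incomplete LU factorization standing in for PARDISO's incomplete augmented factorization, then form the contribution B^T K^{-1} B. Sizes, sparsity and tolerances are illustrative; the real solver works with far larger blocks and genuinely sparse right-hand sides.

```python
import numpy as np
from scipy.sparse import random as sprandom, identity
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

rng = np.random.default_rng(0)
n, m = 60, 4                      # scenario block size, first-stage size
# Diagonally dominant sparse scenario block K
K = (sprandom(n, n, 0.05, random_state=0) + 10 * identity(n)).tocsc()
B = rng.standard_normal((n, m))   # coupling block (sparse in PIPS; dense here)

# Incomplete LU as preconditioner; the outer BiCGStab iteration absorbs
# the inexactness of the factorization
ilu = spilu(K, drop_tol=1e-3)
Minv = LinearOperator((n, n), ilu.solve)

X = np.column_stack([bicgstab(K, B[:, j], M=Minv)[0] for j in range(m)])
S_contrib = B.T @ X               # this scenario's contribution B^T K^-1 B
print(S_contrib.shape)
```

In the Schur-complement decomposition, each scenario's `S_contrib` would be accumulated (with the first-stage block) into the dense Schur matrix.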

  1. Heat and mass transfer intensification and shape optimization a multi-scale approach

    CERN Document Server

    2013-01-01

    Is heat and mass transfer intensification a new paradigm of process engineering, or is it just a common and old idea, renamed and given a contemporary flavour? Where might intensification occur? How can intensification be achieved? How does the shape optimization of thermal and fluidic devices lead to intensified heat and mass transfers? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies the definition of intensification by highlighting the potential role of multi-scale structures, the specific interfacial area, the distribution of driving force, the modes of energy supply and the temporal aspects of processes. A reflection on the methods of process intensification or heat and mass transfer enhancement in multi-scale structures is provided, including porous media, heat exchangers, fluid distributors, mixers and reactors. A multi-scale approach to achieving intensification and shape optimization is developed and clearly explained.

  2. Compliance Groundwater Monitoring of Nonpoint Sources - Emerging Approaches

    Science.gov (United States)

    Harter, T.

    2008-12-01

    Groundwater monitoring networks are typically designed for regulatory compliance of discharges from industrial sites. There, the quality of first-encountered (shallow-most) groundwater is of key importance. Network design criteria have been developed for purposes of determining whether an actual or potential, permitted or incidental waste discharge has had or will have a degrading effect on groundwater quality. The fundamental underlying paradigm is that such a discharge (if it occurs) will form a distinct contamination plume. Networks that guide (post-contamination) mitigation efforts are designed to capture the shape and dynamics of existing, finite-scale plumes. In general, these networks extend over areas of less than one to ten hectares. In recent years, regulatory programs such as the EU Nitrate Directive and the U.S. Clean Water Act have forced regulatory agencies to also control groundwater contamination from non-incidental, recharging, nonpoint sources, particularly agricultural sources (fertilizer, pesticides, animal waste application, biosolids application). Sources and contamination from these sources can stretch over several tens, hundreds, or even thousands of square kilometers with no distinct plumes. A key question in implementing monitoring programs at the local, regional, and national level is whether groundwater monitoring can be effectively used as a landowner compliance tool, as is currently done at point-source sites. We compare the efficiency of such traditional site-specific compliance networks in nonpoint source regulation with various designs of regional nonpoint source monitoring networks that could be used for compliance monitoring. We discuss advantages and disadvantages of the site vs. regional monitoring approaches with respect to effectively protecting groundwater resources impacted by nonpoint sources: site networks provide a tool to enforce compliance by an individual landowner, but the nonpoint source character of the contamination

  3. Optimization of the choice of unmanned aerial vehicles used to monitor the implementation of selected construction projects

    Science.gov (United States)

    Skorupka, Dariusz; Duchaczek, Artur; Waniewska, Agnieszka; Kowacka, Magdalena

    2017-07-01

    Due to their properties, unmanned aerial vehicles offer a huge number of possible applications in construction engineering. The nature and extent of the construction works performed make the decision to purchase the right equipment significant for its further use in monitoring the implementation of these works. Technical factors, such as the accuracy and quality of the applied measurement instruments, are especially important when monitoring the realization of construction projects. The paper presents the optimization of the choice of unmanned aerial vehicles using the Bellinger method. The decision-making analysis takes into account criteria that are particularly crucial given the scope of monitoring of ongoing construction works.

  4. LMI–based robust controller design approach in aircraft multidisciplinary design optimization problem

    Directory of Open Access Journals (Sweden)

    Qinghua Zeng

    2015-07-01

    Full Text Available This article proposes a linear matrix inequality (LMI) based robust controller design approach to implement the synchronous design of the aircraft control discipline and the other disciplines, in which variations in design parameters are treated as equivalent perturbations. Because of the complicated mapping between the coefficient arrays of the aircraft motion model and the aircraft design parameters, a robust controller designed directly against variations in these coefficient arrays is so conservative that the multidisciplinary design optimization problem may become too difficult to solve or, even if a solution exists, its robustness is generally poor. This article therefore derives an uncertainty model of the disciplinary design parameters based on response surface approximation, converts the robust controller design problem into the solution of a standard linear matrix inequality, and gives a theoretically less conservative design method for the robust controller based on the variation in design parameters. Furthermore, the concurrent subspace approach is applied to the multidisciplinary system with this kind of robust controller in the design loop. A multidisciplinary design optimization of a tailless aircraft is used as an example to show that the control discipline can be optimized synchronously with the other disciplines; in particular, the method greatly reduces the computational cost of multidisciplinary design optimization and makes its results more robust in terms of flight performance.
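The stability certificate behind such LMI conditions can be illustrated with the closely related Lyapunov equation: finding a positive definite P with A P + P A^T negative definite certifies that the nominal dynamics are stable, which is the feasibility question an LMI solver answers for the whole uncertainty set. The sketch below uses illustrative numbers and SciPy's Lyapunov solver rather than an LMI solver.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Nominal short-period-like dynamics (illustrative numbers, not from the paper)
A = np.array([[-1.2,  0.9],
              [-4.0, -1.5]])

# Solve A P + P A^T = -Q with Q > 0; a positive definite solution P
# certifies stability of the nominal system.
Q = np.eye(2)
P = solve_continuous_lyapunov(A, -Q)
eigs = np.linalg.eigvalsh(P)
print(eigs.min() > 0)   # positive definite P -> A is Hurwitz
```

An LMI formulation would instead search for one P that satisfies the same inequality simultaneously for every vertex of the parameter uncertainty box.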

  5. Optimal Charging of Electric Drive Vehicles: A Dynamic Programming Approach

    DEFF Research Database (Denmark)

    Delikaraoglou, Stefanos; Capion, Karsten Emil; Juul, Nina

    2013-01-01

    We therefore propose an ex ante vehicle aggregation approach. We illustrate the results in a Danish case study and find that, although optimal management of the vehicles does not allow for storage and day-to-day flexibility in the electricity system, the market provides incentives for intra-day flexibility.
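The dynamic-programming idea in the title can be illustrated with a minimal charging problem: choose in which periods to charge so that a required number of charging periods is completed at minimum cost. The price vector, the requirement and the binary charge/idle action below are invented simplifications; a real model would add battery dynamics and driving patterns.

```python
import numpy as np

prices = np.array([0.30, 0.25, 0.10, 0.08, 0.12, 0.35])  # price per period
need = 4                      # periods of charging required before departure
T = len(prices)

# value[t, s]: min cost from period t onward with s charge periods still needed
INF = float("inf")
value = np.full((T + 1, need + 1), INF)
value[T, 0] = 0.0             # feasible only if nothing is left to charge
choice = np.zeros((T, need + 1), dtype=int)

for t in range(T - 1, -1, -1):            # backward recursion
    for s in range(need + 1):
        idle = value[t + 1, s]
        charge = prices[t] + value[t + 1, s - 1] if s > 0 else INF
        if charge < idle:
            value[t, s], choice[t, s] = charge, 1
        else:
            value[t, s], choice[t, s] = idle, 0

# Recover the optimal schedule by following the stored decisions
s, schedule = need, []
for t in range(T):
    schedule.append(int(choice[t, s]))
    s -= choice[t, s]

print(value[0, need], schedule)
```

With these prices the DP charges in the four cheapest periods for a total cost of 0.55.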

  6. Condition monitoring and thermo economic optimization of operation for a hybrid plant using artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Assadi, Mohsen; Fast, Magnus (Lund University, Dept. of Energy Sciences, Lund (Sweden))

    2008-05-15

    The aim of the project is to model the hybrid plant at Vaesthamnsverket in Helsingborg using artificial neural networks (ANN) and to integrate the ANN models on site for online condition monitoring and thermo-economic optimization. A hybrid plant, by definition, uses more than one fuel: in this case, a natural gas-fuelled gas turbine with a heat recovery steam generator (HRSG) and a biomass-fuelled steam boiler with a steam turbine. The thermo-economic optimization takes into account current electricity prices, taxes, fuel prices etc. and calculates the current production cost along with the 'predicted' production cost. The tool also has a built-in feature for predicting when a compressor wash is economically beneficial. The user interface was developed together with co-workers at Vaesthamnsverket to ensure its usefulness. It includes functions for warnings and alarms when possible deviations in operation occur, as well as a feature for plotting parameter trends (both measured and predicted values) in selected time intervals. The target groups are the plant owners and the original equipment manufacturers (OEM). The power plant owners want to acquire a product for condition monitoring and thermo-economic optimization of e.g. maintenance. The OEMs' main interest lies in investigating the possibilities of delivering ANN models, for condition monitoring, along with their new gas turbines. The project has been carried out at Lund University, Department of Energy Sciences, with support from Vaesthamnsverket AB and Siemens Industrial Turbomachinery AB. Vaesthamnsverket has contributed with operational data from the plant as well as support in plant-related questions. They have also been involved in the implementation of the ANN models in their computer system and the development of the user interface. Siemens has contributed with expert knowledge about their SGT800 gas turbine. The implementation of the ANN models, and the accompanying user

  7. A heuristic approach to optimization of structural topology including self-weight

    Science.gov (United States)

    Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2018-01-01

    Topology optimization of structures under a design-dependent self-weight load is investigated in this paper. The problem deserves attention because of its significant importance in the engineering practice, especially nowadays as topology optimization is more often applied when designing large engineering structures, for example, bridges or carrying systems of tall buildings. It is worth noting that well-known approaches of topology optimization which have been successfully applied to structures under fixed loads cannot be directly adapted to the case of design-dependent loads, so that topology generation can be a challenge also for numerical algorithms. The paper presents the application of a simple but efficient non-gradient method to topology optimization of elastic structures under self-weight loading. The algorithm is based on the Cellular Automata concept, the application of which can produce effective solutions with low computational cost.
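The Cellular Automata flavour of the algorithm (each cell updates its density from a purely local rule, no gradients) can be conveyed with a toy sketch. The update rule, the random stand-in for local strain energy and the volume rescaling below are all illustrative assumptions, not the authors' rule.

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ny, vol_frac = 20, 10, 0.5
rho = np.full((ny, nx), vol_frac)          # design densities in [0, 1]
energy = rng.random((ny, nx))              # stand-in for local strain energy
                                           # (a real run would recompute this by FEA)

def ca_step(rho, energy, step=0.1):
    # Local rule: compare each cell's energy with its von Neumann
    # neighbourhood average; densify overloaded cells, thin the rest.
    pad = np.pad(energy, 1, mode="edge")
    neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
             pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    rho = np.clip(rho + step * np.sign(energy - neigh), 0.0, 1.0)
    # Rescale to keep roughly the prescribed material volume
    return np.clip(rho * vol_frac * rho.size / rho.sum(), 0.0, 1.0)

for _ in range(10):
    rho = ca_step(rho, energy)

print(round(float(rho.mean()), 2))
```

For self-weight loading, the energy field itself depends on the current densities, so the FEA and the CA update would alternate until the topology stabilizes.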

  8. Reliability-oriented multi-objective optimal decision-making approach for uncertainty-based watershed load reduction

    International Nuclear Information System (INIS)

    Dong, Feifei; Liu, Yong; Su, Han; Zou, Rui; Guo, Huaicheng

    2015-01-01

    Water quality management and load reduction are subject to inherent uncertainties in watershed systems and competing decision objectives. Optimal decision-making modeling in watershed load reduction therefore faces the following challenges: (a) it is difficult to obtain absolutely “optimal” solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with the multi-objective evolutionary algorithm, demonstrating schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under impractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insights into reliability tradeoffs and can thus help them to avoid ineffective decisions. - Highlights: • Reliability-oriented multi-objective (ROMO) optimal decision approach was proposed. • The approach can avoid specifying reliability levels prior to optimization modeling. • Multiple reliability objectives can be systematically balanced using Pareto fronts. • Neural network model was used to
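The core mechanics of the approach (estimate a scheme's reliability by Monte Carlo, then keep only the non-dominated schemes across competing objectives) can be sketched in a few lines. The scheme generator, the stochastic response model and the two objectives below are invented stand-ins, not the Lake Dianchi model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Candidate load-reduction schemes: (cost, reduction target)
schemes = rng.uniform([1.0, 10.0], [10.0, 100.0], size=(200, 2))

def reliability(scheme, n=500):
    # P(stochastic effectiveness still meets the target) -- toy model
    cost, target = scheme
    achieved = cost * rng.normal(12.0, 3.0, n)   # uncertain response per unit cost
    return np.mean(achieved >= target)

# Objectives to maximize: reliability and load-reduction target
obj = np.array([[reliability(s), s[1]] for s in schemes])

def pareto_front(points):
    # A point is kept if no other point is >= in all objectives and > in one
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points >= p, axis=1) &
                           np.any(points > p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

front = pareto_front(obj)
print(len(front), "non-dominated of", len(schemes))
```

Each point on the resulting front represents a different bias between reliability and ambition of the reduction target, which is the tradeoff the Pareto fronts in the paper expose.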

  9. Reliability-oriented multi-objective optimal decision-making approach for uncertainty-based watershed load reduction

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Feifei [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Liu, Yong, E-mail: yongliu@pku.edu.cn [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Institute of Water Sciences, Peking University, Beijing 100871 (China); Su, Han [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Zou, Rui [Tetra Tech, Inc., 10306 Eaton Place, Ste 340, Fairfax, VA 22030 (United States); Yunnan Key Laboratory of Pollution Process and Management of Plateau Lake-Watershed, Kunming 650034 (China); Guo, Huaicheng [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China)

    2015-05-15

    Water quality management and load reduction are subject to inherent uncertainties in watershed systems and competing decision objectives. Optimal decision-making modeling in watershed load reduction therefore faces the following challenges: (a) it is difficult to obtain absolutely “optimal” solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with the multi-objective evolutionary algorithm, demonstrating schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under impractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insights into reliability tradeoffs and can thus help them to avoid ineffective decisions. - Highlights: • Reliability-oriented multi-objective (ROMO) optimal decision approach was proposed. • The approach can avoid specifying reliability levels prior to optimization modeling. • Multiple reliability objectives can be systematically balanced using Pareto fronts. • Neural network model was used to

  10. An efficient identification approach for stable and unstable nonlinear systems using Colliding Bodies Optimization algorithm.

    Science.gov (United States)

    Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P

    2015-11-01

    This paper presents an efficient approach to identifying different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The performance of the CBO-based optimization approach, in terms of precision and accuracy, is justified by the minimum output mean square error (MSE), which signifies that the amounts of bias and variance in the output domain are also the least. It is also observed that optimizing the output MSE in the presence of outliers consistently results in very close estimates of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. The optimum MSE values, computational times and statistical information on the MSEs are all found to be superior to those of other existing stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
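The identification setup can be sketched with a toy Hammerstein system (a static polynomial nonlinearity followed by a first-order linear filter) whose parameters are recovered by minimizing the output MSE. For brevity the search below is a generic elitist stochastic search standing in for CBO; the system structure and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# True Hammerstein system: static nonlinearity u -> a*u + b*u**3,
# followed by a first-order linear filter y[k] = c*y[k-1] + v[k]
true_theta = np.array([1.0, 0.5, 0.8])          # a, b, c
u = rng.uniform(-1, 1, 400)

def simulate(theta, u):
    a, b, c = theta
    v = a * u + b * u**3
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = c * y[k - 1] + v[k]
    return y

y_meas = simulate(true_theta, u) + 0.01 * rng.standard_normal(len(u))

def mse(theta):
    return np.mean((simulate(theta, u) - y_meas) ** 2)

# Simple population-based stochastic search (stand-in for CBO):
# keep the best half, refill by perturbing it, shrink the step each iteration.
pop = rng.uniform(-2, 2, (40, 3))
for it in range(60):
    pop = pop[np.argsort([mse(p) for p in pop])]
    scale = 0.5 * 0.93**it
    pop[20:] = pop[:20] + scale * rng.standard_normal((20, 3))

best = pop[0]
print(best.round(2), round(float(mse(best)), 5))
```

A CBO implementation would replace the refill step with the algorithm's collision-based velocity updates, but the objective (output MSE of the simulated Hammerstein model) is the same.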

  11. Deformation integrity monitoring for GNSS positioning services including local, regional and large scale hazard monitoring - the Karlsruhe approach and software(MONIKA)

    Science.gov (United States)

    Jaeger, R.

    2007-05-01

    GNSS positioning services like SAPOS/ascos in Germany, and many others in Europe, America and worldwide, have quickly come into interdisciplinary, country-wide use for precise geo-referencing, replacing traditional low-order geodetic networks. It therefore becomes necessary that possible changes in the reference stations' coordinates are detected ad hoc. The GNSS reference-station MONitoring by the KArlsruhe approach and software (MONIKA) is designed for that task. The developments at Karlsruhe University of Applied Sciences, in cooperation with the State Survey of Baden-Württemberg, are further motivated by the official 2006 resolution of the German state survey departments' association (Arbeitsgemeinschaft der Vermessungsverwaltungen Deutschland, AdV) on coordinate monitoring as a quality-control duty of GNSS positioning service providers. The presented approach can, besides coordinate control of GNSS positioning services, also be used to set up any GNSS service for the tasks of an area-wide geodynamical and natural-disaster-prevention service. The mathematical model of the approach, which enables a multivariate and multi-epochal design, takes the RINEX data of the GNSS service as input, followed by fully automatic processing of baselines and/or sessions and a near-online setting up of epoch-state vectors and their covariance matrices in a rigorous 3D network adjustment. In the case of large-scale and long-term monitoring situations, standard geodynamical trends (datum drift, plate movements etc.) are accordingly considered and included in the mathematical model of MONIKA. The coordinate-based deformation monitoring, as the third step of the stepwise adjustments, is based on the above epoch-state vectors and, splitting off geodynamic trends, on multivariate and multi-epochal congruency testing. As long as no other information exists, all points are assumed to be stable and congruent reference
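The congruency testing step can be illustrated for a single station: the coordinate difference between two epochs is tested against its covariance with a chi-squared statistic. The coordinates and covariance matrices below are illustrative values, not SAPOS data.

```python
import numpy as np
from scipy.stats import chi2

# Coordinates of one station at two epochs with their covariance matrices
x1 = np.array([4157222.543, 664789.307, 4774952.099])   # epoch 1 (m), illustrative
x2 = x1 + np.array([0.004, -0.002, 0.003])              # epoch 2: apparent movement
Q1 = np.diag([2e-6, 2e-6, 4e-6])                        # epoch-1 covariance (m^2)
Q2 = np.diag([2e-6, 2e-6, 4e-6])

d = x2 - x1
Qd = Q1 + Q2                           # covariance of the coordinate difference
T = d @ np.linalg.solve(Qd, d)         # congruency test statistic, ~chi2(3)
p_value = chi2.sf(T, df=3)
moved = p_value < 0.05                 # reject congruency at the 5% level
print(round(float(T), 3), moved)
```

With these numbers the millimetre-level displacement is not significant against the assumed noise, so the station would be kept in the stable reference set; a multivariate version stacks all stations' differences into one vector and covariance.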

  12. Towards a more balanced view of the potentials of locally-based monitoring

    DEFF Research Database (Denmark)

    Lund, Jens Friis

    2014-01-01

    The literature on locally-based monitoring in the context of conservation of ecosystems and natural resources in developing countries displays a great deal of optimism about its prospects as a low-cost approach to gather information about conservation outcomes. Yet, this optimism stands in stark contrast to studies on co-management between States and local communities, which show that such processes, in which communities and the State ostensibly work hand in hand on the monitoring and management of natural resources, are fraught with power struggles within communities as well as between communities and the State. Moreover, the information gathered can be perceived by those who monitor to be linked to claims over resource rights and associated benefits. In such situations, trust in locally-based monitoring should be tempered by scepticism and systems of checks and balances.

  13. An approach to maintenance optimization where safety issues are important

    International Nuclear Information System (INIS)

    Vatn, Jorn; Aven, Terje

    2010-01-01

    The starting point for this paper is a traditional approach to maintenance optimization where an objective function is used for optimizing maintenance intervals. The objective function reflects maintenance cost, cost of loss of production/services, as well as safety costs, and is based on a classical cost-benefit analysis approach where a value of prevented fatality (VPF) is used to weight the importance of safety. However, the rationale for such an approach could be questioned. What is the meaning of such a VPF figure, and is it sufficient to reflect the importance of safety by calculating the expected fatality loss VPF and potential loss of lives (PLL), as is done in the cost-benefit analyses? Should the VPF be the same number for all types of accidents, or should it be increased in the case of multiple-fatality accidents to reflect gross accident aversion? In this paper, these issues are discussed. We conclude that we have to see beyond the expected values in situations with high safety impacts. A framework is presented which opens up a broader decision basis, covering considerations of the potential for gross accidents, the types of uncertainties and lack of knowledge of important risk-influencing factors. Decisions with a high safety impact are moved from the maintenance department to the 'Safety Board' for a broader discussion. In this way, we avoid the objective function being used mechanically to optimize the maintenance, with important safety-related decisions made implicitly and outside the normal arena for safety decisions, e.g. outside the traditional 'Safety Board'. A case study from the Norwegian railways is used to illustrate the discussions.
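The traditional objective function criticized here can be written down concretely: expected cost per unit time as the preventive-maintenance cost plus the failure frequency times material loss and VPF-weighted expected fatalities, minimized over the interval T. All numbers and the Weibull-type rate below are invented for illustration, not from the railway case study.

```python
import numpy as np

# Illustrative numbers, not from the paper
C_pm = 10_000.0         # cost of one preventive maintenance action
C_failure = 500_000.0   # material/production loss per failure
VPF = 30e6              # value of a prevented fatality (monetary units)
PLL_per_failure = 0.01  # expected fatalities per failure event

def failure_rate(T, beta=2.5, eta=8.0):
    # Effective failure frequency per year for interval T (Weibull-type wear-out)
    return (T / eta) ** (beta - 1) / eta

def expected_cost(T):
    lam = failure_rate(T)
    return C_pm / T + lam * (C_failure + VPF * PLL_per_failure)

T_grid = np.linspace(0.1, 10.0, 1000)
T_opt = T_grid[np.argmin([expected_cost(T) for T in T_grid])]
print(round(float(T_opt), 2), "years")
```

The paper's point is precisely that this minimization treats VPF * PLL as just another cost term; its proposed framework routes high-safety-impact choices of T to a broader 'Safety Board' discussion instead.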

  14. An approach to maintenance optimization where safety issues are important

    Energy Technology Data Exchange (ETDEWEB)

    Vatn, Jorn, E-mail: jorn.vatn@ntnu.n [NTNU, Production and Quality Engineering, 7491 Trondheim (Norway); Aven, Terje [University of Stavanger (Norway)

    2010-01-15

    The starting point for this paper is a traditional approach to maintenance optimization where an objective function is used for optimizing maintenance intervals. The objective function reflects maintenance cost, cost of loss of production/services, as well as safety costs, and is based on a classical cost-benefit analysis approach where a value of prevented fatality (VPF) is used to weight the importance of safety. However, the rationale for such an approach could be questioned. What is the meaning of such a VPF figure, and is it sufficient to reflect the importance of safety by calculating the expected fatality loss VPF and potential loss of lives (PLL), as is done in the cost-benefit analyses? Should the VPF be the same number for all types of accidents, or should it be increased in the case of multiple-fatality accidents to reflect gross accident aversion? In this paper, these issues are discussed. We conclude that we have to see beyond the expected values in situations with high safety impacts. A framework is presented which opens up a broader decision basis, covering considerations of the potential for gross accidents, the types of uncertainties and lack of knowledge of important risk-influencing factors. Decisions with a high safety impact are moved from the maintenance department to the 'Safety Board' for a broader discussion. In this way, we avoid the objective function being used mechanically to optimize the maintenance, with important safety-related decisions made implicitly and outside the normal arena for safety decisions, e.g. outside the traditional 'Safety Board'. A case study from the Norwegian railways is used to illustrate the discussions.

  15. Optimization of experimental conditions for the monitoring of nucleation and growth of racemic Diprophylline from the supercooled melt

    Science.gov (United States)

    Lemercier, Aurélien; Viel, Quentin; Brandel, Clément; Cartigny, Yohann; Dargent, Eric; Petit, Samuel; Coquerel, Gérard

    2017-08-01

    Since more and more pharmaceutical substances are developed as amorphous forms, it is nowadays of major relevance to gain insight into the nucleation and growth mechanisms from supercooled melts (SCM). A step-by-step approach to recrystallization from an SCM is presented here, designed to elucidate the impact of various experimental parameters. Using the bronchodilator agent Diprophylline (DPL) as a model compound, it is shown that optimal conditions for informative observations of the crystallization behaviour of supercooled racemic DPL require placing samples between two cover slides with a maximum sample thickness of 20 μm, and monitoring recrystallization during an annealing step of 30 min at 70 °C, i.e. about 33 °C above the glass transition temperature. Under these optimized conditions, it could be established that DPL crystallization proceeds in two steps: spontaneous nucleation and growth of large, well-faceted particles of a new crystal form (primary crystals: PC), and subsequent crystallization of a previously known form (RII) that develops from specific surfaces of PC. The formation of PC particles therefore constitutes the key step of the crystallization events and is shown to be favoured by at least 2.33 wt% of the major chemical impurity, Theophylline.

  16. Optimization and control methods in industrial engineering and construction

    CERN Document Server

    Wang, Xiangyu

    2014-01-01

    This book presents recent advances in optimization and control methods with applications to industrial engineering and construction management. It consists of 15 chapters authored by recognized experts in a variety of fields including control and operation research, industrial engineering, and project management. Topics include numerical methods in unconstrained optimization, robust optimal control problems, set splitting problems, optimum confidence interval analysis, a monitoring networks optimization survey, distributed fault detection, nonferrous industrial optimization approaches, neural networks in traffic flows, economic scheduling of CCHP systems, a project scheduling optimization survey, lean and agile construction project management, practical construction projects in Hong Kong, dynamic project management, production control in PC4P, and target contracts optimization.   The book offers a valuable reference work for scientists, engineers, researchers and practitioners in industrial engineering and c...

  17. A Systematic Approach for Quantitative Analysis of Multidisciplinary Design Optimization Framework

    Science.gov (United States)

    Kim, Sangho; Park, Jungkeun; Lee, Jeong-Oog; Lee, Jae-Woo

    An efficient Multidisciplinary Design and Optimization (MDO) framework for an aerospace engineering system should use and integrate distributed resources such as various analysis codes, optimization codes, Computer Aided Design (CAD) tools, and Data Base Management Systems (DBMS) in a heterogeneous environment, and needs to provide user-friendly graphical user interfaces. In this paper, we propose a systematic approach for determining a reference MDO framework and for evaluating MDO frameworks. The proposed approach incorporates two well-known methods, the Analytic Hierarchy Process (AHP) and Quality Function Deployment (QFD), in order to provide a quantitative analysis of the qualitative criteria of MDO frameworks. The identification and hierarchy of the framework requirements and the corresponding solutions for two reference MDO frameworks, a general one and an aircraft-oriented one, were carefully investigated. The reference frameworks were also quantitatively identified using AHP and QFD. An assessment of three in-house frameworks was then performed. The results produced clear and useful guidelines for improving the in-house MDO frameworks and showed the feasibility of the proposed approach for evaluating an MDO framework without human intervention.

  18. Optimizing the data acquisition rate for a remotely controllable structural monitoring system with parallel operation and self-adaptive sampling

    International Nuclear Information System (INIS)

    Sheng, Wenjuan; Guo, Aihuang; Liu, Yang; Azmi, Asrul Izam; Peng, Gang-Ding

    2011-01-01

    We present a novel technique that optimizes the real-time remote monitoring and control of dispersed civil infrastructures. The monitoring system is based on fiber Bragg grating (FBG) sensors and transfers data via Ethernet. The technique combines parallel operation and self-adaptive sampling to increase the data acquisition rate in remotely controllable structural monitoring systems. The compact parallel operation mode is highly efficient at achieving the highest possible data acquisition rate for the FBG-sensor-based local data acquisition system. Self-adaptive sampling is introduced to continuously coordinate local acquisition and remote control for data acquisition rate optimization. Key issues which impact the operation of the whole system, such as the real-time data acquisition rate, data processing capability, and buffer usage, are investigated. The results show that, by introducing parallel operation and self-adaptive sampling, the data acquisition rate can be increased severalfold without affecting the system's performance in either local data acquisition or remote process control.
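    One plausible shape for the self-adaptive sampling coordination (a hypothetical sketch under assumed thresholds, not the authors' algorithm) is to drive the local sampling interval from transmit-buffer occupancy:

```python
# Illustrative sketch: slow acquisition down before the transmit buffer
# overflows, and speed it back up once the buffer drains. The thresholds,
# limits, and adjustment factor are all tuning assumptions.

def adapt_interval(interval_ms, buffer_fill, lo=0.25, hi=0.75,
                   min_ms=1.0, max_ms=100.0, factor=2.0):
    """Return the next sampling interval given a buffer fill ratio in [0, 1]."""
    if buffer_fill > hi:        # buffer nearly full: sample less often
        interval_ms *= factor
    elif buffer_fill < lo:      # buffer draining: sample more often
        interval_ms /= factor
    return max(min_ms, min(max_ms, interval_ms))
```

    Multiplicative increase/decrease keeps such a controller responsive over a wide range of network latencies while the clamp bounds the acquisition rate.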

  19. Probabilistic safety assessment and optimal control of hazardous technological systems. A marked point process approach

    International Nuclear Information System (INIS)

    Holmberg, J.

    1997-04-01

    The thesis models risk management as an optimal control problem for a stochastic process. The approach classifies the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used to optimize the plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant.

  20. An artificial intelligence approach to onboard fault monitoring and diagnosis for aircraft applications

    Science.gov (United States)

    Schutte, P. C.; Abbott, K. H.

    1986-01-01

    Real-time onboard fault monitoring and diagnosis for aircraft applications, whether performed by the human pilot or by automation, presents many difficult problems. Quick response to failures may be critical, the pilot often must compensate for the failure while diagnosing it, his information about the state of the aircraft is often incomplete, and the behavior of the aircraft changes as the effect of the failure propagates through the system. A research effort was initiated to identify guidelines for automation of onboard fault monitoring and diagnosis and associated crew interfaces. The effort began by determining the flight crew's information requirements for fault monitoring and diagnosis and the various reasoning strategies they use. Based on this information, a conceptual architecture was developed for the fault monitoring and diagnosis process. This architecture represents an approach and a framework which, once incorporated with the necessary detail and knowledge, can be a fully operational fault monitoring and diagnosis system, as well as providing the basis for comparison of this approach to other fault monitoring and diagnosis concepts. The architecture encompasses all aspects of the aircraft's operation, including navigation, guidance and controls, and subsystem status. The portion of the architecture that encompasses subsystem monitoring and diagnosis was implemented for an aircraft turbofan engine to explore and demonstrate the AI concepts involved. This paper describes the architecture and the implementation for the engine subsystem.

  1. Hybrid approaches to clinical trial monitoring: Practical alternatives to 100% source data verification

    Directory of Open Access Journals (Sweden)

    Sourabh De

    2011-01-01

    Full Text Available For years, the vast majority of the clinical trial industry has followed the tenet of 100% source data verification (SDV). This has been driven partly by an overcautious approach that links data quality to the extent of monitoring and SDV, and partly by a desire to stay on the safe side of the regulations. The regulations, however, do not state any upper or lower limits of SDV. What they expect from researchers and sponsors are methodologies which ensure data quality. How the industry achieves this is open to innovation and the application of statistical methods, targeted and remote monitoring, real-time reporting, adaptive monitoring schedules, etc.; in short, hybrid approaches to monitoring. Coupled with the concepts of optimum monitoring and SDV at site and off-site monitoring techniques, it should be possible to reduce the time required for SDV, freeing time for other productive activities. Organizations stand to gain directly or indirectly from such savings, whether by diverting the funds back to the R&D pipeline, investing more in technology infrastructure to support large trials, or simply increasing the sample size of trials. Whether it also improves the work-life balance of monitors, who may then travel on a less hectic schedule for the same level of quality and productivity, can be predicted only when there is more evidence from the field.

  2. Hybrid approaches to clinical trial monitoring: Practical alternatives to 100% source data verification.

    Science.gov (United States)

    De, Sourabh

    2011-07-01

    For years, the vast majority of the clinical trial industry has followed the tenet of 100% source data verification (SDV). This has been driven partly by an overcautious approach that links data quality to the extent of monitoring and SDV, and partly by a desire to stay on the safe side of the regulations. The regulations, however, do not state any upper or lower limits of SDV. What they expect from researchers and sponsors are methodologies which ensure data quality. How the industry achieves this is open to innovation and the application of statistical methods, targeted and remote monitoring, real-time reporting, adaptive monitoring schedules, etc.; in short, hybrid approaches to monitoring. Coupled with the concepts of optimum monitoring and SDV at site and off-site monitoring techniques, it should be possible to reduce the time required for SDV, freeing time for other productive activities. Organizations stand to gain directly or indirectly from such savings, whether by diverting the funds back to the R&D pipeline, investing more in technology infrastructure to support large trials, or simply increasing the sample size of trials. Whether it also improves the work-life balance of monitors, who may then travel on a less hectic schedule for the same level of quality and productivity, can be predicted only when there is more evidence from the field.

  3. Theoretical approach in optimization of stability of the multicomponent solid waste form

    International Nuclear Information System (INIS)

    Raicevic, S.; Plecas, I.; Mandic, M.

    1998-01-01

    Chemical precipitation of radionuclides and their immobilization in a solid matrix is an important approach in radioactive wastewater treatment. Unfortunately, because of the complexity of the system, optimization of this process in terms of efficacy and safety is a serious practical problem, even in the treatment of monocomponent nuclear waste. The situation is further complicated in the case of polycomponent nuclear waste because of the synergistic effects of interactions between the radioactive components and the solid matrix. Recently, we proposed a general theoretical approach for optimizing the precipitation and immobilization of metal impurities by a solid matrix. One of the main advantages of this approach is the possibility of treating multicomponent liquid waste immobilized by a solid matrix. This approach is used here to investigate the stability of the hydroxyapatite (HAP) - Pb/Cd system, selected as a model multicomponent waste system. In the analysis, we use a structurally dependent term of the cohesive energy as the stability criterion. (author)

  4. Radiation dose optimization research: Exposure technique approaches in CR imaging – A literature review

    International Nuclear Information System (INIS)

    Seeram, Euclid; Davidson, Rob; Bushong, Stewart; Swan, Hans

    2013-01-01

    The purpose of this paper is to review the literature on exposure technique approaches in Computed Radiography (CR) imaging as a means of radiation dose optimization. Specifically, the review assessed three approaches: optimization of kVp, optimization of mAs, and optimization of the Exposure Indicator (EI) in practice. Only papers dating back to 2005 were included in this review. The major themes, patterns, and common findings from the literature showed that important features of radiation dose management strategies for digital radiography include identification of the EI as a dose control mechanism and as a “surrogate for dose management”, and that the use of the EI has been viewed as an opportunity for dose optimization. Furthermore, optimization research has focussed mainly on optimizing the kVp in CR imaging as a means of implementing the ALARA philosophy, and studies have concentrated mainly on chest imaging using different CR systems such as those commercially available from Fuji, Agfa, Kodak, and Konica-Minolta. These studies have produced “conflicting results”. A further common pattern was the use of automatic exposure control (AEC), the measurement of constant effective dose, and the use of a dose-area product (DAP) meter.

  5. A market-based optimization approach to sensor and resource management

    Science.gov (United States)

    Schrage, Dan; Farnham, Christopher; Gonsalves, Paul G.

    2006-05-01

    Dynamic resource allocation for sensor management is a problem that demands solutions beyond traditional approaches to optimization. Market-based optimization applies solutions from economic theory, particularly game theory, to the resource allocation problem by creating an artificial market for sensor information and computational resources. Intelligent agents are the buyers and sellers in this market, and they represent all the elements of the sensor network, from sensors to sensor platforms to computational resources. These agents interact based on a negotiation mechanism that determines their bidding strategies. This negotiation mechanism and the agents' bidding strategies are based on game theory, and they are designed so that the aggregate result of the multi-agent negotiation process is a market in competitive equilibrium, which guarantees an optimal allocation of resources throughout the sensor network. This paper makes two contributions to the field of market-based optimization: First, we develop a market protocol to handle heterogeneous goods in a dynamic setting. Second, we develop arbitrage agents to improve the efficiency in the market in light of its dynamic nature.
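    The competitive-equilibrium goal can be illustrated with a toy Walrasian price-adjustment (tâtonnement) loop. The single divisible resource, the budget-proportional demands, and the update rule below are assumptions for illustration; they greatly simplify the game-theoretic negotiation mechanism the paper develops:

```python
# Toy market clearing: buyer agents demand sensor time according to their
# budgets, and the unit price adjusts until demand matches supply.

def tatonnement(budgets, supply, p=1.0, step=0.01, iters=10_000):
    """Walrasian price adjustment for a single divisible resource.

    Each agent i spends its whole budget, so its demand is budgets[i] / p;
    equilibrium is reached when total demand equals supply.
    """
    for _ in range(iters):
        demand = sum(b / p for b in budgets)
        p += step * (demand - supply)   # raise price on excess demand
    return p

# With budgets summing to 60 and a supply of 20, the clearing price
# converges to 3, where total demand exactly equals supply.
price = tatonnement([10.0, 20.0, 30.0], supply=20.0)
```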

  6. Optimal Modeling of Wireless LANs: A Decision-Making Multiobjective Approach

    Directory of Open Access Journals (Sweden)

    Tomás de Jesús Mateo Sanguino

    2018-01-01

    Full Text Available Communication infrastructure planning is a critical design task that typically requires handling complex networking concepts aimed at optimizing performance and resources, thus demanding high analytical and problem-solving skills of engineers. To reduce this gap, this paper describes an optimization algorithm—based on an evolutionary strategy—created as an aid for decision-making prior to the real deployment of wireless LANs. The developed algorithm automates the design process, traditionally done by hand by network technicians, in order to save time and cost by improving the WLAN arrangement. To this end, we implemented a multiobjective genetic algorithm (MOGA) with the purpose of meeting two simultaneous design objectives, namely, minimizing the number of APs while maximizing the coverage signal over the whole planning area. Such an approach provides efficient and scalable solutions close to the best network design, so we integrated the developed algorithm into an engineering tool for modelling the behavior of WLANs in ICT infrastructures. Called WiFiSim, it allows the investigation of various complex issues concerning the design of IEEE 802.11-based WLANs, thereby facilitating the study, design, and optimal deployment of wireless LANs through complete modelling software. Finally, we comparatively evaluated three target applications, considering small, medium, and large scenarios, against a previously developed mono-objective genetic algorithm.
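    The two competing objectives can be made concrete with a small sketch. The grid model, coverage radius, and random initial population below are invented, and only the Pareto-selection step of a MOGA is shown (a full algorithm such as the one described would add crossover and mutation):

```python
import random

GRID = [(x, y) for x in range(10) for y in range(10)]
RADIUS2 = 8  # squared coverage radius of one AP (an arbitrary assumption)

def coverage(aps):
    """Number of grid cells within radius of at least one access point."""
    return sum(1 for c in GRID
               if any((c[0] - a[0]) ** 2 + (c[1] - a[1]) ** 2 <= RADIUS2
                      for a in aps))

def dominates(a, b):
    """a dominates b: no worse in both objectives and strictly better in one."""
    return a[0] <= b[0] and a[1] >= b[1] and (a[0] < b[0] or a[1] > b[1])

def pareto_front(candidates):
    """Non-dominated (AP count, coverage) pairs among the candidates."""
    scored = [(len(aps), coverage(aps)) for aps in candidates]
    return sorted({s for s in scored
                   if not any(dominates(t, s) for t in scored)})

random.seed(0)
population = [random.sample(GRID, random.randint(1, 8)) for _ in range(200)]
front = pareto_front(population)  # each entry: (number of APs, cells covered)
```

    The resulting front makes the trade-off explicit: moving along it, every extra AP buys additional covered cells, and the planner picks the knee point that fits the budget.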

  7. A geometric approach to multiperiod mean variance optimization of assets and liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2005-01-01

    We present a geometric approach to discrete time multiperiod mean variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean variance optimal policies can be decomposed in an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multi period mean variance frontiers are spanned by an orthogonal basis of dynamic returns. Spec...

  8. A Hybrid Change Detection Approach for Damage Detection and Recovery Monitoring

    Science.gov (United States)

    de Alwis Pitts, Dilkushi; Wieland, Marc; Wang, Shifeng; So, Emily; Pittore, Massimiliano

    2014-05-01

    Following a disaster, change detection via pre- and post-event very high resolution remote sensing images is an essential technique for damage assessment and recovery monitoring over large areas in complex urban environments. Most assessments to date focus on detection, destruction and recovery of man-made objects that facilitate shelter and accessibility, such as buildings, roads, bridges, etc., as indicators for assessment and better decision making. Moreover, many current change-detection mechanisms do not use all the data and knowledge which are often available for the pre-disaster state. Recognizing the continuous rather than dichotomous character of the data-rich/data-poor distinction permits the incorporation of ancillary data and existing knowledge into the processing flow. Such incorporation could improve the reliability of the results and thereby enhance the usability of robust methods for disaster management. This study proposes an application-specific and robust change detection method from multi-temporal very high resolution multi-spectral satellite images. This hybrid indicator-specific method uses readily available pre-disaster GIS data and integrates existing knowledge into the processing flow to optimize the change detection while offering the possibility to target specific types of changes to man-made objects. The indicator-specific information of the GIS objects is used as a series of masks to treat the GIS objects with similar characteristics similarly for better accuracy. The proposed approach is based on a fusion of a multi-index change detection method based on gradient, texture and edge similarity filters. The change detection index is flexible for disaster cases in which the pre-disaster and post-disaster images are not of the same resolution. The proposed automated method is evaluated with QuickBird and Ikonos datasets for abrupt changes soon after disaster. The method could also be extended in a semi-automated way for monitoring

  9. Optimizing the transient transfection process of HEK-293 suspension cells for protein production by nucleotide ratio monitoring

    DEFF Research Database (Denmark)

    de Los Milagros Bassani Molinas, Maria; Beer, Christiane; Hesse, F

    2014-01-01

    Large scale, transient gene expression (TGE) is highly dependent on the physiological status of a cell line. Therefore, intracellular nucleotide pools and ratios were used for identifying and monitoring the optimal status of a suspension cell line used for TGE. The transfection efficiency upon po...

  10. Mass Optimization of Battery/Supercapacitors Hybrid Systems Based on a Linear Programming Approach

    Science.gov (United States)

    Fleury, Benoit; Labbe, Julien

    2014-08-01

    The objective of this paper is to show that, on a specific launcher-type mission profile, a 40% gain of mass is expected using a battery/supercapacitors active hybridization instead of a single battery solution. This result is based on the use of a linear programming optimization approach to perform the mass optimization of the hybrid power supply solution.
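    The sizing idea reduces to a small linear program: minimize total mass subject to a peak-power and an energy requirement. The specific-power and specific-energy figures and the mission requirements below are invented for illustration. Because there are only two decision variables, a sketch can find the optimum by enumerating constraint intersections, since an LP optimum lies at a vertex of the feasible region:

```python
from itertools import combinations

# Constraints a*mb + b*mc >= rhs, plus non-negativity, one row each.
P_REQ, E_REQ = 100_000.0, 5_000.0         # W peak, Wh stored (assumed)
cons = [
    (1_000.0, 10_000.0, P_REQ),           # specific power: 1 kW/kg vs 10 kW/kg
    (150.0, 5.0, E_REQ),                  # specific energy: 150 Wh/kg vs 5 Wh/kg
    (1.0, 0.0, 0.0),                      # battery mass mb >= 0
    (0.0, 1.0, 0.0),                      # supercapacitor mass mc >= 0
]

def intersect(c1, c2):
    """Intersection point of two constraint boundary lines, or None."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] >= r - 1e-6 for a, b, r in cons)

# The LP optimum lies at a vertex: an intersection of two active constraints.
vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
mb, mc = min(vertices, key=lambda p: p[0] + p[1])
```

    With these numbers the hybrid vertex (about 33 kg of battery plus 7 kg of supercapacitor) beats the battery-only vertex at 100 kg, mirroring in spirit the kind of mass gain the paper reports.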

  11. Site specific optimization of wind turbines energy cost: Iterative approach

    International Nuclear Information System (INIS)

    Rezaei Mirghaed, Mohammad; Roshandel, Ramin

    2013-01-01

    Highlights: • An optimization model of wind turbine parameters plus rectangular farm layout is developed. • Results show that the levelized cost for a single turbine fluctuates between 46.6 and 54.5 $/MWh. • Modeling results for two specific farms give the optimal sizing and farm layout. • Results show that the levelized cost of the wind farms fluctuates between 45.8 and 67.2 $/MWh. - Abstract: The present study was aimed at developing a model to optimize the sizing parameters and farm layout of wind turbines according to the wind resource and economic aspects. The proposed model, comprising aerodynamic, economic and optimization sub-models, is used to achieve the minimum levelized cost of electricity. Blade element momentum theory is utilized for aerodynamic modeling of pitch-regulated horizontal-axis wind turbines. A comprehensive cost model including the capital costs of all turbine components is also considered. An iterative approach is used to develop the optimization model. Modeling results are presented for three potential regions in Iran: Khaf, Ahar and Manjil. The optimum configuration and sizing for a single turbine with minimum levelized cost of electricity are presented. The optimal cost of energy for one turbine is calculated as about 46.7, 54.5 and 46.6 $/MWh at the three sites, respectively. In addition, the optimal size of turbines, annual electricity production, capital cost, and wind farm layout for two different rectangular and square-shaped farms in the proposed areas have been determined. According to the results, the optimal system configuration corresponds to a minimum levelized cost of electricity of about 45.8 to 67.2 $/MWh in the studied wind farms.
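    The levelized-cost figures quoted above come from a detailed component cost model; the bare structure of such a calculation can be sketched as follows, with all numbers (discount rate, lifetime, costs, production) being illustrative assumptions:

```python
def lcoe(capital_cost, om_frac, aep_mwh, rate=0.08, years=20):
    """Levelized cost of electricity in $/MWh.

    capital_cost : total installed cost ($)
    om_frac      : annual O&M cost as a fraction of capital cost
    aep_mwh      : annual energy production (MWh)
    """
    # The capital recovery factor annualizes the up-front investment.
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return capital_cost * (crf + om_frac) / aep_mwh

# E.g. a $3.5M turbine producing 7,500 MWh/yr at 2% O&M:
cost = lcoe(3_500_000, 0.02, 7_500)
```

    This example lands near $57/MWh, inside the range reported in the abstract; in the paper's iterative approach, turbine sizing and layout feed both the cost and production terms of a far more detailed version of this ratio.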

  12. Shape Optimization of Swimming Sheets

    Energy Technology Data Exchange (ETDEWEB)

    Wilkening, J.; Hosoi, A.E.

    2005-03-01

    The swimming behavior of a flexible sheet which moves by propagating deformation waves along its body was first studied by G. I. Taylor in 1951. In addition to being of theoretical interest, this problem serves as a useful model of the locomotion of gastropods and various micro-organisms. Although the mechanics of swimming via wave propagation has been studied extensively, relatively little work has been done to define or describe optimal swimming by this mechanism. We carry out this objective for a sheet that is separated from a rigid substrate by a thin film of viscous Newtonian fluid. Using a lubrication approximation to model the dynamics, we derive the relevant Euler-Lagrange equations to optimize swimming speed and efficiency. The optimization equations are solved numerically using two different schemes: a limited memory BFGS method that uses cubic splines to represent the wave profile, and a multi-shooting Runge-Kutta approach that uses the Levenberg-Marquardt method to vary the parameters of the equations until the constraints are satisfied. The former approach is less efficient but generalizes nicely to the non-lubrication setting. For each optimization problem we obtain a one parameter family of solutions that becomes singular in a self-similar fashion as the parameter approaches a critical value. We explore the validity of the lubrication approximation near this singular limit by monitoring higher order corrections to the zeroth order theory and by comparing the results with finite element solutions of the full Stokes equations.

  13. Improving Wishart Classification of Polarimetric SAR Data Using the Hopfield Neural Network Optimization Approach

    Directory of Open Access Journals (Sweden)

    Íñigo Molina

    2012-11-01

    Full Text Available This paper proposes an optimization relaxation approach based on the analogue Hopfield Neural Network (HNN) for cluster refinement of pre-classified Polarimetric Synthetic Aperture Radar (PolSAR) image data. We consider the initial classification provided by the maximum-likelihood classifier based on the complex Wishart distribution, which is then supplied to the HNN optimization approach. The goal is to improve the classification results obtained by the Wishart approach. The classification improvement is verified by computing a cluster separability coefficient and a measure of homogeneity within the clusters. During the HNN optimization process, two consistency coefficients are computed for each iteration and each pixel, taking into account two types of relations between the pixel under consideration and its neighbors. Based on these coefficients and on information from the pixel itself, the pixel under study is re-classified. Different experiments verify that the proposed approach outperforms other strategies, achieving the best results in terms of separability and a trade-off with homogeneity, preserving the relevant structures in the image. Performance is also measured in terms of computational central processing unit (CPU) time.

  14. Multi-scale Modeling Approach for Design and Optimization of Oleochemical Processes

    DEFF Research Database (Denmark)

    Jones, Mark Nicholas; Forero-Hernandez, Hector Alexander; Sarup, Bent

    2017-01-01

    The primary goal of this work is to present a systematic methodology and software framework for a multi-level approach ranging from process synthesis and modeling through property prediction, to sensitivity analysis, property parameter tuning and optimization. This framework is applied to the follow...

  15. A two-stage approach for multi-objective decision making with applications to system reliability optimization

    International Nuclear Information System (INIS)

    Li Zhaojun; Liao Haitao; Coit, David W.

    2009-01-01

    This paper proposes a two-stage approach for solving multi-objective system reliability optimization problems. In this approach, a Pareto optimal solution set is initially identified at the first stage by applying a multiple objective evolutionary algorithm (MOEA). Quite often there are a large number of Pareto optimal solutions, and it is difficult, if not impossible, to effectively choose the representative solutions for the overall problem. To overcome this challenge, an integrated multiple objective selection optimization (MOSO) method is utilized at the second stage. Specifically, a self-organizing map (SOM), with the capability of preserving the topology of the data, is applied first to classify those Pareto optimal solutions into several clusters with similar properties. Then, within each cluster, the data envelopment analysis (DEA) is performed, by comparing the relative efficiency of those solutions, to determine the final representative solutions for the overall problem. Through this sequential solution identification and pruning process, the final recommended solutions to the multi-objective system reliability optimization problem can be easily determined in a more systematic and meaningful way.

  16. The analytical approach to optimization of active region structure of quantum dot laser

    International Nuclear Information System (INIS)

    Korenev, V V; Savelyev, A V; Zhukov, A E; Omelchenko, A V; Maximov, M V

    2014-01-01

    Using the analytical approach introduced in our previous papers, we analyse the possibilities for optimizing the size and structure of the active region of semiconductor quantum dot lasers emitting via ground-state optical transitions. It is shown that there are an optimal cavity length, dispersion and number of QD layers in the laser active region which allow one to obtain a lasing spectrum of a given width at minimum injection current. The laser efficiency corresponding to the injection current optimized by the cavity length is practically equal to its maximum value.

  17. The analytical approach to optimization of active region structure of quantum dot laser

    Science.gov (United States)

    Korenev, V. V.; Savelyev, A. V.; Zhukov, A. E.; Omelchenko, A. V.; Maximov, M. V.

    2014-10-01

    Using the analytical approach introduced in our previous papers, we analyse the possibilities for optimizing the size and structure of the active region of semiconductor quantum dot lasers emitting via ground-state optical transitions. It is shown that there are an optimal cavity length, dispersion and number of QD layers in the laser active region which allow one to obtain a lasing spectrum of a given width at minimum injection current. The laser efficiency corresponding to the injection current optimized by the cavity length is practically equal to its maximum value.

  18. Future xenon system operational parameter optimization

    International Nuclear Information System (INIS)

    Lowrey, J.D.; Eslinger, P.W.; Miley, H.S.

    2016-01-01

    Any atmospheric monitoring network will have practical limitations in the density of its sampling stations. The classical approach to network optimization has been to have 12 or 24-h integration of air samples at the highest station density possible to improve minimum detectable concentrations. The authors present here considerations on optimizing sampler integration time to make the best use of any network and maximize the likelihood of collecting quality samples at any given location. In particular, this work makes the case that shorter duration sample integration (i.e. <12 h) enhances critical isotopic information and improves the source location capability of a radionuclide network, or even just one station. (author)

  19. Link Performance Analysis and monitoring - A unified approach to divergent requirements

    Science.gov (United States)

    Thom, G. A.

    Link Performance Analysis and real-time monitoring are generally covered by a wide range of equipment. Bit Error Rate testers provide digital link performance measurements but are not useful during real-time data flows. Real-time performance monitors utilize the fixed overhead content but vary widely from format to format. Link quality information is also available from signal reconstruction equipment in the form of receiver AGC, bit synchronizer AGC, and bit synchronizer soft-decision level outputs, but no general approach to utilizing this information exists. This paper presents an approach to link tests, real-time data quality monitoring, and results presentation that utilizes a set of general-purpose modules in a flexible architectural environment. The system operates over a wide range of bit rates (up to 150 Mb/s) and employs several measurement techniques, including P/N code errors or fixed PCM format errors, derived real-time BER from frame sync errors, and Data Quality Analysis derived by counting significant sync status changes. The architecture performs with a minimum of elements in place to permit a phased update of the user's unit in accordance with his needs.

  20. Scientific and technological basis for maintenance optimization, planning, testing and monitoring for NPP with WWER

    International Nuclear Information System (INIS)

    Kovrizhkin, Yu.L.; Skalozubov, V.I.; Kochneva, V.Yu.

    2009-01-01

    The main results of developments aimed at increasing the production efficiency of NPPs with WWER reactors through optimized maintenance planning, testing and monitoring of equipment and systems are presented. Attention is paid to metal inspection during the power unit maintenance period. Methods for implementing the concept of transition to condition-based maintenance are given.

  1. The impact of clinical trial monitoring approaches on data integrity and cost--a review of current literature.

    Science.gov (United States)

    Olsen, Rasmus; Bihlet, Asger Reinstrup; Kalakou, Faidra; Andersen, Jeppe Ragnar

    2016-04-01

    Monitoring is a costly requirement when conducting clinical trials. New regulatory guidance encourages the industry to consider alternative monitoring methods to the traditional 100% source data verification (SDV) approach. The purpose of this literature review is to provide an overview of publications on different monitoring methods and their impact on subject safety, data integrity, and monitoring cost. The literature search was performed by keyword searches in MEDLINE and a hand search of key journals. All publications were reviewed for details on how a monitoring approach impacted subject safety, data integrity, or monitoring costs. Twenty-two publications were identified. Three publications showed that SDV has some value for detecting initially unreported adverse events and that centralized statistical monitoring (CSM) captures atypical trends. Fourteen publications showed little objective evidence of improved data integrity with traditional monitoring such as 100% SDV and sponsor queries compared to reduced SDV, CSM, and remote monitoring. Eight publications suggested a potential for significant reductions in monitoring cost by reducing SDV without compromising the validity of the trial results. One hundred percent SDV is not a rational method of ensuring data integrity and subject safety given its high cost, and this literature review indicates that reduced SDV is a viable monitoring method. Alternative methods such as centralized monitoring utilizing statistical tests are promising but have limitations as stand-alone tools. Reduced SDV combined with a centralized, risk-based approach may be the ideal solution for reducing monitoring costs while improving essential data quality.

  2. Optimization of Investment Planning Based on Game-Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Elena Vladimirovna Butsenko

    2018-03-01

    Full Text Available The game-theoretic approach has vast potential for solving economic problems. On the other hand, the theory of games itself can be enriched by studies of real decision-making problems. Hence, this study is aimed at developing and testing a game-theoretic technique to optimize the management of investment planning. The technique makes it possible to forecast the results and manage the processes of investment planning, and the proposed method allows the choice of the best development strategy for an enterprise. It uses the "game with nature" model, with the Wald, maximax and Hurwicz criteria as decision criteria. The article presents a new algorithm for constructing the proposed econometric method to optimize investment project management. This algorithm combines the methods of matrix games, and its implementation is shown in a block diagram. The algorithm includes the formation of initial data and the elements of the payment matrix, as well as the determination of maximin, maximax, compromise and optimal management strategies. The methodology is tested on the example of the passenger transportation enterprise of the Sverdlovsk Railway in Ekaterinburg. The application of the proposed methodology and the corresponding algorithm yielded an optimal price strategy for transporting passengers in one direction of traffic; this price strategy contributes to an increase in the company's income with minimal risk from the launch of this direction. The obtained results and conclusions show the effectiveness of the developed methodology for optimizing the management of investment processes in the enterprise. The results of the research can serve as a basis for developing an appropriate tool and can be applied by any economic entity in its investment activities.
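
    The decision criteria named in the abstract can be sketched on a small "game with nature" payoff matrix. The strategies, states of nature and payoff values below are hypothetical; this shows only the classical criteria, not the paper's full algorithm.

```python
# Classical decision criteria for a "game with nature": keys are
# strategies, values are payoffs per state of nature (hypothetical).

def wald(payoffs):
    """Wald (maximin): strategy with the best worst-case payoff."""
    return max(payoffs, key=lambda s: min(payoffs[s]))

def maximax(payoffs):
    """Optimistic criterion: strategy with the best best-case payoff."""
    return max(payoffs, key=lambda s: max(payoffs[s]))

def hurwicz(payoffs, alpha=0.6):
    """Hurwicz compromise: weight best case by alpha, worst by 1 - alpha."""
    return max(payoffs,
               key=lambda s: alpha * max(payoffs[s]) + (1 - alpha) * min(payoffs[s]))

# Hypothetical pricing strategies vs. demand scenarios (low/mid/high).
payoffs = {
    "low_price":  [80, 90, 100],
    "mid_price":  [60, 110, 140],
    "high_price": [20, 100, 180],
}
print(wald(payoffs), maximax(payoffs), hurwicz(payoffs, 0.6))
```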

  3. Integrating Modeling and Monitoring to Provide Long-Term Control of Contaminants

    International Nuclear Information System (INIS)

    Fogwell, Th.

    2009-01-01

    An introduction is presented to the types of problems that exist for long-term control of radionuclides at DOE sites. A breakdown of the contaminant distributions at specific sites is given, together with the associated difficulties. A paradigm for remediation showing the integration of monitoring with modeling is presented; it is based on a feedback system in which the monitoring acts as the principal sensor of a control system. Currently, very prescriptive monitoring programs lack a mechanism for improving models and thereby improving control of the contaminants. The proposed system can be optimized to improve performance: optimizing monitoring automatically entails linking the monitoring with modeling, so if monitoring designs were required to be more efficient, and thus optimized, the monitoring would automatically become linked to modeling. Records of decision could be written to accommodate revisions in monitoring as better modeling evolves. The technical pieces of the required paradigm are already available; they just need to be implemented and applied to solve the long-term control of the contaminants. An integration of the various parts of the system is presented; each part is described, and examples are given. References are given to other projects which bring together similar elements in systems for the control of contaminants. Trends are given for the development of the technical features of a robust system. Examples of monitoring methods for specific sites are given and used to illustrate how such a system would work. Examples of technology needs are presented. Finally, other examples of integrated modeling-monitoring approaches are presented. (authors)

  4. Log-Optimal Portfolio Selection Using the Blackwell Approachability Theorem

    OpenAIRE

    V'yugin, Vladimir

    2014-01-01

    We present a method for constructing the log-optimal portfolio using well-calibrated forecasts of market values. Dawid's notion of calibration and the Blackwell approachability theorem are used for computing well-calibrated forecasts, and a portfolio is selected using this "artificial" probability distribution of market values. Our portfolio performs asymptotically at least as well as any stationary portfolio that redistributes the investment at each round using a continuous function of side information.
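
    The log-optimal (growth-optimal) criterion behind the paper can be sketched as maximizing the expected logarithm of wealth growth under a forecast distribution. The calibrated-forecast construction itself is not reproduced here; the scenario returns and the simple grid search below are illustrative assumptions.

```python
# Sketch of the log-optimal criterion: choose portfolio weights that
# maximize the mean log of the portfolio's gross return over forecast
# scenarios. Scenario returns and the grid search are illustrative.
from math import log

def log_optimal_weight(scenarios, steps=1000):
    """Fraction w in asset 1 (rest in asset 2) maximizing mean log growth."""
    best_w, best_g = 0.0, float("-inf")
    for i in range(steps + 1):
        w = i / steps
        g = sum(log(w * r1 + (1 - w) * r2) for r1, r2 in scenarios) / len(scenarios)
        if g > best_g:
            best_w, best_g = w, g
    return best_w, best_g

# Gross returns (risky asset, cash) in three equally likely scenarios.
scenarios = [(1.8, 1.0), (1.0, 1.0), (0.5, 1.0)]
w, g = log_optimal_weight(scenarios)
print(round(w, 3), round(g, 4))
```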

  5. An Innovative Approach for online Meta Search Engine Optimization

    OpenAIRE

    Manral, Jai; Hossain, Mohammed Alamgir

    2015-01-01

    This paper presents an approach to identify efficient techniques used in Web Search Engine Optimization (SEO). Understanding the SEO factors which can influence page ranking in a search engine is significant for webmasters who wish to attract a large number of users to their website. Differing from previous relevant research, in this study we developed an intelligent meta search engine which aggregates results from various search engines and ranks them based on several important SEO parameters.

  6. Data driven approaches for diagnostics and optimization of NPP operation

    International Nuclear Information System (INIS)

    Pliska, J.; Machat, Z.

    2014-01-01

    The efficiency and heat rate are important indicators of both the health of power plant equipment and the quality of power plant operation. A powerful tool for addressing these challenges is statistical processing of the large data sets stored in data historians, which contain useful information about process quality and about equipment and sensor health. The paper discusses data-driven approaches to model building for the main power plant equipment, such as the condenser and cooling tower, as well as the overall thermal cycle, using multivariate regression techniques based on the so-called regression triplet: data, model and method. Regression models form a basis for diagnostics and optimization tasks. These tasks are demonstrated on practical cases: diagnostics of main power plant equipment to identify equipment faults early, and optimization of the cooling circuit by cooling water flow control to achieve the highest power output for given boundary conditions. (authors)
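
    The regression-based diagnostic idea can be sketched as: fit a model of expected equipment behavior from healthy-operation data, then flag measurements whose residual is large. The condenser variables, synthetic data and 3-sigma rule below are illustrative assumptions, not the paper's actual models.

```python
# Sketch of data-driven equipment diagnostics: regress condenser
# pressure on cooling water inlet temperature and unit load over
# healthy data, then flag large residuals. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic healthy training data.
temp = rng.uniform(10, 30, 200)
load = rng.uniform(0.5, 1.0, 200)
pressure = 2.0 + 0.15 * temp + 1.2 * load + rng.normal(0, 0.05, 200)

X = np.column_stack([np.ones_like(temp), temp, load])
beta, *_ = np.linalg.lstsq(X, pressure, rcond=None)
sigma = np.std(pressure - X @ beta)

def is_anomalous(t, l, p, k=3.0):
    """True if measured pressure deviates more than k*sigma from the model."""
    expected = beta @ np.array([1.0, t, l])
    return abs(p - expected) > k * sigma

print(is_anomalous(20.0, 0.8, 5.96))   # consistent with the healthy model
print(is_anomalous(20.0, 0.8, 6.5))    # e.g. a fouled condenser
```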

  7. A Hybrid Harmony Search Algorithm Approach for Optimal Power Flow

    Directory of Open Access Journals (Sweden)

    Mimoun YOUNES

    2012-08-01

    Full Text Available Optimal Power Flow (OPF) is one of the main functions of power system operation. It determines the optimal settings of generating units, bus voltages, transformer taps and shunt elements in a power system, with the objective of minimizing total production costs or losses while the system operates within its security limits. The aim of this paper is to propose a novel methodology (BCGAs-HSA) that solves OPF including both active and reactive power dispatch. It is based on combining the binary-coded genetic algorithm (BCGAs) and the harmony search algorithm (HSA) to determine the global optimal solution. This method was tested on the modified IEEE 30-bus test system, and the results are compared with those obtained with BCGAs or HSA separately. The comparison shows that the BCGAs-HSA approach converges to the optimum solution with better accuracy than methods reported recently in the literature.

  8. Multiobjective RFID Network Optimization Using Multiobjective Evolutionary and Swarm Intelligence Approaches

    Directory of Open Access Journals (Sweden)

    Hanning Chen

    2014-01-01

    Full Text Available The development of radio frequency identification (RFID) technology gives rise to the challenging RFID network planning (RNP) problem, which needs to be solved in order to operate large-scale RFID networks in an optimal fashion. RNP involves many objectives and constraints and has been proven to be an NP-hard multiobjective problem. The application of evolutionary algorithms (EA) and swarm intelligence (SI) to solving multiobjective RNP (MORNP) has gained significant attention in the literature, but these algorithms usually transform the multiple objectives into a single objective via a weighted-coefficient approach. In this paper, we use multiobjective EA and SI algorithms to find all the Pareto optimal solutions and to achieve the optimal planning solutions by simultaneously optimizing four conflicting objectives in MORNP, instead of transforming the multiobjective functions into a single objective function. The experiments present an exhaustive comparison of three successful multiobjective EA and SI algorithms, namely, the recently developed multiobjective artificial bee colony algorithm (MOABC), the nondominated sorting genetic algorithm II (NSGA-II), and multiobjective particle swarm optimization (MOPSO), on MORNP instances of different nature, namely, two-objective and three-objective MORNP. Simulation results show that MOABC is superior to NSGA-II and MOPSO for planning RFID networks in terms of optimization accuracy and computational robustness.
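
    The Pareto-optimality concept shared by MOABC, NSGA-II and MOPSO can be sketched as a nondominated filter: keep only solutions not dominated by any other. Both objectives below are minimized, and the sample objective vectors (e.g. coverage gap vs. interference) are hypothetical.

```python
# Minimal nondominated (Pareto) filter over two minimized objectives.
# The sample objective vectors are hypothetical.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

solutions = [(1.0, 9.0), (2.0, 7.0), (3.0, 8.0), (4.0, 4.0), (6.0, 3.0), (7.0, 5.0)]
print(pareto_front(solutions))
```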

  9. Numerical optimization approach for resonant electromagnetic vibration transducer designed for random vibration

    International Nuclear Information System (INIS)

    Spreemann, Dirk; Hoffmann, Daniel; Folkmer, Bernd; Manoli, Yiannos

    2008-01-01

    This paper presents a design and optimization strategy for resonant electromagnetic vibration energy harvesting devices. An analytic expression for the magnetic field of cylindrical permanent magnets is used to build an electromagnetic subsystem model. This subsystem is used to find the optimal resting position of the oscillating mass and to optimize the geometrical parameters (shape and size) of the magnet and coil; the objective function is the maximum voltage output of the transducer. An additional mechanical subsystem model, based on the well-known equations describing the dynamics of spring–mass–damper systems, is established to simulate both nonlinear spring characteristics and the effect of internal limit stops. The mechanical subsystem enables the identification of optimal spring characteristics for realistic operating conditions such as stochastic vibrations. With the overall transducer model, a combination of both subsystems connected to a simple electrical circuit, a virtual operation of the optimized vibration transducer excited by a measured random acceleration profile can be performed. It is shown that the optimization approach results in an appreciable increase in converter performance.
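
    The mechanical subsystem described in the abstract can be sketched as a base-excited spring-mass-damper oscillator with hard internal limit stops, where the induced voltage is taken as proportional to the relative velocity through an electromagnetic coupling factor. All parameter values, the sinusoidal excitation and the simple stop model below are illustrative assumptions, not the paper's.

```python
# Toy base-excited spring-mass-damper harvester with limit stops;
# induced voltage = kt * relative velocity. Parameters are illustrative.
from math import pi, sin

def simulate(m=0.01, c=0.05, k=100.0, kt=2.0, z_max=0.005,
             a0=10.0, f_hz=None, dt=1e-4, t_end=1.0):
    """Semi-implicit Euler integration; returns peak induced voltage."""
    wn = (k / m) ** 0.5                    # natural frequency (rad/s)
    w = 2 * pi * f_hz if f_hz else wn      # excite at resonance by default
    z = v = v_peak = t = 0.0
    while t < t_end:
        a = -a0 * sin(w * t) - (c / m) * v - (k / m) * z
        v += a * dt
        z += v * dt
        if abs(z) > z_max:                 # limit stop: clip travel, kill velocity
            z = z_max if z > 0 else -z_max
            v = 0.0
        v_peak = max(v_peak, abs(kt * v))
        t += dt
    return v_peak

# Limit stops cap the achievable output relative to unconstrained travel.
print(simulate(), simulate(z_max=1.0))
```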

  10. An Airway Network Flow Assignment Approach Based on an Efficient Multiobjective Optimization Framework

    Directory of Open Access Journals (Sweden)

    Xiangmin Guan

    2015-01-01

    Full Text Available Aiming to reduce airspace congestion and flight delay simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolving populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied to each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems which are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid premature convergence, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using real traffic data from the China air route network and daily flight plans demonstrate that the proposed approach improves solution quality effectively, showing superiority over existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm, as well as other parallel evolution algorithms with different migration topologies.

  11. Path Planning of Mobile Elastic Robotic Arms by Indirect Approach of Optimal Control

    Directory of Open Access Journals (Sweden)

    Moharam Habibnejad Korayem

    2011-03-01

    Full Text Available Finding an optimal trajectory is critical in several applications of robot manipulators. This paper applies the open-loop optimal control approach to generating optimal trajectories of flexible mobile manipulators in point-to-point motion. The method is based on Pontryagin's minimum principle, which recasts the problem as a two-point boundary value problem. The problem is known to be complex, in particular when the combined motion of the base and manipulator, the non-holonomic constraint of the base, and the highly nonlinear and complicated dynamic equations resulting from the flexible nature of the links are taken into account. The study emphasizes modeling of the complete optimal control problem, retaining all nonlinear state and costate variables as well as control constraints. In this method, the designer can compromise between different objectives by choosing appropriate penalty matrices, which makes it possible to select the proper trajectory among various paths. The effectiveness and capability of the proposed approach are demonstrated through simulation studies. Finally, to verify the proposed method, the simulation results obtained from the model are compared with results available in the literature.

  12. Probabilistic safety assessment and optimal control of hazardous technological systems. A marked point process approach

    Energy Technology Data Exchange (ETDEWEB)

    Holmberg, J [VTT Automation, Espoo (Finland)

    1997-04-01

    The thesis models risk management as an optimal control problem for a stochastic process. The approach classifies the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used for optimization of plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant. 62 refs. The thesis also includes five previous publications by the author.

  13. Evolutionary algorithms approach for integrated bioenergy supply chains optimization

    International Nuclear Information System (INIS)

    Ayoub, Nasser; Elmoshi, Elsayed; Seki, Hiroya; Naka, Yuji

    2009-01-01

    In this paper, we propose an optimization model and solution approach for designing and evaluating integrated bioenergy production supply chains (SC) at the local level. Designing an SC that simultaneously utilizes a set of bio-resources together is the complicated task considered here. The complication arises from the different nature and sources of bio-resources used in bioenergy production, i.e., wet, dry, agricultural, industrial, etc. Moreover, the different concerns that decision makers should take into account to overcome the trade-off anxieties of socialists and investors, i.e., social, environmental and economic factors, were considered through the options of multi-criteria optimization. The first part of this research was introduced in earlier work explaining the general Bioenergy Decision System gBEDS [Ayoub N, Martins R, Wang K, Seki H, Naka Y. Two levels decision system for efficient planning and implementation of bioenergy production. Energy Convers Manage 2007;48:709-23]. In this paper, a brief introduction to and recapitulation of gBEDS are given, the optimization model is presented, and a case study follows on designing a supply chain of nine bio-resources at Iida city in the middle part of Japan.

  14. Practical approaches for self-monitoring of blood glucose: an Asia-Pacific perspective.

    Science.gov (United States)

    Chowdhury, Subhankar; Ji, Linong; Suwanwalaikorn, Sompongse; Yu, Neng-Chun; Tan, Eng Kiat

    2015-03-01

    Comprehensive glycemic control is necessary to improve outcomes and avoid complications in individuals with diabetes. Self-monitoring of blood glucose (SMBG) is a key enabler of glycemic assessment, providing real-time information that complements HbA1c monitoring and supports treatment optimization. However, SMBG is under-utilized by patients and physicians within the Asia-Pacific region, because of barriers such as the cost of monitoring supplies, lack of diabetes self-management skills, or concerns about the reliability of blood glucose readings. Practice recommendations in international and regional guidelines vary widely, and may not be detailed or specific enough to guide SMBG use effectively. This contributes to uncertainty among patients and physicians about how best to utilize this tool: when and how often to test, and what action(s) to take in response to high or low readings. In developing a practical SMBG regimen, the first step is to determine the recommended SMBG frequency and intensity needed to support the chosen treatment regimen. If there are practical obstacles to monitoring, such as affordability or access, physicians should identify the most important aspects of glycemic control to target for individual patients, and modify monitoring patterns accordingly. This consensus paper proposes a selection of structured, flexible SMBG patterns that can be tailored to the clinical, educational, behavioral, and financial requirements of individuals with diabetes.

  15. Optimal control approaches for aircraft conflict avoidance using speed regulation : a numerical study

    OpenAIRE

    Cellier , Loïc; Cafieri , Sonia; Messine , Frederic

    2013-01-01

    In this paper a numerical study is provided to solve the aircraft conflict avoidance problem through velocity regulation maneuvers. Starting from optimal control-based models and approaches in which aircraft accelerations are the controls, and by applying the direct shooting technique, we propose to study two different large-scale nonlinear optimization problems. In order to compare different possibilities of implementation, two environments (AMPL and MATLAB) and determin...

  16. Performance Monitoring Techniques Supporting Cognitive Optical Networking

    DEFF Research Database (Denmark)

    Caballero Jambrina, Antonio; Borkowski, Robert; Zibar, Darko

    2013-01-01

    The high degree of heterogeneity of future optical networks, such as services with different quality-of-transmission requirements, modulation formats and switching techniques, will pose a challenge for the control and optimization of different parameters. Incorporation of cognitive techniques can help to solve this issue by realizing a network that can observe, act, learn and optimize its performance, taking into account end-to-end goals. In this letter we present the approach of cognition applied to heterogeneous optical networks developed in the framework of the EU project CHRON: Cognitive Heterogeneous Reconfigurable Optical Network. We focus on the approaches developed in the project for optical performance monitoring, which enable feedback from the physical layer to the cognitive decision system by providing an accurate description of the performance of the established lightpaths.

  17. Modeling and analysis of a decentralized electricity market: An integrated simulation/optimization approach

    International Nuclear Information System (INIS)

    Sarıca, Kemal; Kumbaroğlu, Gürkan; Or, Ilhan

    2012-01-01

    In this study, a model is developed to investigate the implications of an hourly day-ahead competitive power market for generator profits, electricity prices, availability and supply security. An integrated simulation/optimization approach is employed, coupling a multi-agent simulation model with two alternative optimization models. The simulation model represents interactions between power generator, system operator, power user and power transmitter agents, while the network flow optimization model oversees and optimizes the electricity flows and dispatches generators based on two alternative representations of the underlying transmission network: a linear minimum cost network flow model and a nonlinear alternating current optimal power flow model. Supply, demand, transmission, capacity and other technological constraints are thereby enforced. The transmission network on which the scenario analyses are carried out includes 30 buses, 41 lines, 9 generators, and 21 power users. The scenarios examined in the analysis cover various settings of transmission line capacities/fees and hourly learning algorithms. Results provide insight into key behavioral and structural aspects of a decentralized electricity market under network constraints and reveal the importance of using an AC network model instead of a simplified linear network flow approach. -- Highlights: ► An agent-based simulation model with an AC transmission environment and a day-ahead market. ► Physical network parameters have dramatic effects on price levels and stability. ► Due to the AC nature of the transmission network, adaptive agents have more local market power than under the minimum cost network flow model. ► The behavior of the generators has a significant effect on market price formation, as reflected in bidding strategies. ► Transmission line capacity and fee policies are found to be very effective in price formation in the market.

  18. Developing a univariate approach to phase-I monitoring of fuzzy quality profiles

    Directory of Open Access Journals (Sweden)

    Kazem Noghondarian

    2012-10-01

    Full Text Available In many real-world applications, the quality of a process or a particular product can be characterized by a functional relationship called a profile. A profile describes the relationship between a response quality characteristic and one or more explanatory variables. Monitoring the quality of a profile is implemented to understand and to verify the stability of this functional relationship over time. In some real applications, where the response quality characteristic is fuzzy, a fuzzy linear regression model can represent the profile adequately. The purpose of this paper is to develop an approach for monitoring process/product profiles in a fuzzy environment. A fuzzy linear regression model is developed to construct the quality profiles using linear programming, and fuzzy individuals and moving-range (I-MR) control charts are then developed to monitor both the intercept and the slope of the fuzzy profiles to achieve an in-control process. A case study in customer satisfaction is presented to show the application of our approach and to present a sensitivity analysis of the parameters used for building a fuzzy profile.
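
    The individuals and moving-range (I-MR) chart used for the phase-I monitoring can be sketched with its standard constants (d2 = 1.128 and D4 = 3.267 for subgroups of size 2); the crisp sample data below (fitted profile intercepts over time) are hypothetical, and the fuzzy extension of the paper is not reproduced.

```python
# I-MR control chart limits from the mean moving range, using the
# standard constants d2 = 1.128 and D4 = 3.267 for subgroups of size 2.
# The sample data are hypothetical.

def imr_limits(x):
    """Return ((I-chart LCL, UCL), MR-chart UCL) for individuals data."""
    mr = [abs(b - a) for a, b in zip(x, x[1:])]
    mr_bar = sum(mr) / len(mr)
    x_bar = sum(x) / len(x)
    sigma = mr_bar / 1.128              # d2 for n = 2
    return (x_bar - 3 * sigma, x_bar + 3 * sigma), 3.267 * mr_bar

intercepts = [5.1, 5.3, 4.9, 5.2, 5.0, 5.4, 5.1, 6.8]   # last point drifts
(i_lcl, i_ucl), mr_ucl = imr_limits(intercepts)
out = [v for v in intercepts if not i_lcl <= v <= i_ucl]
print(out)
```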

  19. A unified modeling approach for physical experiment design and optimization in laser driven inertial confinement fusion

    Energy Technology Data Exchange (ETDEWEB)

    Li, Haiyan [Mechatronics Engineering School of Guangdong University of Technology, Guangzhou 510006 (China); Huang, Yunbao, E-mail: Huangyblhy@gmail.com [Mechatronics Engineering School of Guangdong University of Technology, Guangzhou 510006 (China); Jiang, Shaoen, E-mail: Jiangshn@vip.sina.com [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China); Jing, Longfei, E-mail: scmyking_2008@163.com [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China); Tianxuan, Huang; Ding, Yongkun [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China)

    2015-11-15

    Highlights: • A unified modeling approach for physical experiment design is presented. • Any laser facility can be flexibly defined and included with two scripts. • Complex targets and laser beams can be parametrically modeled for optimization. • Automatic mapping of laser beam energy facilitates target shape optimization. - Abstract: Physical experiment design and optimization is essential for laser driven inertial confinement fusion due to the high cost of each shot. However, only limited experiments with simple structures or shapes can be designed and evaluated in available codes for a few laser facilities, and targets are usually defined by programming, which makes the design and optimization of complex-shape targets on arbitrary laser facilities difficult. A unified modeling approach for physical experiment design and optimization on any laser facility is presented in this paper. Its core ideas are: (1) any laser facility can be flexibly defined and included with two scripts, (2) complex-shape targets and laser beams can be parametrically modeled based on features, (3) an automatic scheme for mapping laser beam energy onto the discrete mesh elements of targets enables targets and laser beams to be optimized without any additional interactive modeling or programming, and (4) efficient computation algorithms are additionally presented to evaluate radiation symmetry on the target. Finally, examples are demonstrated to validate the significance of such a unified modeling approach for physical experiment design and optimization in laser driven inertial confinement fusion.

  20. Topology optimization approaches

    DEFF Research Database (Denmark)

    Sigmund, Ole; Maute, Kurt

    2013-01-01

    Topology optimization has undergone a tremendous development since its introduction in the seminal paper by Bendsøe and Kikuchi in 1988. By now, the concept is developing in many different directions, including “density”, “level set”, “topological derivative”, “phase field”, “evolutionary...

  1. A perturbed martingale approach to global optimization

    Energy Technology Data Exchange (ETDEWEB)

    Sarkar, Saikat [Computational Mechanics Lab, Department of Civil Engineering, Indian Institute of Science, Bangalore 560012 (India); Roy, Debasish, E-mail: royd@civil.iisc.ernet.in [Computational Mechanics Lab, Department of Civil Engineering, Indian Institute of Science, Bangalore 560012 (India); Vasu, Ram Mohan [Department of Instrumentation and Applied Physics, Indian Institute of Science, Bangalore 560012 (India)

    2014-08-01

    A new global stochastic search, guided mainly through derivative-free directional information computable from the sample statistical moments of the design variables within a Monte Carlo setup, is proposed. The search is aided by imparting to the directional update term additional layers of random perturbations referred to as ‘coalescence’ and ‘scrambling’. A selection step, constituting yet another avenue for random perturbation, completes the global search. The direction-driven nature of the search is manifest in the local extremization and coalescence components, which are posed as martingale problems that yield gain-like update terms upon discretization. As anticipated and numerically demonstrated, to a limited extent, against the problem of parameter recovery given the chaotic response histories of a couple of nonlinear oscillators, the proposed method appears to offer a more rational, more accurate and faster alternative to most available evolutionary schemes, prominently the particle swarm optimization. - Highlights: • Evolutionary global optimization is posed as a perturbed martingale problem. • Resulting search via additive updates is a generalization over Gateaux derivatives. • Additional layers of random perturbation help avoid trapping at local extrema. • The approach ensures efficient design space exploration and high accuracy. • The method is numerically assessed via parameter recovery of chaotic oscillators.

  2. Optimal design and management of chlorination in drinking water networks: a multi-objective approach using Genetic Algorithms and the Pareto optimality concept

    Science.gov (United States)

    Nouiri, Issam

    2017-11-01

    This paper presents the development of multi-objective Genetic Algorithms to optimize chlorination design and management in drinking water networks (DWN). Three objectives have been considered: improving chlorination uniformity (health objective) and minimizing both the number of chlorine booster stations and the injected chlorine mass (economic objectives). The problem has been split into a medium-term and a short-term one. The proposed methodology was tested on hypothetical and real DWNs. Results proved the ability of the developed optimization tool to identify the relationships between the health and economic objectives as Pareto fronts. The proposed approach was efficient in computing solutions that ensure better chlorination uniformity while requiring the smallest injected chlorine mass, when compared to other approaches. For the real DWN studied, chlorination optimization achieved a great improvement in free-chlorine-dosing uniformity and a meaningful reduction in chlorine mass in comparison with conventional chlorination.

  3. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha

    2012-10-04

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to length or coverage. We introduce an uncertainty measure J(T), which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". The algorithm stops partitioning a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules, and this set of rules can be optimized according to length or coverage. The paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.
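
    The uncertainty measure J(T) defined in the abstract can be sketched directly: the number of rows of a decision table minus the number of rows carrying the most common decision; a subtable with J(T) ≤ γ needs no further partitioning. The tiny decision table below is hypothetical.

```python
# Uncertainty measure J(T) = #rows - #rows with the most common
# decision (last column). The sample table is hypothetical.
from collections import Counter

def uncertainty(table):
    """J(T) for a decision table given as rows of (attributes..., decision)."""
    decisions = [row[-1] for row in table]
    return len(decisions) - Counter(decisions).most_common(1)[0][1]

T = [(0, 1, "yes"), (0, 0, "yes"), (1, 1, "yes"), (1, 0, "no"), (0, 1, "no")]
gamma = 2
print(uncertainty(T), uncertainty(T) <= gamma)
```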

  4. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2012-01-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to length or coverage. We introduce an uncertainty measure J(T), which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". The algorithm stops partitioning a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules, and this set of rules can be optimized according to length or coverage. The paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.

  5. Approaching direct optimization of as-built lens performance

    Science.gov (United States)

    McGuire, James P.; Kuper, Thomas G.

    2012-10-01

    We describe a method approaching direct optimization of the rms wavefront error of a lens, including tolerances. By including the effect of tolerances in the error function, the designer can choose to improve the as-built performance with a fixed set of tolerances and/or reduce the cost of production lenses with looser tolerances. The method relies on the speed of differential tolerance analysis and has recently become practical due to the combination of continuing increases in computer hardware speed and multi-core processing. We illustrate the method's use on a Cooke triplet, a double Gauss, and two plastic mobile phone camera lenses.

  6. A New Approach for Subway Tunnel Deformation Monitoring: High-Resolution Terrestrial Laser Scanning

    Science.gov (United States)

    Li, J.; Wan, Y.; Gao, X.

    2012-07-01

    With the improvement of the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can obtain highly precise point clouds with dense distribution and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges and other structures. In this paper, a new approach combining a point-cloud segmentation method based on vectors of neighbor points with a surface fitting method based on moving least squares was proposed and applied to subway tunnel deformation monitoring in Tianjin, using a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. Firstly, a point cloud consisting of several scans was registered by a linearized iterative least-squares approach to improve the accuracy of registration, and several control points were acquired by total stations (TS) and then adjusted. Secondly, the registered point cloud was resampled and segmented based on vectors of neighbor points to select suitable points. Thirdly, the selected points were used to fit the subway tunnel surface with a moving least-squares algorithm. A series of parallel sections obtained from a temporal series of fitted tunnel surfaces were then compared to analyse the deformation. Finally, the results of the approach in the z direction were compared with those of a fiber-optic displacement sensor, and the results in the x and y directions were compared with TS; the comparison showed that the accuracy errors in the x, y and z directions were about 1.5 mm, 2 mm and 1 mm respectively. Therefore, the new approach using high-resolution TLS can meet the demands of subway tunnel deformation monitoring.

  7. Transfer of European Approach to Groundwater Monitoring in China

    Science.gov (United States)

    Zhou, Y.

    2007-12-01

    Major groundwater development in North China has been a key factor in the huge economic growth and the achievement of self-sufficiency in food production. Groundwater accounts for more than 70 percent of urban water supply and provides an important source of irrigation water during dry periods. This has, however, caused continuous groundwater level decline and many associated problems: hundreds of thousands of dry wells, dry river beds, land subsidence, seawater intrusion and groundwater quality deterioration. Groundwater levels in the shallow unconfined aquifers have fallen 10 m up to 50 m, at an average rate of 1 m/year. In the deep confined aquifers groundwater levels have commonly fallen 30 m up to 90 m, at an average rate of 3 to 5 m/year. Furthermore, elevated nitrate concentrations have been found in shallow groundwater over large areas, and pesticides have been detected in vulnerable aquifers. Urgent actions are necessary for aquifer recovery and mitigating groundwater pollution. Groundwater quantity and quality monitoring plays a very important role in formulating cost-effective groundwater protection strategies. In 2000 the European Union adopted the Water Framework Directive (2000/60/EC) to protect all waters in Europe, with the objective of achieving good water and ecological status by 2015 across all member states. The Directive requires monitoring of surface water and groundwater in all river basins, and a guidance document for monitoring was developed and published in 2003. Groundwater monitoring programs are divided into groundwater level monitoring and groundwater quality monitoring; the latter is further divided into surveillance monitoring and operational monitoring. The monitoring guidance specifies key principles for the design and operation of monitoring networks. A Sino-Dutch cooperation project was developed to transfer the European approach to groundwater monitoring in China. The project aims at building a China Groundwater Information Centre. Case studies ...

  8. Optimal Route Searching with Multiple Dynamical Constraints—A Geometric Algebra Approach

    Directory of Open Access Journals (Sweden)

    Dongshuang Li

    2018-05-01

    The process of searching for a dynamic constrained optimal path has received increasing attention in traffic planning, evacuation, and personalized or collaborative traffic service. As most existing multiple constrained optimal path (MCOP) methods cannot search for a path given various types of constraints that dynamically change during the search, few approaches for the dynamic multiple constrained optimal path (DMCOP) problem with type II dynamics are available for practical use. In this study, we develop a method to solve the DMCOP problem with type II dynamics based on the unification of various types of constraints under a geometric algebra (GA) framework. In our method, the network topology and three different types of constraints are represented by using algebraic base coding. With a parameterized optimization of the MCOP algorithm based on a greedy search strategy under the generation-refinement paradigm, this algorithm is found to accurately support the discovery of optimal paths as constraints of numerical values, nodes, and route structure types are dynamically added to the network. The algorithm was tested with simulated cases of optimal tourism route searches in China's road networks with various combinations of constraints. The case study indicates that our algorithm can not only solve the DMCOP problem with different types of constraints but also use constraints to speed up the route filtering.

  9. A Review of Player Monitoring Approaches in Basketball: Current Trends and Future Directions.

    Science.gov (United States)

    Fox, Jordan L; Scanlan, Aaron T; Stanton, Robert

    2017-07-01

    Fox, JL, Scanlan, AT, and Stanton, R. A review of player monitoring approaches in basketball: current trends and future directions. J Strength Cond Res 31(7): 2021-2029, 2017-Effective monitoring of players in team sports such as basketball requires an understanding of the external demands and internal responses, as they relate to training phases and competition. Monitoring of external demands and internal responses allows coaching staff to determine the dose-response associated with the imposed training load (TL), and subsequently, if players are adequately prepared for competition. This review discusses measures reported in the literature for monitoring the external demands and internal responses of basketball players during training and competition. The external demands of training and competition were primarily monitored using time-motion analysis, with limited use of microtechnology being reported. Internal responses during training were typically measured using hematological markers, heart rate, various TL models, and perceptual responses such as rating of perceived exertion (RPE). Heart rate was the most commonly reported indicator of internal responses during competition with limited reporting of hematological markers or RPE. These findings show a large discrepancy between the reporting of external and internal measures and training and competition demands. Microsensors, however, may be a practical and convenient method of player monitoring in basketball to overcome the limitations associated with current approaches while allowing for external demands and internal responses to be recorded simultaneously. The triaxial accelerometers of microsensors seem well suited for basketball and warrant validation to definitively determine their place in the monitoring of basketball players. Coaching staff should make use of this technology by tracking individual player responses across the annual plan and using real-time monitoring to minimize factors such as fatigue

  10. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    Science.gov (United States)

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.
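The per-feature scoring idea in this record (detection efficacy weighed against hardware cost) can be sketched in a few lines. The feature names and numbers below are invented for illustration, not taken from the paper:

```python
# Hypothetical sketch of the two-dimensional evaluation described above: each
# detection feature carries a detection-efficacy value and an estimated power
# cost, and features are ranked by efficacy per unit hardware cost. All names
# and numbers here are stand-ins, not the paper's measurements.
features = {
    # name: (detection efficacy in [0, 1], estimated power in microwatts)
    "line_length":   (0.90, 40.0),
    "signal_energy": (0.92, 120.0),
    "spike_count":   (0.75, 15.0),
}

def efficacy_per_cost(item):
    efficacy, power_uw = item[1]
    return efficacy / power_uw

best_feature = max(features.items(), key=efficacy_per_cost)[0]
```

A real co-design study would then combine the top-ranked features into a composite detector, as the abstract describes.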

  11. Sensor distributions for structural monitoring

    DEFF Research Database (Denmark)

    Ulriksen, Martin Dalgaard; Bernal, Dionisio

    2017-01-01

    Deciding on the spatial distribution of output sensors for vibration-based structural health monitoring (SHM) is a task that has been, and still is, studied extensively. Yet, when referring to the conventional damage characterization hierarchy, composed of detection, localization, and quantification, it is primarily the first component that has been addressed with regard to optimal sensor placement. In this particular context, a common approach is to distribute sensors, of which the amount is determined a priori, such that some scalar function of the probability of detection for a pre-defined set of damage patterns is maximized. Obviously, the optimal sensor distribution, in terms of damage detection, is algorithm-dependent, but studies have shown how correlation generally exists between the different strategies. However, it still remains a question how this "optimality" correlates...
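The placement strategy this record mentions can be sketched concretely. The scalar objective below (mean over damage patterns of the probability that at least one chosen sensor detects) and the independence of sensor detections are simplifying assumptions, not the paper's formulation:

```python
# Sketch: choose k sensor locations maximizing a scalar function of the
# probability of detection (PoD) over a pre-defined set of damage patterns.
# Assumes independent per-sensor detections and an averaged objective.
from itertools import combinations

def placement_objective(pod, subset):
    # Mean over damage patterns of P(at least one chosen sensor detects).
    n_patterns = len(next(iter(pod.values())))
    total = 0.0
    for d in range(n_patterns):
        p_miss = 1.0
        for s in subset:
            p_miss *= 1.0 - pod[s][d]
        total += 1.0 - p_miss
    return total / n_patterns

def best_sensor_set(pod, k):
    return max(combinations(sorted(pod), k),
               key=lambda subset: placement_objective(pod, subset))

# pod[s][d] = probability sensor s detects damage pattern d (invented values).
pod = {"s1": [0.9, 0.1], "s2": [0.1, 0.9], "s3": [0.5, 0.5]}
```

With these toy numbers the complementary pair ("s1", "s2") beats any pairing with the mediocre all-rounder "s3", which is the point of optimizing over the whole damage-pattern set rather than per pattern.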

  12. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noë l; Lastovetsky, Alexey

    2014-01-01

    -scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel

  13. A Hybrid Approach to the Optimization of Multiechelon Systems

    Directory of Open Access Journals (Sweden)

    Paweł Sitek

    2015-01-01

    In freight transportation there are two main distribution strategies: direct shipping and multiechelon distribution. In direct shipping, vehicles, starting from a depot, bring their freight directly to the destination, while in multiechelon systems, freight is delivered from the depot to the customers through intermediate points. Multiechelon systems are particularly useful for logistic issues in a competitive environment. The paper presents a concept and application of a hybrid approach to modeling and optimization of the Multi-Echelon Capacitated Vehicle Routing Problem. Two paradigms, mathematical programming (MP) and constraint logic programming (CLP), are integrated in one environment. MP and CLP treat constraints in different ways and implement different methods; the two are combined to exploit the strengths of both. The proposed approach is particularly important for discrete decision models with an objective function and many discrete decision variables added up in multiple constraints. An implementation of the hybrid approach in the ECLiPSe system using the Eplex library is presented. The Two-Echelon Capacitated Vehicle Routing Problem (2E-CVRP) and its variants are shown as an illustrative example of the hybrid approach. The presented hybrid approach is compared with classical mathematical programming on the same benchmark data sets.

  14. A deterministic approach for performance assessment and optimization of power distribution units in Iran

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Omrani, H.

    2009-01-01

    This paper presents a deterministic approach for performance assessment and optimization of power distribution units in Iran. The deterministic approach is composed of data envelopment analysis (DEA), principal component analysis (PCA) and correlation techniques. Seventeen electricity distribution units have been considered for the purpose of this study. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. However, this study considers an integrated deterministic DEA-PCA approach since the DEA model should be verified and validated by a robust multivariate methodology such as PCA. Moreover, the DEA models are verified and validated by PCA, Spearman and Kendall's Tau correlation techniques, while previous studies do not have the verification and validation features. Also, both input- and output-oriented DEA models are used for sensitivity analysis of the input and output variables. Finally, this is the first study to present an integrated deterministic approach for assessment and optimization of power distributions in Iran

  15. A novel linear programming approach to fluence map optimization for intensity modulated radiation therapy treatment planning

    International Nuclear Information System (INIS)

    Romeijn, H Edwin; Ahuja, Ravindra K; Dempsey, James F; Kumar, Arvind; Li, Jonathan G

    2003-01-01

    We present a novel linear programming (LP) based approach for efficiently solving the intensity modulated radiation therapy (IMRT) fluence-map optimization (FMO) problem to global optimality. Our model overcomes the apparent limitations of a linear-programming approach by approximating any convex objective function by a piecewise linear convex function. This approach allows us to retain the flexibility offered by general convex objective functions, while allowing us to formulate the FMO problem as an LP problem. In addition, a novel type of partial-volume constraint that bounds the tail averages of the differential dose-volume histograms of structures is imposed while retaining linearity as an alternative approach to improve dose homogeneity in the target volumes, and to attempt to spare as many critical structures as possible. The goal of this work is to develop a very rapid global optimization approach that finds high quality dose distributions. Implementation of this model has demonstrated excellent results. We found globally optimal solutions for eight 7-beam head-and-neck cases in less than 3 min of computational time on a single-processor personal computer without the use of partial-volume constraints. Adding such constraints increased the running times by a factor of 2-3, but improved the sparing of critical structures. All cases demonstrated excellent target coverage (>95%), target homogeneity (<10% overdosing and <7% underdosing) and organ sparing using at least one of the two models.
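The piecewise-linear device that keeps the model above linear can be sketched in one dimension. Everything here (the toy objective f(x) = x², the tangent points, the feasible interval) is invented for illustration; a real FMO model would hand the same linear pieces to an LP solver as epigraph constraints t ≥ m·x + b:

```python
# One-dimensional sketch of replacing a convex objective by a piecewise
# linear convex function: the maximum of tangent lines under-approximates
# f(x) = x**2, and minimizing that maximum stays a linear problem.

def tangent(a):
    # Tangent line of f(x) = x**2 at x = a: slope 2a, intercept -a**2.
    return (2.0 * a, -a * a)

def pwl(lines, x):
    # The piecewise-linear under-approximation: max over the tangent lines.
    return max(m * x + b for m, b in lines)

def minimize_pwl(lines, lo, hi):
    # A 1-D max-of-lines function attains its minimum at an interval end or
    # at an intersection of two lines, so checking those candidates suffices.
    candidates = [lo, hi]
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (m1, b1), (m2, b2) = lines[i], lines[j]
            if m1 != m2:
                x = (b2 - b1) / (m1 - m2)
                if lo <= x <= hi:
                    candidates.append(x)
    return min(candidates, key=lambda x: pwl(lines, x))

lines = [tangent(a) for a in (0.0, 1.0, 2.0)]
x_star = minimize_pwl(lines, 0.5, 2.0)  # minimize subject to x >= 0.5
```

In higher dimensions the same trick turns each convex term into a set of linear inequalities on an auxiliary epigraph variable, which is what makes the global LP solve possible.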

  16. An approach to multiobjective optimization of rotational therapy. II. Pareto optimal surfaces and linear combinations of modulated blocked arcs for a prostate geometry.

    Science.gov (United States)

    Pardo-Montero, Juan; Fenwick, John D

    2010-06-01

    The purpose of this work is twofold: To further develop an approach to multiobjective optimization of rotational therapy treatments recently introduced by the authors [J. Pardo-Montero and J. D. Fenwick, "An approach to multiobjective optimization of rotational therapy," Med. Phys. 36, 3292-3303 (2009)], especially regarding its application to realistic geometries, and to study the quality (Pareto optimality) of plans obtained using such an approach by comparing them with Pareto optimal plans obtained through inverse planning. In the previous work of the authors, a methodology is proposed for constructing a large number of plans, with different compromises between the objectives involved, from a small number of geometrically based arcs, each arc prioritizing different objectives. Here, this method has been further developed and studied. Two different techniques for constructing these arcs are investigated, one based on image-reconstruction algorithms and the other based on more common gradient-descent algorithms. The difficulty of dealing with organs abutting the target, briefly reported in previous work of the authors, has been investigated using partial OAR unblocking. Optimality of the solutions has been investigated by comparison with a Pareto front obtained from inverse planning. A relative Euclidean distance has been used to measure the distance of these plans to the Pareto front, and dose volume histogram comparisons have been used to gauge the clinical impact of these distances. A prostate geometry has been used for the study. For geometries where a blocked OAR abuts the target, moderate OAR unblocking can substantially improve target dose distribution and minimize hot spots while not overly compromising dose sparing of the organ. Image-reconstruction type and gradient-descent blocked-arc computations generate similar results. The Pareto front for the prostate geometry, reconstructed using a large number of inverse plans, presents a hockey-stick shape

  17. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha

    2013-02-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows sequential optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure R(T), which is the number of unordered pairs of rows with different decisions in the decision table T. For a nonnegative real number β, we consider β-decision rules that localize rows in subtables of T with uncertainty at most β. Our algorithm constructs a directed acyclic graph Δβ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most β. The graph Δβ(T) allows us to describe the whole set of so-called irredundant β-decision rules. We can describe all irredundant β-decision rules with minimum length, and after that among these rules describe all rules with maximum coverage. We can also change the order of optimization. The consideration of irredundant rules only does not change the results of optimization. The paper also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2012 Elsevier Inc. All rights reserved.
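This record's uncertainty measure counts disagreeing row pairs rather than non-majority rows. A minimal sketch, again assuming a table is represented only by its decision column:

```python
# Sketch of the uncertainty measure R(T): the number of unordered pairs of
# rows in a decision table with different decisions, computed via the
# complement (all pairs minus pairs that share a decision).
from collections import Counter
from math import comb

def uncertainty_R(decisions):
    """R(T) = C(n, 2) - (number of unordered pairs sharing a decision)."""
    same = sum(comb(c, 2) for c in Counter(decisions).values())
    return comb(len(decisions), 2) - same

# Decisions [1, 1, 0] give 3 pairs in total, one of which agrees: R(T) = 2.
```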

  18. A trust region approach with multivariate Padé model for optimal circuit design

    Science.gov (United States)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models of ? and has features of quadratic models that need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. Minimax solution leads to a suitable initial point to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.

  19. Damage approach: A new method for topology optimization with local stress constraints

    DEFF Research Database (Denmark)

    Verbart, Alexander; Langelaar, Matthijs; van Keulen, Fred

    2016-01-01

    In this paper, we propose a new method for topology optimization with local stress constraints. In this method, material in which a stress constraint is violated is considered as damaged. Since damaged material will contribute less to the overall performance of the structure, the optimizer will promote a design with a minimal amount of damaged material. We tested the method on several benchmark problems, and the results show that the method is a viable alternative to conventional stress-based approaches based on constraint relaxation followed by constraint aggregation.

  20. Formulation and Optimization of Multiparticulate Drug Delivery System Approach for High Drug Loading.

    Science.gov (United States)

    Shah, Neha; Mehta, Tejal; Gohel, Mukesh

    2017-08-01

    The aim of the present work was to develop and optimize a multiparticulate formulation, viz. pellets of naproxen, by employing a QbD and risk assessment approach. A mixture design with extreme vertices was applied to the formulation, with high drug loading (about 90%) and extrusion-spheronization as the manufacturing process. Independent variables chosen were the levels of microcrystalline cellulose (MCC, X1), polyvinylpyrrolidone K-90 (PVP K-90, X2), croscarmellose sodium (CCS, X3), and polacrilin potassium (PP, X4). Dependent variables considered were disintegration time (DT, Y1), sphericity (Y2), and percent drug release (Y3). The formulation was optimized based on the batches generated by MiniTab 17 software. The batch with maximum composite desirability (0.98) proved to be optimum. From the evaluation of design batches, it was observed that even low variations in the excipients affect the pelletization property of the blend and also the final drug release. In conclusion, pellets with high drug loading can be effectively manufactured and optimized systematically using a QbD approach.

  1. Optimized Structure of the Traffic Flow Forecasting Model With a Deep Learning Approach.

    Science.gov (United States)

    Yang, Hao-Fan; Dillon, Tharam S; Chen, Yi-Ping Phoebe

    2017-10-01

    Forecasting accuracy is an important issue for successful intelligent traffic management, especially in the domain of traffic efficiency and congestion reduction. The dawning of the big data era brings opportunities to greatly improve prediction accuracy. In this paper, we propose a novel model, stacked autoencoder Levenberg-Marquardt model, which is a type of deep architecture of neural network approach aiming to improve forecasting accuracy. The proposed model is designed using the Taguchi method to develop an optimized structure and to learn traffic flow features through layer-by-layer feature granulation with a greedy layerwise unsupervised learning algorithm. It is applied to real-world data collected from the M6 freeway in the U.K. and is compared with three existing traffic predictors. To the best of our knowledge, this is the first time that an optimized structure of the traffic flow forecasting model with a deep learning approach is presented. The evaluation results demonstrate that the proposed model with an optimized structure has superior performance in traffic flow forecasting.

  2. A citizen science approach to monitoring bleaching in the zoantharian Palythoa tuberculosa

    KAUST Repository

    Parkinson, John Everett; Yang, Sung-Yin; Kawamura, Iori; Byron, Gordon; Todd, Peter Alan; Reimer, James Davis

    2016-01-01

    in midwinter, as well as low sample size and brief training owing to the course structure. Despite certain limitations of P. tuberculosa as a focal organism, the citizen science approach to color monitoring has promise, and we

  3. Approaching Behaviour Monitor and Vibration Indication in Developing a General Moving Object Alarm System (GMOAS)

    Directory of Open Access Journals (Sweden)

    Haiwei Dong

    2013-07-01

    People who suffer from hearing impairment caused by illness, age or extremely noisy environments are constantly in danger of being hit or knocked down by fast moving objects behind them when they have no companion or augmented sensory system to warn them. In this paper, we propose the General Moving Object Alarm System (GMOAS), a system focused on aiding the safe mobility of people under these circumstances. The GMOAS is a wearable haptic device that consists of two main subsystems: (i) a moving object monitoring subsystem that uses laser range data to detect and track approaching objects, and (ii) an alarm subsystem that warns the user of possibly dangerous approaching objects by triggering tactile vibrations on an "alarm necklace". For moving object monitoring, we propose a simple yet efficient solution to monitor the approaching behavior of objects. Compared with previous work in motion detection and tracking, we are not interested in specific objects but in any type of approaching object that might harm the user. To this extent, we define a boundary in the laser range data within which the objects are monitored. Within this boundary a fan-shaped grid is constructed to obtain an evenly distributed spatial partitioning of the data. These partitions are efficiently clustered into continuous objects, which are then tracked through time using an object association algorithm based on updating a deviation matrix that represents angle, distance and size variations of the objects. The speed of the tracked objects is monitored throughout the algorithm. When the speed of an approaching object surpasses the safety threshold, the alarm necklace is triggered, indicating the approaching direction of the fast moving object. The alarm necklace is equipped with three motors that can indicate five directions with respect to the user: left, back, right, left-back and right-back. We performed three types of outdoor experiments (object passing, approaching and crossing) that ...
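The final alarm step this record describes (speed check, then a five-direction indication) can be sketched as a bearing-to-sector mapping. The sector widths (36° each over the rear half-plane) and the speed threshold below are invented, not taken from the paper:

```python
# Hypothetical sketch of the alarm decision: once a tracked object's speed
# exceeds a safety threshold, its approach bearing is mapped to one of the
# five directions the necklace can indicate. Sector layout and the 1.5 m/s
# threshold are assumptions for illustration.

SECTORS = ["left", "left-back", "back", "right-back", "right"]

def alarm_direction(bearing_deg, speed_mps, threshold_mps=1.5):
    """bearing_deg: 0 = directly behind the user, negative = user's left."""
    if speed_mps <= threshold_mps:
        return None  # object is not fast enough to warrant an alarm
    # Clamp to the rear half-plane [-90, 90] and bin into five 36-deg sectors.
    b = max(-90.0, min(90.0, bearing_deg))
    index = min(int((b + 90.0) // 36.0), len(SECTORS) - 1)
    return SECTORS[index]
```

In the paper's device the returned sector would select which of the three motors (or motor pairs) vibrate.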

  4. TRACKING AND MONITORING OF TAGGED OBJECTS EMPLOYING PARTICLE SWARM OPTIMIZATION ALGORITHM IN A DEPARTMENTAL STORE

    Directory of Open Access Journals (Sweden)

    Indrajit Bhattacharya

    2011-05-01

    The present paper proposes a departmental store automation system based on Radio Frequency Identification (RFID) technology and the Particle Swarm Optimization (PSO) algorithm. The items in the departmental store, spread over different sections and multiple floors, are tagged with passive RFID tags. Each floor is divided into a number of zones depending on the different types of items placed in their respective racks. Each zone is equipped with one RFID reader, which constantly monitors the items in its zone and periodically sends that information to the application. The problem of systematic periodic monitoring of the store is addressed in this application so that the locations, distributions and demands of every item in the store can be monitored intelligently. The proposed application is successfully demonstrated on a simulated case study.
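The abstract does not give the paper's PSO configuration, so the following is a generic, minimal particle swarm optimizer on a toy objective; the 2-D sphere function and the textbook coefficients (inertia 0.7, cognitive/social weights 1.5) are stand-ins for whatever objective the store-monitoring application actually optimizes:

```python
# Minimal generic PSO sketch: a swarm of particles tracks personal and global
# bests and converges toward the minimum of an objective function.
import random

def pso(f, dim=2, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=42):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: 2-D sphere function, minimum 0 at the origin.
best, best_val = pso(lambda p: sum(x * x for x in p))
```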

  5. A Google Trends-based approach for monitoring NSSI

    Directory of Open Access Journals (Sweden)

    Bragazzi NL

    2013-12-01

    Nicola Luigi Bragazzi, DINOGMI, Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health, Section of Psychiatry, University of Genoa, Genoa, Italy. Abstract: Non-suicidal self-injury (NSSI) is an intentional, direct, and socially unacceptable behavior resulting in the destruction of one's own body tissues with no intention of dying or committing suicide, even though it is associated with a higher risk of attempted, planned, or merely considered suicide. In this preliminary report, we introduce the concept of "NSSI 2.0", that is to say, the study of Internet usage by subjects with NSSI, and we introduce a Google Trends-based approach for monitoring NSSI, called NSSI infodemiology and infoveillance. Despite some limitations, Google Trends has already proven to be reliable for infectious disease monitoring, and here we extend its application and potential to the field of suicidology. Ad hoc web portals and surveys could be designed in light of the reported results to help people with NSSI. Keywords: infodemiology, infoveillance, Internet, non-suicidal self-injury

  6. Optimal PID settings for first and second-order processes - Comparison with different controller tuning approaches

    OpenAIRE

    Pappas, Iosif

    2016-01-01

    PID controllers are extensively used in industry. Although many tuning methodologies exist, finding good controller settings is not an easy task and frequently optimization-based design is preferred to satisfy more complex criteria. In this thesis, the focus was to find which tuning approaches, if any, present close to optimal behavior. Pareto-optimal controllers were found for different first and second-order processes with time delay. Performance was quantified in terms of the integrat...

  7. A measure theoretic approach to traffic flow optimization on networks

    OpenAIRE

    Cacace, Simone; Camilli, Fabio; De Maio, Raul; Tosin, Andrea

    2018-01-01

    We consider a class of optimal control problems for measure-valued nonlinear transport equations describing traffic flow problems on networks. The objective is to minimise/maximise macroscopic quantities, such as traffic volume or average speed, controlling few agents, for example smart traffic lights and automated cars. The measure-theoretic approach allows us to study local and nonlocal driver interactions in the same setting and to consider the control variables as additional measures interacting ...

  8. A Genetic Algorithms-based Approach for Optimized Self-protection in a Pervasive Service Middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Ingstrup, Mads; Hansen, Klaus Marius

    2009-01-01

    With increasingly complex and heterogeneous systems in pervasive service computing, it becomes more and more important to provide self-protected services to end users. In order to achieve self-protection, the corresponding security should be provided in an optimized manner considering the constraints of heterogeneous devices and networks. In this paper, we present a Genetic Algorithms-based approach for obtaining optimized security configurations at run time, supported by a set of security OWL ontologies and an event-driven framework. This approach has been realized as a prototype for self-protection in the Hydra middleware, and is integrated with a framework for enforcing the computed solution at run time using security obligations. The experiments with the prototype on configuring security strategies for a pervasive service middleware show that this approach has acceptable performance, and could be used...

  9. A NEW APPROACH FOR SUBWAY TUNNEL DEFORMATION MONITORING: HIGH-RESOLUTION TERRESTRIAL LASER SCANNING

    Directory of Open Access Journals (Sweden)

    J. Li

    2012-07-01

    Full Text Available With the improvement of the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can acquire highly precise, densely distributed point clouds and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges and other fields. In this paper, a new approach using a point-cloud segmentation method based on vectors of neighbor points and a surface fitting method based on moving least squares was proposed and applied to subway tunnel deformation monitoring in Tianjin, combined with a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. Firstly, a point cloud consisting of several scans was registered by a linearized iterative least squares approach to improve the accuracy of registration, and several control points were acquired by total stations (TS) and then adjusted. Secondly, the registered point cloud was resampled and segmented based on vectors of neighbor points to select suitable points. Thirdly, the selected points were used to fit the subway tunnel surface with a moving least squares algorithm. Then a series of parallel sections obtained from temporal series of fitted tunnel surfaces were compared to analyse the deformation. Finally, the results of the approach in the z direction were compared with the fiber optical displacement sensor approach, and the results in the x, y directions were compared with TS; the comparison showed that the accuracy errors in the x, y and z directions were respectively about 1.5 mm, 2 mm and 1 mm. Therefore the new approach using high-resolution TLS can meet the demand of subway tunnel deformation monitoring.

  10. Dynamic optimization of maintenance and improvement planning for water main system: Periodic replacement approach

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Woo; Choi, Go Bong; Lee, Jong Min [Seoul National University, Seoul (Korea, Republic of); Suh, Jung Chul [Samchully Corporation, Seoul (Korea, Republic of)

    2016-01-15

    This paper proposes a Markov decision process (MDP) based approach to derive an optimal schedule of maintenance, rehabilitation and replacement of the water main system. The scheduling problem utilizes auxiliary information of a pipe such as the current state, cost, and deterioration model. The objective function and detailed algorithm of dynamic programming are modified to solve the periodic replacement problem. The optimal policy evaluated by the proposed algorithm is compared to several existing policies via Monte Carlo simulations. The proposed decision framework provides a systematic way to obtain an optimal policy.
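The MDP framing described above can be sketched with plain value iteration over pipe condition states. Everything numeric below (state set, deterioration matrix, discount factor, costs) is an illustrative assumption for the sketch, not the paper's data or its periodic-replacement modification:

```python
# Illustrative pipe-maintenance MDP: states are condition grades
# 0 (new) .. 3 (failed); actions are "keep" or "replace".
STATES = range(4)
ACTIONS = ("keep", "replace")
GAMMA = 0.95           # discount factor (assumed)
REPLACE_COST = 100.0   # assumed replacement cost
FAIL_COST = 500.0      # assumed penalty for operating a failed pipe

# Assumed deterioration matrix under "keep": row s is the distribution
# over next-period states.
P_KEEP = [
    [0.8, 0.2, 0.0, 0.0],
    [0.0, 0.7, 0.3, 0.0],
    [0.0, 0.0, 0.6, 0.4],
    [0.0, 0.0, 0.0, 1.0],
]

def cost(s, a):
    if a == "replace":
        return REPLACE_COST
    return FAIL_COST if s == 3 else 0.0

def next_dist(s, a):
    # Replacement resets the pipe to "new" before it deteriorates again.
    return P_KEEP[0] if a == "replace" else P_KEEP[s]

def q_value(s, a, V):
    return cost(s, a) + GAMMA * sum(p * V[sp]
                                    for sp, p in enumerate(next_dist(s, a)))

def value_iteration(tol=1e-9):
    """Minimize expected discounted cost by iterating the Bellman update."""
    V = [0.0] * 4
    while True:
        V_new = [min(q_value(s, a, V) for a in ACTIONS) for s in STATES]
        if max(abs(x - y) for x, y in zip(V, V_new)) < tol:
            return V_new
        V = V_new

def optimal_policy(V):
    return [min(ACTIONS, key=lambda a: q_value(s, a, V)) for s in STATES]
```

Sweeping `REPLACE_COST` or the deterioration matrix shows how the condition threshold at which replacement becomes optimal shifts, which is the kind of policy comparison the paper performs via Monte Carlo simulation.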

  11. Interactive Genetic Algorithm - An Adaptive and Interactive Decision Support Framework for Design of Optimal Groundwater Monitoring Plans

    Science.gov (United States)

    Babbar-Sebens, M.; Minsker, B. S.

    2006-12-01

    In the water resources management field, decision making encompasses many kinds of engineering, social, and economic constraints and objectives. Representing all of these problem-dependent criteria through models (analytical or numerical) and various formulations (e.g., objectives, constraints, etc.) within an optimization-simulation system can be a very non-trivial issue. Most models and formulations utilized for discerning desirable traits in a solution can only approximate the decision maker's (DM) true preference criteria, and they often fail to consider important qualitative and incomputable phenomena related to the management problem. In our research, we have proposed novel decision support frameworks that allow DMs to actively participate in the optimization process. The DMs explicitly indicate their true preferences based on their subjective criteria and the results of various simulation models and formulations. The feedback from the DMs is then used to guide the search process towards solutions that are "all-rounders" from the perspective of the DM. The two main research questions explored in this work are: a) Does interaction between the optimization algorithm and a DM assist the system in searching for groundwater monitoring designs that are robust from the DM's perspective?, and b) How can an interactive search process be made more effective when human factors, such as human fatigue and cognitive learning processes, affect the performance of the algorithm?
The application of these frameworks on a real-world groundwater long-term monitoring (LTM) case study in Michigan highlighted the following salient advantages: a) in contrast to the non-interactive optimization methodology, the proposed interactive frameworks were able to identify low cost monitoring designs whose interpolation maps respected the expected spatial distribution of the contaminants, b) for many same-cost designs, the interactive methodologies were able to propose multiple alternatives

  12. Pragmatic Approach for Multistage Phasor Measurement Unit Placement

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thoegersen, Poul

    2016-01-01

    Effective phasor measurement unit (PMU) placement is a key to the implementation of efficient and economically feasible wide area measurement systems in modern power systems. This paper proposes a pragmatic approach for cost-effective stage-wise deployment of PMUs while considering realistic constraints. Inspired by real-world experience, the proposed approach optimally allocates PMU placement in a stage-wise manner. The proposed approach also considers large-scale wind integration for effective grid state monitoring of wind generation dynamics. The proposed approach is implemented on the Danish power system projected for the year 2040. Furthermore, practical experience learnt from an optimal PMU placement project aimed at PMU placement in the Danish power system is presented, which is expected to provide insight into practical challenges at ground level that could be considered by PMU...

  13. A practical approach for solving multi-objective reliability redundancy allocation problems using extended bare-bones particle swarm optimization

    International Nuclear Information System (INIS)

    Zhang, Enze; Wu, Yifei; Chen, Qingwei

    2014-01-01

    This paper proposes a practical approach, combining bare-bones particle swarm optimization and sensitivity-based clustering for solving multi-objective reliability redundancy allocation problems (RAPs). A two-stage process is performed to identify promising solutions. Specifically, a new bare-bones multi-objective particle swarm optimization algorithm (BBMOPSO) is developed and applied in the first stage to identify a Pareto-optimal set. This algorithm mainly differs from other multi-objective particle swarm optimization algorithms in the parameter-free particle updating strategy, which is especially suitable for handling the complexity and nonlinearity of RAPs. Moreover, by utilizing an approach based on the adaptive grid to update the global particle leaders, a mutation operator to improve the exploration ability and an effective constraint handling strategy, the integrated BBMOPSO algorithm can generate excellent approximation of the true Pareto-optimal front for RAPs. This is followed by a data clustering technique based on difference sensitivity in the second stage to prune the obtained Pareto-optimal set and obtain a small, workable sized set of promising solutions for system implementation. Two illustrative examples are presented to show the feasibility and effectiveness of the proposed approach
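The "parameter-free particle updating strategy" that distinguishes bare-bones PSO is simple to state: each coordinate is resampled from a Gaussian centred midway between the particle's personal best and the global best, with spread equal to their distance, so no inertia or acceleration constants are tuned. A minimal single-objective sketch of that kernel (the paper's multi-objective BBMOPSO adds an adaptive grid, a mutation operator and constraint handling on top; all names and settings below are illustrative):

```python
import random

def sphere(x):
    """Toy objective for the demo: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def bbpso(f, dim=2, n_particles=20, iters=200, bounds=(-5.0, 5.0), seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in X]          # personal best positions
    pval = [f(x) for x in X]           # personal best values
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            x = []
            for d in range(dim):
                # Bare-bones update: Gaussian around the pbest/gbest midpoint.
                mu = 0.5 * (pbest[i][d] + gbest[d])
                sigma = abs(pbest[i][d] - gbest[d])
                x.append(rng.gauss(mu, sigma) if sigma > 0 else mu)
            v = f(x)
            if v < pval[i]:
                pbest[i], pval[i] = x, v
                if v < gval:
                    gbest, gval = x[:], v
    return gbest, gval
```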

  14. An approach to optimization of the choice of boiler steel grades as to a mixed-integer programming problem

    International Nuclear Information System (INIS)

    Kler, Alexandr M.; Potanina, Yulia M.

    2017-01-01

    One of the ways to enhance the energy efficiency of thermal power plants is to increase thermodynamic parameters of steam. A sufficient level of reliability and longevity can be provided by the application of advanced construction materials (in particular, high-alloy steel can be used to manufacture the most loaded heating surfaces of a boiler unit). A rational choice of technical and economic parameters of energy plants as the most complex technical systems should be made using the methods of mathematical modeling and optimization. The paper considers an original approach to an economically sound optimal choice of steel grade to manufacture heating surfaces for boiler units. A case study of optimization of the discrete-continuous parameters of an energy unit operating at ultra-supercritical steam parameters, in combination with construction of a variant selection tree is presented. - Highlights: • A case study on optimization of an ultra-supercritical power plant is demonstrated. • Optimization is based on the minimization of electricity price. • An approach is proposed to optimize the selection of boiler steel grades. • The approach is based on the construction of a variant tree. • The selection of steel grades for a boiler unit is shown.

  15. Optimal planning approaches with multiple impulses for rendezvous based on hybrid genetic algorithm and control method

    Directory of Open Access Journals (Sweden)

    JingRui Zhang

    2015-03-01

    Full Text Available In this article, we focus on safe and effective completion of a rendezvous and docking task by looking at planning approaches and control with fuel-optimal rendezvous for a target spacecraft running on a near-circular reference orbit. A variety of existent practical path constraints are considered, including the constraints of field of view, impulses, and passive safety. A rendezvous approach is calculated by using a hybrid genetic algorithm with those constraints. Furthermore, a control method of trajectory tracking is adopted to overcome the external disturbances. Based on Clohessy–Wiltshire equations, we first construct the mathematical model of optimal planning approaches of multiple impulses with path constraints. Second, we introduce the principle of hybrid genetic algorithm with both stronger global searching ability and local searching ability. We additionally explain the application of this algorithm in the problem of trajectory planning. Then, we give three-impulse simulation examples to acquire an optimal rendezvous trajectory with the path constraints presented in this article. The effectiveness and applicability of the tracking control method are verified with the optimal trajectory above as control objective through the numerical simulation.

  16. Optimal Integration of Intermittent Renewables: A System LCOE Stochastic Approach

    Directory of Open Access Journals (Sweden)

    Carlo Lucheroni

    2018-03-01

    Full Text Available We propose a system level approach to value the impact on costs of the integration of intermittent renewable generation in a power system, based on expected breakeven cost and breakeven cost risk. To do this, we carefully reconsider the definition of Levelized Cost of Electricity (LCOE) when extended to non-dispatchable generation, by examining extra costs and gains originated by the costly management of random power injections. We are thus led to define a ‘system LCOE’ as a system dependent LCOE that properly takes into account intermittent generation. In order to include breakeven cost risk we further extend this deterministic approach to a stochastic setting, by introducing a ‘stochastic system LCOE’. This extension allows us to discuss the optimal integration of intermittent renewables from a broad, system level point of view. This paper thus aims to provide power producers and policy makers with a new methodological scheme, still based on the LCOE but which updates this valuation technique to current energy system configurations characterized by a large share of non-dispatchable production. Quantifying and optimizing the impact of intermittent renewables integration on power system costs, risk and CO2 emissions, the proposed methodology can be used as a powerful tool of analysis for assessing environmental and energy policies.
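The deterministic starting point, the standard LCOE, is simply discounted lifetime costs divided by discounted lifetime energy; the "system LCOE" of the abstract adds the extra system costs of managing random injections to the numerator. A sketch with invented cash-flow figures (the paper's stochastic extension then treats these inputs as random variables, which is not shown here):

```python
def lcoe(costs, energy, r):
    """Levelized cost: discounted total costs / discounted energy output.
    costs[t] and energy[t] are per-year values, r is the discount rate."""
    num = sum(c / (1.0 + r) ** t for t, c in enumerate(costs))
    den = sum(e / (1.0 + r) ** t for t, e in enumerate(energy))
    return num / den

def system_lcoe(costs, integration_costs, energy, r):
    """'System LCOE' in the abstract's spirit: plant costs plus the extra
    costs of integrating intermittent output (all figures assumed)."""
    total = [c + ic for c, ic in zip(costs, integration_costs)]
    return lcoe(total, energy, r)
```

For a wind plant, `integration_costs` would collect balancing, backup and curtailment charges, so the system LCOE always sits at or above the plain LCOE of the same plant.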

  17. Presenting a Multi-level Superstructure Optimization Approach for Mechatronic System Design

    DEFF Research Database (Denmark)

    Pedersen, Henrik C.; Andersen, Torben Ole; Bech, Michael Møller

    2010-01-01

    Synergism and integration in the design process is what sets apart a Mechatronic System from a traditional, multidisciplinary system. However, the typical design approach has been to divide the design problem into sub-problems for each technology area (mechanics, electronics and control) and describe the interface between the technologies, whereas the lack of well-established, systematic engineering methods to form the basic set-off in analysis and design of complete mechatronic systems has been obvious. The focus of the current paper is therefore to present an integrated design approach for mechatronic system design, utilizing a multi-level superstructure optimization based approach. Finally two design examples are presented and the possibilities and limitations of the approach are outlined.

  18. A Honey Bee Foraging approach for optimal location of a biomass power plant

    Energy Technology Data Exchange (ETDEWEB)

    Vera, David; Jurado, Francisco [Dept. of Electrical Engineering, University of Jaen, 23700 EPS Linares, Jaen (Spain); Carabias, Julio; Ruiz-Reyes, Nicolas [Dept. of Telecommunication Engineering, University of Jaen, 23700 EPS Linares, Jaen (Spain)

    2010-07-15

    Over eight million hectares of olive trees are cultivated worldwide, especially in Mediterranean countries, where more than 97% of the world's olive oil is produced. The three major olive oil producers worldwide are Spain, Italy, and Greece. Olive tree pruning residues are an autochthonous and important renewable source that, in most cases, farmers burn in an uncontrolled manner; besides, industrial uses have not yet been developed. The aim of this paper is a new calculation tool based on particle swarms (Binary Honey Bee Foraging, BHBF). This approach makes it possible to determine the optimal location, biomass supply area and power plant size that offer the best profitability for the investor, while avoiding the exact method, which is not feasible from a computational viewpoint. In this work, the Profitability Index (PI) is set as the fitness function for the BHBF approach. Results are compared with other evolutionary optimization algorithms such as Binary Particle Swarm Optimization (BPSO) and Genetic Algorithms (GA). All the experiments have shown that the optimal plant size is 2 MW, PI = 3.3122, the best location corresponds to coordinates X = 49, Y = 97, and the biomass supply area is 161.33 km{sup 2}. The simulation times have been reduced to one-ninth of that of the greedy (exact) solution. Matlab® is used to run all simulations. (author)
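The fitness function named above, the Profitability Index, is conventionally the present value of a project's future cash flows divided by the initial investment, with PI > 1 indicating a worthwhile plant. A hypothetical sketch (the paper's actual cash-flow model depends on plant size, biomass transport distances and operating costs, none of which are reproduced here):

```python
def profitability_index(cash_flows, investment, r):
    """PI = PV(future cash flows) / initial investment.
    cash_flows[t] is the net cash flow in year t+1; r is the discount
    rate. All figures used with this function are illustrative."""
    pv = sum(cf / (1.0 + r) ** t for t, cf in enumerate(cash_flows, start=1))
    return pv / investment
```

In a swarm search like BHBF, each candidate (plant location, size, supply area) would be mapped to a cash-flow series and scored by this ratio.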

  19. Optimizing Health Care Environmental Hygiene.

    Science.gov (United States)

    Carling, Philip C

    2016-09-01

    This article presents a review and perspectives on aspects of optimizing health care environmental hygiene. The topics covered include the epidemiology of environmental surface contamination, a discussion of cleaning health care patient area surfaces, an overview of disinfecting health care surfaces, an overview of challenges in monitoring cleaning versus cleanliness, a description of an integrated approach to environmental hygiene and hand hygiene as interrelated disciplines, and an overview of the research opportunities and challenges related to health care environmental hygiene. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Cuff-less PPG based continuous blood pressure monitoring: a smartphone based approach.

    Science.gov (United States)

    Gaurav, Aman; Maheedhar, Maram; Tiwari, Vijay N; Narayanan, Rangavittal

    2016-08-01

    Cuff-less estimation of systolic (SBP) and diastolic (DBP) blood pressure is an efficient approach for non-invasive and continuous monitoring of an individual's vitals. Although pulse transit time (PTT) based approaches have been successful in estimating the systolic and diastolic blood pressures to a reasonable degree of accuracy, there is still scope for improvement in terms of accuracies. Moreover, PTT approach requires data from sensors placed at two different locations along with individual calibration of physiological parameters for deriving correct estimation of systolic and diastolic blood pressure (BP) and hence is not suitable for smartphone deployment. Heart Rate Variability is one of the extensively used non-invasive parameters to assess cardiovascular autonomic nervous system and is known to be associated with SBP and DBP indirectly. In this work, we propose a novel method to extract a comprehensive set of features by combining PPG signal based and Heart Rate Variability (HRV) related features using a single PPG sensor. Further, these features are fed into a DBP feedback based combinatorial neural network model to arrive at a common weighted average output of DBP and subsequently SBP. Our results show that using this current approach, an accuracy of ±6.8 mmHg for SBP and ±4.7 mmHg for DBP is achievable on 1,750,000 pulses extracted from a public database (comprising 3000 people). Since most of the smartphones are now equipped with PPG sensor, a mobile based cuff-less BP estimation will enable the user to monitor their BP as a vital parameter on demand. This will open new avenues towards development of pervasive and continuous BP monitoring systems leading to an early detection and prevention of cardiovascular diseases.

  1. An optimized Chlorophyll a switching algorithm for MERIS and OLCI in phytoplankton-dominated waters

    CSIR Research Space (South Africa)

    Smith, Marie E

    2018-06-01

    Full Text Available complexity for ocean colour applications such as Harmful Algal Bloom (HAB) monitoring. As low and high biomass algorithmic approaches for ocean colour differ, no single algorithm can optimally retrieve accurate Chl a over such a wide range of biomass. We...

  2. A possibilistic approach for transient identification with 'don't know' response capability optimized by genetic algorithm

    International Nuclear Information System (INIS)

    Almeida, Jose Carlos S. de; Schirru, Roberto; Pereira, Claudio M.N.A.; Universidade Federal, Rio de Janeiro, RJ

    2002-01-01

    This work describes a possibilistic approach for transient identification based on the minimum centroids set method, proposed in previous work, optimized by a genetic algorithm. The idea behind this method is to split the complex classification problem into small and simple ones, so that the performance of the classification can be increased. In order to accomplish that, a genetic algorithm is used to learn, from realistic simulated data, the optimized time partitions for which robustness and correctness of the classification are maximized. The use of a possibilistic classification approach yields natural and consistent classification rules, leading naturally to a good heuristic to handle the 'don't know' response in the case of an unrecognized transient, which is fairly desirable in transient classification systems where safety is critical. Application of the proposed approach to a nuclear transient identification problem reveals the good capability of the genetic algorithm in learning optimized possibilistic classification rules for efficient diagnosis, including the 'don't know' response. Obtained results are shown and commented. (author)

  3. Optimization of monitoring sewage with radionuclide contaminants. Optimizatsiya kontroya stochnykh vod, zagryaznennykh radionuklidami

    Energy Technology Data Exchange (ETDEWEB)

    Egorov, V N [Vsesoyuznyj Nauchno-Issledovatel' skij Inst. Neorganicheskikh Materialov, Moscow (Russian Federation)

    1991-03-01

    Recommendations are presented on optimizing the monitoring of contaminated sewage, aimed at protecting the environment against radioactive contamination at minimum cost. The choice of water sampling technique depends on water composition stability and flow rate. Depending on the type of radionuclide distribution in the sewage, one can estimate the minimum frequency of sampling, or the number of samples sufficient to assure the reliability of the conclusion on the excess or non-excess of permissible radioactive contamination levels, as well as the assigned accuracy of the analysis. In the case of irregular discharge of contaminated sewage and the possibility of short-term releases of different form and duration, sampling should be accomplished through automatic devices of continuous or periodic operation.
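For the stable-composition case, a sufficient number of samples follows from the usual normal-model sample-size formula. A textbook sketch for illustration, not the specific procedure of this report, assuming independent samples and a known standard deviation:

```python
import math
from statistics import NormalDist

def samples_needed(sigma, margin, confidence=0.95):
    """Minimum number of samples so the estimated mean activity lies
    within `margin` of the true mean at the given confidence level.
    Textbook formula n = (z * sigma / margin)**2, rounded up."""
    alpha = 1.0 - confidence
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)  # two-sided z quantile
    return math.ceil((z * sigma / margin) ** 2)
```

Tightening the margin or raising the confidence level drives the required sample count up quadratically, which is the cost trade-off such monitoring recommendations balance.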

  4. A Sensor Web and Web Service-Based Approach for Active Hydrological Disaster Monitoring

    Directory of Open Access Journals (Sweden)

    Xi Zhai

    2016-09-01

    Full Text Available Rapid advancements in Earth-observing sensor systems have led to the generation of large amounts of remote sensing data that can be used for the dynamic monitoring and analysis of hydrological disasters. The management and analysis of these data could take advantage of distributed information infrastructure technologies such as Web service and Sensor Web technologies, which have shown great potential in facilitating the use of observed big data in an interoperable, flexible and on-demand way. However, it remains a challenge to achieve timely response to hydrological disaster events and to automate the geoprocessing of hydrological disaster observations. This article proposes a Sensor Web and Web service-based approach to support active hydrological disaster monitoring. This approach integrates an event-driven mechanism, Web services, and a Sensor Web and coordinates them using workflow technologies to facilitate the Web-based sharing and processing of hydrological hazard information. The design and implementation of hydrological Web services for conducting various hydrological analysis tasks on the Web using dynamically updating sensor observation data are presented. An application example is provided to demonstrate the benefits of the proposed approach over the traditional approach. The results confirm the effectiveness and practicality of the proposed approach in cases of hydrological disaster.

  5. A multi-criteria optimization and decision-making approach for improvement of food engineering processes

    Directory of Open Access Journals (Sweden)

    Alik Abakarov

    2013-04-01

    Full Text Available The objective of this study was to propose a multi-criteria optimization and decision-making technique to solve food engineering problems. This technique was demonstrated using experimental data obtained on osmotic dehydration of carrot cubes in a sodium chloride solution. The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used in this study to compute the initial set of non-dominated or Pareto-optimal solutions. Multiple non-linear regression analysis was performed on a set of experimental data in order to obtain particular multi-objective functions (responses, namely water loss, solute gain, rehydration ratio, three different colour criteria of rehydrated product, and sensory evaluation (organoleptic quality. Two multi-criteria decision-making approaches, the Analytic Hierarchy Process (AHP and the Tabular Method (TM, were used simultaneously to choose the best alternative among the set of non-dominated solutions. The multi-criteria optimization and decision-making technique proposed in this study can facilitate the assessment of criteria weights, giving rise to a fairer, more consistent, and adequate final compromised solution or food process. This technique can be useful to food scientists in research and education, as well as to engineers involved in the improvement of a variety of food engineering processes.

  6. Using 50 years of soil radiocarbon data to identify optimal approaches for estimating soil carbon residence times

    Science.gov (United States)

    Baisden, W. T.; Canessa, S.

    2013-01-01

    In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of 14C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ∼500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of 14C to determine residence times, by estimating the amount of ‘bomb 14C’ incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point 14C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C (‘passive fraction’), (2) the lag time between photosynthesis and C entering the modelled pool, (3) changes in the rates of C input. When carrying out approaches using robust assumptions on time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.

  7. Using 50 years of soil radiocarbon data to identify optimal approaches for estimating soil carbon residence times

    International Nuclear Information System (INIS)

    Baisden, W.T.; Canessa, S.

    2013-01-01

    In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of 14C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ∼500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of 14C to determine residence times, by estimating the amount of ‘bomb 14C’ incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point 14C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C (‘passive fraction’), (2) the lag time between photosynthesis and C entering the modelled pool, (3) changes in the rates of C input. When carrying out approaches using robust assumptions on time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.

  8. CASTOR end-to-end monitoring

    International Nuclear Information System (INIS)

    Rekatsinas, Theodoros; Duellmann, Dirk; Pokorski, Witold; Ponce, Sebastien; Rabacal, Bartolomeu; Waldron, Dennis; Wojcieszuk, Jacek

    2010-01-01

    With the start of the Large Hadron Collider approaching, storage and management of raw event data, as well as reconstruction and analysis data, is of crucial importance for the researchers. The CERN Advanced STORage system (CASTOR) is a hierarchical system developed at CERN, used to store physics production files and user files. CASTOR, as one of the essential software tools used by the LHC experiments, has to provide reliable services for storing and managing data. Monitoring of this complicated system is mandatory in order to assure its stable operation and improve its future performance. This paper presents the new monitoring system of CASTOR, which provides operation and user request specific metrics. This system is built around a dedicated, optimized database schema. The schema is populated by PL/SQL procedures, which process a stream of incoming raw metadata from different CASTOR components, initially collected by the Distributed Logging Facility (DLF). A web interface has been developed for the visualization of the monitoring data. The different histograms and plots are created using PHP scripts which query the monitoring database.

  9. Hybrid discrete PSO and OPF approach for optimization of biomass fueled micro-scale energy system

    International Nuclear Information System (INIS)

    Gómez-González, M.; López, A.; Jurado, F.

    2013-01-01

    Highlights: ► Method to determine the optimal location and size of biomass power plants. ► The proposed approach is a hybrid of the PSO algorithm and optimal power flow. ► Comparison among the proposed algorithm and other methods. ► Computational costs are much lower than those required for an exhaustive search. - Abstract: This paper addresses generation of electricity in the specific aspect of finding the best location and sizing of biomass fueled gas micro-turbine power plants, taking into account the variables involved in the problem, such as the local distribution of biomass resources, biomass transportation and extraction costs, operation and maintenance costs, power losses costs, network operation costs, and technical constraints. In this paper a hybrid method is introduced employing discrete particle swarm optimization and optimal power flow. The approach can be applied to search for the best sites and capacities at which to connect biomass fueled gas micro-turbine power systems in a distribution network, among a large number of potential combinations and considering the technical constraints of the network. A fair comparison among the proposed algorithm and other methods is performed.

  10. Exploring Optimization Opportunities in Four-Point Suspension Wind Turbine Drivetrains through Integrated Design Approaches

    Energy Technology Data Exchange (ETDEWEB)

    Sethuraman, Latha [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Quick, Julian [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Guo, Yi [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dykes, Katherine L [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-12

    Drivetrain design has significant influence on the costs of wind power generation. Current industry practices usually approach the drivetrain design with loads and system requirements defined by the turbine manufacturer. Several different manufacturers are contracted to supply individual components from the low-speed shaft to the generator - each receiving separate design specifications from the turbine manufacturer. Increasingly, more integrated approaches to turbine design have shown promise for blades and towers. Yet, integrated drivetrain design is a challenging task owing to the complex physical behavior of the important load-bearing components, namely the main bearings, gearbox, and the generator. In this paper we combine two of NREL's systems engineering design tools, DriveSE and GeneratorSE, to enable a comprehensive system-level drivetrain optimization for the IEAWind reference turbine for land-based applications. We compare a more traditional design with integrated approaches employing decoupled and coupled design optimization. It is demonstrated that both approaches have the potential to realize notable mass savings with opportunities to lower the costs of energy.

  11. Global optimization based on noisy evaluations: An empirical study of two statistical approaches

    International Nuclear Information System (INIS)

    Vazquez, Emmanuel; Villemonteix, Julien; Sidorkiewicz, Maryan; Walter, Eric

    2008-01-01

    The optimization of the output of complex computer codes must often be achieved with a small budget of evaluations. Algorithms dedicated to such problems have been developed and compared, such as the Expected Improvement algorithm (EI) or the Informational Approach to Global Optimization (IAGO). However, the influence of noisy evaluation results on the outcome of these comparisons has often been neglected, despite its frequent appearance in industrial problems. In this paper, empirical convergence rates for EI and IAGO are compared when an additive noise corrupts the result of an evaluation. IAGO appears more efficient than EI and various modifications of EI designed to deal with noisy evaluations. Keywords: global optimization; computer simulations; kriging; Gaussian process; noisy evaluations.
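
    For readers unfamiliar with the Expected Improvement criterion compared in this record, a minimal sketch of the EI acquisition function for minimization is shown below. This is the generic textbook formulation, not the authors' implementation; under observation noise, `best` is usually taken as the minimum of the GP posterior mean at sampled points rather than the best noisy observation.

```python
import math

def expected_improvement(mu, sigma, best):
    """EI of a candidate whose GP posterior is N(mu, sigma^2),
    relative to the incumbent value `best` (minimization)."""
    if sigma <= 0.0:
        return 0.0
    z = (best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    # EI = (best - mu) * Phi(z) + sigma * phi(z)
    return (best - mu) * cdf + sigma * pdf
```

    A candidate whose posterior mean equals the incumbent still has positive EI proportional to its posterior standard deviation, which is what drives exploration.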

  12. Radiation ecological monitoring in NPP region

    International Nuclear Information System (INIS)

    Egorov, Yu.A.; Kazakov, S.V.

    1985-01-01

    The known principle of sanitary-hygienic regulation of NPP radiation effects on man and the environment is analyzed. An ecological approach is required to optimize NPP relations with the environment and to regulate radioactivity of the NPP-environment system. The ecological approach envisages the development of standards for permissible concentrations of radioactive and chemical substances (as well as heat) in the natural environment, taking into account their synergism, corresponding to ecologically permissible response reactions of biota to their effects. The ecological approach also comprises the sanitary-hygienic principle of radiation protection of man. Attention is paid to ecological monitoring in the NPP region, comprising consideration of the factors affecting the environment, evaluation of the actual state of the environment, prediction of the environmental state, and evaluation of the expected environmental state.

  13. Online total organic carbon (TOC) monitoring for water and wastewater treatment plants processes and operations optimization

    Science.gov (United States)

    Assmann, Céline; Scott, Amanda; Biller, Dondra

    2017-08-01

    Organic measurements, such as biological oxygen demand (BOD) and chemical oxygen demand (COD), were developed decades ago in order to measure organics in water. Today, these time-consuming measurements are still used as parameters to check the water treatment quality; however, the time required to generate a result, ranging from hours to days, does not allow COD or BOD to be useful process control parameters - see (1) Standard Method 5210 B; 5-day BOD Test, 1997, and (2) ASTM D1252; COD Test, 2012. Online organic carbon monitoring allows for effective process control because results are generated every few minutes. Though it does not replace BOD or COD measurements still required for compliance reporting, it allows for smart, data-driven and rapid decision-making to improve process control and optimization or meet compliance requirements. Thanks to the smart interpretation of generated data and the capability to now take real-time actions, municipal drinking water and wastewater treatment facility operators can positively impact their OPEX (operational expenditure) efficiencies and their capabilities to meet regulatory requirements. This paper describes how three municipal wastewater and drinking water plants gained process insights, and determined optimization opportunities thanks to the implementation of online total organic carbon (TOC) monitoring.

  14. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    Science.gov (United States)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine how the trade-off between the number of images selected within transects and the number of random points scored within images affects estimates of the percent cover of target biota, the typical output of such monitoring programs. We also investigate the effect of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and

  15. Reducing unidentified MOV failures: An innovative approach to thermal overload monitoring

    International Nuclear Information System (INIS)

    Hill, K.; Watson, M.E.; Ali, H.S.; Schlesinger, R.

    1991-01-01

    Historically the failure of motor-operated valves to actuate on demand has caused plant transients, reduced safety system reliability, and lost plant availability. The typical control and indication circuit design uses thermal overload contacts in the control circuit only. This has been recognized as a significant unidentified valve failure mode that may prevent the valve from performing its safety function when required. Different approaches have been evaluated to alert operations personnel to this thermal overload condition, but no cost-effective solution has provided indication of the thermal overload while maintaining valve position indication. Iowa Electric Light and Power Company's Duane Arnold Energy Center (DAEC) is utilizing a nuclear-qualified thermal overload monitor in valve control and indication circuits. This innovative approach has proven economical as no new cabling or indicating devices are required. Indication is provided using existing valve position indicating lights. The monitor is engineered to provide indication of a thermal overload trip as well as continuous indication of valve position, consistent with Regulatory Guide 1.97 and guidance provided by Generic Letter 89-10

  16. Lyapunov matrices approach to the parametric optimization of time-delay systems

    Directory of Open Access Journals (Sweden)

    Duda Józef

    2015-09-01

    Full Text Available In the paper a Lyapunov matrices approach to the parametric optimization problem of time-delay systems with a P-controller is presented. The value of the integral quadratic performance index is equal to the value of the Lyapunov functional for the initial function of the time-delay system. The Lyapunov functional is determined by means of the Lyapunov matrix.

  17. Optimal and Approximate Approaches for Deployment of Heterogeneous Sensing Devices

    Directory of Open Access Journals (Sweden)

    Rabie Ramadan

    2007-04-01

    Full Text Available A modeling framework for the problem of deploying a set of heterogeneous sensors in a field with time-varying differential surveillance requirements is presented. The problem is formulated as a mixed-integer mathematical program with the objective of maximizing coverage of a given field. Two metaheuristics are used to solve this problem. The first heuristic adopts a genetic algorithm (GA) approach while the second heuristic implements a simulated annealing (SA) algorithm. A set of experiments is used to illustrate the capabilities of the developed models and to compare their performance. The experiments investigate the effect of parameters related to the size of the sensor deployment problem, including the number of deployed sensors, the size of the monitored field, and the length of the monitoring horizon. They also examine several endogenous parameters related to the developed GA and SA algorithms.
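
    As an illustration of how a simulated annealing heuristic of the kind described in this record can be applied to coverage-maximizing sensor placement, here is a minimal sketch. The grid field, single-sensor move rule, and geometric cooling schedule are illustrative assumptions, not the paper's formulation.

```python
import math
import random

def coverage(sensors, field, radius):
    """Fraction of grid cells within `radius` of at least one sensor."""
    covered = sum(1 for cell in field
                  if any(math.dist(cell, s) <= radius for s in sensors))
    return covered / len(field)

def anneal(field, k, radius, steps=2000, t0=1.0, cooling=0.995, seed=1):
    """Place k sensors to maximize coverage via simulated annealing."""
    rng = random.Random(seed)
    sensors = rng.sample(field, k)
    cov = coverage(sensors, field, radius)
    best, best_cov, t = list(sensors), cov, t0
    for _ in range(steps):
        cand = list(sensors)
        cand[rng.randrange(k)] = rng.choice(field)   # move one sensor
        c = coverage(cand, field, radius)
        # Metropolis rule: accept improvements, sometimes accept worse moves
        if c >= cov or rng.random() < math.exp((c - cov) / t):
            sensors, cov = cand, c
            if cov > best_cov:
                best, best_cov = list(sensors), cov
        t *= cooling                                 # geometric cooling
    return best, best_cov

field = [(x, y) for x in range(10) for y in range(10)]
placed, cov = anneal(field, k=4, radius=3.0)
```

    The same neighborhood and objective could be reused inside a GA by treating each sensor layout as a chromosome, which is essentially the comparison the record describes.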

  18. Two-Layer Linear MPC Approach Aimed at Walking Beam Billets Reheating Furnace Optimization

    Directory of Open Access Journals (Sweden)

    Silvia Maria Zanoli

    2017-01-01

    Full Text Available In this paper, the problem of the control and optimization of a walking beam billets reheating furnace located in an Italian steel plant is analyzed. An ad hoc Advanced Process Control framework has been developed, based on a two-layer linear Model Predictive Control architecture. This control block optimizes the steady and transient states of the considered process. Two main problems have been addressed. First, in order to manage all process conditions, a tailored module defines the set of process variables to be included in the control problem. In particular, a unified approach for the selection of the control inputs to be used for control objectives related to the process outputs is guaranteed. The impact of the proposed method on the controller formulation is also detailed. Second, an innovative mathematical approach for the handling of stoichiometric-ratio constraints has been proposed, together with their introduction into the controller optimization problems. The designed control system has been installed on a real plant, replacing the operators’ mental model in the conduction of local PID controllers. Two years after the first startup, a strong energy efficiency improvement has been observed.

  19. A Dynamic Intelligent Decision Approach to Dependency Modeling of Project Tasks in Complex Engineering System Optimization

    Directory of Open Access Journals (Sweden)

    Tinggui Chen

    2013-01-01

    Full Text Available Complex engineering system optimization usually involves multiple projects or tasks. On the one hand, dependency modeling among projects or tasks highlights structures in systems and their environments, which can help in understanding the implications of connectivity for different aspects of system performance and also assist in designing, optimizing, and maintaining complex systems. On the other hand, multiple projects or tasks either happen at the same time or are scheduled into a sequence in order to use common resources. In this paper, we propose a dynamic intelligent decision approach to dependency modeling of project tasks in complex engineering system optimization. The approach treats this decision process as a two-stage decision-making problem. In the first stage, a task clustering approach based on modularization is proposed so as to find a suitable decomposition scheme for a large-scale project. In the second stage, according to the decomposition result, a discrete artificial bee colony (ABC) algorithm inspired by the intelligent foraging behavior of honeybees is developed for the resource-constrained multiproject scheduling problem. Finally, a case from the engineering design of a chemical processing system is used to illustrate the proposed approach.

  20. Comparison and cost analysis of drinking water quality monitoring requirements versus practice in seven developing countries.

    Science.gov (United States)

    Crocker, Jonny; Bartram, Jamie

    2014-07-18

    Drinking water quality monitoring programs aim to support provision of safe drinking water by informing water quality management. Little evidence or guidance exists on best monitoring practices for low resource settings. Lack of financial, human, and technological resources reduces a country's ability to monitor water supply. Monitoring activities were characterized in Cambodia, Colombia, India (three states), Jordan, Peru, South Africa, and Uganda according to water sector responsibilities, monitoring approaches, and marginal cost. The seven study countries were selected to represent a range of low resource settings. The focus was on monitoring of microbiological parameters, such as E. coli, coliforms, and H2S-producing microorganisms. Data collection involved qualitative and quantitative methods. Across seven study countries, few distinct approaches to monitoring were observed, and in all but one country all monitoring relied on fixed laboratories for sample analysis. Compliance with monitoring requirements was highest for operational monitoring of large water supplies in urban areas. Sample transport and labor for sample collection and analysis together constitute approximately 75% of marginal costs, which exclude capital costs. There is potential for substantive optimization of monitoring programs by considering field-based testing and by fundamentally reconsidering monitoring approaches for non-piped supplies. This is the first study to look quantitatively at water quality monitoring practices in multiple developing countries.

  1. An Evolutionary Multi-objective Approach for Speed Tuning Optimization with Energy Saving in Railway Management

    OpenAIRE

    Chevrier , Rémy

    2010-01-01

    An approach for speed tuning in railway management is presented for optimizing both travel duration and energy saving. This approach is based on a state-of-the-art evolutionary algorithm with a Pareto approach. This algorithm provides a set of diversified non-dominated solutions to the decision-maker. A case study on the Gonesse connection (France) is also reported and analyzed.

  2. A genetic algorithm approach to optimization for the radiological worker allocation problem

    International Nuclear Information System (INIS)

    Yan Chen; Masakuni Narita; Masashi Tsuji; Sangduk Sa

    1996-01-01

    The worker allocation optimization problem in radiological facilities inevitably involves various types of requirements and constraints relevant to radiological protection and labor management. Some of these goals and constraints are not amenable to rigorous mathematical formulation. Conventional methods for this problem rely heavily on sophisticated algebraic or numerical algorithms, which cause difficulties in the search for optimal solutions in the search space of worker allocation optimization problems. Genetic algorithms (GAs) are stochastic search algorithms introduced by J. Holland in the 1970s, based on ideas and techniques from genetic and evolutionary theories. The most striking characteristic of GAs is the large flexibility allowed in the formulation of the optimization problem and in the process of searching for the optimal solution. In the formulation, it is not necessary to define the optimization problem in rigorous mathematical terms, as required by conventional methods. Furthermore, by designing a model of evolution for the optimal search problem, the optimal solution can be sought efficiently with computationally simple manipulations, without highly complex mathematical algorithms. We reported a GA approach to the worker allocation problem in radiological facilities in a previous study. In that study, two types of hard constraints were employed to reduce the huge search space, where the optimal solution is sought in such a way as to satisfy as many soft constraints as possible. It was demonstrated that the proposed evolutionary method could provide the optimal solution efficiently compared with conventional methods. However, although the employed hard constraints could localize the search space into a very small region, they introduced some complexity into the designed genetic operators and demanded additional computational burdens.
In this paper, we propose a simplified evolutionary model with less restrictive hard constraints and make comparisons between
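
    The flexibility of the GA formulation described above — soft constraints folded into the fitness function as penalty terms rather than expressed as rigorous mathematical constraints — can be sketched as follows. The `dose` matrix and `qualified` sets below are a hypothetical toy instance, not the study's actual model.

```python
import random

def make_ga(dose, qualified, penalty=100.0, pop_size=40, gens=60, seed=0):
    """Evolve task -> worker assignments by a GA.  dose[w][t] is the dose
    worker w would receive on task t; qualified[w] is the set of tasks that
    worker w should perform (a soft constraint, enforced via a penalty)."""
    rng = random.Random(seed)
    n_workers, n_tasks = len(dose), len(dose[0])

    def fitness(chrom):                      # lower is better
        total = sum(dose[w][t] for t, w in enumerate(chrom))
        violations = sum(1 for t, w in enumerate(chrom) if t not in qualified[w])
        return total + penalty * violations  # soft constraints as penalty terms

    pop = [[rng.randrange(n_workers) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = pop[:2]                                    # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)    # select from top half
            cut = rng.randrange(1, n_tasks)
            child = a[:cut] + b[cut:]                    # one-point crossover
            if rng.random() < 0.2:                       # mutation
                child[rng.randrange(n_tasks)] = rng.randrange(n_workers)
            nxt.append(child)
        pop = nxt
    best = min(pop, key=fitness)
    return best, fitness(best)

# Toy instance: three workers, three tasks, worker i cheapest on task i.
dose = [[1, 9, 9], [9, 1, 9], [9, 9, 1]]
qualified = [{0, 1, 2}, {0, 1, 2}, {0, 1, 2}]
best, fit = make_ga(dose, qualified)
```

    Hardening a constraint would mean excluding violating chromosomes from the population entirely, which shrinks the search space but, as the abstract notes, complicates the genetic operators.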

  3. Nontarget approach for environmental monitoring by GC × GC-HRTOFMS in the Tokyo Bay basin.

    Science.gov (United States)

    Zushi, Yasuyuki; Hashimoto, Shunji; Tanabe, Kiyoshi

    2016-08-01

    In this study, we developed an approach for sequential nontarget and target screening for the rapid and efficient analysis of multiple samples in environmental monitoring, using a comprehensive two-dimensional gas chromatograph coupled to a high resolution time-of-flight mass spectrometer (GC × GC-HRTOFMS). A key feature of the approach was the construction of an accurate mass spectral database learned from the sample via nontarget screening. To enhance the detection power of the nontarget screening, a global spectral deconvolution procedure based on non-negative matrix factorization was applied. The approach was applied to the monitoring of rivers in the Tokyo Bay basin. The majority of the compounds detected by the nontarget screening were alkyl chain-based compounds (55%). In the quantitative target screening based on the output from the nontarget screening, particularly high levels of organophosphorus flame retardants (median concentrations of 31, 116 and 141 ng l⁻¹ for TDCPP, TCIPP and TBEP, respectively) were observed among the target compounds. Flame retardants used for household furniture and building materials were detected in river basins dominated by buildings and arterial traffic. The developed GC × GC-HRTOFMS approach was efficient and effective for environmental monitoring and provided valuable new information on various aspects of monitoring in the context of environmental management. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Ratiometric Gas Reporting: A Nondisruptive Approach To Monitor Gene Expression in Soils.

    Science.gov (United States)

    Cheng, Hsiao-Ying; Masiello, Caroline A; Del Valle, Ilenne; Gao, Xiaodong; Bennett, George N; Silberg, Jonathan J

    2018-03-16

    Fluorescent proteins are ubiquitous tools that are used to monitor the dynamic functions of natural and synthetic genetic circuits. However, these visual reporters can only be used in transparent settings, a limitation that complicates nondisruptive measurements of gene expression within many matrices, such as soils and sediments. We describe a new ratiometric gas reporting method for nondisruptively monitoring gene expression within hard-to-image environmental matrices. With this approach, C2H4 is continuously synthesized by ethylene forming enzyme to provide information on viable cell number, and CH3Br is conditionally synthesized by placing a methyl halide transferase gene under the control of a conditional promoter. We show that ratiometric gas reporting enables the creation of Escherichia coli biosensors that report on acylhomoserine lactone (AHL) autoinducers used for quorum sensing by Gram-negative bacteria. Using these biosensors, we find that an agricultural soil decreases the bioavailable concentration of a long-chain AHL up to 100-fold. We also demonstrate that these biosensors can be used in soil to nondisruptively monitor AHLs synthesized by Rhizobium leguminosarum and degraded by Bacillus thuringiensis. Finally, we show that this new reporting approach can be used in Shewanella oneidensis, a bacterium that lives in sediments.

  5. An Information-Theoretic Approach for Indirect Train Traffic Monitoring Using Building Vibration

    Directory of Open Access Journals (Sweden)

    Susu Xu

    2017-05-01

    Full Text Available This paper introduces an indirect train traffic monitoring method to detect and infer real-time train events based on the vibration response of a nearby building. Monitoring and characterizing traffic events are important for cities to improve the efficiency of transportation systems (e.g., train passing, heavy trucks, and traffic). Most prior work falls into two categories: (1) methods that require intensive labor to manually record events or (2) systems that require deployment of dedicated sensors. These approaches are difficult and costly to execute and maintain. In addition, most prior work uses dedicated sensors designed for a single purpose, resulting in deployment of multiple sensor systems. This further increases costs. Meanwhile, with the increasing demands of structural health monitoring, many vibration sensors are being deployed in commercial buildings. Traffic events create ground vibration that propagates to nearby building structures, inducing noisy vibration responses. We present an information-theoretic method for train event monitoring using commonly existing vibration sensors deployed for building health monitoring. The key idea is to represent the wave propagation in a building induced by train traffic as information conveyed in noisy measurement signals. Our technique first uses wavelet analysis to detect train events. Then, by analyzing information exchange patterns of building vibration signals, we infer the category of the events (i.e., southbound or northbound train). Our algorithm is evaluated with an 11-story building where trains pass by frequently. The results show that the method can robustly achieve a train event detection accuracy of up to a 93% true positive rate and an 80% true negative rate. For direction categorization, compared with the traditional signal processing method, our information-theoretic approach reduces categorization error from 32.1 to 12.1%, which is a 2.5× improvement.

  6. Computing Optimal Mixed Strategies for Terrorist Plot Detection Games with the Consideration of Information Leakage

    Directory of Open Access Journals (Sweden)

    Li MingChu

    2017-01-01

    Full Text Available Coordinated terrorist attacks are becoming an increasing threat to Western countries. By monitoring potential terrorists, security agencies are able to detect and destroy terrorist plots at their planning stage. Therefore, an optimal monitoring strategy for the domestic security agency becomes necessary. However, previous studies of monitoring strategy generation fail to consider information leakage due to hackers and insider threats. Such leakage events may lead to failure in watching potential terrorists and destroying plots, and cause a huge risk to public security. This paper makes two major contributions. Firstly, we develop a new Stackelberg game model for the security agency to generate an optimal monitoring strategy with the consideration of information leakage. Secondly, we provide a double-oracle framework, DO-TPDIL, for effective calculation. The experimental results show that our approach can obtain robust strategies against information leakage with high feasibility and efficiency.

  7. An integrated approach of topology optimized design and selective laser melting process for titanium implants materials.

    Science.gov (United States)

    Xiao, Dongming; Yang, Yongqiang; Su, Xubin; Wang, Di; Sun, Jianfeng

    2013-01-01

    Load-bearing bone implant materials should have sufficient stiffness and large porosity; these requirements interact, since larger porosity causes lower mechanical properties. This paper seeks the maximum-stiffness architecture under the constraint of a specific volume fraction by a topology optimization approach; that is, maximum porosity can be achieved with predefined stiffness properties. The effective elastic moduli of conventional cubic and topology-optimized scaffolds were calculated using the finite element analysis (FEA) method; also, specimens with different porosities of 41.1%, 50.3%, 60.2% and 70.7%, respectively, were fabricated by the Selective Laser Melting (SLM) process and were tested by compression test. Results showed that the computational effective elastic modulus of the optimized scaffolds was approximately 13% higher than that of the cubic scaffolds, while the experimental stiffness values were 76% lower than the computational ones. The combination of the topology optimization approach and the SLM process would be suitable for the development of titanium implant materials in consideration of both porosity and mechanical stiffness.

  8. A Mission Planning Approach for Precision Farming Systems Based on Multi-Objective Optimization

    Directory of Open Access Journals (Sweden)

    Zhaoyu Zhai

    2018-06-01

    Full Text Available As the demand for food grows continuously, intelligent agriculture has drawn much attention due to its capability of producing great quantities of food efficiently. The main purpose of intelligent agriculture is to plan agricultural missions properly and use limited resources reasonably with minor human intervention. This paper proposes a Precision Farming System (PFS) as a Multi-Agent System (MAS). Components of the PFS are treated as agents with different functionalities. These agents can form several coalitions to complete complex agricultural missions cooperatively. In the PFS, mission planning should consider several criteria, like expected benefit, energy consumption or equipment loss. Hence, mission planning can be treated as a Multi-objective Optimization Problem (MOP). In order to solve the MOP, an improved algorithm, MP-PSOGA, is proposed, taking advantage of Genetic Algorithms and Particle Swarm Optimization. A simulation, called the precise pesticide spraying mission, is performed to verify the feasibility of the proposed approach. Simulation results illustrate that the proposed approach works properly. This approach enables the PFS to plan missions and allocate scarce resources efficiently. The theoretical analysis and simulation provide a good foundation for future study. Once the proposed approach is applied to a real scenario, it is expected to bring significant economic improvement.
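
    Multi-objective mission planning of this kind rests on Pareto dominance: one plan is preferred only if it is no worse in every objective and strictly better in at least one. Below is a minimal sketch of a non-dominated filter over objective vectors (e.g., energy consumption and equipment loss, both minimized); this is the generic concept, not the MP-PSOGA algorithm itself.

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

    A hybrid like MP-PSOGA would apply such a filter to its population each generation to retain the trade-off surface rather than a single weighted-sum optimum.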

  9. Continuous dynamic assimilation of the inner region data in hydrodynamics modelling: optimization approach

    Directory of Open Access Journals (Sweden)

    F. I. Pisnitchenko

    2008-11-01

    Full Text Available In meteorological and oceanological studies the classical approach for finding the numerical solution of a regional model consists in formulating and solving a Cauchy-Dirichlet problem. The boundary conditions are obtained by linear interpolation of coarse-grid data provided by a global model. Errors in the boundary conditions due to interpolation may cause large deviations from the correct regional solution. The methods developed to reduce these errors rely on continuous dynamic assimilation of known global data available inside the regional domain. One approach to this assimilation procedure performs a nudging of the large-scale components of the regional model solution toward the large-scale global data components by introducing relaxation forcing terms into the regional model equations. As a result, the obtained solution is not a valid numerical solution of the original regional model. Another approach is the use of a four-dimensional variational data assimilation procedure, which is free from the above-mentioned shortcoming. In this work we formulate the joint problem of finding the regional model solution and assimilating the data as a PDE-constrained optimization problem. Three simple model examples (the ODE Burgers equation, the Rossby-Oboukhov equation, and the Korteweg-de Vries equation) are considered in this paper. Numerical experiments indicate that the optimization approach can significantly improve the precision of the regional solution.
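
    The relaxation (nudging) forcing term mentioned in this record can be illustrated on a toy scalar model: the regional state is pulled toward the "global" data at each step, removing the initial-condition error at the price of altering the model equations. All numbers and the model dx/dt = -x are illustrative assumptions, not the paper's test cases.

```python
import math

def nudge_integrate(f, x0, data, dt, lam):
    """Forward-Euler integration of dx/dt = f(x) + lam*(data_k - x):
    the relaxation term pulls the model state toward the large-scale data."""
    x, traj = x0, [x0]
    for d in data:
        x = x + dt * (f(x) + lam * (d - x))
        traj.append(x)
    return traj

# Toy model dx/dt = -x.  The "global data" is the true solution started
# from x(0) = 1; the regional model is started from the wrong value 3.0.
dt, lam = 0.05, 5.0
truth = [math.exp(-dt * k) for k in range(200)]
free = nudge_integrate(lambda x: -x, 3.0, truth, dt, 0.0)     # lam = 0: no nudging
nudged = nudge_integrate(lambda x: -x, 3.0, truth, dt, lam)   # relaxation active
```

    The nudged trajectory tracks the reference closely while the free run retains its initial-condition error, which is exactly why, as the abstract notes, the nudged solution is no longer a valid solution of the original model equations.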

  10. Optimizing nitrogen fertilizer use: Current approaches and simulation models

    International Nuclear Information System (INIS)

    Baethgen, W.E.

    2000-01-01

    Nitrogen (N) is the most common limiting nutrient in agricultural systems throughout the world. Crops need sufficient available N to achieve optimum yields and adequate grain-protein content. Consequently, sub-optimal rates of N fertilizers typically cause lower economic benefits for farmers. On the other hand, excessive N fertilizer use may result in environmental problems such as nitrate contamination of groundwater and emission of N2O and NO. In spite of the economic and environmental importance of good N fertilizer management, the development of optimum fertilizer recommendations is still a major challenge in most agricultural systems. This article reviews the approaches most commonly used for making N recommendations: expected yield level, soil testing and plant analysis (including quick tests). The paper introduces the application of simulation models that complement traditional approaches, and includes some examples of current applications in Africa and South America. (author)

  11. An approach to routine individual internal dose monitoring at the object 'Shelter' personnel considering uncertainties

    International Nuclear Information System (INIS)

    Mel'nichuk, D.V.; Bondarenko, O.O.; Medvedjev, S.Yu.

    2002-01-01

    An approach to the organisation of routine individual internal dose monitoring of the Object 'Shelter' personnel, which takes individualised uncertainties into account, is presented in this work. In this respect, two methods of bioassay-based effective dose assessment are considered: (1) a traditional indirect method, in which the results of workplace monitoring are not taken into account, and (2) a combined method, in which both the results of bioassay measurements and workplace monitoring are considered.

  12. Performance Optimization in Sport: A Psychophysiological Approach

    Directory of Open Access Journals (Sweden)

    Selenia di Fronso

    2017-11-01

    Full Text Available In the last 20 years, there has been growing interest in the study of the theoretical and applied issues surrounding the psychophysiological processes underlying performance. Psychophysiological monitoring, which enables the study of these processes, consists of assessing the activation and functioning level of the organism using a multidimensional approach. In sport, it can be used to attain a better understanding of the processes underlying athletic performance and to improve it. The most frequently used ecological techniques include electromyography (EMG), electrocardiography (ECG), electroencephalography (EEG), and the assessment of electrodermal activity and breathing rhythm. The purpose of this paper is to offer an overview of the use of these techniques in applied interventions in sport and physical exercise and to give athletes, coaches and sport psychology experts new insights for performance improvement.

  13. Delay-Dependent Exponential Optimal Synchronization for Nonidentical Chaotic Systems via Neural-Network-Based Approach

    Directory of Open Access Journals (Sweden)

    Feng-Hsiag Hsiao

    2013-01-01

    Full Text Available A novel approach is presented to realize the optimal exponential synchronization of nonidentical multiple time-delay chaotic (MTDC) systems via a fuzzy control scheme. A neural-network (NN) model is first constructed for the MTDC system. Then, a linear differential inclusion (LDI) state-space representation is established for the dynamics of the NN model. Based on this LDI state-space representation, a delay-dependent exponential stability criterion for the error system, derived in terms of Lyapunov's direct method, is proposed to guarantee that the trajectories of the slave system can approach those of the master system. Subsequently, the stability condition of this criterion is reformulated into a linear matrix inequality (LMI). According to the LMI, a fuzzy controller is synthesized not only to realize the exponential synchronization but also to achieve the optimal performance by minimizing the disturbance attenuation level at the same time. Finally, a numerical example with simulations is given to demonstrate the effectiveness of our approach.

  14. Calibration of radiation monitors at nuclear power plants

    International Nuclear Information System (INIS)

    Boudreau, L.; Miller, A.D.; Naughton, M.D.

    1994-03-01

    This work was performed to provide guidance to utilities in the primary and secondary calibration of the radiation monitoring systems (RMS) installed in nuclear power plants. These systems are installed in nuclear power plants to monitor ongoing processes, identify changing radiation fields, predict and limit personnel radiation exposures, and measure and control discharges of radioactive materials to the environment. RMS are checked and calibrated on a continuing basis to ensure their precision and accuracy. This report discusses various approaches towards primary and secondary calibration of the RMS equipment in light of accepted practices at typical power plants and recent interpretations of regulatory guidance. Detailed calibration techniques and overall system responses, trends, and practices are discussed. Industry, utility, and regulatory sources were contacted to create an overall consensus of the most reasonable approaches to optimizing the performance of this equipment.

  15. An efficient particle swarm approach for mixed-integer programming in reliability-redundancy optimization applications

    International Nuclear Information System (INIS)

    Santos Coelho, Leandro dos

    2009-01-01

    Reliability-redundancy optimization problems can involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, subject to cost, weight, and volume constraints. Many classical mathematical methods have failed in handling the nonconvexities and nonsmoothness of reliability-redundancy optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have received much attention from researchers due to their ability to find almost globally optimal solutions. One of these meta-heuristics is particle swarm optimization (PSO), a population-based heuristic optimization technique inspired by the social behavior of bird flocking and fish schooling. This paper presents an efficient PSO algorithm based on a Gaussian distribution and a chaotic sequence (PSO-GC) to solve reliability-redundancy optimization problems. In this context, two examples of reliability-redundancy design problems are evaluated. Simulation results demonstrate that the proposed PSO-GC is a promising optimization technique. PSO-GC performs well for the two examples of mixed-integer programming in reliability-redundancy applications considered in this paper, and the solutions obtained by PSO-GC are better than the previously best-known solutions available in the recent literature.
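    The two ingredients named in the abstract, Gaussian-distributed acceleration coefficients and a chaotic sequence, can be sketched in a bare-bones PSO. The exact PSO-GC update rule in the paper may differ; here, as an assumption, the chaotic logistic map drives the inertia weight and absolute Gaussian draws replace the usual uniform random coefficients, demonstrated on a simple sphere function rather than a reliability-redundancy model.

    ```python
    # Sketch of a PSO variant with Gaussian coefficients and a chaotic inertia
    # sequence (assumed form; the paper's PSO-GC update may differ in detail).
    import random

    def logistic_map(x):
        # Chaotic logistic map; here it varies the inertia weight each iteration.
        return 4.0 * x * (1.0 - x)

    def pso_gc(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
        rng = random.Random(seed)
        vmax = (hi - lo) / 2.0
        pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [f(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]
        w = rng.random()                       # chaotic inertia seed in (0, 1)
        for _ in range(iters):
            w = logistic_map(w)
            for i in range(n_particles):
                for d in range(dim):
                    # |N(0,1)| draws replace the usual U(0,1) coefficients
                    r1, r2 = abs(rng.gauss(0, 1)), abs(rng.gauss(0, 1))
                    v = (w * vel[i][d]
                         + r1 * (pbest[i][d] - pos[i][d])
                         + r2 * (gbest[d] - pos[i][d]))
                    vel[i][d] = max(-vmax, min(vmax, v))
                    pos[i][d] = max(lo, min(hi, pos[i][d] + vel[i][d]))
                val = f(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    sphere = lambda x: sum(xi * xi for xi in x)
    best, val = pso_gc(sphere, dim=3)
    ```

    A real reliability-redundancy application would replace `sphere` with the (mixed-integer, constrained) system-reliability objective.
    
    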

  16. An efficient particle swarm approach for mixed-integer programming in reliability-redundancy optimization applications

    Energy Technology Data Exchange (ETDEWEB)

    Santos Coelho, Leandro dos [Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Pontifical Catholic University of Parana, PUCPR, Imaculada Conceicao, 1155, 80215-901 Curitiba, Parana (Brazil)], E-mail: leandro.coelho@pucpr.br

    2009-04-15

    Reliability-redundancy optimization problems can involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, subject to cost, weight, and volume constraints. Many classical mathematical methods have failed in handling the nonconvexities and nonsmoothness of reliability-redundancy optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have received much attention from researchers due to their ability to find almost globally optimal solutions. One of these meta-heuristics is particle swarm optimization (PSO), a population-based heuristic optimization technique inspired by the social behavior of bird flocking and fish schooling. This paper presents an efficient PSO algorithm based on a Gaussian distribution and a chaotic sequence (PSO-GC) to solve reliability-redundancy optimization problems. In this context, two examples of reliability-redundancy design problems are evaluated. Simulation results demonstrate that the proposed PSO-GC is a promising optimization technique. PSO-GC performs well for the two examples of mixed-integer programming in reliability-redundancy applications considered in this paper, and the solutions obtained by PSO-GC are better than the previously best-known solutions available in the recent literature.

  17. Proposal for a biological environmental monitoring approach to be used in libraries and archives.

    Science.gov (United States)

    Pasquarella, Cesira; Saccani, Elisa; Sansebastiano, Giuliano Ezio; Ugolotti, Manuela; Pasquariello, Giovanna; Albertini, Roberto

    2012-01-01

    In cultural-heritage-related indoor environments, biological particles represent a hazard not only for cultural property, but also for operators and visitors. Reliable environmental monitoring methods are essential for examining each situation and assessing the effectiveness of preventive measures. We propose an integrated approach to the study of biological pollution in indoor environments such as libraries and archives. This approach includes microbial air and surface sampling, as well as an investigation of allergens and pollens. Part of this monitoring plan has been applied at the Palatina Library in Parma, Italy. However, wider collections of data are needed to fully understand the phenomena related to biological contamination, define reliable contamination threshold values, and implement appropriate preventive measures.

  18. A citizen science approach to monitoring bleaching in the zoantharian Palythoa tuberculosa

    KAUST Repository

    Parkinson, John Everett

    2016-03-28

    Coral reef bleaching events are expected to become more frequent and severe in the near future as climate changes. The zoantharian Palythoa tuberculosa bleaches earlier than many scleractinian corals and may serve as an indicator species. Basic monitoring of such species could help to detect and even anticipate bleaching events, especially in areas where more sophisticated approaches that rely on buoy or satellite measurements of sea surface temperature are unavailable or too coarse. One simple and inexpensive monitoring method involves training volunteers to record observations of host color as a proxy for symbiosis quality. Here, we trained university students to take the ‘color fingerprint’ of a reef by assessing the color of multiple randomly selected colonies of P. tuberculosa at one time point in Okinawa Island, Japan. We tested the reliability of the students’ color scores and whether they matched expectations based on previous monthly monitoring of tagged colonies at the same locations. We also measured three traditional metrics of symbiosis quality for comparison: symbiont morphological condition, cell density, and chlorophyll a content. We found that P. tuberculosa color score, although highly correlated among observers, provided little predictive power for the other variables. This was likely due to inherent variation in colony color among generally healthy zoantharians in midwinter, as well as low sample size and brief training owing to the course structure. Despite certain limitations of P. tuberculosa as a focal organism, the citizen science approach to color monitoring has promise, and we outline steps that could improve similar efforts in the future.

  19. A novel approach for navigational guidance of ships using onboard monitoring systems

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Jensen, Jørgen Juncher

    2011-01-01

    A novel approach and conceptual ideas are outlined for risk-based navigational guidance of ships using decision support systems in combination with onboard, in-service monitoring systems. The guidance has as the main objective to advise on speed and/or course changes; in particular with focus...

  20. Compact approach to long-term monitored retrievable storage of spent fuel

    International Nuclear Information System (INIS)

    Muir, D.W.

    1986-01-01

    We examine a new approach to monitored retrievable storage (MRS) that is extremely compact in terms of total land use and may offer increased security and reduced environmental impact relative to current designs. This approach involves embedding the spent fuel assemblies in monolithic blocks of metallic aluminum. While this would clearly require increased effort in the spent-fuel packaging phase, it would offer in return the above-mentioned environmental advantages, plus the option of easily extending the surface-storage time scale from several years to several decades if a need for longer storage times should arise in the future.

  1. A HyperSpectral Imaging (HSI) approach for bio-digestate real time monitoring

    Science.gov (United States)

    Bonifazi, Giuseppe; Fabbri, Andrea; Serranti, Silvia

    2014-05-01

    One of the key issues in developing Good Agricultural Practices (GAP) is the optimal utilisation of fertilisers and herbicides to reduce the impact of nitrates on soils and the environment. In traditional agricultural practice, these substances were provided to the soils through chemical products (inorganic/organic fertilizers, soil improvers/conditioners, etc.), usually associated with several major environmental problems, such as water pollution and contamination, fertilizer dependency, soil acidification, trace mineral depletion, over-fertilization, high energy consumption, contribution to climate change, impacts on mycorrhizas and lack of long-term sustainability. For this reason, the agricultural market is more and more interested in the utilisation of organic fertilisers and soil improvers. Among organic fertilizers, there is an emerging interest in digestate, a by-product of anaerobic digestion (AD) processes. Several studies confirm the high value of digestate used as an organic fertilizer and soil improver/conditioner. Digestate, in fact, is somewhat similar to compost: AD converts a major part of organic nitrogen to ammonia, which is then directly available to plants as nitrogen. In this paper, new analytical tools based on HyperSpectral Imaging (HSI) sensing devices, and related detection architectures, are presented and discussed in order to define and apply simple-to-use, reliable, robust and low-cost strategies for implementing innovative smart detection engines for digestate characterization and monitoring. This approach aims to utilize this "waste product" as a valuable organic fertilizer and soil conditioner, from a reduced-impact and "ad hoc" soil fertilisation perspective. Furthermore, the possibility of simultaneously using the HSI approach to realize a real-time physical-chemical characterisation of agricultural soils (i.e. nitrogen, phosphorus, etc., detection) could

  2. Many-objective Groundwater Monitoring Network Design Using Bias-Aware Ensemble Kalman Filtering and Evolutionary Optimization

    Science.gov (United States)

    Kollat, J. B.; Reed, P. M.

    2009-12-01

    This study contributes the ASSIST (Adaptive Strategies for Sampling in Space and Time) framework for improving long-term groundwater monitoring decisions across space and time while accounting for the influences of systematic model errors (or predictive bias). The ASSIST framework combines contaminant flow-and-transport modeling, bias-aware ensemble Kalman filtering (EnKF) and many-objective evolutionary optimization. Our goal in this work is to provide decision makers with a fuller understanding of the information tradeoffs they must confront when performing long-term groundwater monitoring network design. Our many-objective analysis considers up to 6 design objectives simultaneously and consequently synthesizes prior monitoring network design methodologies into a single, flexible framework. This study demonstrates the ASSIST framework using a tracer study conducted within a physical aquifer transport experimental tank located at the University of Vermont. The tank tracer experiment was extensively sampled to provide high resolution estimates of tracer plume behavior. The simulation component of the ASSIST framework consists of stochastic ensemble flow-and-transport predictions using ParFlow coupled with the Lagrangian SLIM transport model. The ParFlow and SLIM ensemble predictions are conditioned with tracer observations using a bias-aware EnKF. The EnKF allows decision makers to enhance plume transport predictions in space and time in the presence of uncertain and biased model predictions by conditioning them on uncertain measurement data. In this initial demonstration, the position and frequency of sampling were optimized to: (i) minimize monitoring cost, (ii) maximize information provided to the EnKF, (iii) minimize failure to detect the tracer, (iv) maximize the detection of tracer flux, (v) minimize error in quantifying tracer mass, and (vi) minimize error in quantifying the moment of the tracer plume. The results demonstrate that the many-objective problem
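    The conditioning step at the heart of this framework is the EnKF analysis update. The sketch below shows the standard perturbed-observation update for a directly observed scalar state; it is a minimal, non-bias-aware illustration (the ASSIST framework adds bias terms and couples the filter to 3-D flow-and-transport ensembles, none of which is shown here).

    ```python
    # Minimal sketch of one EnKF analysis step (scalar state, direct observation,
    # perturbed-observation form). Not bias-aware; illustrative only.
    import random

    def enkf_update_scalar(ensemble, y, obs_std, rng):
        n = len(ensemble)
        mean = sum(ensemble) / n
        var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # ensemble variance
        gain = var / (var + obs_std ** 2)                       # Kalman gain
        # each member assimilates an independently perturbed copy of y
        return [x + gain * (y + rng.gauss(0, obs_std) - x) for x in ensemble]

    rng = random.Random(7)
    truth = 4.2
    prior = [rng.gauss(0.0, 3.0) for _ in range(500)]  # wide, poorly centered prior
    y_obs = truth + rng.gauss(0, 0.2)                  # one noisy measurement
    post = enkf_update_scalar(prior, y_obs, 0.2, rng)
    post_mean = sum(post) / len(post)
    ```

    Because the observation is much more precise than the prior spread, the gain is near 1 and the posterior ensemble collapses toward the measurement, which is the mechanism by which monitoring data sharpen plume predictions.
    
    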

  3. A triaxial accelerometer monkey algorithm for optimal sensor placement in structural health monitoring

    Science.gov (United States)

    Jia, Jingqing; Feng, Shuo; Liu, Wei

    2015-06-01

    Optimal sensor placement (OSP) is a vital part of structural health monitoring (SHM). Triaxial accelerometers have been widely used in the SHM of large-scale structures in recent years. Triaxial accelerometers must be placed in such a way that all of the important dynamic information is obtained; at the same time, the sensor configuration must be optimal, so that test resources are conserved. The recommended practice is to select proper degrees of freedom (DOF) based upon several criteria and to place the triaxial accelerometers at the nodes corresponding to these DOFs. This results in non-optimal placement of many accelerometers. A 'triaxial accelerometer monkey algorithm' (TAMA) is presented in this paper to solve OSP problems for triaxial accelerometers. The EFI3 measurement theory is modified and incorporated into the objective function to make it more adaptable to the OSP of triaxial accelerometers. A method of calculating the threshold value based on probability theory is proposed to improve the healthy rate of monkeys during troop generation. Meanwhile, the processes of harmony ladder climb and scanning watch jump are proposed and described in detail. Finally, the method is applied to Xinghai No. 1 Bridge in Dalian to demonstrate the effectiveness of TAMA. The final results obtained by TAMA are compared with those of the original monkey algorithm and EFI3 measurement, showing that TAMA improves computational efficiency and yields a better sensor configuration.
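    The EFI family of criteria that TAMA builds on can be illustrated with the classical Effective Independence method: iteratively delete the candidate DOF whose diagonal entry of the projection matrix phi (phi^T phi)^-1 phi^T contributes least to the linear independence of the target mode shapes. The sketch below is that baseline for two mode shapes (so the 2x2 inverse can be written out by hand); it is not the triaxial EFI3/TAMA variant of the paper.

    ```python
    # Baseline Effective Independence (EFI) sensor placement sketch for two
    # target mode shapes (not the paper's triaxial EFI3/TAMA extension).
    import random

    def efi_placement(phi, n_sensors):
        rows = list(range(len(phi)))
        while len(rows) > n_sensors:
            # A = phi^T phi over the remaining candidate rows (2x2 here)
            a = sum(phi[r][0] * phi[r][0] for r in rows)
            b = sum(phi[r][0] * phi[r][1] for r in rows)
            c = sum(phi[r][1] * phi[r][1] for r in rows)
            det = a * c - b * b

            def efi_value(r):
                # EFI value of DOF r: phi_r A^-1 phi_r^T (projection diagonal)
                p, q = phi[r]
                return (c * p * p - 2.0 * b * p * q + a * q * q) / det

            rows.remove(min(rows, key=efi_value))   # drop least informative DOF
        return sorted(rows)

    rng = random.Random(3)
    phi = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(20)]  # 20 DOFs, 2 modes
    sensors = efi_placement(phi, n_sensors=6)
    ```

    TAMA's contribution is to search triaxial (grouped) placements with a monkey-algorithm metaheuristic instead of this greedy row deletion.
    
    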

  4. Integrating environmental monitoring with cumulative effects management and decision making.

    Science.gov (United States)

    Cronmiller, Joshua G; Noble, Bram F

    2018-05-01

    Cumulative effects (CE) monitoring is foundational to emerging regional and watershed CE management frameworks, yet monitoring is often poorly integrated with CE management and decision-making processes. The challenges are largely institutional and organizational, more so than scientific or technical. Calls for improved integration of monitoring with CE management and decision making are not new, but there has been limited research on how best to integrate environmental monitoring programs to ensure credible CE science and to deliver results that respond to the more immediate questions and needs of regulatory decision makers. This paper examines options for the integration of environmental monitoring with CE frameworks. Based on semistructured interviews with practitioners, regulators, and other experts in the Lower Athabasca, Alberta, Canada, 3 approaches to monitoring system design are presented. First, a distributed monitoring system, reflecting the current approach in the Lower Athabasca, where monitoring is delegated to different external programs and organizations; second, a 1-window system in which monitoring is undertaken by a single, in-house agency for the purpose of informing management and regulatory decision making; third, an independent system driven primarily by CE science and understanding causal relationships, with knowledge adopted for decision support where relevant to specific management questions. The strengths and limitations of each approach are presented. A hybrid approach may be optimal: an independent, nongovernment, 1-window model for CE science, monitoring, and information delivery that capitalizes on the strengths of distributed, 1-window, and independent monitoring systems while mitigating their weaknesses. If governments are committed to solving CE problems, they must invest in the long-term science needed to do so; at the same time, if science-based monitoring programs are to be sustainable over the long term, they must be responsive to

  5. On-line near-infrared spectroscopy optimizing and monitoring biotransformation process of γ-aminobutyric acid

    Directory of Open Access Journals (Sweden)

    Guoyu Ding

    2016-06-01

    Full Text Available Near-infrared spectroscopy (NIRS), with its fast and nondestructive advantages, is well suited to real-time quantitative analysis. This paper demonstrates that NIRS combined with partial least squares (PLS) regression can be used as a rapid analytical method to simultaneously quantify l-glutamic acid (l-Glu) and γ-aminobutyric acid (GABA) in a biotransformation process and to guide the optimization of production conditions when the merits of NIRS are combined with response surface methodology. The high performance liquid chromatography (HPLC) reference analysis was performed by o-phthaldialdehyde pre-column derivatization. NIRS measurements of two batches of 141 samples were first analyzed by PLS with several spectral pre-processing methods. Compared with those of the HPLC reference analysis, the resulting determination coefficient (R2), root mean square error of prediction (RMSEP) and residual predictive deviation (RPD) of the external validation for the l-Glu concentration were 99.5%, 1.62 g/L, and 11.3, respectively. For the GABA concentration, R2, RMSEP, and RPD were 99.8%, 4.00 g/L, and 16.4, respectively. This NIRS model was then used to optimize the biotransformation process through a Box-Behnken experimental design. Under the optimal conditions without pH adjustment, 200 g/L l-Glu could be converted by 7148 U/L glutamate decarboxylase (GAD) to GABA, reaching 99% conversion at the fifth hour. NIRS analysis provided timely information on the conversion from l-Glu to GABA. The results suggest that the NIRS model can not only be used for routine profiling of enzymatic conversion, providing a simple and effective method of monitoring the biotransformation process of GABA, but can also be considered an optimal tool to guide the optimization of production conditions.
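    The three external-validation figures reported above (R2, RMSEP, RPD) are simple functions of the reference values and the model predictions. As a sketch, with made-up numbers rather than the paper's spectra:

    ```python
    # External-validation metrics for a calibration model such as NIRS/PLS:
    # R^2, root mean square error of prediction (RMSEP), and residual
    # predictive deviation (RPD = SD of reference values / RMSEP).
    import math

    def validation_metrics(y_true, y_pred):
        n = len(y_true)
        mean = sum(y_true) / n
        ss_tot = sum((y - mean) ** 2 for y in y_true)
        ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
        r2 = 1.0 - ss_res / ss_tot
        rmsep = math.sqrt(ss_res / n)
        sd = math.sqrt(ss_tot / (n - 1))
        return r2, rmsep, sd / rmsep

    # illustrative reference vs. predicted concentrations (not the paper's data)
    r2, rmsep, rpd = validation_metrics([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])
    ```

    A high RPD (the paper reports 11.3 and 16.4) means the prediction error is small relative to the natural spread of the reference values, which is what qualifies the model for routine process monitoring.
    
    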

  6. Non-intrusive long-term monitoring approaches

    International Nuclear Information System (INIS)

    Smathers, D.; Mangan, D.

    1998-01-01

    In order to promote international confidence that the US and Russia are disarming per their commitments under Article 6 of the Non-Proliferation Treaty, an international verification regime may be applied to US and Russian excess fissile materials. Initially, it is envisioned that this verification regime would be applied at storage facilities; however, it should be anticipated that the verification regime would continue throughout any material disposition activities, should such activities be pursued. Once the materials are accepted into the verification regime, it is assumed that long-term monitoring will be used to maintain continuity of knowledge. The requirements for long-term storage monitoring include unattended operation for extended periods of time, minimal intrusiveness on the host nation's safety and security activities, data collection incorporating data authentication, and monitoring redundancy to allow resolution of anomalies and to continue coverage in the event of equipment failures. Additional requirements include effective data review and analysis processes, operation during storage facility loading, procedures for removal of inventory items for safety-related surveillance, and low-cost, reliable equipment. A monitoring system might include both continuous monitoring of storage containers and continuous area monitoring. These would be complemented with periodic on-site inspections. A fissile material storage facility is not a static operation: initial studies have shown there are a number of valid reasons why a host nation may need to remove material from the storage facility. A practical monitoring system must be able to accommodate necessary material movements.

  7. Optimized Field Sampling and Monitoring of Airborne Hazardous Transport Plumes; A Geostatistical Simulation Approach

    International Nuclear Information System (INIS)

    Chen, DI-WEN

    2001-01-01

    Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations from plume-ground deposition of constituents. Unfortunately, measurements often are days post-incident and rely on hazardous manned air-vehicle measurements. Before this happens, computational plume migration models are the only source of information on the plume characteristics, constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry light-weight detectors and weather instrumentation to measure the conditions during and after plume release. This requires a cooperative, computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives in an emergency situation in restricted time frames. A critical step before an optimum and cost-effective field sampling and monitoring program proceeds is the collection of data that provides statistically significant information, collected in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and source distribution functions. This report describes the application of stochastic or geostatistical simulations to delineate the plume for guiding subsequent sampling and monitoring designs. A case study is presented of building digital plume images, based on existing "hard" experimental data and "soft" preliminary transport modeling results from the Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian/geostatistical methodology, quantitatively combines soft information

  8. Reliable fault detection and diagnosis of photovoltaic systems based on statistical monitoring approaches

    KAUST Repository

    Harrou, Fouzi; Sun, Ying; Taghezouit, Bilal; Saidi, Ahmed; Hamlati, Mohamed-Elkarim

    2017-01-01

    This study reports the development of an innovative fault detection and diagnosis scheme to monitor the direct current (DC) side of photovoltaic (PV) systems. Towards this end, we propose a statistical approach that exploits the advantages of one

  9. GSM BASED IRRIGATION CONTROL AND MONITORING SYSTEM

    OpenAIRE

    GODFREY A. MILLS; STEPHEN K. ARMOO; AGYEMAN K. ROCKSON; ROBERT A. SOWAH; MOSES A. ACQUAH

    2013-01-01

    Irrigated agriculture is one of the primary water consumers in most parts of the world. With developments in technology, efforts are being channeled into automation of irrigation systems to facilitate remote control of the irrigation system and optimize crop production and cost effectiveness. This paper describes an on-going work on GSM based irrigation monitoring and control systems. The objective of the work is to provide an approach that helps farmers to easily access, manage and regulate ...

  10. Optimization of minoxidil microemulsions using fractional factorial design approach.

    Science.gov (United States)

    Jaipakdee, Napaphak; Limpongsa, Ekapol; Pongjanyakul, Thaned

    2016-01-01

    The objective of this study was to apply fractional factorial and multi-response optimization designs using the desirability function approach for developing topical microemulsions. Minoxidil (MX) was used as a model drug. Limonene was used as the oil phase. Based on solubility, Tween 20 and caprylocaproyl polyoxyl-8 glycerides were selected as surfactants, and propylene glycol and ethanol were selected as co-solvents in the aqueous phase. Experiments were performed according to a two-level fractional factorial design to evaluate the effects of the independent variables Tween 20 concentration in the surfactant system (X1), surfactant concentration (X2), ethanol concentration in the co-solvent system (X3) and limonene concentration (X4) on MX solubility (Y1), permeation flux (Y2), lag time (Y3) and deposition (Y4) of MX microemulsions. It was found that Y1 increased with increasing X3 and decreasing X2 and X4, whereas Y2 increased with decreasing X1, X2 and increasing X3. While Y3 was not affected by these variables, Y4 increased with decreasing X1 and X2. Three regression equations were obtained for the predicted values of responses Y1, Y2 and Y4. The predicted values matched experimental values reasonably well, with high determination coefficients. Using the optimal desirability function, the optimized microemulsion demonstrating the highest MX solubility, permeation flux and skin deposition was confirmed as a low level of X1, X2 and X4 but a high level of X3.
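    The desirability function approach used here combines several responses into one score: each response is mapped to a desirability in [0, 1] and the per-response values are merged by geometric mean. The sketch below shows the one-sided larger-is-better Derringer form; the response names and target ranges are illustrative assumptions, not values from the paper.

    ```python
    # Derringer desirability sketch: larger-is-better responses combined by
    # geometric mean (target ranges below are illustrative, not the paper's).
    def desirability_larger_is_better(y, y_min, y_max, s=1.0):
        # 0 below y_min, 1 above y_max, power ramp in between
        if y <= y_min:
            return 0.0
        if y >= y_max:
            return 1.0
        return ((y - y_min) / (y_max - y_min)) ** s

    def overall_desirability(ds):
        prod = 1.0
        for di in ds:
            prod *= di
        return prod ** (1.0 / len(ds))

    # e.g. three responses (solubility, flux, deposition), all larger-is-better
    responses = [2.5, 8.0, 40.0]
    ranges = [(1.0, 5.0), (2.0, 10.0), (10.0, 50.0)]
    d = [desirability_larger_is_better(v, lo, hi)
         for v, (lo, hi) in zip(responses, ranges)]
    D = overall_desirability(d)
    ```

    The optimizer then searches the factor space (X1..X4) for the setting that maximizes D; the geometric mean ensures that one completely undesirable response (d = 0) zeroes out the overall score.
    
    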

  11. Energy optimization and prediction of complex petrochemical industries using an improved artificial neural network approach integrating data envelopment analysis

    International Nuclear Information System (INIS)

    Han, Yong-Ming; Geng, Zhi-Qiang; Zhu, Qun-Xiong

    2016-01-01

    Graphical abstract: This paper proposes the energy optimization and prediction of complex petrochemical industries based on a DEA-integrated ANN approach (DEA-ANN). The proposed approach utilizes the DEA model with slack variables for sensitivity analysis to determine the effective decision making units (DMUs) and indicate the optimized direction of the ineffective DMUs. Compared with the traditional ANN approach, the DEA-ANN prediction model is effectively verified by executing a linear comparison between all DMUs and the effective DMUs using the standard data source from the UCI (University of California at Irvine) repository. Finally, the proposed model is validated through an application in a complex ethylene production system of the Chinese petrochemical industry. Meanwhile, the optimization result and the prediction value are obtained to reduce energy consumption of the ethylene production system, guide ethylene production and improve energy efficiency. - Highlights: • The DEA-integrated ANN approach is proposed. • The DEA-ANN prediction model is effectively verified using the standard data source from the UCI repository. • The energy optimization and prediction framework for complex petrochemical industries based on the proposed method is obtained. • The proposed method is valid and efficient for improving energy efficiency in complex petrochemical plants. - Abstract: Since complex petrochemical data are multi-dimensional, uncertain and noisy, it is difficult to accurately optimize and predict the energy usage of complex petrochemical systems. Therefore, this paper proposes a data envelopment analysis (DEA) integrated artificial neural network (ANN) approach (DEA-ANN). The proposed approach utilizes the DEA model with slack variables for sensitivity analysis to determine the effective decision making units (DMUs) and indicate the optimized direction of the ineffective DMUs. Compared with the traditional ANN approach, the DEA

  12. Industrial Environmental Monitoring — A Land Restoration Costs Tracking Tool

    International Nuclear Information System (INIS)

    Iskakov, M.; Nurgaziyev, M.; Eleyushov, B.; Kayukov, P.

    2014-01-01

    This paper describes a procedure in use in Kazakhstan for controlling the rehabilitation of sites damaged by subsurface operations. It sets out the legal requirements and a methodology for production environmental control in which a procedure is established for monitoring and impact assessment and for optimizing remediation approaches, taking into account the environmental impact and the associated costs of different options. (author)

  13. Outage optimization - the US experience and approach

    International Nuclear Information System (INIS)

    LaPlatney, J.

    2007-01-01

    Sustainable development of nuclear energy depends heavily on excellent performance of the existing fleet, which in turn depends heavily on the performance of planned outages. Some reactor fleets, for example in Finland and Germany, have demonstrated sustained good outage performance from their start of commercial operation. Others, such as the US, have improved performance over time. The principles behind a successful outage optimization process are: duration is not the sole measure of outage success; outage work must be performed safely; scope selection must focus on improving plant material condition to improve reliability; all approved outage work must be completed; work must be done cost effectively; post-outage plant reliability is a key measure of outage success; and outage lessons learned must be effectively implemented to achieve continuous improvement. This approach has proven its superiority over simple outage shortening, and has yielded good results in the US fleet over the past 15 years.

  14. A multi-disciplinary approach for the structural monitoring of Cultural Heritages in a seismic area

    Science.gov (United States)

    Fabrizia Buongiorno, Maria; Musacchio, Massimo; Guerra, Ignazio; Porco, Giacinto; Stramondo, Salvatore; Casula, Giuseppe; Caserta, Arrigo; Speranza, Fabio; Doumaz, Fawzi; Giovanna Bianchi, Maria; Luzi, Guido; Ilaria Pannaccione Apa, Maria; Montuori, Antonio; Gaudiosi, Iolanda; Vecchio, Antonio; Gervasi, Anna; Bonali, Elena; Romano, Dolores; Falcone, Sergio; La Piana, Carmelo

    2014-05-01

    In recent years, the concepts of seismic risk vulnerability and structural health monitoring have become very important topics in both structural and civil engineering for the identification of appropriate risk indicators and risk assessment methodologies in Cultural Heritage monitoring. The latter, which includes objects, buildings and sites with historical, architectural and/or engineering relevance, concerns the management, preservation and maintenance of the heritages within their surrounding environmental context, in response to climate change and natural hazards (e.g. seismic, volcanic, landslide and flooding hazards). Within such a framework, the complexity and the great number of variables to be considered require a multi-disciplinary approach including strategies, methodologies and tools able to provide effective monitoring of Cultural Heritages from both scientific and operational viewpoints. Based on this rationale, in this study an advanced, technological and operationally-oriented approach is presented and tested, which enables measuring and monitoring the Cultural Heritage conservation state and the geophysical/geological setting of the area, in order to mitigate the seismic risk of historical public goods at different spatial scales. The integration of classical geophysical methods with new emerging sensing techniques enables multi-depth, multi-resolution and multi-scale monitoring in both space and time. An integrated system of methodologies, instrumentation and data-processing approaches for non-destructive Cultural Heritage investigations is proposed, which concerns, in detail, the analysis of seismogenetic sources, the geological-geotechnical setting of the area and site seismic effects evaluation, proximal remote sensing techniques (e.g. terrestrial laser scanner, ground-based radar systems, thermal cameras), high-resolution aerial and satellite-based remote sensing methodologies (e.g. aeromagnetic surveys

  15. Monitoring of large-scale federated data storage: XRootD and beyond

    International Nuclear Information System (INIS)

    Andreeva, J; Beche, A; Arias, D Diguez; Giordano, D; Saiz, P; Tuckett, D; Belov, S; Oleynik, D; Petrosyan, A; Tadel, M; Vukotic, I

    2014-01-01

    The computing models of the LHC experiments are gradually moving from hierarchical data models with centrally managed data pre-placement towards federated storage, which provides seamless access to data files independently of their location and dramatically improves recovery thanks to fail-over mechanisms. Construction of the data federations and understanding the impact of the new approach to data management on user analysis require complete and detailed monitoring. Monitoring functionality should cover the status of all components of the federated storage, measure data traffic and data access performance, and be able to detect any kind of inefficiency and provide hints for resource optimization and an effective data distribution policy. Data mining of the collected monitoring data provides deep insight into new usage patterns. In the WLCG context, several federations are currently based on the XRootD technology. This paper focuses on monitoring for the ATLAS and CMS XRootD federations implemented in the Experiment Dashboard monitoring framework. Both federations consist of many dozens of sites accessed by many hundreds of clients, and they continue to grow in size. Handling of the monitoring flow generated by these systems has to be well optimized in order to achieve the required performance. Furthermore, this paper demonstrates that the XRootD monitoring architecture is sufficiently generic to be easily adapted for other technologies, such as HTTP/WebDAV dynamic federations.

  16. Optimal extraction of petroleum resources: an empirical approach

    International Nuclear Information System (INIS)

    Helmi-Oskoui, B.; Narayanan, R.; Glover, T.; Lyon, K.S.; Sinha, M.

    1992-01-01

    Petroleum reservoir behaviour at different levels of reservoir pressure is estimated from actual well data and reservoir characteristics. Using the pressure at the bottom of producing wells as the control variable, the time paths of profit-maximizing joint production of oil and natural gas under various tax policies are obtained using a dynamic optimization approach. The results emerge from the numerical solution of the maximization of estimated future expected revenues net of variable costs in the presence of taxation. A higher discount rate shifts production forward in time and prolongs the production plan. The analysis of state taxes, corporate income taxes and the depletion allowance reveals the changes in the revenues to the firm, the state and the federal government. 18 refs., 3 figs., 4 tabs

  17. CHARACTERIZATION AND MONITORING OF NATURAL ATTENUATION OF CHLORINATED SOLVENTS IN GROUNDWATER: A SYSTEMS APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Looney, B; Michael Heitkamp, M; Gary Wein (NOEMAIL), G; Christopher Bagwell, C; Karen Vangelas, K; Karen-M Adams, K; Tyler Gilmore; Norman Cutshall; David Major; Mike Truex; Todd Wiedemeier; Francis H. Chapelle; Tom Early; Jody Waugh; David Peterson; Mark Ankeny; Claire H. Sink

    2006-08-10

    The objective of this document is to examine the use of a phased approach to characterizing and monitoring (C&M) natural attenuation and enhanced attenuation processes and to identify promising tools and techniques by which to accomplish the C&M. We investigate developing techniques, such as molecular-based assessment tools, as well as existing tools that traditionally have not been used for monitoring the performance of environmental remediation technologies. Case studies are used to provide examples of how non-traditional methods are being employed as characterization and monitoring tools to support monitored natural attenuation (MNA) and enhanced attenuation (EA). The document is not focused on a specific group of readers but rather is broadly directed, with the intent that readers may gain information useful to their purposes. Thus, regulators may see some future characterization and monitoring techniques; end users may find novel ways to make MNA or EA more effective or efficient at their site; and researchers may identify new areas for development or new and better combinations of existing methods. One consequence of this broad approach is that some readers may find certain sections either too rudimentary or too advanced for their needs. Hopefully, all will be able to use at least some of the document.

  18. Methodological approaches for monitoring opportunistic pathogens in premise plumbing: A review.

    Science.gov (United States)

    Wang, Hong; Bédard, Emilie; Prévost, Michèle; Camper, Anne K; Hill, Vincent R; Pruden, Amy

    2017-06-15

    Opportunistic premise (i.e., building) plumbing pathogens (OPPPs, e.g., Legionella pneumophila, Mycobacterium avium complex, Pseudomonas aeruginosa, Acanthamoeba, and Naegleria fowleri) are a significant and growing source of disease. Because OPPPs establish and grow as part of the native drinking water microbiota, they do not correspond to fecal indicators, presenting a major challenge to standard drinking water monitoring practices. Further, different OPPPs present distinct requirements for sampling, preservation, and analysis, creating an impediment to their parallel detection. The aim of this critical review is to evaluate the state of the science of monitoring OPPPs and identify a path forward for their parallel detection and quantification in a manner commensurate with the need for reliable data that is informative to risk assessment and mitigation. Water and biofilm sampling procedures, as well as factors influencing sample representativeness and detection sensitivity, are critically evaluated with respect to the five representative bacterial and amoebal OPPPs noted above. Available culturing and molecular approaches are discussed in terms of their advantages, limitations, and applicability. Knowledge gaps and research needs towards standardized approaches are identified. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. A chaotic quantum-behaved particle swarm approach applied to optimization of heat exchangers

    International Nuclear Information System (INIS)

    Mariani, Viviana Cocco; Klassen Duck, Anderson Rodrigo; Guerra, Fabio Alessandro; Santos Coelho, Leandro dos; Rao, Ravipudi Venkata

    2012-01-01

    The particle swarm optimization (PSO) method is a population-based optimization technique from the field of swarm intelligence in which each solution, called a “particle”, flies around in a multidimensional problem search space. During the flight, every particle adjusts its position according to its own experience as well as the experience of neighboring particles, using the best position encountered by itself and its neighbors. In this paper, a new quantum particle swarm optimization (QPSO) approach combined with Zaslavskii chaotic map sequences (QPSOZ) is applied to shell-and-tube heat exchanger optimization, based on cost minimization from an economic viewpoint. The results obtained for two case studies using the proposed QPSOZ approach are compared with those obtained using a genetic algorithm, PSO and classical QPSO, with QPSOZ showing the best performance. To verify the capability of the proposed method, the two case studies show that significant cost reductions are feasible with respect to traditionally designed exchangers. Referring to the literature test cases, reductions in capital investment of up to 20% and 6% were obtained for the first and second cases, respectively. In addition, the annual pumping cost decreased markedly, by 72% and 75%, with an overall decrease in total cost of up to 30% and 27% for cases 1 and 2, respectively, showing the improvement potential of the proposed method, QPSOZ. - Highlights: ► Shell and tube heat exchanger is minimized from economic view point. ► A new quantum particle swarm optimization (QPSO) combined with Zaslavskii chaotic map sequences (QPSOZ) is proposed. ► Reduction of capital investment up to 20% and 6% for the first and second cases was obtained. ► Annual pumping cost decreased 72% and 75%, with an overall decrease of total cost up to 30% and 27% using QPSOZ.
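    As an illustration of the technique described above, the sketch below pairs a standard QPSO update (mean-best attractor with a contraction-expansion coefficient) with a Zaslavskii chaotic map as the source of pseudo-random numbers. The map parameters, the fixed beta, and the sphere objective standing in for the heat-exchanger cost model are illustrative assumptions, not values from the paper.

```python
import math

def zaslavskii(x=0.1, y=0.1, nu=400.0 / 3.0, r=3.0, a=12.6695):
    """Zaslavskii chaotic map: an endless deterministic sequence in [0, 1)
    used here in place of uniform random numbers (parameters illustrative)."""
    while True:
        y = math.cos(2.0 * math.pi * x) + math.exp(-r) * y
        x = (x + nu + a * y) % 1.0
        yield x

def sphere(pos):
    # Hypothetical stand-in for the heat-exchanger total-cost objective.
    return sum(v * v for v in pos)

def qpso_z(obj, dim=2, n_particles=30, iters=200, lo=-10.0, hi=10.0, beta=0.75):
    rnd = zaslavskii()
    nxt = lambda: next(rnd)
    swarm = [[lo + (hi - lo) * nxt() for _ in range(dim)]
             for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    pval = [obj(p) for p in pbest]
    g = min(range(n_particles), key=pval.__getitem__)
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        # "mbest": mean of personal bests, the swarm-level attractor in QPSO
        mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
        for i, pos in enumerate(swarm):
            for d in range(dim):
                phi = nxt()
                attract = phi * pbest[i][d] + (1.0 - phi) * gbest[d]
                u = max(nxt(), 1e-12)
                step = beta * abs(mbest[d] - pos[d]) * math.log(1.0 / u)
                pos[d] = attract + step if nxt() < 0.5 else attract - step
                pos[d] = min(max(pos[d], lo), hi)
            v = obj(pos)
            if v < pval[i]:
                pbest[i], pval[i] = pos[:], v
                if v < gval:
                    gbest, gval = pos[:], v
    return gbest, gval

best, best_val = qpso_z(sphere)
```

    Replacing the uniform random draws with the chaotic sequence is the only change relative to plain QPSO; the ergodicity of the map is what the paper credits for avoiding premature convergence.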

  20. An approach to criteria, design limits and monitoring in nuclear fuel waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, G R; Baumgartner, P; Bird, G A; Davison, C C; Johnson, L H; Tamm, J A

    1994-12-01

    The Nuclear Fuel Waste Management Program has been established to develop and demonstrate the technology for safe geological disposal of nuclear fuel waste. One objective of the program is to show that a disposal system (i.e., a disposal centre and associated transportation system) can be designed and that it would be safe. Therefore the disposal system must be shown to comply with safety requirements specified in guidelines, standards, codes and regulations. The components of the disposal system must also be shown to operate within the limits specified in their design. Compliance and performance of the disposal system would be assessed on a site-specific basis by comparing estimates of the anticipated performance of the system and its components with compliance or performance criteria. A monitoring program would be developed to consider the effects of the disposal system on the environment and would include three types of monitoring: baseline monitoring, compliance monitoring, and performance monitoring. This report presents an approach to establishing compliance and performance criteria, limits for use in disposal system component design, and the main elements of a monitoring program for a nuclear fuel waste disposal system. (author). 70 refs., 9 tabs., 13 figs.

  1. An approach to criteria, design limits and monitoring in nuclear fuel waste disposal

    International Nuclear Information System (INIS)

    Simmons, G.R.; Baumgartner, P.; Bird, G.A.; Davison, C.C.; Johnson, L.H.; Tamm, J.A.

    1994-12-01

    The Nuclear Fuel Waste Management Program has been established to develop and demonstrate the technology for safe geological disposal of nuclear fuel waste. One objective of the program is to show that a disposal system (i.e., a disposal centre and associated transportation system) can be designed and that it would be safe. Therefore the disposal system must be shown to comply with safety requirements specified in guidelines, standards, codes and regulations. The components of the disposal system must also be shown to operate within the limits specified in their design. Compliance and performance of the disposal system would be assessed on a site-specific basis by comparing estimates of the anticipated performance of the system and its components with compliance or performance criteria. A monitoring program would be developed to consider the effects of the disposal system on the environment and would include three types of monitoring: baseline monitoring, compliance monitoring, and performance monitoring. This report presents an approach to establishing compliance and performance criteria, limits for use in disposal system component design, and the main elements of a monitoring program for a nuclear fuel waste disposal system. (author). 70 refs., 9 tabs., 13 figs

  2. Swarm intelligence-based approach for optimal design of CMOS differential amplifier and comparator circuit using a hybrid salp swarm algorithm

    Science.gov (United States)

    Asaithambi, Sasikumar; Rajappa, Muthaiah

    2018-05-01

    In this paper, an automatic design method based on a swarm intelligence approach for CMOS analog integrated circuit (IC) design is presented. The hybrid meta-heuristic optimization technique, namely the salp swarm algorithm (SSA), is applied to the optimal sizing of a CMOS differential amplifier and a comparator circuit. SSA is a nature-inspired optimization algorithm which mimics the navigating and hunting behavior of salps. The hybrid SSA is applied to optimize the circuit design parameters and to minimize the MOS transistor sizes. The proposed swarm intelligence approach was successfully implemented for the automatic design and optimization of CMOS analog ICs using Generic Process Design Kit (GPDK) 180 nm technology. The circuit design parameters and design specifications are validated through a SPICE (Simulation Program with Integrated Circuit Emphasis) simulator. To investigate the efficiency of the proposed approach, comparisons have been carried out with other simulation-based circuit design methods. The performance of hybrid SSA-based CMOS analog IC designs is better than that of previously reported studies.
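    The core SSA update described above can be sketched as follows: leaders explore around the best "food" position with an exponentially decaying coefficient, while followers average with the salp ahead of them. The sphere objective is a hypothetical stand-in for a circuit sizing cost, and the half-leader split is one common variant, not necessarily the paper's exact configuration.

```python
import math
import random

def sphere(x):
    # Illustrative stand-in for a circuit sizing cost function.
    return sum(v * v for v in x)

def salp_swarm(obj, dim=2, n_salps=30, iters=200, lb=-10.0, ub=10.0, seed=1):
    rng = random.Random(seed)
    salps = [[lb + (ub - lb) * rng.random() for _ in range(dim)]
             for _ in range(n_salps)]
    food = min(salps, key=obj)[:]              # best solution found so far
    food_val = obj(food)
    for l in range(1, iters + 1):
        c1 = 2.0 * math.exp(-(4.0 * l / iters) ** 2)   # exploration decay
        for i in range(n_salps):
            if i < n_salps // 2:               # leaders move around the food
                for d in range(dim):
                    c2, c3 = rng.random(), rng.random()
                    step = c1 * ((ub - lb) * c2 + lb)
                    salps[i][d] = food[d] + step if c3 >= 0.5 else food[d] - step
            else:                              # followers track the salp ahead
                salps[i] = [(a + b) / 2.0
                            for a, b in zip(salps[i], salps[i - 1])]
            salps[i] = [min(max(v, lb), ub) for v in salps[i]]
            v = obj(salps[i])
            if v < food_val:
                food, food_val = salps[i][:], v
    return food, food_val

best, best_val = salp_swarm(sphere)
```

    The decaying c1 coefficient is what shifts the chain from exploration early on to fine refinement around the best design late in the run.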

  3. A modular approach to large-scale design optimization of aerospace systems

    Science.gov (United States)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft

  4. Using 50 years of soil radiocarbon data to identify optimal approaches for estimating soil carbon residence times

    Energy Technology Data Exchange (ETDEWEB)

    Baisden, W.T., E-mail: t.baisden@gns.cri.nz [National Isotope Centre, GNS Science, P.O. Box 31312, Lower Hutt (New Zealand); Canessa, S. [National Isotope Centre, GNS Science, P.O. Box 31312, Lower Hutt (New Zealand)

    2013-01-15

    In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of {sup 14}C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of {approx}500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of {sup 14}C to determine residence times, by estimating the amount of 'bomb {sup 14}C' incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point {sup 14}C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C ('passive fraction'), (2) the lag time between photosynthesis and C entering the modelled pool, (3) changes in the rates of C input. When carrying out approaches using robust assumptions on time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.
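    The two-time-point idea above can be illustrated with a minimal one-pool model: soil fraction modern relaxes toward the atmospheric value at turnover rate k, and k is recovered from two measurements a decade apart by bisection. The atmospheric curve, the observation values, and the neglect of radioactive decay over a few decades are simplifying assumptions for the sketch, not the paper's calibration.

```python
def simulate(f0, k, f_atm):
    """One-pool model: yearly Euler steps of dF/dt = k (F_atm - F)."""
    f = f0
    for fa in f_atm:
        f += k * (fa - f)
    return f

def fit_turnover(obs1, obs2, f_atm, lo=1e-4, hi=1.0):
    # While F_atm exceeds F_s, the simulated F_s(t2) increases
    # monotonically with k, so a simple bisection recovers k.
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if simulate(obs1, mid, f_atm) < obs2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical declining post-bomb atmospheric curve over the 10 years
# between the two soil samplings.
f_atm = [1.8 - 0.05 * t for t in range(10)]
k_true = 0.05
obs1 = 1.02                                  # first soil measurement
obs2 = simulate(obs1, k_true, f_atm)         # second, 10 years later
k_est = fit_turnover(obs1, obs2, f_atm)
residence_time = 1.0 / k_est                 # mean residence time in years
```

    With real data, obs2 would come from an archived sample rather than a forward simulation, and the atmospheric record would be the measured bomb curve; the monotonicity argument that makes the two-point inversion well posed is the same.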

  5. Pathogen Treatment Guidance and Monitoring Approaches for On-Site Non-Potable Water Reuse

    Science.gov (United States)

    On-site non-potable water reuse is increasingly used to augment water supplies, but traditional fecal indicator approaches for defining and monitoring exposure risks are limited when applied to these decentralized options. This session emphasizes risk-based modeling to define pat...

  6. Teaching operating room conflict management to surgeons: clarifying the optimal approach.

    Science.gov (United States)

    Rogers, David; Lingard, Lorelei; Boehler, Margaret L; Espin, Sherry; Klingensmith, Mary; Mellinger, John D; Schindler, Nancy

    2011-09-01

    Conflict management has been identified as an essential competence for surgeons as they work in operating room (OR) teams; however, the optimal approach is unclear. Social science research offers two alternatives, the first of which recommends that task-related conflict be managed using problem-solving techniques while avoiding relationship conflict. The other advocates active management of relationship conflict, as it almost always accompanies task-related conflict. Clarity about the optimal management strategy can be gained through a better understanding of conflict transformation, or the inter-relationship between conflict types, in this specific setting. The purpose of this study was to evaluate conflict transformation in OR teams in order to clarify the approach most appropriate for an educational conflict management programme for surgeons. A constructivist grounded theory approach was adopted to explore the phenomenon of OR team conflict. Narratives were collected from focus groups of OR nurses and surgeons at five participating centres. A subset of these narratives involved transformation between and within conflict types. This dataset was analysed. The results confirm that misattribution and the use of harsh language cause conflict transformation in OR teams just as they do in stable work teams. Negative emotionality was found to make a substantial contribution to responses to and consequences of conflict, notably in the swiftness with which individuals terminated their working relationships. These findings contribute to a theory of conflict transformation in the OR team. There are a number of behaviours that activate conflict transformation in the OR team, and a conflict management education programme should include a description of and alternatives to these behaviours. The types of conflict are tightly interwoven in this setting and thus the most appropriate management strategy is one that assumes that both types of conflict will exist and

  7. An optimization approach for extracting and encoding consistent maps in a shape collection

    KAUST Repository

    Huang, Qi-Xing

    2012-11-01

    We introduce a novel approach for computing high-quality point-to-point maps among a collection of related shapes. The proposed approach takes as input a sparse set of imperfect initial maps between pairs of shapes and builds a compact data structure which implicitly encodes an improved set of maps between all pairs of shapes. These maps align well with point correspondences selected from initial maps; they map neighboring points to neighboring points; and they provide cycle-consistency, so that map compositions along cycles approximate the identity map. The proposed approach is motivated by the fact that a complete set of maps between all pairs of shapes that admits nearly perfect cycle-consistency is highly redundant and can be represented by compositions of maps through a single base shape. In general, multiple base shapes are needed to adequately cover a diverse collection. Our algorithm sequentially extracts such a small collection of base shapes and creates correspondences from each of these base shapes to all other shapes. These correspondences are found by global optimization on candidate correspondences obtained by diffusing initial maps. These are then used to create a compact graphical data structure from which globally optimal cycle-consistent maps can be extracted using simple graph algorithms. Experimental results on benchmark datasets show that the proposed approach yields significantly better results than state-of-the-art data-driven shape matching methods. © 2012 ACM.
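    Cycle-consistency, the key property above, can be checked directly when point-to-point maps are stored as index arrays: compositions around a cycle should approximate the identity. A toy check on three hypothetical shapes with permutation maps (sizes and corruption rate are assumptions for illustration):

```python
import numpy as np

def cycle_score(ab, bc, ca):
    """Fraction of points mapped back to themselves when composing the
    point-to-point maps A->B, B->C and C->A around the cycle."""
    n = len(ab)
    return float(np.mean(ca[bc[ab]] == np.arange(n)))

rng = np.random.default_rng(0)
n = 100
ab = rng.permutation(n)                 # hypothetical map shape A -> shape B
bc = rng.permutation(n)                 # map B -> C
ca = np.empty(n, dtype=int)
ca[bc[ab]] = np.arange(n)               # choose C -> A to invert bc∘ab exactly

noisy_ab = ab.copy()                    # corrupt a third of correspondences
idx = rng.choice(n, size=n // 3, replace=False)
noisy_ab[idx] = rng.integers(0, n, size=n // 3)

score_consistent = cycle_score(ab, bc, ca)    # 1.0 by construction
score_noisy = cycle_score(noisy_ab, bc, ca)   # strictly lower
```

    Scores like this, aggregated over many cycles, are what an optimization over candidate correspondences can drive toward 1.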

  8. Context-Aware Local Optimization of Sensor Network Deployment

    Directory of Open Access Journals (Sweden)

    Meysam Argany

    2015-07-01

    Full Text Available Wireless sensor networks are increasingly used for tracking and monitoring dynamic phenomena in urban and natural areas. Spatial coverage is an important issue in sensor networks in order to fulfill the needs of sensing applications. Optimization methods are widely used to efficiently distribute sensor nodes in the network to achieve a desired level of coverage. Most of the existing algorithms do not consider the characteristics of the real environment in the optimization process. In this paper, we propose the integration of contextual information in optimization algorithms to improve sensor network coverage. First, we investigate the implication of contextual information in sensor networks. Then, a conceptual framework for local context-aware sensor network deployment optimization method is introduced and related algorithms are presented in detail. Finally, several experiments are carried out to evaluate the validity of the proposed method. The results obtained from these experiments show the effectiveness of our approach in different contextual situations.
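    A minimal version of coverage-driven deployment with contextual information might look like the greedy sketch below: a context mask (here a hypothetical obstacle) removes cells without sensing demand, and sensors are placed one at a time at the candidate site covering the most still-uncovered points. This is only a baseline illustration, not the paper's context-aware algorithm.

```python
from itertools import product

RADIUS2 = 4.0   # squared sensing radius (radius = 2 grid cells, an assumption)

def covers(site, point):
    dx, dy = site[0] - point[0], site[1] - point[1]
    return dx * dx + dy * dy <= RADIUS2

def greedy_deploy(candidates, demand, n_sensors):
    """Place sensors one at a time at the candidate covering the most
    still-uncovered demand points (classic greedy set-cover heuristic)."""
    chosen, uncovered = [], set(demand)
    for _ in range(n_sensors):
        best = max(candidates,
                   key=lambda s: sum(covers(s, p) for p in uncovered))
        chosen.append(best)
        uncovered -= {p for p in uncovered if covers(best, p)}
    return chosen, 1.0 - len(uncovered) / len(demand)

grid = list(product(range(10), range(10)))
# Contextual information: a hypothetical obstacle removes cells with no
# sensing demand, so coverage is only evaluated where it matters.
obstacle = {(x, y) for (x, y) in grid if 3 <= x <= 5 and 3 <= y <= 5}
demand = [p for p in grid if p not in obstacle]
candidates = list(product(range(0, 10, 2), repeat=2))

_, cov1 = greedy_deploy(candidates, demand, 1)
sensors, cov6 = greedy_deploy(candidates, demand, 6)
```

    Richer context (terrain, occlusion, sensor heterogeneity) enters the same way: by reshaping the demand set and the `covers` predicate rather than the optimization loop itself.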

  9. Numerical Optimization Design of Dynamic Quantizer via Matrix Uncertainty Approach

    Directory of Open Access Journals (Sweden)

    Kenji Sawada

    2013-01-01

    Full Text Available In networked control systems, continuous-valued signals are compressed to discrete-valued signals via quantizers and then transmitted/received through communication channels. Such quantization often degrades the control performance; a quantizer must therefore be designed that minimizes the difference between the system output before and after the quantizer is inserted. In terms of the broadbandization and the robustness of networked control systems, we consider the continuous-time quantizer design problem. In particular, this paper describes a numerical optimization method for a continuous-time dynamic quantizer that takes the switching speed into account. Using a matrix uncertainty approach from sampled-data control, we clarify that both the temporal and spatial resolution constraints can be considered in analysis and synthesis simultaneously. Finally, for slow switching, we compare the proposed and existing methods through numerical examples. From the examples, a new insight is presented for the two-step design of the existing continuous-time optimal quantizer.
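    The idea that a quantizer with internal dynamics can shrink the before/after output difference can be illustrated, in a far simpler discrete-time setting than the paper's continuous-time design, by first-order error feedback: the quantizer state carries the past quantization error so that the low-frequency content of the output tracks the input. Everything below is an illustrative toy, not the paper's method.

```python
import numpy as np

def static_quantize(u):
    return np.round(u)                    # memoryless unit-step rounding

def dynamic_quantize(u):
    """First-order error feedback: the state e stores the previous
    quantization error and pre-compensates the next sample, so the
    error telescopes instead of accumulating (noise shaping)."""
    out = np.empty_like(u)
    e = 0.0
    for k, uk in enumerate(u):
        v = uk + e                        # pre-compensate with stored error
        out[k] = np.round(v)
        e = v - out[k]                    # bounded quantization error state
    return out

u = np.full(300, 0.3)                     # slowly varying (here constant) input
err_static = abs(static_quantize(u).mean() - 0.3)    # rounds to 0 every sample
err_dynamic = abs(dynamic_quantize(u).mean() - 0.3)  # telescopes to <= 0.5/300
```

    The static quantizer loses the input entirely, while the dynamic one reproduces it on average; the paper's contribution is designing such dynamics optimally under temporal (switching-speed) and spatial resolution constraints.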

  10. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

    Full Text Available The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations, regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose to use the low-cost, globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
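    The quadratic-minimization view of CG described above can be made concrete: for symmetric positive definite A, CG minimizes f(x) = ½xᵀAx − bᵀx, whose unique minimizer solves Ax = b. The sketch below implements only this classical case; the paper's non-quadratic convex function and its SPG extension are not reproduced here.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Minimize f(x) = 0.5 x^T A x - b^T x for symmetric positive
    definite A; the unique minimizer satisfies A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x                    # residual = negative gradient of f
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # exact line search along direction p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # next A-conjugate search direction
        rs = rs_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)      # symmetric positive definite by construction
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
```

    When A is indefinite, this f is no longer bounded below, which is exactly the gap the paper's non-quadratic convex function is designed to fill.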

  11. Gradient heat flux measurement as monitoring method for the diesel engine

    Science.gov (United States)

    Sapozhnikov, S. Z.; Mityakov, V. Yu; Mityakov, A. V.; Vintsarevich, A. V.; Pavlov, A. V.; Nalyotov, I. D.

    2017-11-01

    The use of gradient heat flux measurement for monitoring the heat flux on the combustion chamber surface and optimizing the diesel work process is proposed. Heterogeneous gradient heat flux sensors can be used at various regimes for an appreciable length of time. Fuel injection timing is set by the position of the maximum point on the angular heat flux diagram; the value of the heat flux itself, however, need not be considered. The development of such an approach can be productive for remote monitoring of the work process in the cylinders of high-power marine engines.

  12. Comparison and Cost Analysis of Drinking Water Quality Monitoring Requirements versus Practice in Seven Developing Countries

    Directory of Open Access Journals (Sweden)

    Jonny Crocker

    2014-07-01

    Full Text Available Drinking water quality monitoring programs aim to support provision of safe drinking water by informing water quality management. Little evidence or guidance exists on best monitoring practices for low resource settings. Lack of financial, human, and technological resources reduce a country’s ability to monitor water supply. Monitoring activities were characterized in Cambodia, Colombia, India (three states, Jordan, Peru, South Africa, and Uganda according to water sector responsibilities, monitoring approaches, and marginal cost. The seven study countries were selected to represent a range of low resource settings. The focus was on monitoring of microbiological parameters, such as E. coli, coliforms, and H2S-producing microorganisms. Data collection involved qualitative and quantitative methods. Across seven study countries, few distinct approaches to monitoring were observed, and in all but one country all monitoring relied on fixed laboratories for sample analysis. Compliance with monitoring requirements was highest for operational monitoring of large water supplies in urban areas. Sample transport and labor for sample collection and analysis together constitute approximately 75% of marginal costs, which exclude capital costs. There is potential for substantive optimization of monitoring programs by considering field-based testing and by fundamentally reconsidering monitoring approaches for non-piped supplies. This is the first study to look quantitatively at water quality monitoring practices in multiple developing countries.

  13. Fault detection and isolation in GPS receiver autonomous integrity monitoring based on chaos particle swarm optimization-particle filter algorithm

    Science.gov (United States)

    Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao

    2018-03-01

    The receiver autonomous integrity monitoring (RAIM) is one of the most important parts of an avionic navigation system. Two problems with the standard particle filter (PF) need to be addressed to improve such a system: the degeneracy phenomenon, and sample impoverishment, in which the number of samples cannot adequately express the real distribution of the probability density function. This study presents a GPS RAIM method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped to the interval of the optimization variables to improve particle quality. This chaos perturbation overcomes the potential for the search to become trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed novel method is better than that of RAIM based on the PF or PSO-PF algorithm.
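    The consistency check between a main estimate and leave-one-out auxiliary estimates can be reduced to a toy single-fault example: redundant 1-D "pseudoranges" of the same quantity, a residual test against a threshold, and blame assigned to the channel whose exclusion best restores consistency. The threshold, noise values, and 1-D geometry are illustrative assumptions standing in for the paper's PF-based statistics.

```python
import numpy as np

def raim_detect(measurements, sigma=1.0):
    """Toy single-fault RAIM on redundant 1-D 'pseudoranges' of the same
    quantity.  Declare a fault when the full-set residual sum of squares
    is improbably large, then blame the channel whose exclusion best
    restores consistency (a likelihood-ratio style subset comparison)."""
    m = np.asarray(measurements, dtype=float)
    n = len(m)
    threshold = 3.0 * sigma ** 2 * (n - 1)       # loose chi-square-like bound
    rss = lambda v: float(np.sum((v - v.mean()) ** 2))
    if rss(m) <= threshold:
        return None                              # measurements are consistent
    rss_without = [rss(np.delete(m, i)) for i in range(n)]
    return int(np.argmin(rss_without))           # exclusion that fits best

noise = np.array([0.1, -0.2, 0.05, 0.15, -0.1, 0.0])   # fixed small errors
clean = 100.0 + noise
faulty = clean.copy()
faulty[2] += 50.0                                # inject a bias on channel 2
```

    In the paper, the same comparison happens between the main PF state estimate and the auxiliary PFs that each exclude one satellite, with the log likelihood ratio playing the role of the residual statistic.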

  14. Tapping and listening: a new approach to bolt looseness monitoring

    Science.gov (United States)

    Kong, Qingzhao; Zhu, Junxiao; Ho, Siu Chun Michael; Song, Gangbing

    2018-07-01

    Bolted joints are among the most common building blocks used across different types of structures, and are often the key components that sew all other structural parts together. Monitoring and assessment of looseness in bolted structures is one of the most attractive topics in mechanical, aerospace, and civil engineering. This paper presents a new percussion-based non-destructive approach to determine the health condition of bolted joints with the help of machine learning. The proposed method is very similar to the percussive diagnostic techniques used in clinical examinations to diagnose the health of patients. Due to the different interfacial properties among the bolts, nuts and the host structure, bolted joints generate unique sounds when excited by impacts, such as from tapping. Power spectral density, as a signal feature, was used to recognize and classify recorded tapping data. A machine learning model using the decision tree method was employed to identify the bolt looseness level. Experiments demonstrated that the newly proposed method for bolt looseness detection is very easy to implement by ‘listening to tapping’ and the monitoring accuracy is very high. With rapid advances in robotics, the proposed approach has great potential to be implemented by intimately weaving together robotics and machine learning to produce a cyber-physical system that can automatically inspect and determine the health of a structure.
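    A toy version of the tapping pipeline (synthetic tap signal → PSD feature → depth-1 decision tree) can be sketched with NumPy alone. The class frequencies, decay rate, noise level, and the stump learner are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def psd_peak_hz(signal, fs):
    """Dominant frequency of the periodogram, a minimal PSD feature."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(power[1:]) + 1]        # ignore the DC bin

def make_tap(freq, rng, fs=4000, n=1024):
    """Synthetic decaying 'tap' response plus measurement noise."""
    t = np.arange(n) / fs
    return (np.exp(-20.0 * t) * np.sin(2.0 * np.pi * freq * t)
            + 0.05 * rng.standard_normal(n))

rng = np.random.default_rng(0)
FS = 4000
# Hypothetical classes: tight joints ring near 800 Hz, loose near 200 Hz.
features = [psd_peak_hz(make_tap(800 + 10 * rng.standard_normal(), rng), FS)
            for _ in range(20)]
features += [psd_peak_hz(make_tap(200 + 10 * rng.standard_normal(), rng), FS)
             for _ in range(20)]
labels = [1] * 20 + [0] * 20                      # 1 = tight, 0 = loose

# A depth-1 decision tree (stump): choose the split threshold that best
# separates the single PSD feature on the training data.
def stump_accuracy(threshold):
    return float(np.mean([(f > threshold) == bool(l)
                          for f, l in zip(features, labels)]))

best_threshold = max(sorted(features), key=stump_accuracy)
train_accuracy = stump_accuracy(best_threshold)
```

    A real deployment would use a full decision tree over many PSD bins rather than a single peak-frequency stump, but the feature-then-tree structure is the same.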

  15. A new approach for structural health monitoring by applying anomaly detection on strain sensor data

    Science.gov (United States)

    Trichias, Konstantinos; Pijpers, Richard; Meeuwissen, Erik

    2014-03-01

    Structural Health Monitoring (SHM) systems help to monitor critical infrastructures (bridges, tunnels, etc.) remotely and provide up-to-date information about their physical condition. In addition, SHM helps to predict a structure's life and required maintenance in a cost-efficient way. Typically, inspection data give insight into structural health. The global structural behavior, and predominantly the structural loading, is generally measured with vibration and strain sensors. Acoustic emission sensors are more and more used for measuring global crack activity near critical locations. In this paper, we present a procedure for local structural health monitoring by applying Anomaly Detection (AD) on strain sensor data for sensors applied in the expected crack path. Sensor data are analyzed by automatic anomaly detection in order to find crack activity at an early stage. This approach targets the monitoring of critical structural locations, such as welds, near which strain sensors can be applied during construction, and/or locations with limited inspection possibilities during structural operation. We investigate several anomaly detection techniques to detect changes in statistical properties indicating structural degradation. The most effective one is a novel polynomial fitting technique, which tracks slow changes in sensor data. Our approach has been tested on a representative test structure (bridge deck) in a lab environment, under constant and variable amplitude fatigue loading. In both cases, the evolving cracks at the monitored locations were successfully detected, autonomously, by our AD monitoring tool.
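    The polynomial-fitting idea above can be sketched as a sliding-window degree-1 fit whose slope is compared against baseline statistics. The window size, threshold multiplier, and synthetic strain drift below are assumptions for illustration, not the paper's tuned values.

```python
import numpy as np

def window_slope(window):
    """Slope of a degree-1 polynomial fit: tracks slow changes."""
    return np.polyfit(np.arange(len(window)), window, 1)[0]

def detect_onset(signal, win=100, n_baseline=4, k=6.0):
    """Flag the first window whose fitted slope deviates from the
    baseline slope statistics by more than k standard deviations."""
    slopes = [window_slope(signal[i:i + win])
              for i in range(0, len(signal) - win + 1, win)]
    mu = float(np.mean(slopes[:n_baseline]))
    sd = float(np.std(slopes[:n_baseline])) + 1e-12
    for j, s in enumerate(slopes[n_baseline:], start=n_baseline):
        if abs(s - mu) > k * sd:
            return j * win                # sample index of detected change
    return None

rng = np.random.default_rng(0)
strain = 0.02 * rng.standard_normal(1200)      # healthy baseline signal
strain[600:] += np.linspace(0.0, 2.0, 600)     # slow drift: simulated crack
onset = detect_onset(strain)
```

    Because the slope of a linear fit averages out high-frequency noise, the detector responds to slow degradation trends rather than to individual noisy samples, which matches the paper's motivation for preferring polynomial fitting over raw-threshold checks.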

  16. Exploring Optimization Opportunities in Four-Point Suspension Wind Turbine Drivetrains Through Integrated Design Approaches: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Sethuraman, Latha [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Quick, Julian [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Guo, Yi [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dykes, Katherine L [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-02-13

    Drivetrain design has significant influence on the costs of wind power generation. Current industry practices usually approach the drivetrain design with loads and system requirements defined by the turbine manufacturer. Several different manufacturers are contracted to supply individual components from the low-speed shaft to the generator - each receiving separate design specifications from the turbine manufacturer. Increasingly, more integrated approaches to turbine design have shown promise for blades and towers. Yet, integrated drivetrain design is a challenging task owing to the complex physical behavior of the important load-bearing components, namely the main bearings, gearbox, and the generator. In this paper we combine two of NREL's systems engineering design tools, DriveSE and GeneratorSE, to enable a comprehensive system-level drivetrain optimization for the IEAWind reference turbine for land-based applications. We compare a more traditional design with integrated approaches employing decoupled and coupled design optimization. It is demonstrated that both approaches have the potential to realize notable mass savings with opportunities to lower the costs of energy.

  17. Optimal Implementation of Prescription Drug Monitoring Programs in the Emergency Department

    Directory of Open Access Journals (Sweden)

    Garrett DePalma

    2018-02-01

    Full Text Available The opioid epidemic is the most significant modern-day public health crisis. Physicians and lawmakers have developed methods and practices to curb opioid use. This article describes one method, prescription drug monitoring programs (PDMPs), through the lens of how to optimize their use in emergency departments (EDs). EDs have rapidly become a central location to combat opioid abuse and drug diversion. PDMPs can provide emergency physicians with comprehensive prescribing information to improve clinical decisions around opioids. However, PDMPs vary tremendously in their accessibility and usability in the ED, which limits their effectiveness at the point of care. Problems are complicated by varying state-to-state requirements for data availability and accessibility. Several potential solutions to improving the utility of PDMPs in EDs include integrating PDMPs with electronic health records, implementing unsolicited reporting and prescription context, improving PDMP accessibility and data analytics, and expanding the scope of PDMPs. These improvements may help improve clinical decision-making for emergency physicians through better data, data presentation, and accessibility.

  18. A robust approach to optimal matched filter design in ultrasonic non-destructive evaluation (NDE)

    Science.gov (United States)

    Li, Minghui; Hayward, Gordon

    2017-02-01

    The matched filter was demonstrated to be a powerful yet efficient technique to enhance defect detection and imaging in ultrasonic non-destructive evaluation (NDE) of coarse grain materials, provided that the filter is properly designed and optimized. In the literature, the design utilized the real excitation signals in order to accurately approximate the defect echoes, which made it time consuming and less straightforward to implement in practice. In this paper, we present a more robust and flexible approach to optimal matched filter design using simulated excitation signals, in which the control parameters are chosen and optimized based on the actual array transducer, the transmitter-receiver system response, and the test sample. As a result, the filter response is optimized for the material characteristics. Experiments on industrial samples are conducted and the results confirm the great benefits of the method.
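
    A minimal matched-filter sketch, assuming a simulated excitation pulse (the `pulse` samples, noise level, and echo offset below are invented for illustration): correlating the received trace with the pulse concentrates the echo energy into a sharp peak whose position locates the reflector.

```python
import random

def matched_filter(signal, template):
    """Correlate the received trace with the reference pulse (the matched
    filter under white noise); output index i scores a pulse starting at i."""
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

# Hypothetical simulated excitation pulse, buried in grain-like noise at offset 40.
random.seed(0)
pulse = [1.0, 2.0, 3.0, 2.0, 1.0, -1.0, -2.0, -1.0]
trace = [0.1 * (random.random() - 0.5) for _ in range(100)]
for j, p in enumerate(pulse):
    trace[40 + j] += p

out = matched_filter(trace, pulse)
print(max(range(len(out)), key=out.__getitem__))  # 40: the peak locates the echo
```
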

  19. Direct approach for bioprocess optimization in a continuous flat-bed photobioreactor system.

    Science.gov (United States)

    Kwon, Jong-Hee; Rögner, Matthias; Rexroth, Sascha

    2012-11-30

    Application of photosynthetic micro-organisms, such as cyanobacteria and green algae, for carbon-neutral energy production raises the need for cost-efficient photobiological processes. Optimization of these processes requires permanent control of many independent and mutually dependent parameters, for which a continuous cultivation approach has significant advantages. As central factors such as the cell density can be kept constant by turbidostatic control, light intensity and iron content, both of which strongly influence photosynthetic productivity, can be optimized. Here we introduce an engineered low-cost 5 L flat-plate photobioreactor in combination with a simple and efficient optimization procedure for continuous photo-cultivation of microalgae. Based on direct determination of the growth rate at constant cell densities and the continuous measurement of O₂ evolution, stress conditions and their effect on the photosynthetic productivity can be directly observed. Copyright © 2012 Elsevier B.V. All rights reserved.
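
    The link between turbidostatic control and direct growth-rate determination can be illustrated with a toy simulation (all rates, setpoints, and step sizes below are hypothetical): at steady state, the dilution needed to hold the density setpoint equals the specific growth rate.

```python
import math

def simulate_turbidostat(mu, setpoint, dt=0.01, steps=5000):
    """Toy turbidostat: the culture grows exponentially at rate mu, and a
    controller dilutes back to the density setpoint whenever it is exceeded.
    At steady state the measured dilution rate equals the growth rate, which
    is why growth can be read out directly at constant cell density."""
    x = setpoint
    diluted = 0.0
    for _ in range(steps):
        x *= math.exp(mu * dt)                 # growth over one time step
        if x > setpoint:
            diluted += math.log(x / setpoint)  # fraction removed, in log units
            x = setpoint
    return diluted / (steps * dt)              # estimated specific growth rate

print(round(simulate_turbidostat(mu=0.5, setpoint=1.0), 3))  # 0.5
```
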

  20. An Optimal Control Approach for an Overall Cryogenic Plant Under Pulsed Heat Loads

    CERN Document Server

    Gómez Palacin, Luis; Blanco Viñuela, Enrique; Maekawa, Ryuji; Chalifour, Michel

    2015-01-01

    This work deals with the optimal management of a cryogenic plant composed of parallel refrigeration plants, which provide supercritical helium to pulsed heat loads. First, a data reconciliation approach is proposed to precisely estimate the refrigerator variables necessary to deduce the efficiency of each refrigerator. Second, taking these efficiencies into account, an optimal operation of the system is proposed and studied. Finally, while minimizing the power consumption of the refrigerators, the control system maintains stable operation of the cryoplant under pulsed heat loads. The management of the refrigerators is carried out by an upper control layer, which balances the relative production of cooling power in each refrigerator. In addition, this upper control layer deals with the mitigation of malfunctions and faults in the system. The proposed approach has been validated using a dynamic model of the cryoplant developed with EcosimPro software, based on first principles (mass and energy balances) and the...

  1. A QoS Optimization Approach in Cognitive Body Area Networks for Healthcare Applications.

    Science.gov (United States)

    Ahmed, Tauseef; Le Moullec, Yannick

    2017-04-06

    Wireless body area networks are increasingly featuring cognitive capabilities. This work deals with the emerging concept of cognitive body area networks. In particular, the paper addresses two important issues, namely spectrum sharing and interference. We propose methods for channel and power allocation. The former builds upon a reinforcement learning mechanism, whereas the latter is based on convex optimization. Furthermore, we also propose a mathematical channel model for off-body communication links in line with the IEEE 802.15.6 standard. Simulation results for a nursing home scenario show that the proposed approach yields the best performance in terms of throughput and QoS for dynamic environments. For example, in a highly demanding scenario our approach can provide throughput up to 7 Mbps while satisfying the throughput QoS requirement 97.2% of the time on average. Simulation results also show that the power optimization algorithm enables reducing transmission power by approximately 4.5 dBm, thereby significantly reducing interference.
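
    The abstract does not detail the reinforcement-learning mechanism; a minimal stand-in is an epsilon-greedy learner that balances exploring channels against exploiting the one with the best observed throughput. The channel count, success rates, and episode length below are invented.

```python
import random

def choose_channel(history, n_channels, eps=0.2):
    """One step of an epsilon-greedy learner (a simple stand-in for the
    paper's reinforcement-learning channel allocation): usually exploit the
    channel with the best average observed throughput, sometimes explore."""
    if random.random() < eps or not any(history):
        return random.randrange(n_channels)
    avg = [sum(h) / len(h) if h else 0.0 for h in history]
    return max(range(n_channels), key=avg.__getitem__)

random.seed(42)
history = [[] for _ in range(3)]
quality = [0.2, 0.9, 0.5]        # hypothetical per-channel success rates
for _ in range(2000):
    ch = choose_channel(history, 3)
    history[ch].append(1.0 if random.random() < quality[ch] else 0.0)

print(max(range(3), key=lambda c: len(history[c])))  # the learner concentrates on the best channel
```
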

  2. A dynamic approach for the optimal electricity dispatch in the deregulated market

    International Nuclear Information System (INIS)

    Carraretto, Cristian; Lazzaretto, Andrea

    2004-01-01

    The electricity market has been experiencing the deregulation process in many countries. Effective approaches to the management of single power plants or groups of plants are therefore becoming crucial for the competitiveness of energy utilities. A dynamic programming approach is presented in this paper for the optimal plant management in the new Italian deregulated market. A thorough description of the method is given in cases of free or fixed production over time (e.g. when the overall production is limited by bilateral contracts or cogeneration). Analysis of market characteristics, detailed thermodynamic models of plant operation and reliable price forecasts over the time period of interest are required. The suggested approach is useful for both long-term scheduling and planning daily offers in the market
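
    A dynamic-programming dispatch sketch under toy assumptions (three hours, three output levels, a linear cost, and a ramp limit standing in for the plant's thermodynamic constraints; all numbers invented) shows how price forecasts drive the optimal schedule.

```python
def optimal_dispatch(prices, levels, cost, ramp=1):
    """Backward dynamic program: pick an output level each hour to maximize
    revenue minus cost, with a ramp limit (in level steps) between hours."""
    T, S = len(prices), len(levels)
    value = [[0.0] * S for _ in range(T + 1)]
    choice = [[0] * S for _ in range(T)]
    for t in range(T - 1, -1, -1):
        for i in range(S):
            stage = prices[t] * levels[i] - cost(levels[i])
            j_best = max((j for j in range(S) if abs(j - i) <= ramp),
                         key=lambda j: value[t + 1][j])
            choice[t][i] = j_best
            value[t][i] = stage + value[t + 1][j_best]
    i = max(range(S), key=lambda s: value[0][s])   # best starting level
    schedule = []
    for t in range(T):
        schedule.append(levels[i])
        i = choice[t][i]
    return schedule

prices = [10, 50, 10]       # hypothetical forecast market prices, per MWh
levels = [0, 50, 100]       # admissible output levels, MW
cost = lambda q: 20 * q     # linear production cost
print(optimal_dispatch(prices, levels, cost))  # [50, 100, 50]
```

    The ramp limit forces a gradual climb toward the hour-2 price spike: running at 50 MW in the cheap hours is worth it to reach full output when the price peaks.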

  3. Two-phase optimizing approach to design assessments of long distance heat transportation for CHP systems

    International Nuclear Information System (INIS)

    Hirsch, Piotr; Duzinkiewicz, Kazimierz; Grochowski, Michał; Piotrowski, Robert

    2016-01-01

    Highlights: • New method for long distance heat transportation system effectivity evaluation. • Decision model formulation which reflects time and spatial structure of the problem. • Multi-criteria and complex approach to solving the decision-making problem. • Solver based on simulation-optimization approach with two-phase optimization method. • Sensitivity analysis of the optimization procedure elements. - Abstract: Cogeneration or Combined Heat and Power (CHP) for power plants is a method of putting to use waste heat which would be otherwise released to the environment. This allows the increase in thermodynamic efficiency of the plant and can be a source of environmental friendly heat for District Heating (DH). In the paper CHP for Nuclear Power Plant (NPP) is analyzed with the focus on heat transportation. A method for effectivity and feasibility evaluation of the long distance, high power Heat Transportation System (HTS) between the NPP and the DH network is proposed. As a part of the method the multi-criteria decision-making problem, having the structure of the mathematical programming problem, for optimized selection of design and operating parameters of the HTS is formulated. The constraints for this problem include a static model of HTS, that allows considerations of system lifetime, time variability and spatial topology. Thereby variation of annual heat demand within the DH area, variability of ground temperature, insulation and pipe aging and/or terrain elevation profile can be taken into account in the decision-making process. The HTS construction costs, pumping power, and heat losses are considered as objective functions. In general, the analyzed optimization problem is multi-criteria, hybrid and nonlinear. The two-phase optimization based on optimization-simulation framework is proposed to solve the decision-making problem. The solver introduces a number of assumptions concerning the optimization process. Methods for problem decomposition

  4. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    Science.gov (United States)

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data across the conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open
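
    One of the two ingredients JuPOETs integrates, the simulated-annealing acceptance rule, is compact enough to sketch. The objective, cooling schedule, and step size below are arbitrary toy choices, and this sketch omits the Pareto-ranking half of the algorithm.

```python
import math
import random

def sa_accept(delta, temperature):
    """Metropolis acceptance rule at the heart of simulated annealing:
    always accept an improvement (delta < 0 for minimization), and accept
    a worsening move with probability exp(-delta / T)."""
    return delta < 0 or random.random() < math.exp(-delta / temperature)

# Minimize a toy objective f(x) = (x - 3)^2 with a slowly cooled random walk.
f = lambda v: (v - 3.0) ** 2
random.seed(7)
x, T = 0.0, 5.0
for _ in range(2000):
    cand = x + random.uniform(-0.5, 0.5)
    if sa_accept(f(cand) - f(x), T):
        x = cand
    T = max(0.01, T * 0.995)

print(abs(x - 3.0) < 1.0)  # True: the walk settles near the optimum
```

    In JuPOETs the scalar comparison `f(cand) - f(x)` is replaced by a Pareto-rank comparison across the competing training objectives, so the accepted set spreads along the tradeoff surface instead of collapsing to one point.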

  5. A Three-Stage Optimal Approach for Power System Economic Dispatch Considering Microgrids

    Directory of Open Access Journals (Sweden)

    Wei-Tzer Huang

    2016-11-01

    Full Text Available The inclusion of microgrids (MGs) in power systems, especially distribution-substation-level MGs, significantly affects power systems because of the large volumes of import and export power flows. Consequently, power dispatch has become complicated, and finding an optimal solution is difficult. In this study, a three-stage optimal power dispatch model is proposed to solve such dispatch problems. In the proposed model, the entire power system is divided into two parts, namely, the main power grid and MGs. The optimal power dispatch problem is resolved on the basis of multi-area concepts. In stage I, the main power system economic dispatch (ED) problem is solved using sensitivity factors. In stage II, the optimal power dispatches of the local MGs are addressed via an improved direct search method. In stage III, incremental linear models for the entire power system can be established on the basis of the solutions of the previous two stages and subjected to linear programming to determine the optimal reschedules from the original dispatch solutions. The proposed method is coded in Matlab and tested on an IEEE 14-bus test system to verify its feasibility and accuracy. Results demonstrated that the proposed approach can be used for the ED of power systems with MGs as virtual power plants.
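
    The stage-I economic dispatch can be illustrated with the simplest possible stand-in, a merit-order loading of units by marginal cost. Unit names, costs, and capacities below are invented, and the paper's method uses sensitivity factors and linear programming rather than this shortcut.

```python
def economic_dispatch(demand, units):
    """Merit-order dispatch: load units from the cheapest marginal cost up,
    until demand is met. units = [(name, cost_per_mwh, capacity_mw), ...]"""
    schedule = {}
    remaining = demand
    for name, cost, cap in sorted(units, key=lambda u: u[1]):
        q = min(cap, remaining)
        schedule[name] = q
        remaining -= q
    if remaining > 1e-9:
        raise ValueError("demand exceeds total capacity")
    return schedule

# Hypothetical units; "mg_import" stands in for power offered by a microgrid.
units = [("coal", 25, 100), ("gas", 40, 80), ("mg_import", 30, 50)]
print(economic_dispatch(180, units))  # {'coal': 100, 'mg_import': 50, 'gas': 30}
```
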

  6. Optimized approach to decision fusion of heterogeneous data for breast cancer diagnosis

    International Nuclear Information System (INIS)

    Jesneck, Jonathan L.; Nolte, Loren W.; Baker, Jay A.; Floyd, Carey E.; Lo, Joseph Y.

    2006-01-01

    As more diagnostic testing options become available to physicians, it becomes more difficult to combine various types of medical information in order to optimize the overall diagnosis. To improve diagnostic performance, here we introduce an approach to optimize a decision-fusion technique for combining heterogeneous information, such as from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the receiver operating characteristic (ROC) area under the curve (AUC) and the normalized partial area under the curve (pAUC). This study used four classifiers: linear discriminant analysis (LDA), artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p 0.10), the DF-P did significantly improve specificity versus the LDA at both 98% and 100% sensitivity (p<0.04). In conclusion, decision fusion directly optimized clinically significant performance measures, such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets

  7. Sniffer Channel Selection for Monitoring Wireless LANs

    Science.gov (United States)

    Song, Yuan; Chen, Xian; Kim, Yoo-Ah; Wang, Bing; Chen, Guanling

    Wireless sniffers are often used to monitor APs in wireless LANs (WLANs) for network management, fault detection, traffic characterization, and optimizing deployment. It is cost effective to deploy single-radio sniffers that can monitor multiple nearby APs. However, since nearby APs often operate on orthogonal channels, a sniffer needs to switch among multiple channels to monitor its nearby APs. In this paper, we formulate and solve two optimization problems on sniffer channel selection. Both problems require that each AP be monitored by at least one sniffer. In addition, one optimization problem requires minimizing the maximum number of channels that a sniffer listens to, and the other requires minimizing the total number of channels that the sniffers listen to. We propose a novel LP-relaxation based algorithm, and two simple greedy heuristics for the above two optimization problems. Through simulation, we demonstrate that all the algorithms are effective in achieving their optimization goals, and the LP-based algorithm outperforms the greedy heuristics.
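
    The flavor of the greedy heuristics can be sketched as a set-cover-style loop: repeatedly pick the (sniffer, channel) pair that monitors the most still-uncovered APs. The topology below is invented, and the paper's actual algorithms and LP relaxation differ in detail.

```python
def greedy_channel_selection(coverage):
    """Greedy set-cover heuristic: coverage[(sniffer, channel)] is the set of
    APs that sniffer can hear on that channel; repeatedly pick the pair that
    monitors the most still-uncovered APs, until every AP is monitored."""
    uncovered = set().union(*coverage.values())
    picks = []
    while uncovered:
        pair = max(coverage, key=lambda p: len(coverage[p] & uncovered))
        picks.append(pair)
        uncovered -= coverage[pair]
    return picks

# Hypothetical topology: two sniffers, APs spread over channels 1, 6 and 11.
coverage = {
    ("s1", 1): {"ap1", "ap2"},
    ("s1", 6): {"ap3"},
    ("s2", 6): {"ap3", "ap4"},
    ("s2", 11): {"ap5"},
}
print(sorted(greedy_channel_selection(coverage)))  # [('s1', 1), ('s2', 6), ('s2', 11)]
```

    Every pick charges one channel to one sniffer, so the number of picks is exactly the "total number of channels the sniffers listen to" that the second optimization problem minimizes.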

  8. Multipurpose Water Reservoir Management: An Evolutionary Multiobjective Optimization Approach

    Directory of Open Access Journals (Sweden)

    Luís A. Scola

    2014-01-01

    Full Text Available The reservoirs that feed large hydropower plants should be managed in order to provide other uses for the water resources. Those uses include, for instance, flood control and avoidance, irrigation, and navigability in the rivers, among others. This work presents an evolutionary multiobjective optimization approach for the study of multiple water usages in multiple interlinked reservoirs, including both power generation objectives and objectives not related to energy generation. The classical evolutionary algorithm NSGA-II is employed as the basic multiobjective optimization machinery, being modified in order to cope with specific problem features. The case studies, which include the analysis of a problem involving an objective of navigability on the river, are tailored to illustrate the usefulness of the data generated by the proposed methodology for decision-making on the problem of operation planning of multiple reservoirs with multiple usages. It is shown that it is even possible to use the generated data to determine the cost of any new usage of the water, in terms of the opportunity cost measured on the revenues related to electric energy sales.
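
    The nondominated-selection step at the core of NSGA-II can be sketched directly; the objective pairs below are invented placeholders for, say, energy shortfall versus flood risk.

```python
def pareto_front(points):
    """Return the nondominated points (minimizing every objective): the core
    selection step in NSGA-II-style multiobjective optimization."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (energy shortfall, flood risk) pairs for candidate reservoir policies.
points = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
print(pareto_front(points))  # [(1, 5), (2, 2), (5, 1)]
```

    The surviving points trace the tradeoff curve from which a decision maker can read the opportunity cost of favoring one water usage over another.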

  9. An Integrated Modeling Approach to Evaluate and Optimize Data Center Sustainability, Dependability and Cost

    Directory of Open Access Journals (Sweden)

    Gustavo Callou

    2014-01-01

    Full Text Available Data centers have evolved dramatically in recent years, due to the advent of social networking services, e-commerce and cloud computing. Their requirements conflict: high availability levels are demanded while sustainability impact and cost must stay low. Approaches that evaluate and optimize these requirements are essential to support designers of data center architectures. Our work aims to propose an integrated approach to estimate and optimize these issues with the support of the developed environment, Mercury. Mercury is a tool for dependability, performance and energy flow evaluation. The tool supports reliability block diagrams (RBD), stochastic Petri nets (SPNs), continuous-time Markov chains (CTMC) and energy flow (EFM) models. The EFM verifies the energy flow on data center architectures, taking into account the energy efficiency and power capacity that each device can provide (for power systems) or extract (for cooling components). The EFM also estimates the sustainability impact and cost issues of data center architectures. Additionally, a methodology is also considered to support the modeling, evaluation and optimization processes. Two case studies are presented to illustrate the adopted methodology on data center power systems.

  10. Monitoring food and non-alcoholic beverage promotions to children.

    Science.gov (United States)

    Kelly, B; King, L; Baur, L; Rayner, M; Lobstein, T; Monteiro, C; Macmullan, J; Mohan, S; Barquera, S; Friel, S; Hawkes, C; Kumanyika, S; L'Abbé, M; Lee, A; Ma, J; Neal, B; Sacks, G; Sanders, D; Snowdon, W; Swinburn, B; Vandevijvere, S; Walker, C

    2013-10-01

    Food and non-alcoholic beverage marketing is recognized as an important factor influencing food choices related to non-communicable diseases. The monitoring of populations' exposure to food and non-alcoholic beverage promotions, and the content of these promotions, is necessary to generate evidence to understand the extent of the problem, and to determine appropriate and effective policy responses. A review of studies measuring the nature and extent of exposure to food promotions was conducted to identify approaches to monitoring food promotions via dominant media platforms. A step-wise approach, comprising 'minimal', 'expanded' and 'optimal' monitoring activities, was designed. This approach can be used to assess the frequency and level of exposure of population groups (especially children) to food promotions, the persuasive power of techniques used in promotional communications (power of promotions) and the nutritional composition of promoted food products. Detailed procedures for data sampling, data collection and data analysis for a range of media types are presented, as well as quantifiable measurement indicators for assessing exposure to and power of food and non-alcoholic beverage promotions. The proposed framework supports the development of a consistent system for monitoring food and non-alcoholic beverage promotions for comparison between countries and over time. © 2013 The Authors. Obesity Reviews published by John Wiley & Sons Ltd on behalf of the International Association for the Study of Obesity.

  11. SCIENTIFIC AND INNOVATIVE APPROACH TO PROBLEM PERTAINING TO EVALUATION AND MONITORING OF ENVIRONMENT QUALITY IN REPUBLIC OF BELARUS

    Directory of Open Access Journals (Sweden)

    I. V. Voytov

    2009-01-01

    Full Text Available The paper proposes a scientific and innovative approach to the solution of an important problem in the field of rational nature management and ecology, namely the evaluation, analysis and monitoring of environment quality (EQ) in Belarus. This approach is based on methods and facilities of administrative-command and partially automatic-control management. The main components of the innovative approach are: an automatic system for evaluation and monitoring of EQ, including estimation and formation of the nature-resource potential within 11 cadasters and other databases; general principles for evaluation and monitoring of EQ; structural and algorithmic schemes for evaluation of the ecological state of administrative territories; calculation of generalized indices of nature-territorial complexes; and solution of nature-protection problems in respect of EQ monitoring. The paper also advances a system of calculations for the analysis and evaluation of technogenic load on the main natural components of the environment (free air, water objects, soil cover), for the realization of monitoring functions in respect of EQ and the ecological state of local and urban territories, nature resources and enterprises, and pollution and state of some recipients, together with the data resources needed to execute analytical calculations and monitor the quality of the natural components of the environment.

  12. Characterization and Monitoring of Natural Attenuation of Chlorinated Solvents in Ground Water: A Systems Approach

    Science.gov (United States)

    Cutshall, N. H.; Gilmore, T.; Looney, B. B.; Vangelas, K. M.; Adams, K. M.; Sink, C. H.

    2006-05-01

    Like many US industries and businesses, the Department of Energy (DOE) is responsible for remediation and restoration of soils and ground water contaminated with chlorinated ethenes. Monitored Natural Attenuation (MNA) is an attractive remediation approach and is probably the universal end-stage technology for removing such contamination. Since 2003 we have carried out a multifaceted program at the Savannah River Site designed to advance the state of the art for MNA of chlorinated ethenes in soils and groundwater. Three lines of effort were originally planned: 1) improving the fundamental science for MNA, 2) promoting better characterization and monitoring (CM) techniques, and 3) advancing the regulatory aspects of MNA management. A fourth line, developing enhanced attenuation methods based on sustainable natural processes, was added in order to deal with sites where the initial natural attenuation capacity cannot offset contaminant loading rates. These four lines have been pursued in an integrated and mutually supportive fashion. Many DOE site-cleanup program managers view CM as a major expense, especially for natural attenuation, where measuring attenuation is complex and the most critical attenuation mechanisms cannot be determined directly. We have reviewed new and developing approaches to CM for potential application in support of natural attenuation of chlorinated hydrocarbons in ground water at DOE sites (Gilmore et al., 2006, WSRC-TR-2005-00199). Although our project is focused on chlorinated ethenes, many of the concepts and strategies are also applicable to a wider range of contaminants including radionuclides and metals. The greatest savings in CM are likely to come from new management approaches. New approaches can be based, for example, on conceptual models of attenuation capacity, the ability of a formation to reduce risks caused by contaminants. Using the mass balance concept as a guide, the integrated mass flux of contaminant is compared to

  13. Optimization under Uncertainty

    KAUST Repository

    Lopez, Rafael H.

    2016-01-06

    The goal of this poster is to present the main approaches to optimization of engineering systems in the presence of uncertainties. We begin by giving an insight into robust optimization. Next, we detail how to deal with probabilistic constraints in optimization, the so-called reliability-based design. Subsequently, we present the risk optimization approach, which includes the expected costs of failure in the objective function. After the basic description of each approach is given, the projects developed by CORE are presented. Finally, the main current topic of research of CORE is described.

  14. A new approach for optimization of thermal power plant based on the exergoeconomic analysis and structural optimization method: Application to the CGAM problem

    International Nuclear Information System (INIS)

    Seyyedi, Seyyed Masoud; Ajam, Hossein; Farahat, Said

    2010-01-01

    In large thermal systems, which have many design variables, conventional mathematical optimization methods are not efficient. Exergoeconomic analysis can therefore be used to assist optimization in these systems. In this paper a new iterative approach for optimization of large thermal systems is suggested. The proposed methodology uses exergoeconomic analysis, sensitivity analysis, and the structural optimization method, which are applied, respectively, to determine the sum of the investment and exergy destruction cost flow rates for each component, the importance of each decision variable, and the minimum of the total cost flow rate. Applicability to large, real, complex thermal systems and rapid convergence are characteristics of this new iterative methodology. The proposed methodology is applied to the benchmark CGAM cogeneration system to show how it minimizes the total cost flow rate of operation for the installation. Results are compared with the original CGAM problem.

  15. Hyperplane distance neighbor clustering based on local discriminant analysis for complex chemical processes monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Chunhong; Xiao, Shaoqing; Gu, Xiaofeng [Jiangnan University, Wuxi (China)

    2014-11-15

    The collected training data often include both normal and faulty samples for complex chemical processes. However, some monitoring methods, such as partial least squares (PLS), principal component analysis (PCA), independent component analysis (ICA) and Fisher discriminant analysis (FDA), require fault-free data to build the normal operation model. These techniques are applicable after the preliminary step of data clustering is applied. We here propose a novel hyperplane distance neighbor clustering (HDNC) based on the local discriminant analysis (LDA) for chemical process monitoring. First, faulty samples are separated from normal ones using the HDNC method. Then, the optimal subspace for fault detection and classification can be obtained using the LDA approach. The proposed method takes the multimodality within the faulty data into account, and thus improves the capability of process monitoring significantly. The HDNC-LDA monitoring approach is applied to two simulation processes and then compared with the conventional FDA based on the K-nearest neighbor (KNN-FDA) method. The results obtained in two different scenarios demonstrate the superiority of the HDNC-LDA approach in terms of fault detection and classification accuracy.

  16. Hyperplane distance neighbor clustering based on local discriminant analysis for complex chemical processes monitoring

    International Nuclear Information System (INIS)

    Lu, Chunhong; Xiao, Shaoqing; Gu, Xiaofeng

    2014-01-01

    The collected training data often include both normal and faulty samples for complex chemical processes. However, some monitoring methods, such as partial least squares (PLS), principal component analysis (PCA), independent component analysis (ICA) and Fisher discriminant analysis (FDA), require fault-free data to build the normal operation model. These techniques are applicable after the preliminary step of data clustering is applied. We here propose a novel hyperplane distance neighbor clustering (HDNC) based on the local discriminant analysis (LDA) for chemical process monitoring. First, faulty samples are separated from normal ones using the HDNC method. Then, the optimal subspace for fault detection and classification can be obtained using the LDA approach. The proposed method takes the multimodality within the faulty data into account, and thus improves the capability of process monitoring significantly. The HDNC-LDA monitoring approach is applied to two simulation processes and then compared with the conventional FDA based on the K-nearest neighbor (KNN-FDA) method. The results obtained in two different scenarios demonstrate the superiority of the HDNC-LDA approach in terms of fault detection and classification accuracy

  17. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings.

    Science.gov (United States)

    Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun

    2017-05-18

    The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identifying the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval-valued features are used to efficiently recognize and classify machine states during operation. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation of aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experimental results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components.

  18. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings

    Directory of Open Access Journals (Sweden)

    Jie Liu

    2017-05-01

    Full Text Available The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identifying the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval-valued features are used to efficiently recognize and classify machine states during operation. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation of aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experimental results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components.
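
Multi-scale permutation entropy, the feature extractor named in the abstracts above, is compact enough to state directly: coarse-grain the signal by non-overlapping averaging at each scale, then compute the Shannon entropy of the ordinal (rank-order) patterns of length m. The sketch below follows the common conventions (embedding dimension m, delay tau, normalization by log m!); it is a generic illustration, not the authors' exact implementation:

```python
import math
import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy of a 1-D signal.

    m   : embedding dimension (ordinal pattern length)
    tau : time delay between samples within a pattern
    Returns a value in [0, 1]: ~1 for broadband noise, 0 for monotone signals.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    counts = {}
    for i in range(n):
        # The ordinal pattern is the rank order of the m delayed samples.
        pattern = tuple(np.argsort(x[i : i + m * tau : tau]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(m)))

def multiscale_pe(x, scales=(1, 2, 4), m=3):
    """Coarse-grain by non-overlapping averaging, then compute PE per scale."""
    out = []
    for s in scales:
        trimmed = np.asarray(x, dtype=float)[: (len(x) // s) * s]
        coarse = trimmed.reshape(-1, s).mean(axis=1)
        out.append(permutation_entropy(coarse, m=m))
    return out

rng = np.random.default_rng(1)
noise = rng.normal(size=4096)        # broadband: entropy near 1 at all scales
ramp = np.arange(4096, dtype=float)  # monotone: entropy 0 at all scales
print(multiscale_pe(noise), multiscale_pe(ramp))
```

In a condition-monitoring pipeline like the one described, the per-scale entropies of each VMD mode would form the feature vector passed on to dimensionality reduction and classification.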

  19. A multi-cycle optimization approach for low leakage in-core fuel management

    International Nuclear Information System (INIS)

    Cheng Pingdong; Shen Wei

    1999-01-01

    A new approach was developed to optimize pressurized water reactor (PWR) low-leakage multi-cycle reload core design. The multi-cycle optimization is carried out in three steps. The first step is a linear programming search for the optimum power-sharing distribution and optimum cycle-length distribution over several successive cycles, maximizing the total cycle length across the multi-cycle horizon. In the second step, the fuel arrangement and burnable poison (BP) assignment are decoupled by using the Haling power distribution, and the optimum fuel arrangement is determined at end of life (EOL) in the absence of all BPs by a linear programming or direct search method, with an objective function that forces the calculated cycle length as close as possible to the optimum single-cycle length obtained in the first step, and with the optimum power-sharing distribution as an additional constraint. In the third step, the BP assignment is optimized by the Flexible Tolerance Method (FTM) or a linear programming method using the number of BP rods as the control variable. The technique employed in the second and third steps is the usual decoupling method used in low-leakage core design; the first step was developed specifically for multi-cycle optimization and is discussed in detail. Based on the proposed method, a computer code MCYCO was written and tested for Qinshan Nuclear Power Plant (QNPP) low-leakage reload core design. The multi-cycle optimization method, together with the program MCYCO, provides an applicable tool for solving the PWR low-leakage reload core design problem.
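
The first step above, choosing cycle lengths by linear programming, can be illustrated with a deliberately toy LP: maximize total cycle length subject to a shared fuel-reactivity budget and per-cycle scheduling bounds. The coupling coefficients, budget, and bounds below are hypothetical stand-ins chosen for illustration; they are not the MCYCO formulation:

```python
from scipy.optimize import linprog

# Decision variables: cycle lengths L1..L3 (effective full-power days).
# linprog minimizes, so maximizing L1 + L2 + L3 means minimizing the negation.
c = [-1.0, -1.0, -1.0]

# Hypothetical coupling constraint: later cycles inherit less reactivity from
# earlier ones, so length in later cycles "costs" more of a shared budget.
A_ub = [[1.0, 1.2, 1.5]]
b_ub = [1500.0]

# Per-cycle bounds, e.g. imposed by refueling-outage scheduling.
bounds = [(280.0, 420.0)] * 3

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("optimal cycle lengths:", res.x, "total:", -res.fun)
```

With these numbers the budget binds: the first two cycles reach their 420-day upper bound and the third is cut to 384 days, for a total of 1224 effective full-power days. The real design problem adds the physics coupling (power sharing, burnup feedback) that the second and third steps handle.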

  20. Optical modeling toward optimizing monitoring of intestinal perfusion in trauma patients

    Science.gov (United States)

    Akl, Tony J.; Wilson, Mark A.; Ericson, M. N.; Coté, Gerard L.

    2013-02-01

    Trauma is the number one cause of death for people between the ages of 1 and 44 years in the United States. In addition, according to the Centers for Disease Control and Prevention, injury results in over 31 million emergency department visits annually. Minimizing the resuscitation period in major abdominal injuries increases survival rates by correcting impaired tissue oxygen delivery. Optimization of resuscitation requires a monitoring method to determine sufficient tissue oxygenation, which can be assessed from the adequacy of tissue perfusion. In this work, we present the design of a wireless perfusion and oxygenation sensor based on photoplethysmography. Through optical modeling, the benefit of using the visible wavelengths 470, 525 and 590 nm (around the 525 nm hemoglobin isosbestic point) for intestinal perfusion monitoring is compared to the typical near-infrared (NIR) wavelengths (805 nm isosbestic point) used in such sensors. Specifically, NIR wavelengths penetrate through the thin intestinal wall (about 4 mm), leading to high background signals, whereas the selected visible wavelengths have roughly half the penetration depth of the NIR wavelengths. Monte Carlo simulations show that the transmittance at the three selected wavelengths is lower by about five orders of magnitude, depending on the perfusion state. Due to the high absorbance of hemoglobin in the visible range, the perfusion signal carried by diffusely reflected light is also enhanced by an order of magnitude while oxygenation signal levels are maintained. In addition, short source-detector separations proved beneficial for limiting the probing depth to the thickness of the intestinal wall.
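
The penetration-depth argument can be sanity-checked with the effective penetration depth of the diffusion approximation, delta = 1/sqrt(3 * mu_a * (mu_a + mu_s')), where mu_a is the absorption coefficient and mu_s' the reduced scattering coefficient. The coefficient values below are representative soft-tissue magnitudes chosen purely for illustration, not the measured intestinal-wall properties used in the paper's Monte Carlo model:

```python
import math

# Illustrative (mu_a, mu_s') pairs in 1/mm for well-perfused soft tissue.
# Hemoglobin absorbs strongly in the visible band and weakly in the NIR.
optical_props = {
    470: (0.60, 2.0),
    525: (0.45, 1.8),
    590: (0.35, 1.6),
    805: (0.02, 1.0),
}

def penetration_depth(mu_a, mu_s_prime):
    """Effective penetration depth (diffusion approximation), in mm."""
    return 1.0 / math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

depths = {wl: penetration_depth(*props) for wl, props in optical_props.items()}
for wl, d in depths.items():
    print(f"{wl} nm: delta = {d:.2f} mm")
```

Even with these rough numbers the qualitative picture of the abstract emerges: the 805 nm depth is on the order of the intestinal wall thickness, so NIR light crosses the wall and picks up background, while the visible wavelengths stay well under 1 mm and confine sensing to the wall itself.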