WorldWideScience

Sample records for monitoring optimization approaches

  1. Optimizing Groundwater Monitoring Networks Using Integrated Statistical and Geostatistical Approaches

    Directory of Open Access Journals (Sweden)

    Jay Krishna Thakur

    2015-08-01

    The aim of this work is to investigate new approaches using methods based on statistics and geostatistics for spatio-temporal optimization of groundwater monitoring networks. The formulated and integrated methods were tested on the groundwater quality data set of Bitterfeld/Wolfen, Germany. Spatially, the monitoring network was optimized using geostatistical methods; temporal optimization was carried out using Sen's method (1968). For geostatistical network optimization, a geostatistical spatio-temporal algorithm was used to identify redundant wells in 2- and 2.5-D Quaternary and Tertiary aquifers. Influences of interpolation block width, dimension, contaminant association, groundwater flow direction and aquifer homogeneity on statistical and geostatistical methods for monitoring network optimization were analysed. The integrated approach shows 37% and 28% redundancy in the monitoring networks of the Quaternary and Tertiary aquifers, respectively. The geostatistical method also recommends 41 and 22 new monitoring wells in the Quaternary and Tertiary aquifers, respectively. In temporal optimization, an overall optimized sampling interval was recommended in terms of the lower quartile (238 days), median (317 days) and upper quartile (401 days) in the research area of Bitterfeld/Wolfen. The demonstrated methods for improving a groundwater monitoring network can be used in real network optimization, with due consideration given to the influencing factors.
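The temporal step above relies on Sen's (1968) slope estimator, which is simply the median of all pairwise slopes in a time series. A minimal stdlib sketch with hypothetical nitrate readings (not the Bitterfeld/Wolfen data):

```python
from itertools import combinations
from statistics import median

def sens_slope(times, values):
    """Sen's (1968) estimator: median of all pairwise slopes.
    Robust to outliers; used for trend analysis of monitoring series."""
    slopes = [(v2 - v1) / (t2 - t1)
              for (t1, v1), (t2, v2) in combinations(zip(times, values), 2)
              if t2 != t1]
    return median(slopes)

# Hypothetical quarterly nitrate readings (mg/L) at one well
t = [0, 90, 180, 270, 360]          # days
c = [10.0, 10.8, 11.9, 13.1, 14.0]  # mg/L
trend = sens_slope(t, c)            # mg/L per day
```

A significant slope supports shortening the sampling interval; a flat one supports lengthening it, which is the logic behind the quartile-based interval recommendation.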

  2. A probabilistic approach for optimal sensor allocation in structural health monitoring

    International Nuclear Information System (INIS)

    Azarbayejani, M; Reda Taha, M M; El-Osery, A I; Choi, K K

    2008-01-01

    Recent advances in sensor technology promote using large sensor networks to efficiently and economically monitor, identify and quantify damage in structures. In structural health monitoring (SHM) systems, determining the optimal number and locations of sensors is crucial to the effectiveness and reliability of the sensor network. Here, we suggest a probabilistic approach for identifying the optimal number and locations of sensors for SHM. We demonstrate a methodology to establish the probability distribution function that identifies the optimal sensor locations such that damage detection is enhanced. The approach is based on using the weights of a neural network, trained from simulations using a priori knowledge about damage locations and severities, to generate a normalized probability distribution function for optimal sensor allocation. We also demonstrate that the optimal sensor network can be related to the highest probability of detection (POD). The redundancy of the proposed sensor network is examined using a 'leave one sensor out' analysis. A prestressed concrete bridge is selected as a case study to demonstrate the effectiveness of the proposed method. The results show that the proposed approach can provide a robust design for sensor networks that are more efficient than a uniform distribution of sensors on a structure.
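The allocation step boils down to normalizing per-location detection scores into a probability distribution and placing sensors where it peaks. A sketch with invented scores standing in for the trained network weights:

```python
def allocation_pdf(scores):
    """Normalize per-location damage-detection scores into a
    probability distribution for sensor allocation (sums to 1)."""
    total = sum(scores.values())
    return {loc: s / total for loc, s in scores.items()}

def pick_sensors(pdf, n):
    """Place n sensors at the locations with highest probability."""
    return sorted(pdf, key=pdf.get, reverse=True)[:n]

# Hypothetical scores from simulated damage cases (e.g., NN weight magnitudes)
scores = {"girder_1": 4.0, "girder_2": 1.0, "deck_mid": 3.0, "pier_a": 2.0}
pdf = allocation_pdf(scores)
best = pick_sensors(pdf, 2)
```

The 'leave one sensor out' redundancy check then amounts to re-running the detection assessment with each chosen location removed in turn.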

  3. Monitoring and optimizing the co-composting of dewatered sludge: a mixture experimental design approach.

    Science.gov (United States)

    Komilis, Dimitrios; Evangelou, Alexandros; Voudrias, Evangelos

    2011-09-01

    The management of dewatered wastewater sludge is a major issue worldwide. Sludge disposal to landfills is not sustainable and thus alternative treatment techniques are being sought. The objective of this work was to determine optimal mixing ratios of dewatered sludge with other organic amendments in order to maximize the degradability of the mixtures during composting. This objective was achieved using mixture experimental design principles. An additional objective was to study the impact of the initial C/N ratio and moisture content on the co-composting process of dewatered sludge. The composting process was monitored through measurements of O(2) uptake rates, CO(2) evolution, temperature profile and solids reduction. Eight (8) runs were performed in 100 L insulated air-tight bioreactors under a dynamic air flow regime. The initial mixtures were prepared using dewatered wastewater sludge, mixed paper wastes, food wastes, tree branches and sawdust at various initial C/N ratios and moisture contents. According to empirical modeling, mixtures of sludge and food waste at a 1:1 ratio (w/w, wet weight) maximize degradability. Structural amendments should be maintained below 30% to reach thermophilic temperatures. The initial C/N ratio and initial moisture content of the mixture were not found to influence the decomposition process. The bio C/bio N ratio started from around 10 for all runs, decreased during the middle of the process, and increased to up to 20 at the end of the process. The solid carbon reduction of the mixtures without the branches ranged from 28% to 62%, whilst solid N reductions ranged from 30% to 63%. Respiratory quotients had a decreasing trend throughout the composting process. Copyright © 2011 Elsevier Ltd. All rights reserved.
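Mixture experimental designs constrain component fractions to sum to 1, so candidate runs live on a simplex. A generic sketch of generating a simplex-lattice candidate set (not the paper's actual design):

```python
from itertools import product
from fractions import Fraction

def simplex_lattice(n_components, m):
    """All mixtures whose fractions are multiples of 1/m and sum to 1:
    the {n, m} simplex-lattice design used in mixture experiments."""
    levels = range(m + 1)
    points = []
    for combo in product(levels, repeat=n_components):
        if sum(combo) == m:
            points.append(tuple(Fraction(k, m) for k in combo))
    return points

# 3 components (e.g., sludge, food waste, sawdust), fraction steps of 1/2
design = simplex_lattice(3, 2)   # 6 candidate mixtures
```

Each point is one bioreactor run; the empirical degradability model is then fitted over these fractions to locate the optimum (here, sludge:food waste near 1:1).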

  4. An alternative approach to continuous compliance monitoring and turbine plant optimization using a PEMS (predictive emission monitoring system)

    International Nuclear Information System (INIS)

    Swanson, B.G.; Lawrence, P.

    2009-01-01

    This paper reviewed the use of a predictive emissions monitoring system (PEMS) at 3 different gas turbine facilities in the United States and highlighted the costs and benefits of using a PEMS for documenting emissions of priority pollutants and greenhouse gases (GHG). The PEMS interfaces directly with the turbine control system and represents a lower cost alternative to the traditional continuous emission monitoring system (CEMS). The PEMS can track combustion efficiency through modeling of the turbine's operation and emissions. Excess emissions can be tracked, and the causes of pollution can be determined and mitigated. The PEMS installed at the 3 turbine plants must meet rigorous performance specification criteria, and the sites perform ongoing quality assurance tasks such as periodic audits with portable analyzers. The PEMS is much less expensive to install, operate, and maintain than the standard CEMS gas analyzer. Empirical PEMS achieves very high accuracy levels and has demonstrated superior reliability over CEMS for various types of continuous process applications under existing air compliance regulations in the United States. Annual accuracy testing at the gas turbine sites has shown that the PEMS predictions are usually within 5 per cent of the reference method. PEMS can be certified as an alternative to gas analyzer based CEMS for nitrogen oxides and carbon dioxide compliance and for GHG trading purposes. 5 refs., 8 figs.
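A PEMS infers emissions from operating parameters via an empirical model. Real systems use many inputs and richer model forms; a one-parameter least-squares sketch with hypothetical load/NOx data conveys the idea:

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x (one operating parameter)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical training data: turbine load (%) vs measured NOx (ppm)
load = [50, 60, 70, 80, 90]
nox = [12.0, 13.1, 14.0, 15.2, 16.0]
a, b = fit_line(load, nox)
predicted = a + b * 75   # PEMS-style NOx prediction at 75% load
```

Once trained against reference measurements, the model replaces the analyzer for routine reporting; the annual relative-accuracy audit compares such predictions to the reference method.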

  5. Optimized Field Sampling and Monitoring of Airborne Hazardous Transport Plumes; A Geostatistical Simulation Approach

    International Nuclear Information System (INIS)

    Chen, DI-WEN

    2001-01-01

    Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations from plume-ground deposition of constituents. Unfortunately, measurements often come days post-incident and rely on hazardous manned air-vehicle measurements. Before this happens, computational plume migration models are the only source of information on the plume characteristics, constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry light-weight detectors and weather instrumentation to measure the conditions during and after plume release. This requires a cooperative, computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives in an emergency situation in restricted time frames. A critical step before an optimum and cost-effective field sampling and monitoring program proceeds is the collection of data that provides statistically significant information, collected in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and source distribution functions. This report describes the application of stochastic or geostatistical simulations to delineate the plume for guiding subsequent sampling and monitoring designs. A case study is presented of building digital plume images, based on existing "hard" experimental data and "soft" preliminary transport modeling results for the Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian/geostatistical methodology, quantitatively combines soft information ...
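Markov Bayes simulation merges "hard" measurements with "soft" model output. The full indicator formalism is beyond a snippet, but the underlying idea, precision-weighted fusion of a model prior with a field reading, can be sketched as:

```python
def fuse(prior_mean, prior_var, obs, obs_var):
    """Gaussian-Bayes (precision-weighted) update of a plume-concentration
    estimate: 'soft' transport-model prior plus one 'hard' measurement."""
    w = prior_var / (prior_var + obs_var)        # weight on the measurement
    mean = prior_mean + w * (obs - prior_mean)
    var = prior_var * obs_var / (prior_var + obs_var)
    return mean, var

# Soft model predicts 100 ug/m3 (sd 30); a detector reads 70 (sd 10)
mean, var = fuse(100.0, 30.0**2, 70.0, 10.0**2)
```

The posterior is pulled toward the more precise source and its variance shrinks, which is exactly why combining soft and hard data sharpens the plume image used to guide the next sampling locations.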

  6. Optimization of environmental monitoring

    International Nuclear Information System (INIS)

    Winter, M.

    1986-01-01

    The routine work and prevention-related tasks in environmental monitoring of nuclear facilities are presented, ranging from low-level measurement methodology to the necessity of being likewise prepared to perform environmental impact measurements after nuclear incidents and accidents.

  7. Stepped MS(All) Relied Transition (SMART): An approach to rapidly determine optimal multiple reaction monitoring mass spectrometry parameters for small molecules.

    Science.gov (United States)

    Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping

    2016-02-11

    Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains a time- and labor-intensive task, particularly in multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART first requires a rapid, high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor were also employed for comparison, and found to be linearly correlated with the SMART-predicted parameters, suggesting the potential application of the SMART approach across different instrumental platforms. This approach was further validated by applying it to simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components, independent of standards, the SMART approach is expected to find wide application in the multiplexed quantitative analysis of complex mixtures. Copyright © 2015 Elsevier B.V. All rights reserved.
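The CE-selection step of such a workflow reduces to an argmax over the stepped-CE scans for each ion pair: pick the collision energy whose scan maximizes the extracted-ion intensity. Intensities below are hypothetical:

```python
def optimal_ce(scans):
    """Pick the collision energy whose scan gives the highest
    extracted-ion intensity for one precursor -> product ion pair."""
    return max(scans, key=scans.get)

# Hypothetical intensities from serial stepped-CE scans (CE in eV)
scans = {10: 1.2e4, 20: 5.6e4, 30: 8.9e4, 40: 7.1e4, 50: 3.0e4}
best_ce = optimal_ce(scans)
```

In practice this comparison runs per transition across hundreds of analytes in the single sMS(All) acquisition, which is where the time savings over compound-by-compound infusion come from.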

  8. HEURISTIC APPROACHES FOR PORTFOLIO OPTIMIZATION

    OpenAIRE

    Manfred Gilli, Evis Kellezi

    2000-01-01

    The paper first compares the use of optimization heuristics to the classical optimization techniques for the selection of optimal portfolios. Second, the heuristic approach is applied to problems other than those in the standard mean-variance framework where the classical optimization fails.

  9. Reconnecting Stochastic Methods With Hydrogeological Applications: A Utilitarian Uncertainty Analysis and Risk Assessment Approach for the Design of Optimal Monitoring Networks

    Science.gov (United States)

    Bode, Felix; Ferré, Ty; Zigelli, Niklas; Emmert, Martin; Nowak, Wolfgang

    2018-03-01

    Collaboration between academics and practitioners promotes knowledge transfer between research and industry, with both sides benefiting greatly. However, academic approaches are often not feasible given real-world limits on time, cost and data availability, especially for risk and uncertainty analyses. Although the need for uncertainty quantification and risk assessment is clear, there are few published studies examining how scientific methods can be used in practice. In this work, we introduce possible strategies for transferring and communicating academic approaches to real-world applications, countering the current disconnect between increasingly sophisticated academic methods and methods that work and are accepted in practice. We analyze a collaboration between academics and water suppliers in Germany who wanted to design optimal groundwater monitoring networks for drinking-water well catchments. Our key conclusions are: to prefer multiobjective over single-objective optimization; to replace Monte-Carlo analyses by scenario methods; and to replace data-hungry quantitative risk assessment by easy-to-communicate qualitative methods. For improved communication, it is critical to set up common glossaries of terms to avoid misunderstandings, use striking visualization to communicate key concepts, and jointly and continually revisit the project objectives. Ultimately, these approaches and recommendations are simple and utilitarian enough to be transferred directly to other practical water-resource problems.

  10. Topology optimization approaches

    DEFF Research Database (Denmark)

    Sigmund, Ole; Maute, Kurt

    2013-01-01

    Topology optimization has undergone a tremendous development since its introduction in the seminal paper by Bendsøe and Kikuchi in 1988. By now, the concept is developing in many different directions, including “density”, “level set”, “topological derivative”, “phase field”, “evolutionary” ...

  11. Optimal unemployment insurance with monitoring and sanctions

    NARCIS (Netherlands)

    Boone, J.; Fredriksson, P.; Holmlund, B.; van Ours, J.C.

    2007-01-01

    This article analyses the design of optimal unemployment insurance in a search equilibrium framework where search effort among the unemployed is not perfectly observable. We examine to what extent the optimal policy involves monitoring of search effort and benefit sanctions if observed search is

  12. Optimal Design of Air Quality Monitoring Network and its Application in an Oil Refinery Plant: An Approach to Keep the Health Status of Workers

    Directory of Open Access Journals (Sweden)

    Khaled Zoroufchi Benis

    2015-12-01

    Background: Industrial air pollution is a growing challenge to human health, especially in developing countries, where there is no systematic monitoring of air pollution. Given the importance of the availability of valid information on population exposure to air pollutants, it is important to design an optimal Air Quality Monitoring Network (AQMN) for assessing population exposure to air pollution and predicting the magnitude of the health risks to the population. Methods: A multi-pollutant method (implemented as a MATLAB program) was explored for configuring an AQMN to detect the highest level of pollution around an oil refinery plant. The method ranks potential monitoring sites (grids) according to their ability to represent the ambient concentration. The cluster of contiguous grids that exceed a threshold value was used to calculate the station dosage. Selection of the best configuration of the AQMN was done based on the ratio of a station's dosage to the total dosage in the network. Results: Six monitoring stations were needed to detect the pollutant concentrations around the study area for estimating the level and distribution of exposure in the population, with a total network efficiency of about 99%. An analysis of the design procedure showed that wind regimes have the greatest effect on the location of monitoring stations. Conclusion: The optimal AQMN enables authorities to implement an effective program of air quality management for protecting human health.
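Selecting stations by the ratio of station dosage to total network dosage can be sketched as a greedy loop; the per-grid dosages below are hypothetical, and the cluster computation that produces them is omitted:

```python
def select_stations(dosage, efficiency=0.99):
    """Greedily pick monitoring grids, highest dosage first, until the
    selected share of total network dosage reaches the target efficiency."""
    total = sum(dosage.values())
    chosen, acc = [], 0.0
    for grid in sorted(dosage, key=dosage.get, reverse=True):
        chosen.append(grid)
        acc += dosage[grid]
        if acc / total >= efficiency:
            break
    return chosen

# Hypothetical per-grid dosages (sums of above-threshold concentrations)
dosage = {"g1": 50.0, "g2": 30.0, "g3": 15.0, "g4": 4.0, "g5": 1.0}
network = select_stations(dosage)
```

Re-running the selection under different wind regimes shows how the meteorology drives the station locations, as the analysis in the abstract reports.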

  13. WiMAX network performance monitoring & optimization

    DEFF Research Database (Denmark)

    Zhang, Qi; Dam, H

    2008-01-01

    In this paper we present our WiMAX (worldwide interoperability for microwave access) network performance monitoring and optimization solution. As a new and small WiMAX network operator, there are many demanding issues that we have to deal with, such as limited available frequency resource, tight frequency reuse, capacity planning, proper network dimensioning, multi-class data services and so on. Furthermore, as a small operator we also want to reduce the demand for sophisticated technicians and man labour hours. To meet these critical demands, we design a generic integrated network performance monitoring and optimization system ... this integrated network performance monitoring and optimization system in our WiMAX networks. This integrated monitoring and optimization system has such good flexibility and scalability that individual function components can be used by other operators with special needs and more advanced function components can ...

  14. OPTIMIZATION METHODS FOR HYDROECOLOGICAL MONITORING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Inna Pivovarova

    2016-09-01

    The paper describes current approaches to the rational distribution of monitoring stations. A short review and the organization of systems of hydro-geological observation in different countries are presented. On the basis of real data, we propose a solution to the problem of how to calculate the average area per hydrological station, which is the main indicator of the efficiency and performance of the monitoring system in general. We conclude that a comprehensive approach to organizing the monitoring system is important, because only hydrometric and hydrochemical activities coordinated in time make it possible to analyse the underlying causes of the observed dynamics of pollutant content in water bodies over the long term.

  15. In-well time-of-travel approach to evaluate optimal purge duration during low-flow sampling of monitoring wells

    Science.gov (United States)

    Harte, Philip T.

    2017-01-01

    A common assumption with groundwater sampling is that, at low pumping rates, the time until inflow from the high hydraulic conductivity part of the screened formation can travel vertically in the well to the pump intake governs when formation water is captured. Therefore, the length of time needed for adequate purging prior to sample collection (called the optimal purge duration) is controlled by in-well vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time–volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of the arrival time of formation water, as has been postulated, then in-well vertical flow may be an important factor at wells where low-flow sampling is the sampling method of choice.
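The in-well vertical travel time argument can be illustrated with an idealized plug-flow calculation; the geometry and pumping rate below are hypothetical, not values from the study:

```python
import math

def purge_time(distance_m, casing_radius_m, pump_rate_m3s):
    """Idealized in-well vertical travel time: plug flow from the
    high-K inflow zone up the casing to the pump intake."""
    area = math.pi * casing_radius_m ** 2       # casing cross-section, m^2
    velocity = pump_rate_m3s / area             # in-well vertical velocity, m/s
    return distance_m / velocity                # travel time, s

# 2 m from the inflow zone to the intake, 5 cm radius casing,
# 0.2 L/min low-flow purge rate
t = purge_time(2.0, 0.05, 0.2e-3 / 60)          # roughly 1.3 hours
```

Under this simple model, slower purge rates or longer screens stretch the travel time, and hence the optimal purge duration, considerably, which is the practical point of the abstract.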

  16. Optimization of monitoring sewage with radionuclide contaminants

    International Nuclear Information System (INIS)

    Egorov, V.N.

    1991-01-01

    Recommendations on optimizing the monitoring of contaminated sewage, aimed at environmental protection against radioactive contamination at minimum cost, are presented. The choice of water sampling technique depends on water composition stability and flow rate. Depending on the type of radionuclide distribution in the sewage, one can estimate the minimum frequency of sampling, or the number of samples sufficient to assure the reliability of a conclusion on whether permissible radioactive contamination levels are exceeded, as well as the assigned accuracy of the analysis. For irregular discharge of contaminated sewage, with the possibility of short-term releases of varying form and duration, sampling should be carried out by automatic devices operating continuously or periodically.
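One standard way to derive a sufficient number of samples, consistent with the kind of recommendation described (the report's exact formulas are not given here), is the normal-theory margin-of-error bound:

```python
import math

def min_samples(sigma, margin, confidence_z=1.96):
    """Smallest n so that the mean of n samples estimates the true mean
    within +/- margin at the given z level (1.96 ~ 95% confidence)."""
    return math.ceil((confidence_z * sigma / margin) ** 2)

# Hypothetical: activity-concentration sd of 8 Bq/L, want +/- 5 Bq/L
n = min_samples(8.0, 5.0)
```

Higher variability in the discharge (larger sigma) or a tighter required accuracy (smaller margin) drives the sampling frequency up, matching the abstract's dependence on composition stability.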

  17. A systems approach to nuclear facility monitoring

    International Nuclear Information System (INIS)

    Argo, P.E.; Doak, J.E.; Howse, J.W.

    1996-01-01

    Sensor technology for use in nuclear facility monitoring has reached an advanced stage of development. Research on where to place these sensors in a facility and how to combine their outputs in a meaningful fashion does not appear to be keeping pace. In this paper, the authors take a global view of the problem where sensor technology is viewed as only one piece of a large puzzle. Other pieces of this puzzle include the optimal location and type of sensors used in a specific facility, the rate at which sensors record information, and the risk associated with the materials/processes at a facility. If the data are analyzed off-site, how will they be transmitted? Is real-time analysis necessary? Is one monitoring only the facility itself, or might one also monitor the processing that occurs there (e.g., tank levels and concentrations)? How is one going to combine the outputs from the various sensors to give an accurate picture of the state of the facility? This paper will not try to answer all these questions, but rather it will attempt to stimulate thought in this area by formulating a systems approach to the problem, demonstrated by a prototype system and a system proposed for an actual facility. The focus will be on the data analysis aspect of the problem. Future work in this area should focus on recommendations and guidelines for a monitoring system based upon the type of facility and processing that occurs there.

  18. A system approach to nuclear facility monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Argo, P.E.; Doak, J.E.; Howse, J.W.

    1996-09-01

    Sensor technology for use in nuclear facility monitoring has reached an advanced stage of development. Research on where to place these sensors in a facility and how to combine their outputs in a meaningful fashion does not appear to be keeping pace. In this paper, we take a global view of the problem where sensor technology is viewed as only one piece of a large puzzle. Other pieces of this puzzle include the optimal location and type of sensors used in a specific facility, the rate at which sensors record information, and the risk associated with the materials/processes at a facility. If the data are analyzed off-site, how will they be transmitted? Is real-time analysis necessary? Are we monitoring only the facility itself, or might we also monitor the processing that occurs there? How are we going to combine the output from the various sensors to give us an accurate picture of the state of the facility? This paper will not try to answer all these questions, but rather it will attempt to stimulate thought in this area by formulating a systems approach to the problem, demonstrated by a prototype system and a system proposed for an actual facility. Our focus will be on the data analysis aspect of the problem.

  19. Optimizing the spatial pattern of networks for monitoring radioactive releases

    NARCIS (Netherlands)

    Melles, S.J.; Heuvelink, G.B.M.; Twenhofel, C.J.W.; Dijk, van A.; Hiemstra, P.H.; Baume, O.P.; Stohlker, U.

    2011-01-01

    This study presents a method to optimize the sampling design of environmental monitoring networks in a multi-objective setting. We optimize the permanent network of radiation monitoring stations in the Netherlands and parts of Germany as an example. The optimization method proposed combines

  20. A combined geostatistical-optimization model for the optimal design of a groundwater quality monitoring network

    Science.gov (United States)

    Kolosionis, Konstantinos; Papadopoulou, Maria P.

    2017-04-01

    Monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation due to extensive agricultural activities. In this work, a simulation-optimization framework is developed based on heuristic optimization methodologies and geostatistical modeling approaches to obtain an optimal design for a groundwater quality monitoring network. Groundwater quantity and quality data obtained from 43 existing observation locations at 3 different hydrological periods in the Mires basin in Crete, Greece, will be used in the proposed framework, in terms of regression kriging, to develop the spatial distribution of nitrate concentration in the aquifer of interest. Based on the existing groundwater quality mapping, the proposed optimization tool will determine a cost-effective observation-well network that contributes significant information to water managers and authorities. The elimination of observation wells that add little or no beneficial information to groundwater level and quality mapping of the area can be achieved using estimation uncertainty and statistical error metrics, without affecting the assessment of groundwater quality. Given the high maintenance cost of groundwater monitoring networks, the proposed tool could be used by water regulators in the decision-making process to obtain an efficient network design.
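As a stand-in for the kriging-based redundancy screening (regression kriging itself needs a variogram model), a leave-one-out check with inverse-distance weighting shows the elimination logic on hypothetical wells:

```python
def idw(point, wells, power=2):
    """Inverse-distance-weighted estimate at `point` from (x, y, value) wells."""
    num = den = 0.0
    for x, y, v in wells:
        d2 = (x - point[0]) ** 2 + (y - point[1]) ** 2
        if d2 == 0:
            return v
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

def most_redundant(wells):
    """Index of the well best predicted by the remaining wells (smallest
    leave-one-out error) -- a cheap proxy for kriging-variance screening."""
    def loo_error(i):
        rest = wells[:i] + wells[i + 1:]
        x, y, v = wells[i]
        return abs(idw((x, y), rest) - v)
    return min(range(len(wells)), key=loo_error)

# Hypothetical nitrate values (mg/L); the middle well mirrors its neighbors
wells = [(0, 0, 40.0), (1, 0, 42.0), (0.5, 0, 41.0), (5, 5, 80.0)]
drop = most_redundant(wells)   # the clustered middle well is redundant
```

Repeating the elimination until the cross-validation error rises appreciably yields the reduced, cost-effective network the abstract describes.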

  1. Dynamical System Approaches to Combinatorial Optimization

    DEFF Research Database (Denmark)

    Starke, Jens

    2013-01-01

    Several dynamical system approaches to combinatorial optimization problems are described and compared. These include dynamical systems derived from penalty methods; the approach of Hopfield and Tank; self-organizing maps, that is, Kohonen networks; coupled selection equations; and hybrid methods ... of large times as an asymptotically stable point of the dynamics. The obtained solutions are often not globally optimal but good approximations of it. Dynamical system and neural network approaches are appropriate methods for distributed and parallel processing. Because of the parallelization ... thereof can be used as models for many industrial problems like manufacturing planning and optimization of flexible manufacturing systems. This is illustrated for an example in distributed robotic systems.
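A minimal member of this family, selection dynamics whose asymptotically stable fixed point encodes the discrete optimum, can be sketched with replicator-type equations (an illustrative stand-in, not the chapter's exact methods):

```python
def replicator(values, steps=2000, dt=0.01):
    """Euler-integrated replicator/selection dynamics on the simplex.
    The stable fixed point concentrates all weight on the best option,
    so the discrete choice emerges in the limit of large times."""
    n = len(values)
    x = [1.0 / n] * n                          # start in the simplex interior
    for _ in range(steps):
        mean = sum(v * xi for v, xi in zip(values, x))
        x = [xi + dt * xi * (v - mean) for v, xi in zip(values, x)]
        s = sum(x)
        x = [xi / s for xi in x]               # renormalize onto the simplex
    return x

# Hypothetical option scores; the dynamics select the third option
x = replicator([1.0, 2.0, 5.0, 3.0])
```

Each component evolves independently given the shared mean, which is the parallelization property the abstract highlights.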

  2. Optimal Sensor Selection for Health Monitoring Systems

    Science.gov (United States)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.

  3. Optimizing Liquid Effluent Monitoring at a Large Nuclear Complex

    International Nuclear Information System (INIS)

    Chou, Charissa J.; Johnson, V.G.; Barnett, Brent B.; Olson, Phillip M.

    2003-01-01

    Monitoring data for a centralized effluent treatment and disposal facility at the Hanford Site, a defense nuclear complex undergoing cleanup and decommissioning in southeast Washington State, were evaluated to optimize liquid effluent monitoring efficiency. Wastewater from several facilities is collected and discharged to the ground at a common disposal site. The discharged water infiltrates through 60 m of soil column to the groundwater, which eventually flows into the Columbia River, the second largest river in the contiguous United States. Protection of this important natural resource is the major objective of both cleanup and groundwater and effluent monitoring activities at the Hanford Site. Four years of effluent data were evaluated for this study. More frequent sampling was conducted during the first year of operation to assess temporal variability in analyte concentrations, to determine operational factors contributing to waste stream variability, and to assess the probability of exceeding permit limits. The study was subsequently updated to include an evaluation of the sampling and analysis regime. It was concluded that the probability of exceeding permit limits was one in a million under normal operating conditions, sampling frequency could be reduced, and several analytes could be eliminated, while indicators could be substituted for more expensive analyses. The findings were used by the state regulatory agency to modify monitoring requirements for a new discharge permit. The primary focus of this paper is on the statistical approaches and rationale that led to the successful permit modification and to a more cost-effective effluent monitoring program.
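A "one in a million" style of exceedance estimate follows from a distributional model of the effluent data. Under a normality assumption, with illustrative numbers chosen so the tail lands near that order of magnitude (not Hanford data):

```python
import math

def exceedance_probability(mean, sd, limit):
    """P(concentration > limit) for a normally distributed analyte,
    via the complementary error function."""
    z = (limit - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical effluent analyte: mean 2.0, sd 0.5, permit limit 4.4
p = exceedance_probability(2.0, 0.5, 4.4)   # ~8e-7, order one-in-a-million
```

When such tail probabilities are negligible under normal operations, the statistical case for reducing sampling frequency or dropping analytes follows directly.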

  4. Enhanced Multi-Objective Optimization of Groundwater Monitoring Networks

    DEFF Research Database (Denmark)

    Bode, Felix; Binning, Philip John; Nowak, Wolfgang

    Drinking-water well catchments include many sources of potential contamination, like gas stations or agriculture. Finding optimal positions of monitoring wells for such purposes is challenging because there are various parameters (and their uncertainties) that influence the reliability and optimality of any suggested monitoring location or monitoring network. The goal of this project is to develop and establish a concept to assess, design, and optimize early-warning systems within well catchments. Such optimal monitoring networks need to optimize three competing objectives: (1) a high ... be reduced to a minimum. The method is based on numerical simulation of flow and transport in heterogeneous porous media, coupled with geostatistics and Monte-Carlo, wrapped up within the framework of formal multi-objective optimization. In order to gain insight into the flow and transport physics ...
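Multi-objective optimization of monitoring networks returns a Pareto set rather than a single best design. A minimal non-dominated filter over hypothetical candidate networks, with made-up (cost, missed-detection rate, warning-time deficit) objectives all to be minimized:

```python
def pareto_front(designs):
    """Keep the non-dominated candidates under minimize-all objectives."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [d for i, d in enumerate(designs)
            if not any(dominates(o, d)
                       for j, o in enumerate(designs) if j != i)]

# Hypothetical networks: (cost, missed-detection rate, warning-time deficit)
designs = [(3, 0.10, 5.0), (5, 0.05, 4.0), (4, 0.20, 6.0), (6, 0.05, 4.0)]
front = pareto_front(designs)
```

Decision makers then pick from the front according to their own trade-off between budget, detection reliability, and early-warning time.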

  5. Optimal river monitoring network using optimal partition analysis: a case study of Hun River, Northeast China.

    Science.gov (United States)

    Wang, Hui; Liu, Chunyue; Rong, Luge; Wang, Xiaoxu; Sun, Lina; Luo, Qing; Wu, Hao

    2018-01-09

    River monitoring networks play an important role in water environmental management and assessment, and it is critical to develop an appropriate method to optimize the monitoring network. In this study, an effective method was proposed based on the attainment rate of National Grade III water quality, optimal partition analysis, and Euclidean distance, and the Hun River was taken as a method validation case. There were 7 sampling sites in the monitoring network of the Hun River, and 17 monitoring items were analyzed once a month from January 2009 to December 2010. The results showed that the main monitoring items in the surface water of the Hun River were ammonia nitrogen (NH4+-N), chemical oxygen demand, and biochemical oxygen demand. After optimization, the required number of monitoring sites was reduced from seven to three, and 57% of the cost was saved. In addition, there were no significant differences between the non-optimized and optimized monitoring networks, and the optimized network could correctly represent the original one. The degree of duplication among monitoring sites decreased after optimization, and the rationality of the monitoring network was improved. Therefore, the optimization method was identified as feasible, efficient, and economic.
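The Euclidean-distance step can be sketched directly: compute distances between standardized water-quality profiles and flag the most similar pair of sites as merge candidates (profiles hypothetical, not the Hun River data):

```python
import math

def site_distance(a, b):
    """Euclidean distance between standardized water-quality profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def closest_pair(profiles):
    """Most similar pair of sites -- the first candidates for merging."""
    names = list(profiles)
    pairs = [(site_distance(profiles[a], profiles[b]), a, b)
             for i, a in enumerate(names) for b in names[i + 1:]]
    return min(pairs)[1:]

# Hypothetical standardized (NH4-N, COD, BOD) profiles per site
profiles = {"S1": (0.9, 1.1, 1.0),
            "S2": (0.95, 1.05, 1.0),
            "S3": (2.0, 2.2, 1.9)}
pair = closest_pair(profiles)
```

Sites whose profiles nearly coincide carry duplicated information, so removing one of each close pair shrinks the network with little loss of representativeness.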

  6. Processing Approaches for DAS-Enabled Continuous Seismic Monitoring

    Science.gov (United States)

    Dou, S.; Wood, T.; Freifeld, B. M.; Robertson, M.; McDonald, S.; Pevzner, R.; Lindsey, N.; Gelvin, A.; Saari, S.; Morales, A.; Ekblaw, I.; Wagner, A. M.; Ulrich, C.; Daley, T. M.; Ajo Franklin, J. B.

    2017-12-01

    Distributed Acoustic Sensing (DAS) is creating a "field as laboratory" capability for seismic monitoring of subsurface changes. By providing unprecedented spatial and temporal sampling at a relatively low cost, DAS enables field-scale seismic monitoring to have durations and temporal resolutions that are comparable to those of laboratory experiments. Here we report on seismic processing approaches developed during data analyses of three case studies, all using DAS-enabled seismic monitoring with applications ranging from shallow permafrost to deep reservoirs: (1) 10-hour downhole monitoring of cement curing at Otway, Australia; (2) 2-month surface monitoring of controlled permafrost thaw at Fairbanks, Alaska; (3) multi-month downhole and surface monitoring of carbon sequestration at Decatur, Illinois. We emphasize the data management and processing components relevant to DAS-based seismic monitoring, which include scalable approaches to data management, pre-processing, denoising, filtering, and wavefield decomposition. DAS has dramatically increased the data volume to the extent that terabyte-per-day data loads are now typical, straining conventional approaches to data storage and processing. To achieve more efficient use of disk space and network bandwidth, we explore improved file structures and data compression schemes. Because the noise floor of DAS measurements is higher than that of conventional sensors, optimal processing workflows involving advanced denoising, deconvolution (of the source signatures), and stacking approaches are being established to maximize the signal content of DAS data. The resulting workflow of data management and processing could accelerate the broader adoption of DAS for continuous monitoring of critical processes.

  7. Transformative monitoring approaches for reprocessing.

    Energy Technology Data Exchange (ETDEWEB)

    Cipiti, Benjamin B.

    2011-09-01

    The future of reprocessing in the United States is strongly driven by plant economics. With increasing safeguards, security, and safety requirements, future plant monitoring systems must be able to demonstrate more efficient operations while improving the current state of the art. The goal of this work was to design and examine the incorporation of advanced plant monitoring technologies into safeguards systems with attention to the burden on the operator. The technologies examined include micro-fluidic sampling for more rapid analytical measurements and spectroscopy-based techniques for on-line process monitoring. The Separations and Safeguards Performance Model was used to design the layout and test the effect of adding these technologies to reprocessing. The results here show that both technologies fill key gaps in existing materials accountability, providing detection of diversion events that may not be detected in a timely manner in existing plants. The plant architecture and results under diversion scenarios are described. As a tangent to this work, both the AMUSE and SEPHIS solvent extraction codes were examined for integration in the model to improve the realism of diversion scenarios. The AMUSE integration was found to be the most successful and provided useful results. The SEPHIS integration is still a work in progress and may provide an alternative option.

  8. Statistical sampling approaches for soil monitoring

    NARCIS (Netherlands)

    Brus, D.J.

    2014-01-01

    This paper describes three statistical sampling approaches for regional soil monitoring, a design-based, a model-based and a hybrid approach. In the model-based approach a space-time model is exploited to predict global statistical parameters of interest such as the space-time mean. In the hybrid

  9. Optimal Joint Liability Lending with Costly Peer Monitoring

    NARCIS (Netherlands)

    Carli, Francesco; Uras, R.B.

    2014-01-01

    This paper characterizes an optimal group loan contract with costly peer monitoring. Using a fairly standard moral hazard framework, we show that the optimal group lending contract could exhibit a joint-liability scheme. However, optimality of joint-liability requires the involvement of a group

  10. Monitoring and Reporting HACs - A Federalist Approach

    Data.gov (United States)

    U.S. Department of Health & Human Services — Findings from a study entitled, Monitoring and Reporting Hospital-Acquired Conditions - A Federalist Approach, published in Volume 4, Issue 4 of Medicare and...

  11. Optimization of Molecular Approaches to Genogroup Neisseria meningitidis Carriage Isolates and Implications for Monitoring the Impact of New Serogroup B Vaccines.

    Directory of Open Access Journals (Sweden)

    Eduardo Rojas

    Full Text Available The reservoir for Neisseria meningitidis (Nm) is the human oropharynx. Implementation of Nm serogroup C (NmC) glycoconjugate vaccines directly reduced NmC carriage. Prophylactic vaccines are now available to prevent disease caused by the five major Nm disease-causing serogroups (ABCWY). Nm serogroup B (NmB) vaccines are composed of antigens that are conserved across Nm serogroups and therefore have the potential to impact all Nm carriage. To assess the effect of these vaccines on carriage, standardized approaches to identify and group Nm are required. Real-time PCR (rt-PCR) capsule grouping assays that were internally controlled to confirm Nm species were developed for eight serogroups associated with carriage (A, B, C, E, W, X, Y and Z). The grouping scheme was validated using diverse bacterial species associated with carriage and then used to evaluate a collection of diverse Nm carriage isolates (n=234). A scheme that also included porA and ctrA probes was able to speciate the isolates, while ctrA also provided insights on the integrity of the polysaccharide loci. Isolates were typed for the Nm vaccine antigen factor H binding protein (fHbp), and were found to represent the known diversity of this antigen. The porA rt-PCR yielded positive results with all 234 of the Nm carriage isolates. Genogrouping assays classified 76.5% (179/234) of these isolates to a group, categorized 53 as nongenogroupable (NGG) and two as mixed results. Thirty-seven NGG isolates evidenced a disrupted capsular polysaccharide operon, as judged by a ctrA-negative result. Only 28.6% (67/234) of the isolates were serogrouped by slide agglutination (SASG), highlighting the reduced capability of carriage strains to express capsular polysaccharide. These rt-PCR assays provide a comprehensive means to identify and genogroup N. meningitidis in carriage studies used to guide vaccination strategies and to assess the impact of novel fHbp-containing vaccines on meningococcal carriage.

  12. Optimal taxation and welfare benefits with monitoring of job search

    NARCIS (Netherlands)

    Boone, J.; Bovenberg, A.L.

    2013-01-01

    In order to investigate the interaction between tax policy, welfare benefits, the government technology for monitoring and sanctioning inadequate search, workfare, and externalities from work, we incorporate endogenous job search and involuntary unemployment into a model of optimal nonlinear income

  13. Quantum Resonance Approach to Combinatorial Optimization

    Science.gov (United States)

    Zak, Michail

    1997-01-01

    It is shown that quantum resonance can be used for combinatorial optimization. The advantage of the approach is in independence of the computing time upon the dimensionality of the problem. As an example, the solution to a constraint satisfaction problem of exponential complexity is demonstrated.

  14. Optimal layout of radiological environment monitoring based on TOPSIS method

    International Nuclear Information System (INIS)

    Li Sufen; Zhou Chunlin

    2006-01-01

    TOPSIS is a method for multi-objective decision-making that can be applied to comprehensive assessment of environmental quality. This paper adopts it to obtain the optimal layout of radiological environment monitoring. The method is shown to be correct, simple, convenient, and practical, and it helps supervision departments to scientifically and reasonably lay out radiological environment monitoring sites. (authors)
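The TOPSIS procedure referenced above can be sketched in a few lines. The decision matrix, criteria, and weights below are illustrative assumptions; the paper's actual criteria for monitoring-site layout are not specified here.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix:  rows = alternatives (candidate monitoring layouts),
             columns = criteria scores
    weights: relative importance of each criterion
    benefit: True if larger is better for that criterion
    """
    m, n = len(matrix), len(matrix[0])
    # 1. Vector-normalize each criterion column, then apply weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # 2. Ideal (best) and anti-ideal (worst) points per criterion.
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    # 3. Closeness coefficient: d(worst) / (d(ideal) + d(worst)).
    return [math.dist(row, worst) / (math.dist(row, ideal) + math.dist(row, worst))
            for row in v]

# Hypothetical example: 3 candidate layouts scored on coverage
# (benefit criterion) and cost (non-benefit criterion).
scores = topsis([[0.9, 120.0], [0.7, 80.0], [0.5, 60.0]],
                weights=[0.6, 0.4], benefit=[True, False])
```

Each score lies in [0, 1]; the layout with the highest closeness coefficient is preferred.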

  15. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach successfully caters to both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.
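The core idea, replacing the mean with the median so that a few extreme return periods do not dominate the objective, can be sketched as follows. The return history, weights, and risk-aversion parameter are hypothetical, not the Bursa Malaysia data.

```python
from statistics import median, pvariance

def portfolio_returns(weights, asset_returns):
    """Period-by-period portfolio return for fixed weights.
    asset_returns: list of per-period lists, one return per asset."""
    return [sum(w * r for w, r in zip(weights, period))
            for period in asset_returns]

def median_variance_score(weights, asset_returns, risk_aversion=1.0):
    """Median return penalized by variance: the median replaces the
    mean, making the objective robust to non-normal return data."""
    rets = portfolio_returns(weights, asset_returns)
    return median(rets) - risk_aversion * pvariance(rets)

# Hypothetical two-asset return history; asset 1 has one large
# outlier period, so its sample mean overstates typical performance.
history = [[0.02, 0.01], [0.01, 0.012], [-0.01, 0.008],
           [0.25, 0.011], [0.015, 0.009]]
w_a, w_b = [0.8, 0.2], [0.2, 0.8]
better = max([w_a, w_b],
             key=lambda w: median_variance_score(w, history))
```

Here the median-variance criterion prefers the steadier allocation, whereas a mean-based score would be pulled toward the outlier-driven asset.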

  16. Signal processing for solar array monitoring, fault detection, and optimization

    CERN Document Server

    Braun, Henry; Spanias, Andreas

    2012-01-01

    Although the solar energy industry has experienced rapid growth recently, high-level management of photovoltaic (PV) arrays has remained an open problem. As sensing and monitoring technology continues to improve, there is an opportunity to deploy sensors in PV arrays in order to improve their management. In this book, we examine the potential role of sensing and monitoring technology in a PV context, focusing on the areas of fault detection, topology optimization, and performance evaluation/data visualization. First, several types of commonly occurring PV array faults are considered and detection algorithms are described. Next, the potential for dynamic optimization of an array's topology is discussed, with a focus on mitigation of fault conditions and optimization of power output under non-fault conditions. Finally, monitoring system design considerations such as type and accuracy of measurements, sampling rate, and communication protocols are considered. It is our hope that the benefits of monitoring presen...

  17. Robust Portfolio Optimization using CAPM Approach

    Directory of Open Access Journals (Sweden)

    mohsen gharakhani

    2013-08-01

    Full Text Available In this paper, a new robust model of the multi-period portfolio problem has been developed. One of the key concerns in any asset allocation problem is how to cope with uncertainty about future returns. There are several approaches in the literature for this purpose, including stochastic programming and robust optimization. Applying these techniques to the multi-period portfolio problem may increase the problem size to the point that the resulting model is intractable. In this paper, a novel approach is proposed to formulate the multi-period portfolio problem as an uncertain linear program, assuming that asset returns follow the single-index factor model. Robust optimization techniques are then used to solve the problem. In order to evaluate the performance of the proposed model, a numerical example is presented using simulated data.

  18. Systematic approach to personnel neutron monitoring

    International Nuclear Information System (INIS)

    Griffith, R.V.; Hankins, D.E.

    1980-01-01

    NTA film and albedo detectors represent the major portion of personnel dosimeters now used for occupational neutron monitoring. However, recent attention to the spectral response of these systems has demonstrated the need for detectors that better match the fields being monitored. Recent developments in direct recoil track etch dosimeters present some intriguing alternatives, and careful use of 237 Np fission fragment detectors offers the advantage of a good dose equivalent spectral match. Work continues on a number of other new detector mechanisms, but problems with sensitivity, energy response, gamma interference, etc., continue to prevent development of most mechanisms into viable personnel dosimeters. Current dosimeter limitations make a systematic approach to personnel neutron monitoring particularly important. Techniques have been developed and tested, using available portable survey instruments, that significantly improve the quality of dosimeter interpretation. Even simple spectrometry can be done with modest effort, significantly improving the health physicist's ability to provide accurate neutron monitoring.

  19. Methodological approach to strategic performance optimization

    OpenAIRE

    Hell, Marko; Vidačić, Stjepan; Garača, Željko

    2009-01-01

    This paper presents a matrix approach to the measuring and optimization of organizational strategic performance. The proposed model is based on the matrix presentation of strategic performance, which follows the theoretical notions of the balanced scorecard (BSC) and strategy map methodologies, initially developed by Kaplan and Norton. Development of a quantitative record of strategic objectives provides an arena for the application of linear programming (LP), which is a mathematical tech...

  20. FREQUENCY OPTIMIZATION FOR SECURITY MONITORING OF COMPUTER SYSTEMS

    Directory of Open Access Journals (Sweden)

    Bogatyrev V.A.

    2015-03-01

    Full Text Available The subject of the proposed research is monitoring facilities for the protection of computer systems exposed to destructive attacks of an accidental and malicious nature. An interval optimization model of test monitoring for the detection of hazardous security-breach states caused by destructive attacks is proposed. The optimization function maximizes profit from the servicing of requests under uncertainty and variance in the intensity of the destructive attacks, including penalties when requests are serviced in dangerous conditions. The vector problem of maximizing system availability and minimizing the probabilities of downtime and dangerous states is reduced to a scalar optimization problem based on the criterion of maximizing profit from information services (servicing of requests), which integrates these individual criteria. Optimization variants are considered that define averaged monitoring periods and adapt these periods to changes in the intensity of destructive attacks. The efficiency of adapting the monitoring frequency to changes in the activity of the destructive attacks is shown. The proposed solutions can be applied to optimize test monitoring intervals for detecting hazardous security-breach states, which makes it possible to increase system effectiveness and, specifically, to maximize the expected profit from information services.

  1. Design and optimization of a ground water monitoring system using GIS and multicriteria decision analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, D.; Gupta, A.D.; Ramnarong, V.

    1998-12-31

    A GIS-based methodology has been developed to design a ground water monitoring system and implemented for a selected area in Mae-Klong River Basin, Thailand. A multicriteria decision-making analysis has been performed to optimize the network system based on major criteria which govern the monitoring network design such as minimization of cost of construction, reduction of kriging standard deviations, etc. The methodology developed in this study is a new approach to designing monitoring networks which can be used for any site considering site-specific aspects. It makes it possible to choose the best monitoring network from various alternatives based on the prioritization of decision factors.

  2. Expert system and process optimization techniques for real-time monitoring and control of plasma processes

    Science.gov (United States)

    Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.

    1991-03-01

    To meet the ever-increasing demand of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have accomplished an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the task of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level, qualitative descriptions of processes and thus make process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages: G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).

  3. How to study optimal timing of PET/CT for monitoring of cancer treatment

    DEFF Research Database (Denmark)

    Vach, Werner; Høilund-Carlsen, Poul Flemming; Fischer, Barbara Malene Bjerregaard

    2011-01-01

    Purpose: The use of PET/CT for monitoring treatment response in cancer patients after chemo- or radiotherapy is a very promising approach to optimize cancer treatment. However, the timing of the PET/CT-based evaluation of reduction in viable tumor tissue is a crucial question. We investigated how...

  4. A perturbed martingale approach to global optimization

    Energy Technology Data Exchange (ETDEWEB)

    Sarkar, Saikat [Computational Mechanics Lab, Department of Civil Engineering, Indian Institute of Science, Bangalore 560012 (India); Roy, Debasish, E-mail: royd@civil.iisc.ernet.in [Computational Mechanics Lab, Department of Civil Engineering, Indian Institute of Science, Bangalore 560012 (India); Vasu, Ram Mohan [Department of Instrumentation and Applied Physics, Indian Institute of Science, Bangalore 560012 (India)

    2014-08-01

    A new global stochastic search, guided mainly through derivative-free directional information computable from the sample statistical moments of the design variables within a Monte Carlo setup, is proposed. The search is aided by imparting to the directional update term additional layers of random perturbations referred to as ‘coalescence’ and ‘scrambling’. A selection step, constituting yet another avenue for random perturbation, completes the global search. The direction-driven nature of the search is manifest in the local extremization and coalescence components, which are posed as martingale problems that yield gain-like update terms upon discretization. As anticipated and numerically demonstrated, to a limited extent, against the problem of parameter recovery given the chaotic response histories of a couple of nonlinear oscillators, the proposed method appears to offer a more rational, more accurate and faster alternative to most available evolutionary schemes, prominently the particle swarm optimization. - Highlights: • Evolutionary global optimization is posed as a perturbed martingale problem. • Resulting search via additive updates is a generalization over Gateaux derivatives. • Additional layers of random perturbation help avoid trapping at local extrema. • The approach ensures efficient design space exploration and high accuracy. • The method is numerically assessed via parameter recovery of chaotic oscillators.

  5. Outage optimization - the US experience and approach

    International Nuclear Information System (INIS)

    LaPlatney, J.

    2007-01-01

    Sustainable development of Nuclear Energy depends heavily on excellent performance of the existing fleet which in turn depends heavily on the performance of planned outages. Some reactor fleets, for example Finland and Germany, have demonstrated sustained good outage performance from their start of commercial operation. Others, such as the US, have improved performance over time. The principles behind a successful outage optimization process are: -) duration is not sole measure of outage success, -) outage work must be performed safely, -) scope selection must focus on improving plant material condition to improve reliability, -) all approved outage work must be completed, -) work must be done cost effectively, -) post-outage plant reliability is a key measure of outage success, and -) outage lessons learned must be effectively implemented to achieve continuous improvement. This approach has proven its superiority over simple outage shortening, and has yielded good results in the US fleet over the past 15 years

  6. Performance Optimization in Sport: A Psychophysiological Approach

    Directory of Open Access Journals (Sweden)

    Selenia di Fronso

    2017-11-01

    Full Text Available In the last 20 years, there has been a growing interest in the study of the theoretical and applied issues surrounding the psychophysiological processes underlying performance. Psychophysiological monitoring, which enables the study of these processes, consists of assessing the activation and functioning level of the organism using a multidimensional approach. In sport, it can be used to attain a better understanding of the processes underlying athletic performance and to improve it. The most frequently used ecological techniques include electromyography (EMG), electrocardiography (ECG), electroencephalography (EEG), and the assessment of electrodermal activity and breathing rhythm. The purpose of this paper is to offer an overview of the use of these techniques in applied interventions in sport and physical exercise and to give athletes, coaches, and sport psychology experts new insights for performance improvement.

  7. Microseismic Monitoring Design Optimization Based on Multiple Criteria Decision Analysis

    Science.gov (United States)

    Kovaleva, Y.; Tamimi, N.; Ostadhassan, M.

    2017-12-01

    Borehole microseismic monitoring of hydraulic fracture treatments of unconventional reservoirs is a widely used method in the oil and gas industry. Sometimes, the quality of the acquired microseismic data is poor, and one of the reasons is poor survey design. We attempt to provide a comprehensive and thorough workflow, using multiple criteria decision analysis (MCDA), to optimize the planning of microseismic monitoring. So far, microseismic monitoring has been used extensively as a powerful tool for determining fracture parameters that affect the influx of formation fluids into the wellbore. The factors that affect the quality of microseismic data and their final results include the average distance between microseismic events and receivers, complexity of the recorded wavefield, signal-to-noise ratio, data aperture, etc. These criteria often conflict with each other. In a typical microseismic monitoring project, these factors should be considered to choose the best monitoring well(s), the optimum number of required geophones, and their depths. We use MCDA to address these design challenges and develop a method that offers an optimized design out of all possible combinations to produce the best data acquisition results. We believe that this will be the first research to include the above-mentioned factors in a 3D model. Such a tool would assist companies and practicing engineers in choosing the best design parameters for future microseismic projects.

  8. Optimizing Seismic Monitoring Networks for EGS and Conventional Geothermal Projects

    Science.gov (United States)

    Kraft, Toni; Herrmann, Marcus; Bethmann, Falko; Stefan, Wiemer

    2013-04-01

    In the past several years, geological energy technologies have received growing attention and have been initiated in or close to urban areas. Some of these technologies involve injecting fluids into the subsurface (e.g., oil and gas development, waste disposal, and geothermal energy development) and have been found or suspected to cause small to moderate sized earthquakes. These earthquakes, which may have gone unnoticed in the past when they occurred in remote, sparsely populated areas, are now posing a considerable risk for the public acceptance of these technologies in urban areas. The permanent termination of the EGS project in Basel, Switzerland after a number of induced ML~3 (minor) earthquakes in 2006 is one prominent example. It is therefore essential for the future development and success of these geological energy technologies to develop strategies for managing induced seismicity and keeping the size of induced earthquakes at a level that is acceptable to all stakeholders. Most guidelines and recommendations on induced seismicity published since the 1970s conclude that an indispensable component of such a strategy is the establishment of seismic monitoring in an early stage of a project. This is because appropriate seismic monitoring is the only way to detect and locate induced microearthquakes with sufficient certainty to develop an understanding of the seismic and geomechanical response of the reservoir to the geotechnical operation. In addition, seismic monitoring lays the foundation for the establishment of advanced traffic light systems and is therefore an important confidence-building measure towards the local population and authorities. We have developed an optimization algorithm for seismic monitoring networks in urban areas that allows one to design and evaluate seismic network geometries for arbitrary geotechnical operation layouts. The algorithm is based on the D-optimal experimental design that aims to minimize the error ellipsoid of the linearized

  9. Condition Monitoring of Sensors in a NPP Using Optimized PCA

    Directory of Open Access Journals (Sweden)

    Wei Li

    2018-01-01

    Full Text Available An optimized principal component analysis (PCA) framework is proposed in this paper to implement condition monitoring for sensors in a nuclear power plant (NPP). Compared with the common PCA method in previous research, the PCA method in this paper is optimized at different modeling procedures, including the data preprocessing stage, the modeling parameter selection stage, and the fault detection and isolation stage. The model's performance is greatly improved through these optimizations. Finally, sensor measurements from a real NPP are used to train the optimized PCA model in order to guarantee the credibility and reliability of the simulation results. Meanwhile, artificial faults are sequentially imposed on the sensor measurements to estimate the fault detection and isolation ability of the proposed PCA model. Simulation results show that the optimized PCA model is capable of detecting and isolating faulty sensors whether they exhibit major or minor failures. The quantitative evaluation results also indicate that better performance can be obtained with the optimized PCA method compared with the common PCA method.
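A minimal sketch of PCA-based sensor fault detection via the squared prediction error (Q statistic). The correlated training channels, the percentile threshold rule, and the fault samples are illustrative assumptions, not the paper's NPP data or its specific optimizations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "normal" training data: 3 correlated sensor channels
# driven by one latent process plus small measurement noise.
t = rng.normal(size=(500, 1))
X = np.hstack([t, 2 * t, -t]) + 0.05 * rng.normal(size=(500, 3))

# Train: center the data, keep the dominant principal direction.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:1].T                        # loading matrix (3 x 1)

def spe(x):
    """Squared prediction error of one sample: squared distance
    from the sample to the PCA model subspace."""
    xc = x - mu
    residual = xc - P @ (P.T @ xc)  # part not explained by the model
    return float(residual @ residual)

# Detection threshold from training residuals (a simple percentile
# stands in for the usual chi-square approximation).
threshold = np.percentile([spe(x) for x in X], 99)

normal = np.array([1.0, 2.0, -1.0])   # consistent with the correlations
faulty = np.array([1.0, 2.0, 1.0])    # sensor 3 drifted: breaks the model
```

A sample whose SPE exceeds the threshold is flagged, and the largest residual component points at the faulty sensor.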

  10. Monitoring active volcanoes: The geochemical approach

    Directory of Open Access Journals (Sweden)

    Takeshi Ohba

    2011-06-01

    Full Text Available

    The geochemical surveillance of an active volcano aims to recognize possible signals that are related to changes in volcanic activity. Indeed, as a consequence of the magma rising inside the volcanic "plumbing system" and/or the refilling with new batches of magma, the volatiles dissolved in the magma are progressively released as a function of their relative solubilities. When approaching the surface, the fluids discharged during magma degassing can interact with shallow aquifers and/or can be released along the main volcano-tectonic structures. Under these conditions, the sites of these main degassing processes are strategic locations to be monitored.

    The main purpose of this special volume is to collect papers that cover a wide range of topics in volcanic fluid geochemistry, which include geochemical characterization and geochemical monitoring of active volcanoes using different techniques and at different sites. Moreover, part of this volume has been dedicated to the new geochemistry tools.

  11. Optimizing liquid effluent monitoring at a large nuclear complex.

    Science.gov (United States)

    Chou, Charissa J; Barnett, D Brent; Johnson, Vernon G; Olson, Phil M

    2003-12-01

    Effluent monitoring typically requires a large number of analytes and samples during the initial or startup phase of a facility. Once a baseline is established, the analyte list and sampling frequency may be reduced. Although there is a large body of literature relevant to the initial design, few, if any, published papers exist on updating established effluent monitoring programs. This paper statistically evaluates four years of baseline data to optimize the liquid effluent monitoring efficiency of a centralized waste treatment and disposal facility at a large defense nuclear complex. Specific objectives were to: (1) assess temporal variability in analyte concentrations, (2) determine operational factors contributing to waste stream variability, (3) assess the probability of exceeding permit limits, and (4) streamline the sampling and analysis regime. Results indicated that the probability of exceeding permit limits was one in a million under normal facility operating conditions, sampling frequency could be reduced, and several analytes could be eliminated. Furthermore, indicators such as gross alpha and gross beta measurements could be used in lieu of more expensive specific isotopic analyses (radium, cesium-137, and strontium-90) for routine monitoring. Study results were used by the state regulatory agency to modify monitoring requirements for a new discharge permit, resulting in an annual cost savings of US$223,000. This case study demonstrates that statistical evaluation of effluent contaminant variability coupled with process knowledge can help plant managers and regulators streamline analyte lists and sampling frequencies based on detection history and environmental risk.
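The permit-exceedance probability in objective (3) can be estimated from baseline data. A minimal sketch, assuming roughly normal baseline concentrations; the study's actual distributional model, analyte values, and limit are not given here, so the numbers below are hypothetical.

```python
from statistics import NormalDist, mean, stdev

def exceedance_probability(baseline, limit):
    """Probability that a future measurement exceeds the permit limit,
    assuming baseline concentrations are approximately normal."""
    dist = NormalDist(mean(baseline), stdev(baseline))
    return 1.0 - dist.cdf(limit)

# Hypothetical gross-beta baseline measurements (pCi/L) and a permit
# limit far above normal operating levels.
baseline = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 4.0, 3.7, 4.1]
p = exceedance_probability(baseline, limit=50.0)
```

When the limit sits many standard deviations above the baseline mean, as here, the estimated exceedance probability is vanishingly small, which is the kind of evidence used to justify reduced sampling frequency.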

  12. Optimization of deformation monitoring networks using finite element strain analysis

    Science.gov (United States)

    Alizadeh-Khameneh, M. Amin; Eshagh, Mehdi; Jensen, Anna B. O.

    2018-04-01

    An optimal design of a geodetic network can fulfill the requested precision and reliability of the network and decrease the expenses of its execution by removing unnecessary observations. The role of an optimal design is highlighted in deformation monitoring networks due to the repeated measurement of these networks. The core design problem is how to define the precision and reliability criteria. This paper proposes a solution in which the precision criterion is defined based on the precision of the deformation parameters, i.e., the precision of strain and differential rotations. A strain analysis can be performed to obtain information about the possible deformation of a deformable object. In this study, we split an area into a number of three-dimensional finite elements with the help of Delaunay triangulation and performed the strain analysis on each element. According to the obtained precision of the deformation parameters in each element, the precision criterion of displacement detection at each network point is then determined. The developed criterion is implemented to optimize the observations from the Global Positioning System (GPS) in the Skåne monitoring network in Sweden. The network was established in 1989 and straddles the Tornquist zone, which is one of the most active faults in southern Sweden. The numerical results show that 17 out of all 21 possible GPS baseline observations are sufficient to detect a minimum displacement of 3 mm at each network point.
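The per-element strain computation at the heart of the criterion can be illustrated for a single linear element (shown here in 2-D for brevity; the paper works with 3-D elements and the Delaunay step is omitted). The element geometry and displacement field are illustrative assumptions.

```python
import numpy as np

def triangle_strain(coords, disp):
    """Infinitesimal 2-D strain of one linear triangular element.
    coords: 3x2 node coordinates; disp: 3x2 node displacements.
    A linear element has a constant displacement gradient, found by
    solving edge_vectors @ G = edge_displacement_differences."""
    edges = coords[1:] - coords[0]                      # 2x2 edge vectors
    dudx = np.linalg.solve(edges, disp[1:] - disp[0])   # gradient du/dx
    return 0.5 * (dudx + dudx.T)                        # symmetric strain

# Hypothetical element under a uniform 1% extension along x, no shear.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
disp = 0.01 * coords * np.array([1.0, 0.0])   # u_x = 0.01*x, u_y = 0
eps = triangle_strain(coords, disp)
```

The recovered tensor has eps[0, 0] = 0.01 with zero transverse and shear components, matching the imposed deformation; propagating observation precision through this solve gives the precision of the strain parameters.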

  13. Optimization of hydrometric monitoring network in urban drainage systems using information theory.

    Science.gov (United States)

    Yazdi, J

    2017-10-01

    Regular and continuous monitoring of urban runoff, in both quality and quantity aspects, is of great importance for controlling and managing surface runoff. Due to the considerable costs of establishing new gauges, optimization of the monitoring network is essential. This research proposes an approach for site selection of new discharge stations in urban areas based on entropy theory in conjunction with multi-objective optimization tools and numerical models. The modeling framework provides an optimal trade-off between the maximum possible information content and the minimum shared information among stations. This approach was applied to the main surface-water collection system in Tehran to determine new optimal monitoring points under cost considerations. Experimental results on this drainage network show that the obtained cost-effective designs noticeably outperform the consulting engineers' proposal in terms of both information content and shared information. The research also identified the sites appearing most frequently on the Pareto front, which may help decision makers prioritize those locations of the network for gauge installation.
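
    The entropy-based trade-off described above rests on two quantities: the information content of a station (marginal entropy) and the information shared between stations (transinformation). A minimal discretized sketch, assuming simple histogram binning of the discharge records:

```python
import numpy as np

def entropy(x, bins=10):
    """Marginal (histogram) entropy of one record, in bits."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / len(x)
    return float(-np.sum(p * np.log2(p)))

def transinformation(x, y, bins=10):
    """Shared information between two records, in bits."""
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = counts / counts.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

    A network design objective would then maximize the summed marginal entropies of the selected stations while minimizing their pairwise transinformation.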

  14. Monitoring extension: a cognition-oriented approach towards ...

    African Journals Online (AJOL)

    From the above influence relationship it can be concluded that monitoring of a ... economic or physical efficiency can be deduced from monitoring changes in ..... attractiveness, perceptions are of a more specific nature and are analysed on the.

  15. Big Data Reduction and Optimization in Sensor Monitoring Network

    Directory of Open Access Journals (Sweden)

    Bin He

    2014-01-01

    Full Text Available Wireless sensor networks (WSNs) are increasingly being utilized to monitor the structural health of underground subway tunnels, showing many promising advantages over traditional monitoring schemes. Meanwhile, as the network size increases, the system becomes incapable of handling big data well enough to ensure efficient data communication, transmission, and storage. Data compression, considered a feasible solution to these issues, can reduce the volume of data travelling between sensor nodes. In this paper, an optimization algorithm based on spatial and temporal data compression is proposed to cope with these issues as they appear in WSNs in the underground tunnel environment. Spatial and temporal correlation functions are introduced for data compression and data recovery. It is verified that the proposed algorithm is applicable to WSNs in the underground tunnel.
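
    As an illustration of temporal compression that exploits correlation between successive readings, here is a simple dead-band scheme (an illustrative stand-in, not the authors' algorithm): a node transmits a sample only when it deviates from the last transmitted value by more than a tolerance, and the sink reconstructs the series by zero-order hold.

```python
def compress(series, tol):
    """Transmit a sample only when it differs from the last transmitted
    value by more than tol; returns (index, value) pairs."""
    kept = [(0, series[0])]
    for i, v in enumerate(series[1:], start=1):
        if abs(v - kept[-1][1]) > tol:
            kept.append((i, v))
    return kept

def reconstruct(kept, n):
    """Zero-order-hold recovery of the full series from the kept samples."""
    out = []
    for (i, v), (j, _) in zip(kept, kept[1:] + [(n, None)]):
        out.extend([v] * (j - i))
    return out
```

    The reconstruction error is bounded by the tolerance, so the scheme trades recoverable accuracy for a reduction in transmitted volume.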

  16. OSSA - An optimized approach to severe accident management: EPR application

    International Nuclear Information System (INIS)

    Sauvage, E. C.; Prior, R.; Coffey, K.; Mazurkiewicz, S. M.

    2006-01-01

    There is a recognized need to provide nuclear power plant technical staff with structured guidance for response to a potential severe accident condition involving core damage and potential release of fission products to the environment. Over the past ten years, many plants worldwide have implemented such guidance for their emergency technical support center teams, either by following one of the generic approaches or by developing fully independent approaches. There are many lessons to be learned from the experience of the past decade in developing, implementing, and validating severe accident management guidance. Also, though numerous basic approaches exist which share common principles, there are differences in the methodology and application of the guidelines. AREVA/Framatome-ANP is developing an optimized approach to severe accident management guidance in a project called OSSA ('Operating Strategies for Severe Accidents'). There are still numerous operating power plants which have yet to implement severe accident management programs. For these, the option to use an updated approach that makes full use of lessons learned and experience is seen as a major advantage. Very few of the current approaches cover all operating plant states, including shutdown states with the primary system closed and open. Although it is not necessary to develop an entirely new approach in order to add this capability, the opportunity has been taken to develop revised full-scope guidance covering all plant states, in addition to the fuel in the fuel building. The EPR includes, at the design phase, systems and measures to minimize the risk of a severe accident and to mitigate such potential scenarios. This is a difference in comparison with existing plants, for which severe accidents were not considered in the design. Though developed for all types of plants, OSSA will also be applied to the EPR, with adaptations designed to take into account its favourable situation in that field.

  17. Optimizing an estuarine water quality monitoring program through an entropy-based hierarchical spatiotemporal Bayesian framework

    Science.gov (United States)

    Alameddine, Ibrahim; Karmakar, Subhankar; Qian, Song S.; Paerl, Hans W.; Reckhow, Kenneth H.

    2013-10-01

    The total maximum daily load program aims to monitor more than 40,000 standard violations in around 20,000 impaired water bodies across the United States. Given resource limitations, future monitoring efforts have to be hedged against the uncertainties in the monitored system, while taking into account existing knowledge. In that respect, we have developed a hierarchical spatiotemporal Bayesian model that can be used to optimize an existing monitoring network by retaining stations that provide the maximum amount of information, while identifying locations that would benefit from the addition of new stations. The model assumes the water quality parameters are adequately described by a joint matrix normal distribution. The adopted approach allows for a reduction in redundancies, while emphasizing information richness rather than data richness. The developed approach incorporates the concept of entropy to account for the associated uncertainties. Three different entropy-based criteria are adopted: total system entropy, chlorophyll-a standard violation entropy, and dissolved oxygen standard violation entropy. A multiple attribute decision making framework is adopted to integrate the competing design criteria and to generate a single optimal design. The approach is implemented on the water quality monitoring system of the Neuse River Estuary in North Carolina, USA. The model results indicate that the high priority monitoring areas identified by the total system entropy and the dissolved oxygen violation entropy criteria are largely coincident. The monitoring design based on the chlorophyll-a standard violation entropy proved to be less informative, given the low probabilities of violating the water quality standard in the estuary.

  18. Game-theoretic approaches to optimal risk sharing

    NARCIS (Netherlands)

    Boonen, T.J.

    2014-01-01

    This Ph.D. thesis studies optimal risk capital allocation and optimal risk sharing. The first chapter deals with the problem of optimally allocating risk capital across divisions within a financial institution. To do so, an asymptotic approach is used to generalize the well-studied Aumann-Shapley

  19. Group Counseling Optimization: A Novel Approach

    Science.gov (United States)

    Eita, M. A.; Fahmy, M. M.

    A new population-based search algorithm, which we call Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven known benchmark functions: Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.
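
    Four of the seven benchmark functions named above can be written compactly as follows (all four have a global minimum value of 0: at the origin for Sphere, Rastrigin, and Ackley, and at the all-ones point for Rosenbrock; Griewank, Weierstrass, and Schwefel are omitted for brevity):

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def rosenbrock(x):
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

def rastrigin(x):
    return float(10.0 * len(x) + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))

def ackley(x):
    n = len(x)
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)
```

    Any population-based optimizer such as GCO or CLPSO can be benchmarked by feeding candidate vectors to these functions and tracking the best value found.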

  20. Kantian Optimization: An Approach to Cooperative Behavior

    OpenAIRE

    John E. Roemer

    2014-01-01

    Although evidence accrues in biology, anthropology and experimental economics that homo sapiens is a cooperative species, the reigning assumption in economic theory is that individuals optimize in an autarkic manner (as in Nash and Walrasian equilibrium). I here postulate a cooperative kind of optimizing behavior, called Kantian. It is shown that in simple economic models, when there are negative externalities (such as congestion effects from use of a commonly owned resource) or positive exte...

  1. Model Based Optimal Sensor Network Design for Condition Monitoring in an IGCC Plant

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Rajeeva; Kumar, Aditya; Dai, Dan; Seenumani, Gayathri; Down, John; Lopez, Rodrigo

    2012-12-31

    This report summarizes the achievements and final results of this program. The objective of this program is to develop a general model-based sensor network design methodology and tools to address key issues in the design of an optimal sensor network configuration: the type, location and number of sensors used in a network, for online condition monitoring. In particular, the focus of this work is to develop software tools for optimal sensor placement (OSP) and use these tools to design optimal sensor network configurations for online condition monitoring of gasifier refractory wear and radiant syngas cooler (RSC) fouling. The methodology developed will be applicable to sensing system design for online condition monitoring for a broad range of applications. The overall approach consists of (i) defining condition monitoring requirements in terms of OSP and mapping these requirements into mathematical terms for the OSP algorithm, (ii) analyzing the trade-offs of alternate OSP algorithms, down-selecting the most relevant ones and developing them for IGCC applications, (iii) enhancing the gasifier and RSC models as required by the OSP algorithms, and (iv) applying the developed OSP algorithm to design the optimal sensor network required for condition monitoring of IGCC gasifier refractory and RSC fouling. Two key requirements of OSP for condition monitoring are the desired precision for the monitoring variables (e.g. refractory wear) and the reliability of the proposed sensor network in the presence of expected sensor failures. The OSP problem is naturally posed within a Kalman filtering approach as an integer programming problem, where the key requirements of precision and reliability are imposed as constraints and the optimization is performed over the overall network cost. Based on an extensive literature survey, two formulations were identified as relevant to OSP for condition monitoring: one based on an LMI formulation and the other a standard INLP formulation. Various algorithms to solve
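
    The Kalman-filtering view of OSP can be sketched with a greedy stand-in for the integer program: each candidate sensor row is scored by how much a measurement update would shrink the trace of the state covariance (the precision requirement), and the best rows are selected sequentially. This is an illustrative simplification, not the LMI or INLP formulation used in the report, and it ignores cost and reliability constraints.

```python
import numpy as np

def greedy_placement(H, prior_cov, noise_var, k):
    """Pick k rows of H (candidate sensors) that most reduce the trace of the
    state covariance, using sequential Kalman measurement updates."""
    P = prior_cov.copy()
    chosen, candidates = [], list(range(H.shape[0]))
    for _ in range(k):
        best_i, best_P = None, None
        for i in candidates:
            h = H[i:i + 1]                       # 1 x n measurement row
            S = float(h @ P @ h.T) + noise_var   # innovation variance
            P_new = P - (P @ h.T @ h @ P) / S    # covariance after the update
            if best_P is None or np.trace(P_new) < np.trace(best_P):
                best_i, best_P = i, P_new
        chosen.append(best_i)
        candidates.remove(best_i)
        P = best_P
    return chosen, P
```

    With a prior covariance dominated by one state (e.g. refractory wear), the greedy pass naturally selects the sensor most informative about that state first.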

  2. Optimization of remediation strategies using vadose zone monitoring systems

    Science.gov (United States)

    Dahan, Ofer

    2016-04-01

    In-situ bio-remediation of the vadose zone depends mainly on the ability to change the subsurface hydrological, physical and chemical conditions in order to enable the development of specific, indigenous, pollutant-degrading bacteria. As such, the remediation efficiency depends strongly on the ability to implement optimal hydraulic and chemical conditions in deep sections of the vadose zone. These conditions are usually determined in laboratory experiments where parameters such as the chemical composition of the soil water solution, redox potential and water content of the sediment are fully controlled. Usually, implementation of desired optimal degradation conditions in the deep vadose zone at full-scale field setups is achieved through infiltration of water enriched with chemical additives on the land surface. It is assumed that deep percolation into the vadose zone will create chemical conditions that promote biodegradation of specific compounds. However, application of water with specific chemical conditions near land surface does not necessarily result in the desired chemical and hydraulic conditions in deep sections of the vadose zone. A vadose-zone monitoring system (VMS) that was recently developed allows continuous monitoring of the hydrological and chemical properties of deep sections of the unsaturated zone. The VMS includes flexible time-domain reflectometry (FTDR) probes, which allow continuous monitoring of the temporal variation of the vadose zone water content, and vadose-zone sampling ports (VSPs), which are designed to allow frequent sampling of the sediment pore-water and gas at multiple depths. Implementation of the vadose zone monitoring system at sites that undergo active remediation provides real-time information on the actual chemical and hydrological conditions in the vadose zone as the remediation process progresses. To date, the system has been successfully implemented in several studies on water flow and contaminant transport in

  3. Preventive radioecological assessment of territory for optimization of monitoring and countermeasures after radiation accidents.

    Science.gov (United States)

    Prister, B S; Vinogradskaya, V D; Lev, T D; Talerko, M M; Garger, E K; Onishi, Y; Tischenko, O G

    2018-04-01

    A methodology for a preventive radioecological assessment of territory has been developed for optimizing post-emergency monitoring and countermeasure implementation in the event of a severe radiation accident. Approaches and the main stages of integrated radioecological zoning of the territory are described. An algorithm for assessing the potential radioecological criticality (sensitivity) of an area is presented. The proposed approach is validated using data from the dosimetric passportization carried out in Ukraine after the Chernobyl accident for the test site settlements.

  4. Optimization approaches for robot trajectory planning

    Directory of Open Access Journals (Sweden)

    Carlos Llopis-Albert

    2018-03-01

    Full Text Available The development of optimal trajectory planning algorithms for autonomous robots is a key issue for performing robot tasks efficiently. This problem is hampered by the complex environment regarding the kinematics and dynamics of robots with several arms and/or degrees of freedom (dof), the design of collision-free trajectories and the physical limitations of the robots. This paper presents a review of existing robot motion planning techniques and discusses their pros and cons regarding completeness, optimality, efficiency, accuracy, smoothness, stability, safety and scalability.

  5. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.

    Science.gov (United States)

    Yang, Shaofu; Liu, Qingshan; Wang, Jun

    2018-04-01

    This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.

  6. Ducted wind turbine optimization : A numerical approach

    NARCIS (Netherlands)

    Dighe, V.V.; De Oliveira Andrade, G.L.; van Bussel, G.J.W.

    2017-01-01

    The practice of ducting wind turbines has shown a beneficial effect on overall performance when compared to an open turbine of the same rotor diameter [1]. However, an optimization study specifically for ducted wind turbines (DWTs) is missing or incomplete. This work focuses on a numerical

  7. Russian Loanword Adaptation in Persian; Optimal Approach

    Science.gov (United States)

    Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat

    2011-01-01

    In this paper we analyze some of the phonological rules of Russian loanword adaptation in Persian from the viewpoint of Optimality Theory (OT) (Prince & Smolensky, 1993/2004). It is the first study of phonological processes in Russian loanword adaptation in Persian. From about 50 current Russian loanwords gathered, we selected some to analyze. We…

  8. Concept for integrated environmental monitoring. Scientific approach

    Energy Technology Data Exchange (ETDEWEB)

    Haber, W [comp.; Schoenthaler, K; Kerner, H F; Koeppel, J; Spandau, L

    1998-09-01

    Despite considerable expenditures for environmental protection and intensified efforts in the areas of environmental research and monitoring, environmental damage increasingly occurs, sometimes with global effects, largely due to the lack of early diagnosis. In the past few years various institutions have therefore demanded improvements in environmental monitoring. The Council of Experts on Environmental Issues ('Rat von Sachverstaendigen fuer Umweltfragen', SRU), in particular, in its 'Environmental Report' of 1987 and in its Special Report on 'General Ecological Environmental Monitoring' (1990), presented far-reaching demands for a nationwide ecological early warning system, which should integrate the various local, regional, national, and even global monitoring levels, and which should encompass environmental monitoring of entire ecosystems at representative locations. This is aimed at creating the prerequisites for: detection of long-term gradual environmental change; confirmation or refutation of initial assumptions regarding the causes of these environmental changes; permitting decisions on preventive actions to stabilize or improve environmental conditions; and making it possible to assess the success of environmental protection policies. This report includes an abbreviated version and documentation of the conference on the 'Concept for Integrated Environmental Monitoring' and the final report 'Specification of the Concept for Integrated Environmental Monitoring from the Perspective of Nature Conservation'. (orig.)

  9. Optimal redistribution of an urban air quality monitoring network using atmospheric dispersion model and genetic algorithm

    Science.gov (United States)

    Hao, Yufang; Xie, Shaodong

    2018-03-01

    Air quality monitoring networks play a significant role in identifying the spatiotemporal patterns of air pollution, and they need to be deployed efficiently, with a minimum number of sites. The revision and optimal adjustment of existing monitoring networks is crucial for cities that have undergone rapid urban expansion and experience temporal variations in pollution patterns. An approach based on the Weather Research and Forecasting-California PUFF (WRF-CALPUFF) model and a genetic algorithm (GA) was developed to design an optimal monitoring network. Maximization of coverage with minimum overlap and the ability to detect violations of standards were adopted as the design objectives for redistributed networks. The non-dominated sorting genetic algorithm was applied to optimize the network size and site locations simultaneously for Shijiazhuang, one of the most polluted cities in China. The assessment of the current network identified insufficient spatial coverage of SO2 and NO2 monitoring for the expanding city. The optimization results showed that significant improvements in multiple objectives were achieved by redistributing the original network. Efficient coverage of the resulting designs improved to 60.99% and 76.06% of the urban area for SO2 and NO2, respectively. A redistributed design for multiple pollutants, including 8 sites, was also proposed, with a spatial representation covering 52.30% of the urban area and overlapped areas decreased by 85.87% compared with the original network. The abilities to detect violations of standards were not improved as much as the other two objectives, due to the conflicting nature of the multiple objectives. Additionally, the results demonstrated that the algorithm was slightly sensitive to the parameter settings, with the number of generations having the most significant effect. Overall, our study presents an effective and feasible procedure for air quality network optimization at a city scale.
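
    The two geometric design objectives, maximum coverage with minimum overlap, can be evaluated for a candidate network roughly as follows. This is a simplified sketch in which each site represents a circle of fixed radius over a grid of cells, rather than the WRF-CALPUFF concentration fields used in the study.

```python
import numpy as np

def coverage_objectives(sites, grid, radius):
    """Fraction of grid cells covered by at least one site, and fraction
    covered by more than one site (overlap); each site represents a circle."""
    d = np.linalg.norm(grid[:, None, :] - sites[None, :, :], axis=2)
    counts = (d <= radius).sum(axis=1)          # number of sites covering each cell
    return float(np.mean(counts >= 1)), float(np.mean(counts > 1))
```

    A multi-objective GA such as NSGA-II would then search over site sets, maximizing the first value while minimizing the second.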

  10. Optimization of nonlinear controller with an enhanced biogeography approach

    Directory of Open Access Journals (Sweden)

    Mohammed Salem

    2014-07-01

    Full Text Available This paper is dedicated to the optimization of nonlinear controllers based on an enhanced Biogeography-Based Optimization (BBO) approach. The BBO is combined with a predator-prey model in which several predators are used, and a modified migration operator is introduced to increase diversification along the optimization process, so as to avoid local optima and reach the optimal solution quickly. The proposed approach is used to tune the gains of a PID controller for nonlinear systems. Simulations are carried out on a mass-spring-damper and an inverted pendulum, giving remarkable results when compared to a genetic algorithm and standard BBO.
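
    The tuning loop can be sketched as follows, with plain random search standing in for the enhanced BBO (the mass-spring-damper parameters m=1, c=2, k=5, the gain ranges, and the ISE cost are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    """Euler simulation of a mass-spring-damper (m=1, c=2, k=5) under PID
    control; returns the integral of squared error (ISE)."""
    x = v = integ = 0.0
    prev_err = setpoint - x
    ise = 0.0
    for _ in range(steps):
        err = setpoint - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        a = u - 2.0 * v - 5.0 * x      # m*a = u - c*v - k*x
        v += a * dt
        x += v * dt
        ise += err * err * dt
        prev_err = err
    return ise

# Plain random search stands in for the enhanced BBO search
rng = np.random.default_rng(0)
best_gains, best_cost = None, np.inf
for _ in range(200):
    kp, ki, kd = rng.uniform(0.0, 50.0, size=3)
    cost = simulate_pid(kp, ki, kd)
    if cost < best_cost:
        best_gains, best_cost = (kp, ki, kd), cost
```

    A metaheuristic such as BBO replaces the random draw with migration and mutation over a population of gain vectors, but the cost evaluation stays the same.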

  11. CLUSTER ENERGY OPTIMIZATION: A THEORETICAL APPROACH

    OpenAIRE

    Vikram Yadav; G. Sahoo

    2013-01-01

    The optimization of energy consumption in the cloud computing environment is the question of how to use various energy conservation strategies to allocate resources efficiently. The need for different resources in a cloud environment is unpredictable. It is observed that load management in the cloud is essential in order to provide QoS. Jobs at an over-loaded physical machine are shifted to an under-loaded physical machine, and idle machines are turned off in order to provide a green cloud. For energy opt...

  12. Design Buildings Optimally: A Lifecycle Assessment Approach

    KAUST Repository

    Hosny, Ossama

    2013-01-01

    This paper structures a generic framework to support optimal design of multiple buildings in a desert environment. The framework targets an environmentally friendly design with minimum lifecycle cost, using Genetic Algorithms (GAs). The GAs function through a set of success measures which evaluate the design, formulate a proper objective, and reflect possible tangible/intangible constraints. The framework optimizes the design and categorizes it under a certain environmental category at minimum Life Cycle Cost (LCC). It consists of three main modules: (1) a custom Building Information Model (BIM) for desert buildings with a compatibility checker as a central interactive database; (2) a system evaluator module to evaluate the proposed success measures for the design; and (3) a GA optimization module to ensure optimum design. The framework functions at three levels: building components, integrated building, and multi-building. At the component level, the design team should be able to select components in a designed sequence to ensure compatibility among various components, while at the building level, the team can relatively locate and orient each individual building. Finally, at the multi-building (compound) level, the whole design can be evaluated using success measures of natural light, site capacity, shading impact on natural lighting, thermal change, visual access and energy saving. Through genetic algorithms, the framework optimizes the design by determining proper types of building components and relative building locations and orientations which ensure categorizing the design under a specific category, or meeting certain preferences, at minimum lifecycle cost.

  13. Using models for the optimization of hydrologic monitoring

    Science.gov (United States)

    Fienen, Michael N.; Hunt, Randall J.; Doherty, John E.; Reeves, Howard W.

    2011-01-01

    Hydrologists are often asked what kind of monitoring network can most effectively support science-based water-resources management decisions. Currently (2011), hydrologic monitoring locations often are selected by addressing observation gaps in the existing network or non-science issues such as site access. A model might then be calibrated to available data and applied to a prediction of interest (regardless of how well-suited that model is for the prediction). However, modeling tools are available that can inform which locations and types of data provide the most 'bang for the buck' for a specified prediction. Put another way, the hydrologist can determine which observation data most reduce the model uncertainty around a specified prediction. An advantage of such an approach is the maximization of limited monitoring resources because it focuses on the difference in prediction uncertainty with or without additional collection of field data. Data worth can be calculated either through the addition of new data or subtraction of existing information by reducing monitoring efforts (Beven, 1993). The latter generally is not widely requested, as there is explicit recognition that the worth calculated is fundamentally dependent on the prediction specified. If a water manager needs a new prediction, the benefits of reducing the scope of a monitoring effort, based on an old prediction, may be erased by the loss of information important for the new prediction. This fact sheet focuses on the worth or value of new data collection by quantifying the reduction in prediction uncertainty achieved by adding a monitoring observation. This calculation of worth can be performed for multiple potential locations (and types) of observations, which then can be ranked for their effectiveness in reducing uncertainty around the specified prediction. This is implemented using a Bayesian approach with the PREDUNC utility in the parameter estimation software suite PEST (Doherty, 2010). The
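
    For a linear-Gaussian model, the worth of a candidate observation, i.e. the reduction in prediction variance it buys, can be computed directly from the Bayesian update. This is a minimal sketch of the general idea behind utilities like PREDUNC, not their implementation; the two-parameter model, prior, and noise variance below are illustrative.

```python
import numpy as np

def prediction_variance(X_obs, noise_var, prior_cov, x_pred):
    """Variance of the prediction x_pred @ theta after observing the rows of
    X_obs in a linear-Gaussian model (Bayesian update of the parameter covariance)."""
    P = prior_cov
    if len(X_obs):
        P = np.linalg.inv(np.linalg.inv(prior_cov) + X_obs.T @ X_obs / noise_var)
    return float(x_pred @ P @ x_pred)

def data_worth(X_base, x_new, noise_var, prior_cov, x_pred):
    """Reduction in prediction variance gained by adding observation row x_new."""
    before = prediction_variance(X_base, noise_var, prior_cov, x_pred)
    after = prediction_variance(np.vstack([X_base, x_new]), noise_var,
                                prior_cov, x_pred)
    return before - after
```

    Evaluating data_worth for each candidate monitoring location yields the ranking described above: observations aligned with the prediction direction carry high worth, while orthogonal ones carry none.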

  14. A hybrid approach for biobjective optimization

    DEFF Research Database (Denmark)

    Stidsen, Thomas Jacob Riis; Andersen, Kim Allan

    2018-01-01

    to single-objective problems is that no standard multiobjective solvers exist and specialized algorithms need to be programmed from scratch. In this article we will present a hybrid approach, which operates both in decision space and in objective space. The approach enables massive efficient parallelization and can ... be used for a wide variety of biobjective Mixed Integer Programming models. We test the approach on the biobjective extension of the classic traveling salesman problem, on the standard datasets, and determine the full set of nondominated points. This has only been done once before (Florios and Mavrotas

  15. Multiobjective Optimization Methodology A Jumping Gene Approach

    CERN Document Server

    Tang, KS

    2012-01-01

    Complex design problems are often governed by a number of performance merits. These markers gauge how good the design is going to be, but can conflict with the performance requirements that must be met. The challenge is reconciling these two requirements. This book introduces a newly developed jumping gene algorithm, designed to address the multi-objective problem and supply an adequate solution quickly. The text presents various multi-objective optimization techniques and provides the technical know-how for obtaining trade-off solutions between solution spread and convergence

  16. Optimization of Remediation Conditions using Vadose Zone Monitoring Technology

    Science.gov (United States)

    Dahan, O.; Mandelbaum, R.; Ronen, Z.

    2010-12-01

    Success of in-situ bio-remediation of the vadose zone depends mainly on the ability to change and control the hydrological, physical and chemical conditions of the subsurface. These manipulations enable the development of specific, indigenous, pollutant-degrading bacteria, or set the environmental conditions for seeded bacteria. As such, the remediation efficiency depends on the ability to implement optimal hydraulic and chemical conditions in deep sections of the vadose zone. Enhanced bioremediation of the vadose zone is achieved under field conditions through infiltration of water enriched with chemical additives. Yet water percolation and solute transport under unsaturated conditions is a complex process, and application of water with specific chemical conditions near land surface does not necessarily result in the desired chemical and hydraulic conditions in deeper sections of the vadose zone. A newly developed vadose-zone monitoring system (VMS) allows continuous monitoring of the hydrological and chemical properties of the percolating water along deep sections of the vadose zone. Implementation of the VMS at sites that undergo active remediation provides real-time information on the chemical and hydrological conditions in the vadose zone as the remediation process progresses. Manipulating subsurface conditions for optimal biodegradation of hydrocarbons is demonstrated through enhanced bio-remediation of the vadose zone at a site in Tel Aviv that was contaminated with gasoline products. The vadose zone at the site is composed of a 6 m clay layer overlying a sandy formation extending to the water table at a depth of 20 m bls. The upper 5 m of contaminated soil were removed for ex-situ treatment, and the remaining 15 m vadose zone is treated in-situ through enhanced bioremediation. An underground drip irrigation system was installed below the surface on the bottom of the excavation. An oxygen- and nutrient-releasing powder (EHCO, Adventus) was spread below the

  17. Designing optimal greenhouse gas monitoring networks for Australia

    Science.gov (United States)

    Ziehn, T.; Law, R. M.; Rayner, P. J.; Roff, G.

    2016-01-01

    Atmospheric transport inversion is commonly used to infer greenhouse gas (GHG) flux estimates from concentration measurements. The optimal location of ground-based observing stations that supply these measurements can be determined by network design. Here, we use a Lagrangian particle dispersion model (LPDM) in reverse mode together with a Bayesian inverse modelling framework to derive optimal GHG observing networks for Australia. This extends the network design for carbon dioxide (CO2) performed by Ziehn et al. (2014) to also minimise the uncertainty on the flux estimates for methane (CH4) and nitrous oxide (N2O), both individually and in a combined network using multiple objectives. Optimal networks are generated by adding up to five new stations to the base network, which is defined as two existing stations, Cape Grim and Gunn Point, in southern and northern Australia respectively. The individual networks for CO2, CH4 and N2O and the combined observing network show large similarities because the flux uncertainties for each GHG are dominated by regions of biologically productive land. There is little penalty, in terms of flux uncertainty reduction, for the combined network compared to individually designed networks. The location of the stations in the combined network is sensitive to variations in the assumed data uncertainty across locations. A simple assessment of economic costs has been included in our network design approach, considering both establishment and maintenance costs. Our results suggest that, while site logistics change the optimal network, there is only a small impact on the flux uncertainty reductions achieved with increasing network size.

  18. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This paper is devoted to the study of an extension of dynamic programming approach which allows sequential optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure R(T) which is the number

  19. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This chapter is devoted to the study of an extension of dynamic programming approach that allows sequential optimization of exact decision rules relative to the length and coverage. It contains also results of experiments with decision tables from

  20. Optimization approaches to volumetric modulated arc therapy planning

    Energy Technology Data Exchange (ETDEWEB)

    Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu; Bortfeld, Thomas; Craft, David [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Alber, Markus [Department of Medical Physics and Department of Radiation Oncology, Aarhus University Hospital, Aarhus C DK-8000 (Denmark); Bangert, Mark [Department of Medical Physics in Radiation Oncology, German Cancer Research Center, Heidelberg D-69120 (Germany); Bokrantz, Rasmus [RaySearch Laboratories, Stockholm SE-111 34 (Sweden); Chen, Danny [Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Li, Ruijiang; Xing, Lei [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Men, Chunhua [Department of Research, Elekta, Maryland Heights, Missouri 63043 (United States); Nill, Simeon [Joint Department of Physics at The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG (United Kingdom); Papp, Dávid [Department of Mathematics, North Carolina State University, Raleigh, North Carolina 27695 (United States); Romeijn, Edwin [H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Salari, Ehsan [Department of Industrial and Manufacturing Engineering, Wichita State University, Wichita, Kansas 67260 (United States)

    2015-03-15

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential of VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large-scale optimization problem: in contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning is a nonconvex optimization problem. In this paper, the authors review the state of the art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories, are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  1. Distributed Cooperative Optimal Control for Multiagent Systems on Directed Graphs: An Inverse Optimal Approach.

    Science.gov (United States)

    Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing

    2015-07-01

    In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.
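
    The consensus behaviour described above can be illustrated with a minimal sketch of a distributed consensus protocol for scalar single-integrator agents on a directed graph. The graph, gain, and initial states below are invented for illustration; the paper's inverse-optimality analysis and optimal gain design are not reproduced.

```python
# Minimal consensus-protocol sketch: each agent nudges its scalar state toward
# the states of its in-neighbors on a directed graph with a spanning tree
# rooted at agent 0. The states converge to agreement (here, the root's value).

NEIGHBORS = {0: [], 1: [0], 2: [0, 1], 3: [2]}   # agent -> in-neighbors

def step(states, gain=0.3):
    new = []
    for i, x in enumerate(states):
        # distributed protocol: u_i = gain * sum_j (x_j - x_i) over in-neighbors
        u = gain * sum(states[j] - x for j in NEIGHBORS[i])
        new.append(x + u)
    return new

states = [1.0, -2.0, 4.0, 0.5]
for _ in range(100):
    states = step(states)
print([round(x, 3) for x in states])
```

    With this topology all agents converge to the root's state, 1.0; the quadratic performance achieved by such protocols is what the inverse optimal construction characterizes.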

  2. An approach for optimizing arc welding applications

    International Nuclear Information System (INIS)

    Chapuis, Julien

    2011-01-01

    The dynamic and transport mechanisms involved in the arc plasma and the weld pool of arc welding operations are numerous and strongly coupled. They produce a medium whose magnitudes exhibit rapid time variations and very marked gradients, which make any experimental analysis of this disrupted environment complex. In this work, we study the TIG and MIG processes. An experimental platform was developed to allow synchronized measurement of various physical quantities associated with welding (process parameters, temperatures, clamping forces, metal transfer, etc.). Numerical libraries dedicated to applied studies in arc welding were developed. They enable the treatment of a large flow of data (signals, images) with a systematic and global method. The advantages of this approach for the enrichment of numerical simulation and arc process control are shown in different situations. Finally, this experimental approach is used in the context of the chosen application to obtain rich measurements describing the dynamic behavior of the weld pool in P-GMAW. Dimensional analysis of these experimental measurements makes it possible to identify the predominant mechanisms involved and to determine the associated characteristic times experimentally. This type of approach allows a better description of the behavior of a macro-drop of molten metal and of the phenomena occurring in humping instabilities. (author)

  3. Biased Monte Carlo optimization: the basic approach

    International Nuclear Information System (INIS)

    Campioni, Luca; Scardovelli, Ruben; Vestrucci, Paolo

    2005-01-01

    It is well known that the Monte Carlo method is very successful in tackling several kinds of system simulations. It often happens that one has to deal with rare events, and the use of a variance reduction technique is then almost mandatory in order to obtain efficient Monte Carlo applications. The main issue associated with variance reduction techniques is the choice of the value of the biasing parameter. This task is typically left to the experience of the Monte Carlo user, who has to make many attempts before achieving an advantageous biasing. A valuable result is provided: a methodology and a practical rule intended to establish a priori guidance for the choice of the optimal value of the biasing parameter. This result, which has been obtained for a single-component system, has the notable property of being valid for any multicomponent system. In particular, in this paper, the exponential and uniform biases of exponentially distributed phenomena are investigated thoroughly.
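
    A minimal sketch of exponential biasing as a variance reduction technique, in the spirit of the abstract: the rare event P(X > t0) for an exponentially distributed X is estimated by sampling from a biased exponential density and reweighting. The rates and threshold are illustrative, not taken from the paper.

```python
import math
import random

# Importance sampling with an exponential bias. For X ~ Exp(lam), we estimate
# p = P(X > t0) by drawing from the biased law Exp(lam_b) and applying the
# likelihood ratio w(x) = (lam / lam_b) * exp(-(lam - lam_b) * x), which keeps
# the estimator unbiased while putting many more samples past the threshold.

def biased_estimate(lam, lam_b, t0, n, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam_b)            # sample from the biased density
        if x > t0:
            total += (lam / lam_b) * math.exp(-(lam - lam_b) * x)
    return total / n

lam, t0 = 1.0, 8.0
exact = math.exp(-lam * t0)                   # ~3.4e-4: a rare event for analog MC
est = biased_estimate(lam, lam_b=1.0 / t0, t0=t0, n=20000)
print(exact, est)
```

    With the analog rate (lam_b = lam) only a handful of the 20,000 samples would exceed t0; the biased rate 1/t0 concentrates samples around the threshold, which is the kind of a priori biasing-parameter choice the paper's practical rule addresses.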

  4. Strain sensors optimal placement for vibration-based structural health monitoring. The effect of damage on the initially optimal configuration

    Science.gov (United States)

    Loutas, T. H.; Bourikas, A.

    2017-12-01

    We revisit the optimal sensor placement problem for engineering structures with an emphasis on in-plane dynamic strain measurements, in the direction of modal identification as well as vibration-based damage detection for structural health monitoring purposes. The approach utilized is based on maximizing a norm of the Fisher Information Matrix built with numerically obtained mode shapes of the structure, while at the same time prohibiting the sensorization of neighboring degrees of freedom as well as those carrying similar information, in order to obtain satisfactory coverage. A new convergence criterion for the Fisher Information Matrix (FIM) norm is proposed in order to deal with the issue of choosing an appropriate sensor redundancy threshold, a concept recently introduced but not further investigated concerning its choice. The sensor configurations obtained via a forward sequential placement algorithm are sub-optimal in terms of FIM norm values, but since the selected sensors are not allowed to be placed on neighboring degrees of freedom, they provide better coverage of the structure and a subsequently better identification of the experimental mode shapes. The issue of how service-induced damage affects the initially nominated optimal sensor configuration is also investigated and reported. The numerical model of a composite sandwich panel serves as a representative aerospace structure upon which our investigations are based.
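
    A toy sketch of the forward sequential placement idea with a neighbor-exclusion rule. The mode shapes below are invented, and the FIM trace is used as the norm purely for simplicity; the paper's actual norm choice and its convergence criterion are not reproduced.

```python
# Forward sequential sensor placement on hypothetical mode-shape data. The
# Fisher Information Matrix over the selected DOFs is Q = Phi_s^T Phi_s; we
# maximize its trace (sum of squared modal amplitudes at the chosen DOFs) and
# forbid placing a sensor adjacent to an already selected DOF, so that the
# configuration covers the structure instead of clustering.

# rows: candidate DOFs 0..7, columns: two numerically obtained mode shapes
MODES = [
    [0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8],
    [0.5, 0.5], [0.4, 0.6], [0.7, 0.3], [0.3, 0.7],
]

def fim_trace(selected):
    # trace(Phi_s^T Phi_s) over the selected degrees of freedom
    return sum(v * v for dof in selected for v in MODES[dof])

def place_sensors(n_sensors):
    selected = []
    for _ in range(n_sensors):
        allowed = [d for d in range(len(MODES))
                   if d not in selected
                   and all(abs(d - s) > 1 for s in selected)]  # neighbor ban
        if not allowed:
            break
        selected.append(max(allowed, key=lambda d: fim_trace(selected + [d])))
    return sorted(selected)

print(place_sensors(3))
```

    Without the neighbor ban, the greedy pass would pick DOFs 2, 0 and then 1; the exclusion pushes the third sensor away to DOF 6, trading a little FIM norm for coverage, which is exactly the trade-off the abstract describes.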

  5. Monitoring Distributed Systems: A Relational Approach.

    Science.gov (United States)

    1982-12-01

    List of Figures: Figure 2-1: Relationships between Primitive and Derived Events and Periods; Figure 3-1: ... structure in order to ensure specified invariants, usually relating to synchronization [Hoare 74]. Both definitions emphasize the control, rather than the ... monitor must understand that there are such things as processors, processes, memory, message ports, semaphores, etc., and that certain relationships

  6. A Deep Learning Approach to Drone Monitoring

    OpenAIRE

    Chen, Yueru; Aggarwal, Pranav; Choi, Jongmoo; Kuo, C. -C. Jay

    2017-01-01

    A drone monitoring system that integrates deep-learning-based detection and tracking modules is proposed in this work. The biggest challenge in adopting deep learning methods for drone detection is the limited amount of training drone images. To address this issue, we develop a model-based drone augmentation technique that automatically generates drone images with a bounding box label on drone's location. To track a small flying drone, we utilize the residual information between consecutive i...

  7. Reliability-based optimal structural design by the decoupling approach

    International Nuclear Information System (INIS)

    Royset, J.O.; Der Kiureghian, A.; Polak, E.

    2001-01-01

    A decoupling approach for solving optimal structural design problems involving reliability terms in the objective function, the constraint set or both is discussed and extended. The approach employs a reformulation of each problem, in which reliability terms are replaced by deterministic functions. The reformulated problems can be solved by existing semi-infinite optimization algorithms and computational reliability methods. It is shown that the reformulated problems produce solutions that are identical to those of the original problems when the limit-state functions defining the reliability problem are affine. For nonaffine limit-state functions, approximate solutions are obtained by solving series of reformulated problems. An important advantage of the approach is that the required reliability and optimization calculations are completely decoupled, thus allowing flexibility in the choice of the optimization algorithm and the reliability computation method

  8. A "Hybrid" Approach for Synthesizing Optimal Controllers of Hybrid Systems

    DEFF Research Database (Denmark)

    Zhao, Hengjun; Zhan, Naijun; Kapur, Deepak

    2012-01-01

    to discretization manageable and within bounds. A major advantage of our approach is not only that it avoids errors due to numerical computation, but it also gives a better optimal controller. In order to illustrate our approach, we use the real industrial example of an oil pump provided by the German company HYDAC...

  9. An Optimization Approach to the Dynamic Allocation of Economic Capital

    NARCIS (Netherlands)

    Laeven, R.J.A.; Goovaerts, M.J.

    2004-01-01

    We propose an optimization approach to allocating economic capital, distinguishing between an allocation or raising principle and a measure for the risk residual. The approach is applied both at the aggregate (conglomerate) level and at the individual (subsidiary) level and yields an integrated

  10. A practical multiscale approach for optimization of structural damping

    DEFF Research Database (Denmark)

    Andreassen, Erik; Jensen, Jakob Søndergaard

    2016-01-01

    A simple and practical multiscale approach suitable for topology optimization of structural damping in a component ready for additive manufacturing is presented.The approach consists of two steps: First, the homogenized loss factor of a two-phase material is maximized. This is done in order...

  11. Optimal design of hydrometric monitoring networks with dynamic components based on Information Theory

    Science.gov (United States)

    Alfonso, Leonardo; Chacon, Juan; Solomatine, Dimitri

    2016-04-01

    The EC-FP7 WeSenseIt project proposes the development of a Citizen Observatory of Water, aiming at enhancing environmental monitoring and forecasting with the help of citizens equipped with low-cost sensors and personal devices such as smartphones and smart umbrellas. In this regard, Citizen Observatories may complement the limited data availability in terms of spatial and temporal density, which is of interest, among other areas, for improving hydraulic and hydrological models. At this point, the following question arises: how can citizens, who are part of a citizen observatory, be optimally guided so that the data they collect and send are useful to improve modelling and water management? This research proposes a new methodology to identify the optimal location and timing of potential observations coming from moving sensors of hydrological variables. The methodology is based on Information Theory, which has been widely used in hydrometric monitoring design [1-4], and in particular on the concept of Joint Entropy, a measure of the amount of information contained in a set of random variables, which, in our case, correspond to the time series of hydrological variables captured at given locations in a catchment. The methodology presented is a step forward in the state of the art because it solves the multiobjective optimisation problem of simultaneously finding the minimum number of informative and non-redundant sensors needed at a given time, so that the best configuration of monitoring sites is found at every particular moment. To this end, the existing algorithms have been improved to make them efficient. The method is applied to cases in The Netherlands, UK and Italy and proves to have great potential to complement the existing in-situ monitoring networks. [1] Alfonso, L., A. Lobbrecht, and R. Price (2010a), Information theory-based approach for location of monitoring water level gauges in polders, Water Resour. Res., 46(3), W03528 [2] Alfonso, L., A
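
    The joint-entropy selection idea can be sketched in a few lines. The quantized level series below are made up and stand in for the WeSenseIt data; the greedy gain criterion illustrates why a redundant gauge is never picked.

```python
import math
from collections import Counter

# Greedy monitor selection by joint entropy on hypothetical quantized
# water-level series. Joint entropy H(S) of the tuple-valued series measures
# the information carried by a sensor set; greedily adding the sensor with the
# largest joint-entropy gain yields informative, non-redundant locations.

SERIES = {  # candidate location -> quantized readings over time
    "gauge1": [0, 0, 1, 1, 2, 2, 0, 1],
    "gauge2": [0, 0, 1, 1, 2, 2, 0, 1],   # exact duplicate of gauge1
    "gauge3": [0, 1, 0, 1, 0, 1, 0, 1],
    "gauge4": [2, 2, 1, 0, 0, 1, 2, 2],
}

def joint_entropy(names):
    tuples = list(zip(*(SERIES[n] for n in names)))
    counts = Counter(tuples)
    n = len(tuples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def select(k):
    chosen = []
    for _ in range(k):
        best = max((n for n in SERIES if n not in chosen),
                   key=lambda n: joint_entropy(chosen + [n]))
        chosen.append(best)
    return chosen

picked = select(2)
print(picked, round(joint_entropy(picked), 3))
```

    gauge2 duplicates gauge1, so adding it yields zero entropy gain; the greedy pass instead pairs gauge1 with the complementary gauge3, which is the non-redundancy property the methodology exploits.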

  12. Air pollution monitoring - a methodological approach

    International Nuclear Information System (INIS)

    Trajkovska Trpevska, Magdalena

    2002-01-01

    The methodology for monitoring the emission of pollutants into the air is a complex concept that in general embraces the following phases: sampling, laboratory treatment, and interpretation of results. In the Company for Technological and Laboratory Investigation and Environmental Protection - Mining Institute Skopje, the control of pollutant emissions into the air is performed according to a methodology based, in general, on the recommendations of standard VDI 2.066 prescribed by the Ministry of Ecology in Germany, because adequate legislation does not exist in our country. In this article the basic methodology for controlling air pollutant emissions is presented. (Original)

  13. An Efficient PageRank Approach for Urban Traffic Optimization

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2012-01-01

    to determine optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al., 1999). Our approach is similar to the work presented by Sheng-Chung et al. (2009) and Yousef et al. (2010). We consider that the traffic lights are controlled by servers, and a score for each road is computed based on an efficient PageRank approach and used in a cost function to determine optimal decisions. We demonstrate that the cumulative contribution of each car in the traffic respects the main constraint of the PageRank approach, preserving all the properties of the matrix considered in our model.
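
    The road-scoring step can be sketched with a standard PageRank power iteration on a toy road graph. The topology below is hypothetical, and the paper's traffic-light cost function and server architecture are not reproduced; the sketch only shows how road scores propagate along downstream links.

```python
# PageRank power iteration on a toy road graph: each road "links" to the roads
# reachable from it, and a road's score rises when heavily scored roads feed
# into it. The resulting ranking can then feed a traffic-light cost function.

LINKS = {  # road -> downstream roads
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iters=50):
    n = len(links)
    rank = {v: 1.0 / n for v in links}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in links}   # teleportation term
        for v, outs in links.items():
            share = rank[v] / len(outs)               # split score over out-links
            for w in outs:
                new[w] += damping * share
        rank = new
    return rank

ranks = pagerank(LINKS)
busiest = max(ranks, key=ranks.get)
print(busiest, round(ranks[busiest], 3))
```

    Because every road has at least one out-link, the total score is conserved at 1.0 each iteration, which mirrors the abstract's point that the cumulative contribution of the cars preserves the stochastic-matrix property PageRank relies on.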

  14. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    Science.gov (United States)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
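
    For the identical-variance case the Lagrange-multiplier step has a simple closed form, sketched below with illustrative numbers under the convention (used in this line of work) that the budget constraint is sum(w) = N.

```python
# Lagrange-multiplier sketch of the primal problem for identical asset
# variances, C = sigma^2 * I. Minimizing the risk w'Cw subject to the budget
# constraint sum(w) = N gives stationarity 2*sigma^2*w_i = mu for every asset,
# so all optimal weights are equal (w_i = 1) and the minimal risk is
# N * sigma^2. Numbers below are illustrative only.

def optimal_weights(n_assets):
    # closed form from the Lagrangian L = w'Cw - mu * (sum(w) - n_assets)
    return [1.0] * n_assets

def risk(weights, sigma2):
    return sigma2 * sum(w * w for w in weights)

N, sigma2 = 5, 0.04
w_opt = optimal_weights(N)
w_alt = [1.2, 0.8, 1.0, 1.0, 1.0]   # another budget-feasible allocation
print(risk(w_opt, sigma2), risk(w_alt, sigma2))
```

    Any budget-feasible deviation from equal weights strictly increases the quadratic risk, which is the concentration-versus-risk trade-off the primal and dual problems formalize.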

  15. Toward Intelligent Hemodynamic Monitoring: A Functional Approach

    Directory of Open Access Journals (Sweden)

    Pierre Squara

    2012-01-01

    Technology is now available to allow a complete haemodynamic analysis; however, this is only used in a small proportion of patients and seems to occur when the medical staff have the time and inclination. As a result, significant delays occur between an event, its diagnosis and, therefore, any treatment required. We can speculate that we should be able to collect enough real-time information to make a complete, real-time haemodynamic diagnosis in all critically ill patients. This article advocates for “intelligent haemodynamic monitoring”. Following the steps of a functional analysis, we answered six basic questions. (1) What is the actual best theoretical model for describing haemodynamic disorders? (2) What are the needed and necessary input/output data for describing this model? (3) What are the specific quality criteria and tolerances for collecting each input variable? (4) Based on these criteria, what are the validated available technologies for monitoring each input variable continuously, in real time and, if possible, non-invasively? (5) How can we integrate all the needed, reliably monitored input variables into the same system for continuously describing the global haemodynamic model? (6) Is it possible to implement this global model into intelligent programs that are able to differentiate clinically relevant changes from artificial changes and to display intelligent messages and/or diagnoses?

  16. Indirect Tire Monitoring System - Machine Learning Approach

    Science.gov (United States)

    Svensson, O.; Thelin, S.; Byttner, S.; Fan, Y.

    2017-10-01

    The heavy vehicle industry today has no legal requirement to provide a tire pressure monitoring system. This has created issues surrounding unknown tire pressure and tread depth during active service. There is also no standardization for these kinds of systems, which means that different manufacturers and third-party solutions work on their own principles, and it can be hard to know what works for a given vehicle type. The objective is to create an indirect tire monitoring system that can generalize a method to detect both incorrect tire pressure and tread depth for different types of vehicles within a fleet, without the need for additional physical sensors or vehicle-specific parameters. The existing sensors communicate over CAN and are interpreted by the Drivec Bridge hardware that exists in the fleet. Using supervised machine learning, a classifier was created for each axle, with the main focus on the front axle, which had the most issues. The classifier classifies the condition of the vehicle's tires and will be implemented in Drivec's cloud service, from which it will receive its data. The resulting classifier is a random forest implemented in Python. On the front axle, with a data set consisting of 9767 samples of buses with correct tire condition and 1909 samples of buses with incorrect tire condition, it has an accuracy of 90.54% (0.96%). The data sets are created from 34 unique measurements from buses between January and May 2017. This classifier has been exported and is used inside a Node.js module created for Drivec's cloud service, which is the result of the whole implementation. The developed solution is called the Indirect Tire Monitoring System (ITMS) and is seen as a process. This process will predict bad classes in the cloud, which will lead to warnings. The warnings are defined as incidents. They contain only the information needed, and the bandwidth of the incidents is also controlled so incidents are created within an
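
    The supervised-classification setup can be illustrated with a fully synthetic sketch: a bagged ensemble of decision stumps (standing in for the paper's random forest) votes on a wheel-speed-like feature. The data generator, feature, and thresholds are all invented; none of Drivec's CAN data or model parameters are reproduced.

```python
import random

# Bagged-ensemble sketch of tire-condition classification on synthetic data.
# Low pressure is mimicked by a higher relative wheel-speed feature; each
# decision stump trains on a bootstrap resample of the training set, and the
# ensemble classifies by majority vote, as a random forest of depth-1 trees.

def make_data(n, rng):
    data = []
    for _ in range(n):
        label = rng.random() < 0.5            # True = incorrect tire condition
        base = 1.05 if label else 0.95        # relative wheel-speed feature
        data.append((base + rng.gauss(0, 0.02), label))
    return data

def train_stump(sample):
    # threshold at the midpoint of the per-class feature means
    pos = [x for x, y in sample if y]
    neg = [x for x, y in sample if not y]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def train_forest(data, n_trees, rng):
    # each stump sees its own bootstrap resample of the training data
    return [train_stump([rng.choice(data) for _ in data]) for _ in range(n_trees)]

def predict(forest, x):
    votes = sum(1 for t in forest if x > t)
    return votes * 2 > len(forest)            # majority vote

rng = random.Random(0)
train, test = make_data(400, rng), make_data(200, rng)
forest = train_forest(train, 25, rng)
accuracy = sum(predict(forest, x) == y for x, y in test) / len(test)
print(round(accuracy, 2))
```

    A real deployment would use many CAN-derived features and full decision trees, but the structure — train per axle, classify condition, push predictions to a cloud service — follows the process the abstract describes.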

  17. Monitoring and optimization of ATLAS Tier 2 center GoeGrid

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219638; Quadt, Arnulf; Yahyapour, Ramin

    The demand on computational and storage resources is growing along with the amount of information that needs to be processed and preserved. In order to ease the provisioning of digital services to the growing number of consumers, more and more distributed computing systems and platforms are actively developed and employed. The building blocks of the distributed computing infrastructure are single computing centers, such as the Worldwide LHC Computing Grid Tier 2 centre GoeGrid. The main motivation of this thesis was the optimization of GoeGrid performance through efficient monitoring. The goal has been achieved by means of analysis of the GoeGrid monitoring information. The data analysis approach was based on the adaptive-network-based fuzzy inference system (ANFIS) and machine learning algorithms such as the linear Support Vector Machine (SVM). The main object of the research was the digital service, since the availability, reliability and serviceability of the computing platform can be measured according to the const...

  18. A novel approach for optimal chiller loading using particle swarm optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ardakani, A. Jahanbani; Ardakani, F. Fattahi; Hosseinian, S.H. [Department of Electrical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Hafez Avenue, Tehran 15875-4413 (Iran, Islamic Republic of)

    2008-07-01

    This study employs two new methods to solve the optimal chiller loading (OCL) problem: the continuous genetic algorithm (GA) and particle swarm optimization (PSO). Because of the continuous nature of the variables in the OCL problem, continuous GA and PSO easily overcome deficiencies of other conventional optimization methods. The partial load ratio (PLR) of each chiller is chosen as the variable to be optimized, and the power consumption of the chillers is taken as the fitness function. Both methods find the optimal solution while exactly satisfying the equality constraint. Major advantages of the proposed approaches over other conventional methods include fast convergence, the ability to escape local optima, simple implementation, and independence of the solution from the specific problem. The abilities of the proposed methods are examined with reference to an example system, and the results are compared with a binary genetic algorithm method to demonstrate these abilities. The proposed approaches can be readily applied to air-conditioning systems. (author)
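
    The PSO formulation can be sketched on a toy instance. The quadratic power curves, cooling demand, and PSO coefficients below are all illustrative (not the paper's plant data), and a penalty term stands in for the paper's exact handling of the equality constraint.

```python
import random

# Particle swarm optimization for a toy optimal chiller loading problem.
# Each chiller's power draw depends on its part-load ratio (PLR) in [0, 1];
# the swarm minimizes total power while a penalty enforces the cooling-demand
# equality constraint sum(PLR) = DEMAND.

POWER = [  # hypothetical power curves: a + b*plr + c*plr^2
    lambda p: 100 + 200 * p + 60 * p * p,
    lambda p: 120 + 150 * p + 90 * p * p,
    lambda p: 110 + 180 * p + 70 * p * p,
]
DEMAND = 1.8

def cost(plrs):
    power = sum(f(p) for f, p in zip(POWER, plrs))
    return power + 1e4 * abs(sum(plrs) - DEMAND)   # penalized equality

def pso(n_particles=30, iters=200, seed=3):
    rng = random.Random(seed)
    dim = len(POWER)
    xs = [[rng.random() for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    gbest = list(min(xs, key=cost))
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                vs[i][d] = (0.7 * vs[i][d]                         # inertia
                            + 1.5 * rng.random() * (pbest[i][d] - x[d])
                            + 1.5 * rng.random() * (gbest[d] - x[d]))
                x[d] = min(1.0, max(0.0, x[d] + vs[i][d]))         # clamp PLR
            if cost(x) < cost(pbest[i]):
                pbest[i] = list(x)
                if cost(x) < cost(gbest):
                    gbest = list(x)
    return gbest

best = pso()
total_power = sum(f(p) for f, p in zip(POWER, best))
print([round(p, 3) for p in best], round(total_power, 1))
```

    Because the PLRs are continuous, the particles move directly in the feasible box without any binary encoding, which is the advantage over binary GA that the abstract highlights.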

  19. Scientific Opportunities for Monitoring at Environmental Remediation Sites (SOMERS): Integrated Systems-Based Approaches to Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Bunn, Amoret L.; Wellman, Dawn M.; Deeb, Rula A.; Hawley, Elizabeth L.; Truex, Michael J.; Peterson, Mark; Freshley, Mark D.; Pierce, Eric M.; McCord, John; Young, Michael H.; Gilmore, Tyler J.; Miller, Rick; Miracle, Ann L.; Kaback, Dawn; Eddy-Dilek, Carol; Rossabi, Joe; Lee, Michelle H.; Bush, Richard P.; Beam , Paul; Chamberlain, G. M.; Marble, Justin; Whitehurst, Latrincy; Gerdes, Kurt D.; Collazo, Yvette

    2012-05-15

    Through an inter-disciplinary effort, DOE is addressing a need to advance monitoring approaches from sole reliance on cost- and labor-intensive point-source monitoring to integrated systems-based approaches such as flux-based approaches and the use of early indicator parameters. Key objectives include identifying current scientific, technical and implementation opportunities and challenges, prioritizing science and technology strategies to meet current needs within the DOE complex for the most challenging environments, and developing an integrated and risk-informed monitoring framework.

  20. Horsetail matching: a flexible approach to optimization under uncertainty

    Science.gov (United States)

    Cook, L. W.; Jarrett, J. P.

    2018-04-01

    It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
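
    A toy sketch of the horsetail-matching idea follows. The noisy response model, target, and grid search are all invented (the article's airfoil problem and kernel-based implementation are not reproduced); the metric used is the sample estimate of the integrated squared gap between the design's inverse CDF and a constant ideal target.

```python
import random

# Horsetail-matching sketch on a hypothetical model: the design variable d
# shifts the mean, (d - 2)^2 + 1, and the spread, 0.2 + 0.1*|d - 2|, of a
# noisy quantity of interest q(d, u). With a constant target t(h) = 1 for the
# inverse CDF, the matching metric reduces to the mean of (q - 1)^2 over
# samples, and minimizing it penalizes both offset and spread at once.

def response(d, u):
    return (d - 2.0) ** 2 + 1.0 + (0.2 + 0.1 * abs(d - 2.0)) * u

def horsetail_metric(d, target=1.0, n=2000, seed=7):
    rng = random.Random(seed)                 # common random numbers across d
    qs = [response(d, rng.gauss(0, 1)) for _ in range(n)]
    return sum((q - target) ** 2 for q in qs) / n

# crude grid search over d in [1, 3]; a real implementation would use the
# article's differentiable kernel-smoothed objective with a gradient method
best_d = min((1.0 + 0.05 * k for k in range(41)), key=horsetail_metric)
print(round(best_d, 2))
```

    The minimizer sits where both the mean offset from the target and the spread are smallest, illustrating how a CDF-to-target formulation avoids the over-reliance on individual statistical moments that the abstract warns about.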

  1. Optimized Enhanced Bioremediation Through 4D Geophysical Monitoring and Autonomous Data Collection, Processing and Analysis

    Science.gov (United States)

    2014-09-01

    (ER-200717) Optimized Enhanced Bioremediation Through 4D Geophysical Monitoring and Autonomous Data Collection, Processing and Analysis. [Only report front matter and table-of-contents fragments are recoverable, including the section heading "2.1.2 The Geophysical Signatures of Bioremediation".]

  2. System Approach of Logistic Costs Optimization Solution in Supply Chain

    OpenAIRE

    Majerčák, Peter; Masárová, Gabriela; Buc, Daniel; Majerčáková, Eva

    2013-01-01

    This paper is focused on the possibility of using cost simulation in the supply chain, where costs are at a relatively high level. Our goal is to determine costs using logistic cost optimization, which must necessarily be applied to business activities in supply chain management. The paper emphasizes the need not to perform isolated optimization, but to optimize across the whole supply chain. Our goal is to compare the classic approach, in which every part tracks its costs in isolation and tries to minimize them, with the system (l...

  3. Systems engineering approach towards performance monitoring of emergency diesel generator

    International Nuclear Information System (INIS)

    Nurhayati Ramli; Lee, Y.K.

    2013-01-01

    Systems engineering is an interdisciplinary approach and means to enable the realization of successful systems. In this study, a systems engineering approach towards performance monitoring of the Emergency Diesel Generator (EDG) is presented. Performance monitoring is part and parcel of predictive maintenance, whereby system and component conditions can be detected before they result in failures. In an effort to identify a proposal for addressing performance monitoring, the EDG boundary has been defined. Based on the Probabilistic Safety Analysis (PSA) results and industry operating experience, the most critical component is identified. This paper proposes a systems engineering concept development framework towards EDG performance monitoring. The expected output of this study is that EDG reliability can be improved by the performance monitoring alternatives identified through the systems engineering concept development effort. (author)

  4. Solving Unconstrained Global Optimization Problems via Hybrid Swarm Intelligence Approaches

    Directory of Open Access Journals (Sweden)

    Jui-Yu Wu

    2013-01-01

    Stochastic global optimization (SGO) algorithms such as the particle swarm optimization (PSO) approach have become popular for solving unconstrained global optimization (UGO) problems. The PSO approach, which belongs to the swarm intelligence domain, does not require gradient information, enabling it to overcome this limitation of traditional nonlinear programming methods. Unfortunately, PSO algorithm implementation and performance depend on several parameters, such as the cognitive parameter, social parameter, and constriction coefficient. These parameters are tuned by trial and error. To reduce the parametrization of a PSO method, this work presents two efficient hybrid SGO approaches, namely, a real-coded genetic algorithm-based PSO (RGA-PSO) method and an artificial immune algorithm-based PSO (AIA-PSO) method. The specific parameters of the internal PSO algorithm are optimized using the external RGA and AIA approaches, and then the internal PSO algorithm is applied to solve UGO problems. The performances of the proposed RGA-PSO and AIA-PSO algorithms are then evaluated using a set of benchmark UGO problems. Numerical results indicate that, besides their ability to converge to a global minimum for each test UGO problem, the proposed RGA-PSO and AIA-PSO algorithms outperform many hybrid SGO algorithms. Thus, the RGA-PSO and AIA-PSO approaches can be considered alternative SGO approaches for solving standard-dimensional UGO problems.

  5. Design of pressure vessels using shape optimization: An integrated approach

    Energy Technology Data Exchange (ETDEWEB)

    Carbonari, R.C., E-mail: ronny@usp.br [Department of Mechatronic Engineering, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231 Sao Paulo, SP 05508-900 (Brazil); Munoz-Rojas, P.A., E-mail: pablo@joinville.udesc.br [Department of Mechanical Engineering, Universidade do Estado de Santa Catarina, Bom Retiro, Joinville, SC 89223-100 (Brazil); Andrade, E.Q., E-mail: edmundoq@petrobras.com.br [CENPES, PDP/Metodos Cientificos, Petrobras (Brazil); Paulino, G.H., E-mail: paulino@uiuc.edu [Newmark Laboratory, Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, 205 North Mathews Av., Urbana, IL 61801 (United States); Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, 158 Mechanical Engineering Building, 1206 West Green Street, Urbana, IL 61801-2906 (United States); Nishimoto, K., E-mail: knishimo@usp.br [Department of Naval Architecture and Ocean Engineering, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231 Sao Paulo, SP 05508-900 (Brazil); Silva, E.C.N., E-mail: ecnsilva@usp.br [Department of Mechatronic Engineering, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231 Sao Paulo, SP 05508-900 (Brazil)

    2011-05-15

    Previous papers related to the optimization of pressure vessels have considered the optimization of the nozzle independently from the dished end. This approach generates problems such as thickness variation from nozzle to dished end (coupling cylindrical region) and, as a consequence, it reduces the optimality of the final result which may also be influenced by the boundary conditions. Thus, this work discusses shape optimization of axisymmetric pressure vessels considering an integrated approach in which the entire pressure vessel model is used in conjunction with a multi-objective function that aims to minimize the von Mises mechanical stress from nozzle to head. Representative examples are examined and solutions obtained for the entire vessel considering temperature and pressure loading. It is noteworthy that different shapes from the usual ones are obtained. Even though such different shapes may not be profitable considering present manufacturing processes, they may be competitive for future manufacturing technologies, and contribute to a better understanding of the actual influence of shape in the behavior of pressure vessels. - Highlights: > Shape optimization of entire pressure vessel considering an integrated approach. > By increasing the number of spline knots, the convergence stability is improved. > The null angle condition gives lower stress values resulting in a better design. > The cylinder stresses are very sensitive to the cylinder length. > The shape optimization of the entire vessel must be considered for cylinder length.

  6. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    Science.gov (United States)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared with a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs, ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement through parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications, and integrates it with FACET so that time-optimal routes between worldwide airport pairs in a wind field can be calculated for use with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations compare computational efficiencies and are based on the potential application of optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  7. Vector-model-supported approach in prostate plan optimization

    International Nuclear Information System (INIS)

    Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi

    2017-01-01

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model that retrieves similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, vector-model-supported optimization reduced the planning time and number of iterations by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer a much shortened planning time and fewer iterations.
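
    The case-retrieval step the abstract describes can be sketched as a nearest-neighbor lookup. The abstract does not specify the DICOM-derived feature encoding or the similarity metric, so the feature vectors and Euclidean distance below are assumptions:

    ```python
    import math

    def retrieve_similar_case(query_features, reference_db):
        """Return the reference case whose feature vector is closest to the query,
        so its stored planning parameters can seed the new plan optimization."""
        def distance(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(reference_db, key=lambda case: distance(case["features"], query_features))
    ```

    The retrieved case's planning parameters would then initialize the optimizer in place of trial-and-error manual adjustment.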

  8. Vector-model-supported approach in prostate plan optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Eva Sau Fan [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Wu, Vincent Wing Cheung [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Harris, Benjamin [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Lehman, Margot; Pryor, David [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); School of Medicine, University of Queensland (Australia); Chan, Lawrence Wing Chi, E-mail: wing.chi.chan@polyu.edu.hk [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong)

    2017-07-01

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model that retrieves similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, vector-model-supported optimization reduced the planning time and number of iterations by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer a much shortened planning time and fewer iterations.

  9. [Study on the optimization of monitoring indicators of drinking water quality during health supervision].

    Science.gov (United States)

    Ye, Bixiong; E, Xueli; Zhang, Lan

    2015-01-01

    To optimize the non-regular drinking water quality indices (except Giardia and Cryptosporidium) of urban drinking water, several methods were applied, including the rate of exceeding the water quality standard, the risk of exceeding the standard, the frequency of concentrations below the detection limit, a comprehensive water quality index evaluation method, and the attribute reduction algorithm of rough set theory. Redundant water quality indicators were eliminated, and the control factors that play a leading role in drinking water safety were identified. The results showed that, of 62 non-regular water quality monitoring indicators of urban drinking water, 42 could be removed through comprehensive evaluation combined with rough set attribute reduction. Optimizing the water quality monitoring indicators and reducing the number of monitoring indicators and the monitoring frequency can ensure the safety of drinking water quality while lowering monitoring costs and reducing the monitoring burden on the sanitation supervision departments.

  10. Systems approach for design control at Monitored Retrievable Storage Project

    International Nuclear Information System (INIS)

    Kumar, P.N.; Williams, J.R.

    1994-01-01

    This paper describes the systems approach in establishing design control for the Monitored Retrievable Storage Project design development. Key elements in design control are enumerated and systems engineering aspects are detailed. Application of lessons learned from the Yucca Mountain Project experience is addressed. An integrated approach combining quality assurance and systems engineering requirements is suggested to achieve effective design control.

  11. Identifying optimal remotely-sensed variables for ecosystem monitoring in Colorado Plateau drylands

    Science.gov (United States)

    Poitras, Travis; Villarreal, Miguel; Waller, Eric K.; Nauman, Travis; Miller, Mark E.; Duniway, Michael C.

    2018-01-01

    Water-limited ecosystems often recover slowly following anthropogenic or natural disturbance. Multitemporal remote sensing can be used to monitor ecosystem recovery after disturbance; however, dryland vegetation cover can be challenging to measure accurately due to sparse cover and spectral confusion between soils and non-photosynthetic vegetation. With the goal of optimizing a monitoring approach for identifying both abrupt and gradual vegetation changes, we evaluated the ability of Landsat-derived spectral variables to characterize surface variability of vegetation cover and bare ground across a range of vegetation community types. Using three-year composites of Landsat data, we modeled relationships between spectral information and field data collected at monitoring sites near Canyonlands National Park, UT. We also developed multiple regression models to assess improvement over single variables. We found that for all vegetation types, percent cover of bare ground could be accurately modeled with single indices that included a combination of red and shortwave infrared bands, while near infrared-based vegetation indices like NDVI worked best for quantifying tree cover and total live vegetation cover in woodlands. We applied four models to characterize the spatial distribution of putative grassland ecological states across our study area, illustrating how this approach can be implemented to guide dryland ecosystem management.
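
    The NIR-based vegetation indices mentioned above follow standard band-ratio formulas; NDVI, for example, is (NIR − red) / (NIR + red) computed from surface reflectance. A minimal sketch (the small stabilizing epsilon is an implementation choice, not part of the definition):

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
        nir = np.asarray(nir, dtype=float)
        red = np.asarray(red, dtype=float)
        return (nir - red) / (nir + red + 1e-10)  # epsilon avoids divide-by-zero
    ```

    Dense green vegetation reflects strongly in the NIR and absorbs red light, so it scores high; bare soil and non-photosynthetic vegetation score near zero, which is the spectral confusion the record describes.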

  12. A Statistical Approach to Optimizing Concrete Mixture Design

    OpenAIRE

    Ahmad, Shamsad; Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicate...

  13. Terminal Control Area Aircraft Scheduling and Trajectory Optimization Approaches

    Directory of Open Access Journals (Sweden)

    Samà Marcella

    2017-01-01

    Full Text Available Aviation authorities are seeking optimization methods to better use the available infrastructure and better manage aircraft movements. This paper deals with the real-time scheduling of take-off and landing aircraft at a busy terminal control area and with the optimization of aircraft trajectories during the landing procedures. The first problem aims to reduce the propagation of delays, while the second aims to either minimize the travel time or reduce the fuel consumption. Both problems are particularly complex, since the first is NP-hard while the second is nonlinear, and a combined solution needs to be computed in a short time during operations. This paper proposes a framework for the lexicographic optimization of the two problems. Computational experiments are performed for the Milano Malpensa airport and show the existing gaps between the performance indicators of the two problems when different lexicographic optimization approaches are considered.
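
    Lexicographic optimization, as used above, solves for the primary objective first and only then discriminates among (near-)optimal solutions using the secondary objective. A generic sketch over a finite candidate set (the record's actual scheduling and trajectory solvers are far more involved; names here are illustrative):

    ```python
    def lexicographic_select(candidates, primary, secondary, tol=1e-9):
        """Pick the candidate minimizing `primary`; among candidates within `tol`
        of the primary optimum, pick the one minimizing `secondary`."""
        best_primary = min(primary(c) for c in candidates)
        near_optimal = [c for c in candidates if primary(c) <= best_primary + tol]
        return min(near_optimal, key=secondary)
    ```

    The `tol` parameter controls how much primary-objective optimality may be traded away before the secondary objective is consulted at all, which is the defining feature of the lexicographic ordering.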

  14. An intuitionistic fuzzy optimization approach to vendor selection problem

    Directory of Open Access Journals (Sweden)

    Prabjot Kaur

    2016-09-01

    Full Text Available Selecting the right vendor is an important business decision made by any organization. The decision involves multiple criteria and, if the objectives vary in preference and scope, the nature of the decision becomes multiobjective. In this paper, a vendor selection problem has been formulated as an intuitionistic fuzzy multiobjective optimization in which an appropriate number of vendors is to be selected and order quantities allocated to them. The multiobjective problem includes three objectives: minimizing the net price, maximizing the quality, and maximizing the on-time deliveries, subject to the suppliers' constraints. The objective functions and the demand are treated as intuitionistic fuzzy sets. An intuitionistic fuzzy set can handle uncertainty with additional degrees of freedom. The intuitionistic fuzzy optimization (IFO) problem is converted into a crisp linear form and solved using the optimization software Tora. The advantage of IFO is that it gives better results than fuzzy/crisp optimization. The proposed approach is explained by a numerical example.

  15. A Novel Measurement Matrix Optimization Approach for Hyperspectral Unmixing

    Directory of Open Access Journals (Sweden)

    Su Xu

    2017-01-01

    Full Text Available Each pixel in the hyperspectral unmixing process is modeled as a linear combination of endmembers, i.e., of a number of pure spectral signatures that are known in advance. However, commonly used Gaussian random measurement matrices are limited by their computational complexity and sparsity, which affects efficiency and accuracy. This paper proposes a novel approach for the optimization of the measurement matrix in compressive sensing (CS) theory for hyperspectral unmixing. Firstly, a new Toeplitz-structured chaotic measurement matrix (TSCMM) is formed by pseudo-random chaotic elements, which can be implemented by simple hardware; secondly, rank-revealing QR factorization with eigenvalue decomposition is presented to speed up the measurement time; finally, an orthogonal gradient descent method for measurement matrix optimization is used to achieve optimal incoherence. Experimental results demonstrate that the proposed approach leads to better CS reconstruction performance with low extra computational cost in hyperspectral unmixing.
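
    A Toeplitz-structured matrix is constant along each diagonal, so an m×n measurement matrix is fully determined by a single sequence of m + n − 1 values, which is why it suits simple hardware. As an illustrative sketch of the idea (the logistic map and the ±1 quantization are assumptions; the record's TSCMM construction may differ in detail):

    ```python
    import numpy as np

    def toeplitz_chaotic_matrix(m, n, x0=0.3, r=4.0):
        """m x n Toeplitz measurement matrix whose diagonals are filled from a
        logistic-map chaotic sequence quantized to +-1, scaled by 1/sqrt(m)."""
        seq = []
        x = x0
        for _ in range(m + n - 1):
            x = r * x * (1.0 - x)                  # logistic map iteration
            seq.append(1.0 if x >= 0.5 else -1.0)  # quantize to +-1
        A = np.empty((m, n))
        for i in range(m):
            for j in range(n):
                A[i, j] = seq[i - j + n - 1]       # entry depends only on i - j
        return A / np.sqrt(m)
    ```

    Because only m + n − 1 chaotic values are generated (versus m·n independent Gaussians), both storage and generation cost drop, at no loss of the pseudo-randomness CS recovery relies on.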

  16. Optimal Charging of Electric Drive Vehicles: A Dynamic Programming Approach

    DEFF Research Database (Denmark)

    Delikaraoglou, Stefanos; Capion, Karsten Emil; Juul, Nina

    2013-01-01

    , therefore, we propose an ex ante vehicle aggregation approach. We illustrate the results in a Danish case study and find that, although optimal management of the vehicles does not allow for storage and day-to-day flexibility in the electricity system, the market provides incentive for intra-day flexibility....

  17. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha

    2013-01-01

    This chapter is devoted to the study of an extension of the dynamic programming approach that allows sequential optimization of exact decision rules relative to length and coverage. It also contains results of experiments with decision tables from the UCI Machine Learning Repository. © Springer-Verlag Berlin Heidelberg 2013.

  18. Approaches to the Optimal Nonlinear Analysis of Microcalorimeter Pulses

    Science.gov (United States)

    Fowler, J. W.; Pappas, C. G.; Alpert, B. K.; Doriese, W. B.; O'Neil, G. C.; Ullom, J. N.; Swetz, D. S.

    2018-03-01

    We consider how to analyze microcalorimeter pulses for quantities that are nonlinear in the data, while preserving the signal-to-noise advantages of linear optimal filtering. We successfully apply our chosen approach to compute the electrothermal feedback energy deficit (the "Joule energy") of a pulse, which has been proposed as a linear estimator of the deposited photon energy.
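
    For context, the linear optimal filtering whose signal-to-noise advantages the authors aim to preserve estimates a pulse's amplitude by a noise-weighted projection onto a known template, a = (sᵀC⁻¹d)/(sᵀC⁻¹s). A minimal sketch of this standard estimator (stationary Gaussian noise assumed; this is not the paper's nonlinear Joule-energy computation):

    ```python
    import numpy as np

    def optimal_filter_amplitude(data, template, noise_cov):
        """Minimum-variance linear amplitude estimate of a known pulse shape:
        a = (s^T C^-1 d) / (s^T C^-1 s)."""
        cinv_s = np.linalg.solve(noise_cov, template)  # C^-1 s without an explicit inverse
        return float(data @ cinv_s) / float(template @ cinv_s)
    ```

    For white noise (C proportional to the identity) this reduces to an ordinary least-squares projection; colored noise down-weights the frequency bands where the noise power is high.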

  19. New approaches to optimization in aerospace conceptual design

    Science.gov (United States)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.

  20. A design approach for integrating thermoelectric devices using topology optimization

    International Nuclear Information System (INIS)

    Soprani, S.; Haertel, J.H.K.; Lazarov, B.S.; Sigmund, O.; Engelbrecht, K.

    2016-01-01

    Highlights: • The integration of a thermoelectric (TE) cooler into a robotic tool is optimized. • Topology optimization is suggested as design tool for TE integrated systems. • A 3D optimization technique using temperature dependent TE properties is presented. • The sensitivity of the optimization process to the boundary conditions is studied. • A working prototype is constructed and compared to the model results. - Abstract: Efficient operation of thermoelectric devices strongly relies on the thermal integration into the energy conversion system in which they operate. Effective thermal integration reduces the temperature differences between the thermoelectric module and its thermal reservoirs, allowing the system to operate more efficiently. This work proposes and experimentally demonstrates a topology optimization approach as a design tool for efficient integration of thermoelectric modules into systems with specific design constraints. The approach allows thermal layout optimization of thermoelectric systems for different operating conditions and objective functions, such as temperature span, efficiency, and power recovery rate. As a specific application, the integration of a thermoelectric cooler into the electronics section of a downhole oil well intervention tool is investigated, with the objective of minimizing the temperature of the cooled electronics. Several challenges are addressed: ensuring effective heat transfer from the load, minimizing the thermal resistances within the integrated system, maximizing the thermal protection of the cooled zone, and enhancing the conduction of the rejected heat to the oil well. The design method incorporates temperature dependent properties of the thermoelectric device and other materials. The 3D topology optimization model developed in this work was used to design a thermoelectric system, complete with insulation and heat sink, that was produced and tested. Good agreement between experimental results and model predictions was obtained.

  1. Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.

    Science.gov (United States)

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang

    2016-11-01

    Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types. Both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types. © 2016, National Ground Water Association.
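
    The data-worth idea behind this record — rank candidate measurements by how much they reduce the variance of a model prediction under linear uncertainty propagation — can be sketched with a greedy rank-one Bayesian update. This is a simplification: the paper uses a genetic algorithm over multi-location, multi-type designs, and all names below are illustrative:

    ```python
    import numpy as np

    def greedy_sensor_selection(J, prior_cov, pred_grad, noise_var, k):
        """Greedily pick k measurement candidates (rows of sensitivity matrix J)
        minimizing the variance of the scalar prediction g = pred_grad^T p
        under a linear-Gaussian model with parameter covariance C."""
        C = prior_cov.copy()
        chosen = []
        for _ in range(k):
            best_i, best_var, best_C = None, np.inf, C
            for i in range(J.shape[0]):
                if i in chosen:
                    continue
                h = J[i]
                Ch = C @ h
                # Rank-one posterior update for one noisy linear measurement h^T p
                C_new = C - np.outer(Ch, Ch) / (h @ Ch + noise_var)
                var = float(pred_grad @ C_new @ pred_grad)
                if var < best_var:
                    best_i, best_var, best_C = i, var, C_new
            chosen.append(best_i)
            C = best_C
        return chosen, float(pred_grad @ C @ pred_grad)
    ```

    Greedy selection ignores the data-worth interdependencies the record emphasizes (two jointly informative sensors may each look weak alone), which is precisely why a global search such as a genetic algorithm is used there instead.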

  2. Commonalities and complementarities among approaches to conservation monitoring and evaluation

    DEFF Research Database (Denmark)

    Mascia, Michael B.; Pailler, Sharon; Thieme, Michele L.

    2014-01-01

    Commonalities and complementarities among approaches to conservation monitoring and evaluation (M&E) are not well articulated, creating the potential for confusion, misuse, and missed opportunities to inform conservation policy and practice. We examine the relationships among five approaches to conservation M&E, characterizing each approach in eight domains: the focal question driving each approach, when in the project cycle each approach is employed, scale of data collection, the methods of data collection and analysis, the implementers of data collection and analysis, the users of M&E outputs, and the decisions informed by these outputs. Ambient monitoring measures status and change in ambient social and ecological conditions, independent of any conservation intervention. Management assessment measures management inputs, activities, and outputs, as the basis for investments to build management capacity…

  3. An integrated sampling and analysis approach for improved biodiversity monitoring

    Science.gov (United States)

    DeWan, Amielle A.; Zipkin, Elise F.

    2010-01-01

    Successful biodiversity conservation requires high quality monitoring data and analyses to ensure scientifically defensible policy, legislation, and management. Although monitoring is a critical component in assessing population status and trends, many governmental and non-governmental organizations struggle to develop and implement effective sampling protocols and statistical analyses because of the magnitude and diversity of species in conservation concern. In this article we describe a practical and sophisticated data collection and analysis framework for developing a comprehensive wildlife monitoring program that includes multi-species inventory techniques and community-level hierarchical modeling. Compared to monitoring many species individually, the multi-species approach allows for improved estimates of individual species occurrences, including rare species, and an increased understanding of the aggregated response of a community to landscape and habitat heterogeneity. We demonstrate the benefits and practicality of this approach to address challenges associated with monitoring in the context of US state agencies that are legislatively required to monitor and protect species in greatest conservation need. We believe this approach will be useful to regional, national, and international organizations interested in assessing the status of both common and rare species.

  4. On-line monitoring applications at nuclear power plants. A risk informed approach to calibration reduction

    International Nuclear Information System (INIS)

    Shankar, Ramesh; Hussey, Aaron; Davis, Eddie

    2003-01-01

    On-line monitoring of instrument channels provides increased information about the condition of monitored channels through accurate, more frequent evaluation of each channel's performance over time. This type of performance monitoring is a methodology that offers an alternate approach to traditional time-directed calibration. EPRI's strategic role in on-line monitoring is to facilitate its implementation and cost-effective use in numerous applications at power plants. To this end, EPRI has sponsored an on-line monitoring implementation project at multiple nuclear plants specifically intended to install and use on-line monitoring technology. The selected on-line monitoring method is based on the Multivariate State Estimation Technique. The project has a planned three-year life; seven plants are participating in the project. The goal is to apply on-line monitoring to all types of power plant applications and document all aspects of the implementation process in a series of EPRI reports. These deliverables cover installation, modeling, optimization, and proven cost-benefit. This paper discusses the actual implementation of on-line monitoring to various nuclear plant instrument systems. Examples of detected instrument drift are provided. (author)

  5. Minimizing transient influence in WHPA delineation: An optimization approach for optimal pumping rate schemes

    Science.gov (United States)

    Rodriguez-Pretelin, A.; Nowak, W.

    2017-12-01

    For most groundwater protection management programs, Wellhead Protection Areas (WHPAs) have served as the primary protection measure. In their delineation, the influence of time-varying groundwater flow conditions is often underestimated because steady-state assumptions are commonly made. However, it has been demonstrated that temporal variations lead to significant changes in the required size and shape of WHPAs. Apart from natural transient groundwater drivers (e.g., changes in the regional angle of flow direction and seasonal natural groundwater recharge), anthropogenic causes such as transient pumping rates are among the most influential factors that require larger WHPAs. We hypothesize that WHPA programs that integrate adaptive and optimized pumping-injection management schemes can counter transient effects and thus reduce the additional areal demand in well protection under transient conditions. The main goal of this study is to present a novel management framework that optimizes pumping schemes dynamically, in order to minimize the impact triggered by transient conditions in WHPA delineation. For optimizing pumping schemes, we consider three objectives: (1) to minimize the risk of pumping water from outside a given WHPA, (2) to maximize the groundwater supply, and (3) to minimize the involved operating costs. We solve transient groundwater flow through an available transient groundwater and Lagrangian particle tracking model. The optimization problem is formulated as a dynamic programming problem. Two different optimization approaches are explored: the first aims for single-objective optimization under objective (1) only; the second performs multiobjective optimization under all three objectives, where compromise pumping rates are selected from the current Pareto front. Finally, we look for WHPA outlines that are as small as possible, yet allow the optimization problem to find the most suitable solutions.

  6. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning

    International Nuclear Information System (INIS)

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-01-01

    Purpose: In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. Methods: pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. Results: pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows
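
    The database of Pareto nondominated solutions mentioned above contains exactly those plans not dominated by any other: no other plan is at least as good in every objective and strictly better in at least one. A minimal filter (minimization assumed for all objectives):

    ```python
    def dominates(q, p):
        """q dominates p if q is no worse in every objective and better in one."""
        return (all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))

    def pareto_front(points):
        """Return the nondominated subset of objective vectors (minimization)."""
        return [p for p in points if not any(dominates(q, p) for q in points)]
    ```

    Presenting the whole front, rather than one scalarized optimum, is what lets a planner inspect the necessary trade-offs between PTV coverage and OAR sparing before committing to a plan.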

  7. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning.

    Science.gov (United States)

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-01

In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows promise in optimizing the number

  8. Optimizing study design for multi-species avian monitoring programmes

    Science.gov (United States)

    Jamie S. Sanderlin; William M. Block; Joseph L. Ganey

    2014-01-01

Many monitoring programmes are successful at monitoring common species, whereas rare species, which are often of highest conservation concern, may be detected infrequently. Study designs that increase the probability of detecting rare species at least once over the study period, while collecting adequate data on common species, strengthen a programme's ability to address...

  9. Application of probabilistic risk based optimization approaches in environmental restoration

    International Nuclear Information System (INIS)

    Goldammer, W.

    1995-01-01

The paper presents a general approach to site-specific risk assessments and optimization procedures. In order to account for uncertainties in the assessment of the current situation and of future developments, optimization parameters are treated as probability distributions. The assessments are performed within the framework of a cost-benefit analysis. Radiation hazards and conventional risks are treated within an integrated approach. Special consideration is given to the consequences of low-probability events such as earthquakes or major floods. Risks and financial costs are combined into an overall figure of detriment, allowing one to distinguish between the benefits of the available reclamation options. The probabilistic analysis uses a Monte Carlo simulation technique. The paper demonstrates the applicability of this approach in aiding reclamation planning, using an example from the German reclamation program for uranium mining and milling sites
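The probabilistic cost-benefit idea in this record can be sketched as a small Monte Carlo exercise. The option names, cost figures and risk distributions below are invented for illustration, not taken from the Goldammer study; the sketch only shows how sampling an uncertain risk term and ranking options by expected detriment might look:

```python
import random

def expected_detriment(cost, risk_mean, risk_sd, alpha=1.0, n=10_000, seed=0):
    """Monte Carlo estimate of detriment = cost + alpha * risk, where the
    risk term is sampled from a normal distribution truncated at zero."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += cost + alpha * max(0.0, rng.gauss(risk_mean, risk_sd))
    return total / n

# Three invented reclamation options: (cost, mean risk, risk spread),
# all in the same arbitrary detriment units.
options = {
    "cap-in-place": (2.0, 8.0, 3.0),
    "partial-removal": (5.0, 3.0, 1.0),
    "full-removal": (12.0, 0.5, 0.2),
}
scores = {name: expected_detriment(c, mu, sd) for name, (c, mu, sd) in options.items()}
best_option = min(scores, key=scores.get)
```

With these made-up numbers the low-risk but expensive option loses to the intermediate one, illustrating how the combined figure of detriment discriminates between options that neither cost nor risk alone would rank the same way.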

  10. Stochastic optimization in insurance a dynamic programming approach

    CERN Document Server

    Azcue, Pablo

    2014-01-01

The main purpose of the book is to show how a viscosity approach can be used to tackle control problems in insurance. The problems covered are the maximization of survival probability as well as the maximization of dividends in the classical collective risk model. The authors consider the possibility of controlling the risk process by reinsurance as well as by investments. They show that optimal value functions are characterized as either the unique or the smallest viscosity solution of the associated Hamilton-Jacobi-Bellman equation; they also study the structure of the optimal strategies and show how to find them. The viscosity approach was widely used in control problems related to mathematical finance, but until quite recently it was not used to solve control problems in actuarial science. This book is designed to familiarize the reader with this approach. The intended audience is graduate students as well as researchers in this area.

  11. Compact approach to monitored retrievable storage of spent fuel

    International Nuclear Information System (INIS)

    Muir, D.W.

    1984-09-01

    Recent federal waste-management legislation has raised national interest in monitored retrievable storage (MRS) of unprocessed spent fuel from civilian nuclear power plants. We have reviewed the current MRS design approaches, and we have examined an alternative concept that is extremely compact in terms of total land use. This approach may offer substantial advantages in the areas of monitoring and in safeguards against theft, as well as in reducing the chances of groundwater contamination. Total facility costs are roughly estimated and found to be generally competitive with other MRS concepts. 4 references, 3 figures, 3 tables

  12. A Hybrid Heuristic Optimization Approach for Leak Detection in Pipe Networks Using Ordinal Optimization Approach and the Symbiotic Organism Search

    Directory of Open Access Journals (Sweden)

    Chao-Chih Lin

    2017-10-01

A new transient-based hybrid heuristic approach is developed to optimize a transient generation process and to detect leaks in pipe networks. The approach couples the ordinal optimization approach (OOA) and the symbiotic organism search (SOS) to solve the optimization problem by means of iterations. A pipe network analysis model (PNSOS) is first used to determine the steady-state head distribution and pipe flow rates. The best transient generation point and its relevant valve operation parameters are optimized by maximizing the objective function of transient energy. The transient event is created at the chosen point, and the method of characteristics (MOC) is used to analyze the transient flow. The OOA is applied to sift through the candidate pipes and the initial organisms with leak information. The SOS is employed to determine the leaks by minimizing the sum of differences between simulated and computed heads at the observation points. Two synthetic leaking scenarios, a simple pipe network and a water distribution network (WDN), are chosen to test the performance of leak detection ordinal symbiotic organism search (LDOSOS). Leak information can be accurately identified by the proposed approach in both scenarios. The presented technique makes a remarkable contribution to the success of leak detection in pipe networks.
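For readers unfamiliar with the SOS metaheuristic mentioned above, a minimal sketch follows. It minimises a toy sphere function rather than a leak-detection objective, and the mutualism/parasitism update rules are simplified from the published algorithm:

```python
import random

def sos_minimize(f, bounds, pop_size=20, iters=300, seed=1):
    """Minimal Symbiotic Organism Search sketch for box-constrained
    minimisation, with simplified mutualism and parasitism phases."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    pop = [clip([rng.uniform(lo, hi) for lo, hi in bounds]) for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        best = pop[fit.index(min(fit))]
        for i in range(pop_size):
            j = rng.randrange(pop_size)
            # Mutualism: i and j move toward the best organism, guided by
            # their "mutual vector" (midpoint) and a benefit factor of 1 or 2.
            mutual = [(a + b) / 2 for a, b in zip(pop[i], pop[j])]
            for k, bf in ((i, rng.choice([1, 2])), (j, rng.choice([1, 2]))):
                cand = clip([a + rng.random() * (g - m * bf)
                             for a, g, m in zip(pop[k], best, mutual)])
                fc = f(cand)
                if fc < fit[k]:            # greedy acceptance
                    pop[k], fit[k] = cand, fc
            # Parasitism: a one-coordinate mutant of i tries to displace j.
            parasite = pop[i][:]
            d = rng.randrange(dim)
            parasite[d] = rng.uniform(*bounds[d])
            fp = f(parasite)
            if fp < fit[j]:
                pop[j], fit[j] = parasite, fp
    k = fit.index(min(fit))
    return pop[k], fit[k]

sphere = lambda x: sum(v * v for v in x)   # toy objective, not a leak model
best_x, best_f = sos_minimize(sphere, [(-5.0, 5.0)] * 2)
```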

  13. OPTIMIZING RADIOLOGICAL MONITOR SITING OVER THE CONTINENTAL U.S

    International Nuclear Information System (INIS)

    Chen, K; Robert Buckley, R; Robert Kurzeja, R; Lance Osteen, L; Saleem Salaymeh, S

    2007-01-01

The US Environmental Protection Agency (EPA) is installing a network of sensors in the US to monitor background radiation and elevated radiation levels expected from a possible nuclear incident. The network (RadNet) of 180 fixed sensors is intended to provide a basic estimate of the radiation level throughout the US and enhanced accuracy near population centers. This report discusses one of the objective methods for locating these monitors based on criteria outlined by the EPA. The analysis employs a representative climatology of incident scenarios that includes 50 release locations, four seasons and four times of the day. This climatology was calculated from 5,600 simulations generated with NOAA-ARL's HYSPLIT Lagrangian trajectory model. The method treats the release plumes as targets, and monitors are located to maximize the number of plumes detected with the network. Weighting schemes based on detection only, dose-weighted detection and population-dose weighted detection were evaluated. The results show that most of the monitors are located around the population centers, as expected. However, there are also monitors quite uniformly distributed around the less populated areas. The monitors in the populated areas will provide early warning to protect the general public, and the monitors spread across the country will provide valuable data for modelers to estimate the extent and transport of the radioactive contamination

  14. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    Science.gov (United States)

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

The current high-profile debate with regard to data storage and its growth has become a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, base stations, and also the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. The works carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find the suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized to solve the clustering problem using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than the earlier approaches. PMID:25734182
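The fuzzy C-means step mentioned in the abstract can be illustrated with a tiny 1-D sketch; the data, the two-cluster initialisation and the stopping rule (a fixed iteration count) are all assumptions made for brevity:

```python
def fuzzy_c_means(points, m=2.0, iters=50):
    """Tiny 1-D, two-cluster fuzzy C-means sketch: alternate the membership
    update (inverse-distance weighting) and the weighted centre update."""
    centers = [min(points), max(points)]          # deterministic initialisation
    u = []
    for _ in range(iters):
        u = []
        for p in points:
            d = [max(abs(p - c), 1e-12) for c in centers]
            u.append([1.0 / sum((d[k] / d[l]) ** (2.0 / (m - 1.0)) for l in range(2))
                      for k in range(2)])
        centers = [
            sum((u[i][k] ** m) * p for i, p in enumerate(points))
            / sum(u[i][k] ** m for i in range(len(points)))
            for k in range(2)
        ]
    return centers, u

pts = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
centers, memberships = fuzzy_c_means(pts)
```

On this toy data the two centres settle near the two groups, and points close to one centre receive membership close to 1 in that cluster while still holding a small, nonzero membership in the other, which is exactly what distinguishes fuzzy from hard clustering.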

  15. Robust and optimal control a two-port framework approach

    CERN Document Server

    Tsai, Mi-Ching

    2014-01-01

A Two-port Framework for Robust and Optimal Control introduces an alternative approach to robust and optimal controller synthesis procedures for linear, time-invariant systems, based on the two-port system widespread in electrical engineering. The novel use of the two-port system in this context allows straightforward engineering-oriented solution-finding procedures to be developed, requiring no mathematics beyond linear algebra. A chain-scattering description provides a unified framework for constructing the stabilizing controller set and for synthesizing H2 optimal and H∞ sub-optimal controllers. Simple yet illustrative examples explain each step. A Two-port Framework for Robust and Optimal Control features: a hands-on, tutorial-style presentation giving the reader the opportunity to repeat the designs presented and easily to modify them for their own programs; an abundance of examples illustrating the most important steps in robust and optimal design; and ...

  16. Empirical data and optimal monitoring policies: the case of four Russian sea harbours

    Energy Technology Data Exchange (ETDEWEB)

    Deissenberg, C. [CEFI-CNRS, Les Milles (France); Gurman, V.; Shevchuk, E. [RAS, Program Systems Inst., Pereslavl-Zalessky (Russian Federation); Ryumina, E. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Economic Market Problems; Shevlyagin, K. [State Committee of the Environment Protection of the Russian Federation, Moscow (Russian Federation). Marine Environment Dept.

    2001-07-01

    In this paper, we describe the present state of empirical information about oil spills and oil monitoring activities in Russian harbours. We explain how we gathered, organized, and estimated the data needed to run the monitoring efforts optimization model of Deissenberg et al. (2001). We present, analyse, and discuss the results of the optimizations carried out with this model on the basis of the empirical data. These results show, in particular, that the economic efficiency of the monitoring activities decreases rapidly as the corresponding budget increases. This suggests that, rather urgently, measures other than monitoring should be initiated to control sea harbour pollution. (Author)

  17. Development of a Multi-Event Trajectory Optimization Tool for Noise-Optimized Approach Route Design

    NARCIS (Netherlands)

    Braakenburg, M.L.; Hartjes, S.; Visser, H.G.; Hebly, S.J.

    2011-01-01

This paper presents preliminary results from an ongoing research effort towards the development of a multi-event trajectory optimization methodology that allows the synthesis of RNAV approach routes that minimize a cumulative measure of noise, taking into account the total noise effect aggregated for

  18. Site specific optimization of wind turbines energy cost: Iterative approach

    International Nuclear Information System (INIS)

    Rezaei Mirghaed, Mohammad; Roshandel, Ramin

    2013-01-01

Highlights: • Optimization model of wind turbine parameters plus rectangular farm layout is developed. • Results show that levelized cost for a single turbine fluctuates between 46.6 and 54.5 $/MW h. • Modeling results for two specific farms reported optimal sizing and farm layout. • Results show that levelized cost of the wind farms fluctuates between 45.8 and 67.2 $/MW h. - Abstract: The present study was aimed at developing a model to optimize the sizing parameters and farm layout of wind turbines according to the wind resource and economic aspects. The proposed model, including aerodynamic, economic and optimization sub-models, is used to achieve minimum levelized cost of electricity. The blade element momentum theory is utilized for aerodynamic modeling of pitch-regulated horizontal axis wind turbines. Also, a comprehensive cost model including capital costs of all turbine components is considered. An iterative approach is used to develop the optimization model. The modeling results are presented for three potential regions in Iran: Khaf, Ahar and Manjil. The optimum configurations and sizing for a single turbine with minimum levelized cost of electricity are presented. The optimal cost of energy for one turbine is calculated to be about 46.7, 54.5 and 46.6 dollars per MW h in the studied sites, respectively. In addition, optimal size of turbines, annual electricity production, capital cost, and wind farm layout for two different rectangular and square shaped farms in the proposed areas have been recognized. According to the results, the optimal system configuration corresponds to a minimum levelized cost of electricity of about 45.8 to 67.2 dollars per MW h in the studied wind farms

  19. APPROACH ON INTELLIGENT OPTIMIZATION DESIGN BASED ON COMPOUND KNOWLEDGE

    Institute of Scientific and Technical Information of China (English)

    Yao Jianchu; Zhou Ji; Yu Jun

    2003-01-01

A concept for an intelligent optimal design approach is proposed, organized around a compound knowledge model. The compound knowledge consists of modularized quantitative knowledge, inclusive experience knowledge and case-based sample knowledge. By using this compound knowledge model, the abundant quantitative information of mathematical programming and the symbolic knowledge of artificial intelligence can be united in a single model. The intelligent optimal design model based on this compound knowledge, and the decomposition principles automatically generated from it, are also presented. In practice, the approach has been applied to production planning, process scheduling and optimization of the production process of a refining and chemical works, with considerable profit. Notably, the methods and principles are adaptable not only to the continuous process industries but also to discrete manufacturing.

  20. Data driven approaches for diagnostics and optimization of NPP operation

    International Nuclear Information System (INIS)

    Pliska, J.; Machat, Z.

    2014-01-01

Efficiency and heat rate are important indicators of both the health of power plant equipment and the quality of power plant operation. A powerful tool for meeting these challenges is statistical processing of the large data sets stored in data historians. These large data sets contain useful information about process quality and about equipment and sensor health. The paper discusses data-driven approaches to model building for the main power plant equipment, such as the condenser, the cooling tower and the overall thermal cycle, using multivariate regression techniques based on the so-called regression triplet: data, model and method. Regression models form the basis for diagnostics and optimization tasks. These tasks are demonstrated on practical cases: diagnostics of main power plant equipment to identify equipment faults early, and optimization of the cooling circuit via cooling-water flow control to achieve the highest power output for given boundary conditions. (authors)

  1. A Hybrid Harmony Search Algorithm Approach for Optimal Power Flow

    Directory of Open Access Journals (Sweden)

    Mimoun YOUNES

    2012-08-01

Optimal Power Flow (OPF) is one of the main functions of power system operation. It determines the optimal settings of generating units, bus voltages, transformer taps and shunt elements in a power system, with the objective of minimizing total production costs or losses while the system operates within its security limits. The aim of this paper is to propose a novel methodology (BCGAs-HSA) that solves OPF including both active and reactive power dispatch. It is based on combining the binary-coded genetic algorithm (BCGAs) and the harmony search algorithm (HSA) to determine the optimal global solution. This method was tested on the modified IEEE 30-bus test system. The results obtained by this method are compared with those obtained with BCGAs or HSA separately. The results show that the BCGAs-HSA approach converges to the optimum solution with good accuracy compared to those reported recently in the literature.
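As a rough illustration of the harmony search half of such hybrids, here is a minimal HSA sketch. The objective is a smooth two-variable surrogate, not the IEEE 30-bus OPF model, and the parameter values (hmcr, par, pitch bandwidth) are generic textbook choices rather than those of the paper:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=1000, seed=2):
    """Minimal harmony search sketch: improvise a new harmony from memory
    (probability hmcr), pitch-adjust it (probability par), and let it
    replace the worst stored harmony if it is better."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    fit = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # pick a stored pitch
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:              # small pitch adjustment
                    v += rng.uniform(-1.0, 1.0) * 0.05 * (hi - lo)
            else:                                   # brand-new random pitch
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        f_new = f(new)
        worst = fit.index(max(fit))
        if f_new < fit[worst]:
            memory[worst], fit[worst] = new, f_new
    best = fit.index(min(fit))
    return memory[best], fit[best]

# Smooth two-variable surrogate cost with minimum 5.0 at (1, -2);
# it stands in for (but is unrelated to) a real generation-cost objective.
cost = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + 5.0
best, best_cost = harmony_search(cost, [(-5.0, 5.0), (-5.0, 5.0)])
```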

  2. A PSO approach for preventive maintenance scheduling optimization

    International Nuclear Information System (INIS)

    Pereira, C.M.N.A.; Lapa, C.M.F.; Mol, A.C.A.; Luz, A.F. da

    2009-01-01

This work presents a Particle Swarm Optimization (PSO) approach for preventive maintenance policy optimization, focused on reliability and cost. The probabilistic model for reliability and cost evaluation is developed in such a way that flexible intervals between maintenance interventions are allowed. As PSO is suited to real-coded continuous spaces, a non-conventional codification has been developed in order to allow PSO to solve scheduling problems (which are discrete) with a variable number of maintenance interventions. In order to evaluate the proposed methodology, the High Pressure Injection System (HPIS) of a typical 4-loop PWR has been considered. Results demonstrate the ability to find optimal solutions, for which expert knowledge had to be automatically discovered by PSO. (author)
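The idea of a non-conventional codification — real-valued particles decoded into discrete schedules with a variable number of interventions — can be sketched as follows. The cost model (a fixed cost per intervention plus a quadratic penalty on gaps between interventions, as an unreliability surrogate) is invented for illustration and is not the probabilistic model of the paper:

```python
import random

def decode(pos, horizon=12):
    """Map a real-valued particle to a discrete schedule: components are
    rounded to whole months and kept only if they fall in [1, horizon],
    so the number of interventions can vary from particle to particle."""
    return sorted(t for t in {round(v) for v in pos} if 1 <= t <= horizon)

def cost(schedule, horizon=12.0, c_maint=1.0, c_gap=0.05):
    """Hypothetical objective: each intervention costs c_maint and long
    gaps between interventions incur a quadratic unreliability penalty."""
    times = [0.0] + [float(t) for t in schedule] + [horizon]
    return (c_maint * len(schedule)
            + c_gap * sum((b - a) ** 2 for a, b in zip(times, times[1:])))

def pso(n_particles=15, dim=6, iters=200, seed=3):
    rng = random.Random(seed)
    lo, hi = -6.0, 12.0            # negative components mean "no intervention"
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                          # personal bests
    pf = [cost(decode(x)) for x in X]
    g = P[pf.index(min(pf))][:]                    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * r1 * (P[i][d] - X[i][d])
                           + 1.5 * r2 * (g[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            fi = cost(decode(X[i]))
            if fi < pf[i]:
                P[i], pf[i] = X[i][:], fi
        g = P[pf.index(min(pf))][:]
    return decode(g), cost(decode(g))

schedule, total_cost = pso()
```

Because components below 1 decode to "no intervention", the swarm can add or drop interventions simply by moving through the continuous space, which is the essence of the codification trick.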

  3. Nonlinear Cointegration Approach for Condition Monitoring of Wind Turbines

    Directory of Open Access Journals (Sweden)

    Konrad Zolna

    2015-01-01

Monitoring of trends and removal of undesired trends from operational/process parameters in wind turbines is important for their condition monitoring. This paper presents homoscedastic nonlinear cointegration as a solution to this problem. The cointegration approach used leads to stable variances in the cointegration residuals. An adapted Breusch-Pagan test procedure is developed to test for the presence of heteroscedasticity in the cointegration residuals obtained from the nonlinear cointegration analysis. Examples using three different time series data sets—that is, one with a nonlinear quadratic deterministic trend, another with a nonlinear exponential deterministic trend, and experimental data from a wind turbine drivetrain—are used to illustrate the method and demonstrate possible practical applications. The results show that the proposed approach can be used for effective removal of nonlinear trends from various types of data, allowing for possible condition monitoring applications.
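A toy version of the residual-diagnostics step may help: fit a trend by OLS (linear here for brevity, where the paper handles nonlinear trends), then compute a Breusch-Pagan style LM statistic, LM = n·R², from regressing the squared residuals on the regressor. The alternating-noise series is constructed to be homoscedastic, so the test should not reject:

```python
def ols(x, y):
    """One-regressor ordinary least squares; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    return my - slope * mx, slope

def breusch_pagan_lm(x, resid):
    """Breusch-Pagan style LM statistic: n * R^2 from regressing the squared
    residuals on the regressor; compare with the chi-square(1) 5% value 3.84."""
    n = len(x)
    u2 = [r * r for r in resid]
    a, b = ols(x, u2)
    mean_u2 = sum(u2) / n
    ss_tot = sum((u - mean_u2) ** 2 for u in u2)
    ss_res = sum((u - (a + b * xi)) ** 2 for u, xi in zip(u2, x))
    return 0.0 if ss_tot == 0 else n * (1.0 - ss_res / ss_tot)

# Toy series: linear trend plus alternating, constant-magnitude "noise",
# i.e. a homoscedastic residual process by construction.
x = list(range(40))
y = [3.0 + 2.0 * xi + (1.0 if xi % 2 == 0 else -1.0) for xi in x]
a, b = ols(x, y)
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
lm = breusch_pagan_lm(x, resid)
```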

  4. Optimizing the Energy and Throughput of a Water-Quality Monitoring System.

    Science.gov (United States)

    Olatinwo, Segun O; Joubert, Trudi-H

    2018-04-13

    This work presents a new approach to the maximization of energy and throughput in a wireless sensor network (WSN), with the intention of applying the approach to water-quality monitoring. Water-quality monitoring using WSN technology has become an interesting research area. Energy scarcity is a critical issue that plagues the widespread deployment of WSN systems. Different power supplies, harvesting energy from sustainable sources, have been explored. However, when energy-efficient models are not put in place, energy harvesting based WSN systems may experience an unstable energy supply, resulting in an interruption in communication, and low system throughput. To alleviate these problems, this paper presents the joint maximization of the energy harvested by sensor nodes and their information-transmission rate using a sum-throughput technique. A wireless information and power transfer (WIPT) method is considered by harvesting energy from dedicated radio frequency sources. Due to the doubly near-far condition that confronts WIPT systems, a new WIPT system is proposed to improve the fairness of resource utilization in the network. Numerical simulation results are presented to validate the mathematical formulations for the optimization problem, which maximize the energy harvested and the overall throughput rate. Defining the performance metrics of achievable throughput and fairness in resource sharing, the proposed WIPT system outperforms an existing state-of-the-art WIPT system, with the comparison based on numerical simulations of both systems. The improved energy efficiency of the proposed WIPT system contributes to addressing the problem of energy scarcity.

  5. Optimizing the Energy and Throughput of a Water-Quality Monitoring System

    Directory of Open Access Journals (Sweden)

    Segun O. Olatinwo

    2018-04-01

This work presents a new approach to the maximization of energy and throughput in a wireless sensor network (WSN), with the intention of applying the approach to water-quality monitoring. Water-quality monitoring using WSN technology has become an interesting research area. Energy scarcity is a critical issue that plagues the widespread deployment of WSN systems. Different power supplies, harvesting energy from sustainable sources, have been explored. However, when energy-efficient models are not put in place, energy harvesting based WSN systems may experience an unstable energy supply, resulting in an interruption in communication, and low system throughput. To alleviate these problems, this paper presents the joint maximization of the energy harvested by sensor nodes and their information-transmission rate using a sum-throughput technique. A wireless information and power transfer (WIPT) method is considered by harvesting energy from dedicated radio frequency sources. Due to the doubly near–far condition that confronts WIPT systems, a new WIPT system is proposed to improve the fairness of resource utilization in the network. Numerical simulation results are presented to validate the mathematical formulations for the optimization problem, which maximize the energy harvested and the overall throughput rate. Defining the performance metrics of achievable throughput and fairness in resource sharing, the proposed WIPT system outperforms an existing state-of-the-art WIPT system, with the comparison based on numerical simulations of both systems. The improved energy efficiency of the proposed WIPT system contributes to addressing the problem of energy scarcity.
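The harvest-then-transmit trade-off behind such sum-throughput formulations can be sketched with a one-parameter grid search. The throughput model below (a fraction tau of each block for downlink power transfer, equal uplink slots, unit noise) is a generic simplification, not the system model of this paper:

```python
import math

def sum_throughput(tau, gains, eta=0.8, p_dl=1.0):
    """Hedged harvest-then-transmit model: a fraction tau of each block is
    downlink power transfer; the remainder is split equally into uplink
    slots. Each user spends its harvested energy over its own slot
    (unit noise power, so SNR = channel gain * transmit power)."""
    n = len(gains)
    slot = (1.0 - tau) / n
    total = 0.0
    for h in gains:
        energy = eta * p_dl * h * tau        # energy harvested by this user
        snr = h * energy / slot              # transmit power = energy / slot
        total += slot * math.log2(1.0 + snr)
    return total

gains = [0.9, 0.5, 0.2]                      # near, mid and far users
taus = [i / 1000.0 for i in range(1, 1000)]
best_tau = max(taus, key=lambda t: sum_throughput(t, gains))
```

The maximiser lies strictly inside (0, 1): too little harvesting leaves the nodes without energy, too much leaves them no time to transmit, which is exactly the tension the joint optimization resolves. The doubly near-far effect is also visible here, since the far user both harvests less and suffers a weaker uplink channel.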

  6. Monitoring the shorebirds of North America: Towards a unified approach

    Science.gov (United States)

    Skagen, S.K.; Bart, J.; Andres, B.; Brown, S.; Donaldson, G.; Harrington, B.; Johnston, V.; Jones, S.L.; Morrison, R.I.G.

    2003-01-01

The Program for Regional and International Shorebird Monitoring (PRISM) has recently developed a single blueprint for monitoring shorebirds in Canada and the United States in response to needs identified by recent shorebird conservation plans. The goals of PRISM are to: (1) estimate the size of breeding populations of 74 shorebird taxa in North America; (2) describe the distribution, abundance, and habitat relationships for these taxa; (3) monitor trends in shorebird population size; (4) monitor shorebird numbers at stopover locations; and (5) assist local managers in meeting their shorebird conservation goals. The initial focus has been on developing methods to estimate trends in population size. A three-part approach for estimating trends includes: (1) breeding surveys in arctic, boreal, and temperate regions, (2) migration surveys, and (3) wintering surveys.

  7. A robust quantitative near infrared modeling approach for blend monitoring.

    Science.gov (United States)

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

This study demonstrates a material-sparing near-infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development approach (blender-scale method) were compared by simultaneously monitoring a 1 kg batch size blend run. Both models demonstrated similar performance. The small-scale method strategy significantly reduces the total resources expended to develop near-infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.

  8. Monitoring and optimization of energy consumption of base transceiver stations

    International Nuclear Information System (INIS)

    Spagnuolo, Antonio; Petraglia, Antonio; Vetromile, Carmela; Formosi, Roberto; Lubritto, Carmine

    2015-01-01

The growth and development of the mobile phone network has led to an increased demand for energy by the telecommunications sector, with a noticeable impact on the environment. Monitoring of energy consumption is a great tool for understanding how to better manage this consumption and find the best strategy to adopt in order to maximize the reduction of unnecessary usage of electricity. This paper reports on a monitoring campaign performed on six BSs (Base Transceiver Stations) located in central Italy, with different technologies, typologies and technical characteristics. The study focuses on monitoring energy consumption and environmental parameters (temperature, noise, and global radiation), linking energy consumption with the load of telephone traffic and with the air conditioning functions used to cool the transmission equipment. Moreover, using the experimental data collected, it is shown, with a Monte Carlo simulation based on power saving features, how the monitored BSs could save energy. - Highlights: • Energy consumption and environmental parameters of a base transceiver system have been monitored. • Energy consumption is related to the air conditioning functions and to the load of telephone traffic. • Energy saving can be obtained by careful choice of cooling parameters and by turning off BS transceivers. • Energy saving parameters can be estimated by a Monte Carlo simulation method

  9. A Hybrid Genetic Algorithm Approach for Optimal Power Flow

    Directory of Open Access Journals (Sweden)

    Sydulu Maheswarapu

    2011-08-01

This paper puts forward a reformed hybrid genetic algorithm (GA) based approach to the optimal power flow. In the approach followed here, continuous variables are designed using real-coded GA and discrete variables are processed as binary strings. The outcomes are compared with many other methods, such as the simple genetic algorithm (GA), adaptive genetic algorithm (AGA), differential evolution (DE), particle swarm optimization (PSO) and music based harmony search (MBHS), on an IEEE 30-bus test bed with a total load of 283.4 MW. The proposed algorithm is found to offer the lowest fuel cost, and to be computationally faster, robust, superior and promising from its convergence characteristics.

  10. A design approach for integrating thermoelectric devices using topology optimization

    DEFF Research Database (Denmark)

    Soprani, Stefano; Haertel, Jan Hendrik Klaas; Lazarov, Boyan Stefanov

    2016-01-01

Efficient operation of thermoelectric devices strongly relies on the thermal integration into the energy conversion system in which they operate. Effective thermal integration reduces the temperature differences between the thermoelectric module and its thermal reservoirs, allowing the system to operate more efficiently. This work proposes and experimentally demonstrates a topology optimization approach as a design tool for efficient integration of thermoelectric modules into systems with specific design constraints. The approach allows thermal layout optimization of thermoelectric systems for different operating conditions and objective functions, such as temperature span, efficiency, and power recovery rate. As a specific application, the integration of a thermoelectric cooler into the electronics section of a downhole oil well intervention tool is investigated, with the objective of minimizing...

  11. An Innovative Approach for online Meta Search Engine Optimization

    OpenAIRE

    Manral, Jai; Hossain, Mohammed Alamgir

    2015-01-01

This paper presents an approach to identify efficient techniques used in Web Search Engine Optimization (SEO). Understanding the SEO factors which can influence page ranking in a search engine is significant for webmasters who wish to attract a large number of users to their website. Different from previous relevant research, in this study we developed an intelligent Meta search engine which aggregates results from various search engines and ranks them based on several important SEO parameters. The r...

  12. A measure theoretic approach to traffic flow optimization on networks

    OpenAIRE

    Cacace, Simone; Camilli, Fabio; De Maio, Raul; Tosin, Andrea

    2018-01-01

We consider a class of optimal control problems for measure-valued nonlinear transport equations describing traffic flow problems on networks. The objective is to minimise/maximise macroscopic quantities, such as traffic volume or average speed, controlling a few agents, for example smart traffic lights and automated cars. The measure-theoretic approach allows one to study local and nonlocal driver interactions in the same setting and to consider the control variables as additional measures interacting ...

  13. Log-Optimal Portfolio Selection Using the Blackwell Approachability Theorem

    OpenAIRE

    V'yugin, Vladimir

    2014-01-01

    We present a method for constructing the log-optimal portfolio using the well-calibrated forecasts of market values. Dawid's notion of calibration and the Blackwell approachability theorem are used for computing well-calibrated forecasts. We select a portfolio using this "artificial" probability distribution of market values. Our portfolio performs asymptotically at least as well as any stationary portfolio that redistributes the investment at each round using a continuous function of side in...

  14. Optical Performance Monitoring and Signal Optimization in Optical Networks

    DEFF Research Database (Denmark)

    Petersen, Martin Nordal

    2006-01-01

    The thesis studies performance monitoring for next-generation optical networks. The focus is on all-optical networks with bit-rates of 10 Gb/s or above. Next-generation all-optical networks offer large challenges as the optical transmission distance increases and the occurrence of electrical-optical-electrical regeneration points decreases. This thesis evaluates the impact of signal-degrading effects that are of increasing concern in all-optical high-speed networks due to all-optical switching and higher bit-rates. Especially group-velocity dispersion (GVD) and a number of nonlinear effects will require enhanced attention to avoid signal degradation. The requirements for optical performance monitoring features are discussed, and the thesis evaluates the advantages and necessity of increasing the level of performance monitoring parameters in the physical layer. In particular, methods for optical...
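
    As a rough, illustrative aside (not taken from the thesis), the reason GVD grows so quickly in importance with bit-rate can be seen from the standard broadening factor of a chirp-free Gaussian pulse, T1/T0 = sqrt(1 + (β₂·L/T0²)²); the fiber parameter below is a typical value for standard single-mode fiber, and the pulse half-widths are assumed to be half the bit slot:

```python
import math

beta2 = -21.7e-27  # assumed GVD parameter of standard single-mode fiber, s^2/m

def broadening_factor(t0_s, length_m):
    """Broadening T1/T0 of a chirp-free Gaussian pulse after length_m of fiber."""
    return math.sqrt(1.0 + (beta2 * length_m / t0_s**2) ** 2)

# Pulse half-width T0 taken as half the bit slot: 50 ps at 10 Gb/s, 12.5 ps at 40 Gb/s.
for bitrate, t0 in ((10e9, 50e-12), (40e9, 12.5e-12)):
    for km in (50, 100):
        f = broadening_factor(t0, km * 1e3)
        print(f"{bitrate / 1e9:.0f} Gb/s, {km:>3} km: broadening x{f:.2f}")
```

    Because T0 shrinks fourfold from 10 to 40 Gb/s, the dispersion term grows sixteenfold, which is why GVD that is tolerable at 10 Gb/s demands the enhanced attention noted above at higher rates.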

  15. Adjoint current-based approaches to prostate brachytherapy optimization

    International Nuclear Information System (INIS)

    Roberts, J. A.; Henderson, D. L.

    2009-01-01

    This paper builds on previous work done at the Univ. of Wisconsin - Madison to employ the adjoint concept of nuclear reactor physics in the so-called greedy heuristic of brachytherapy optimization. Whereas that previous work focused on the adjoint flux, i.e. the importance, this work has included use of the adjoint current to increase the amount of information available for optimization. Two current-based approaches were developed for 2-D problems, and each was compared to the most recent form of the flux-based methodology. The first method aimed to take a treatment plan from the flux-based greedy heuristic and adjust it via application of the current-displacement, a vector displacement based on a combination of tissue (adjoint) and seed (forward) currents acting as forces on a seed. This method showed promise in improving key urethral and rectal dosimetric quantities. The second method uses the normed current-displacement as the greedy criterion such that seeds are placed in regions of least force. This method, coupled with the dose-update scheme, generated treatment plans with better target irradiation and sparing of the urethra and normal tissues than the flux-based approach. Tables of these parameters are given for both approaches. In summary, these preliminary results indicate adjoint current methods are useful in optimization and further work in 3-D should be performed. (authors)

  16. A Machine-Learning and Filtering Based Data Assimilation Framework for Geologic Carbon Sequestration Monitoring Optimization

    Science.gov (United States)

    Chen, B.; Harp, D. R.; Lin, Y.; Keating, E. H.; Pawar, R.

    2017-12-01

    Monitoring is a crucial aspect of geologic carbon sequestration (GCS) risk management. It has gained importance as a means to ensure CO2 is safely and permanently stored underground throughout the lifecycle of a GCS project. Three issues are often involved in a monitoring project: (i) where is the optimal location to place the monitoring well(s), (ii) what type of data (pressure, rate and/or CO2 concentration) should be measured, and (iii) what is the optimal frequency to collect the data. In order to address these important issues, a filtering-based data assimilation procedure is developed to perform the monitoring optimization. The optimal monitoring strategy is selected based on the uncertainty reduction of the objective of interest (e.g., cumulative CO2 leak) for all potential monitoring strategies. To reduce the computational cost of the filtering-based data assimilation process, two machine-learning algorithms, Support Vector Regression (SVR) and Multivariate Adaptive Regression Splines (MARS), are used to develop computationally efficient reduced-order models (ROMs) from full numerical simulations of CO2 and brine flow. The proposed framework for GCS monitoring optimization is demonstrated with two examples: a simple 3D synthetic case and a real field case, the Rock Springs Uplift carbon storage site in southwestern Wyoming.

  17. The use of hierarchical clustering for the design of optimized monitoring networks

    Science.gov (United States)

    Soares, Joana; Makar, Paul Andrew; Aklilu, Yayne; Akingunola, Ayodeji

    2018-05-01

    Associativity analysis is a powerful tool to deal with large-scale datasets by clustering the data on the basis of (dis)similarity and can be used to assess the efficacy and design of air quality monitoring networks. We describe here our use of Kolmogorov-Zurbenko filtering and hierarchical clustering of NO2 and SO2 passive and continuous monitoring data to analyse and optimize air quality networks for these species in the province of Alberta, Canada. The methodology applied in this study assesses dissimilarity between monitoring station time series based on two metrics: 1 - R, R being the Pearson correlation coefficient, and the Euclidean distance; we find that both should be used in evaluating monitoring site similarity. We have combined the analytic power of hierarchical clustering with the spatial information provided by deterministic air quality model results, using the gridded time series of model output as potential station locations, as a proxy for assessing monitoring network design and for network optimization. We demonstrate that clustering results depend on the air contaminant analysed, reflecting the difference in the respective emission sources of SO2 and NO2 in the region under study. Our work shows that much of the signal identifying the sources of NO2 and SO2 emissions resides in shorter timescales (hourly to daily) due to short-term variation of concentrations and that longer-term averages in data collection may lose the information needed to identify local sources. However, the methodology identifies stations mainly influenced by seasonality, if larger timescales (weekly to monthly) are considered. We have performed the first dissimilarity analysis based on gridded air quality model output and have shown that the methodology is capable of generating maps of subregions within which a single station will represent the entire subregion, to a given level of dissimilarity. We have also shown that our approach is capable of identifying different
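
    The 1 - R dissimilarity clustering of station time series described above can be sketched as follows (with synthetic series standing in for the Alberta monitoring data; the station count, signal shapes and noise level are all assumed):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
t = np.arange(24 * 30)  # 30 days of hourly observations

# Nine synthetic stations in three groups, each group sharing a source signal.
base = [np.sin(2 * np.pi * t / 24), np.cos(2 * np.pi * t / 24), np.sin(2 * np.pi * t / (24 * 7))]
series = np.array([base[i % 3] + 0.3 * rng.standard_normal(t.size) for i in range(9)])

# Dissimilarity metric 1 - R, R being the Pearson correlation coefficient.
dissim = 1.0 - np.corrcoef(series)
np.fill_diagonal(dissim, 0.0)

Z = linkage(squareform(dissim, checks=False), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)  # stations sharing a source signal fall in the same cluster
```

    The same linkage can instead be built from Euclidean distances between the (filtered) series, which is why the study recommends evaluating both metrics.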

  18. Optimizing bulk milk dioxin monitoring based on costs and effectiveness

    NARCIS (Netherlands)

    Lascano Alcoser, V.; Velthuis, A.G.J.; Fels-Klerx, van der H.J.; Hoogenboom, L.A.P.; Oude Lansink, A.G.J.M.

    2013-01-01

    Dioxins are environmental pollutants, potentially present in milk products, which have negative consequences for human health and for the firms and farms involved in the dairy chain. Dioxin monitoring in feed and food has been implemented to detect their presence and estimate their levels in food

  19. Optimized cross-organizational business process monitoring : design and enactment

    NARCIS (Netherlands)

    Comuzzi, M.; Vanderfeesten, I.T.P.; Wang, T.

    2013-01-01

    Organizations can implement the agility required to survive in the rapidly evolving business landscape by focusing on their core business and engaging in collaborations with other partners. This entails the need for organizations to monitor the behavior of the partners with which they collaborate.

  20. Population Modeling Approach to Optimize Crop Harvest Strategy. The Case of Field Tomato.

    Science.gov (United States)

    Tran, Dinh T; Hertog, Maarten L A T M; Tran, Thi L H; Quyen, Nguyen T; Van de Poel, Bram; Mata, Clara I; Nicolaï, Bart M

    2017-01-01

    In this study, the aim is to develop a population model based approach to optimize fruit harvesting strategies with regard to fruit quality and its derived economic value. This approach was applied to the case of tomato fruit harvesting under Vietnamese conditions. Fruit growth and development of tomato (cv. "Savior") was monitored in terms of fruit size and color during both the Vietnamese winter and summer growing seasons. A kinetic tomato fruit growth model was applied to quantify biological fruit-to-fruit variation in terms of their physiological maturation. This model was successfully calibrated. Finally, the model was extended to translate the fruit-to-fruit variation at harvest into the economic value of the harvested crop. It can be concluded that a model based approach to the optimization of harvest date and harvest frequency with regard to economic value of the crop as such is feasible. This approach allows growers to optimize their harvesting strategy by harvesting the crop at more uniform maturity stages meeting the stringent retail demands for homogeneous high quality product. The total farm profit would still depend on the impact a change in harvesting strategy might have on related expenditures. This model based harvest optimisation approach can be easily transferred to other fruit and vegetable crops improving homogeneity of the postharvest product streams.

  1. OPTIMAL TRAFFIC MANAGEMENT FOR AIRCRAFT APPROACHING THE AERODROME LANDING AREA

    Directory of Open Access Journals (Sweden)

    Igor B. Ivenin

    2018-01-01

    The research proposes a mathematical optimization approach for arriving aircraft traffic in the aerodrome zone. An airfield with two parallel runways, capable of operating independently of each other, is modeled. The incoming aircraft traffic is described by a Poisson flow of random events, and the arriving aircraft are distributed by the air traffic controller between the two runways. There is one approach flight path for each runway; both approach paths have a common starting point, each has a different length, and the trajectories do not overlap. For each of the two approach procedures, the air traffic controller sets the average speed of the aircraft. The airfield and aerodrome zone are modeled as a two-channel queueing (mass service) system with refusals. Each of the two servicing units includes an approach trajectory, a glide path and a runway, and can be in one of two states, free or busy. The state probabilities of the servicing units are described by a Kolmogorov system of differential equations. The number of refusals in service over the simulated time interval, expressed as an integral functional, is used as the criterion for assessing the quality of functioning of the system. The control parameters are the functions describing the distribution of aircraft flows between the runways and the functions describing the average speed of the aircraft. The optimization problem consists in finding the values of the control parameters for which the criterion functional is minimal. To solve the formulated optimization problem, the L.S. Pontryagin maximum principle is applied; the form of the Hamiltonian function and the conjugate system of differential equations is given. The structure of the optimal control has been studied for two different cases of restrictions on the control of the distribution of incoming aircraft
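
    The Kolmogorov equations for such a two-channel system with refusals can be integrated directly. The sketch below is not the paper's model; the rates, the horizon and the routing rule (arrivals go to the first runway when both are free) are all assumed. It tracks the four state probabilities and accumulates the expected number of refusals as the integral of λ·P(both busy):

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, mu1, mu2 = 0.5, 0.3, 0.25  # assumed arrival and service rates, 1/min

def kolmogorov(t, y):
    # States: (free, free), (busy, free), (free, busy), (busy, busy);
    # an arrival seeing both servicing units busy is refused service.
    p0, p1, p2, p3, _ = y
    return [
        -lam * p0 + mu1 * p1 + mu2 * p2,
        lam * p0 - (lam + mu1) * p1 + mu2 * p3,
        -(lam + mu2) * p2 + mu1 * p3,
        lam * p1 + lam * p2 - (mu1 + mu2) * p3,
        lam * p3,  # accumulated expected number of refusals
    ]

sol = solve_ivp(kolmogorov, (0.0, 120.0), [1, 0, 0, 0, 0], rtol=1e-8)
p = sol.y[:4, -1]
print("state probabilities:", p.round(3), "expected refusals:", round(float(sol.y[4, -1]), 2))
```

    The optimization in the paper then searches over the controller's routing and speed functions to minimize exactly this kind of integral criterion.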

  2. Managing environmental radioactivity monitoring data: a geographic information system approach

    International Nuclear Information System (INIS)

    Heywood, I.; Cornelius, S.

    1993-01-01

    An overview of the current British approach to environmental radiation monitoring is presented here, followed by a discussion of the major issues which would have to be considered in formulating a geographical information system (GIS) for the management of radiation monitoring data. Finally, examples illustrating the use of spatial data handling and automated cartographic techniques are provided from work undertaken by the authors. These examples are discussed in the context of developing a National Radiological Spatial Information System (NRSIS) demonstrator utilising GIS technology. (Author)

  3. Optimization of Investment Planning Based on Game-Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Elena Vladimirovna Butsenko

    2018-03-01

    The game-theoretic approach has vast potential for solving economic problems; conversely, the theory of games itself can be enriched by studies of real decision-making problems. Hence, this study is aimed at developing and testing a game-theoretic technique to optimize the management of investment planning. The technique makes it possible to forecast the results and manage the processes of investment planning, and allows the best development strategy of an enterprise to be chosen. It uses the “game with nature” model, with the Wald criterion, the maximax criterion and the Hurwicz criterion as decision criteria. The article presents a new algorithm, combining the methods of matrix games, for constructing the proposed econometric method of optimizing investment project management, and shows its implementation in a block diagram. The algorithm includes the formation of the initial data and the elements of the payment matrix, as well as the determination of the maximin, maximax, compromise and optimal management strategies. The methodology is tested on the example of the passenger transportation enterprise of the Sverdlovsk Railway in Ekaterinburg. Applying the proposed methodology and the corresponding algorithm yielded an optimal price strategy for transporting passengers in one direction of traffic; this strategy contributes to an increase in the company’s income with minimal risk from the launch of this direction. The obtained results and conclusions show the effectiveness of the developed methodology for optimizing the management of investment processes in an enterprise. The results of the research can serve as a basis for the development of an appropriate tool and can be applied by any economic entity in its investment activities.
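
    For a payoff matrix of price strategies against states of nature, the Wald (maximin), maximax ("maximum") and Hurwicz (Hurwitz) criteria reduce to a few lines; the payoff values below are purely hypothetical:

```python
import numpy as np

# Hypothetical payoffs: rows are price strategies, columns are demand scenarios.
A = np.array([
    [30, 10, -5],
    [20, 18, 12],
    [25, 15, 5],
])

wald = int(np.argmax(A.min(axis=1)))      # maximin: best worst-case payoff
maximax = int(np.argmax(A.max(axis=1)))   # optimistic: best best-case payoff
alpha = 0.6                               # assumed Hurwicz pessimism weight
hurwicz = int(np.argmax(alpha * A.min(axis=1) + (1 - alpha) * A.max(axis=1)))
print("Wald:", wald, "maximax:", maximax, "Hurwicz:", hurwicz)
```

    A compromise strategy can then be chosen among the rows these criteria select, as in the algorithm's final step.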

  4. Self-optimizing approach for automated laser resonator alignment

    Science.gov (United States)

    Brecher, C.; Schmitt, R.; Loosen, P.; Guerrero, V.; Pyschny, N.; Pavim, A.; Gatej, A.

    2012-02-01

    Nowadays, the assembly of laser systems is dominated by manual operations, involving elaborate alignment by means of adjustable mountings. From a competition perspective, the most challenging problem in laser source manufacturing is price pressure resulting from cost competition, mainly from Asia. From an economical point of view, an automated assembly of laser systems defines a better approach to produce more reliable units at lower cost. However, the step from today's manual solutions towards an automated assembly requires parallel developments regarding product design, automation equipment and assembly processes. This paper introduces briefly the idea of self-optimizing technical systems as a new approach towards highly flexible automation. Technically, the work focuses on the precision assembly of laser resonators, which is one of the final and most crucial assembly steps in terms of beam quality and laser power. The paper presents a new design approach for miniaturized laser systems and new automation concepts for a robot-based precision assembly, as well as passive and active alignment methods, which are based on a self-optimizing approach. Very promising results have already been achieved, considerably reducing the duration and complexity of the laser resonator assembly. These results as well as future development perspectives are discussed.

  5. Bifurcation-based approach reveals synergism and optimal combinatorial perturbation.

    Science.gov (United States)

    Liu, Yanwei; Li, Shanshan; Liu, Zengrong; Wang, Ruiqi

    2016-06-01

    Cells accomplish the process of fate decisions and form terminal lineages through a series of binary choices in which cells switch stable states from one branch to another as the interacting strengths of regulatory factors continuously vary. Various combinatorial effects may occur because almost all regulatory processes are managed in a combinatorial fashion. Combinatorial regulation is crucial for cell fate decisions because it may effectively integrate many different signaling pathways to meet the higher regulation demand during cell development. However, whether the contribution of combinatorial regulation to the state transition is better than that of a single regulator and, if so, what the optimal combination strategy is, are significant issues from the point of view of both biology and mathematics. Using the approaches of combinatorial perturbations and bifurcation analysis, we provide a general framework for the quantitative analysis of synergism in molecular networks. Different from the known methods, the bifurcation-based approach depends only on stable state responses to stimuli because the state transition induced by combinatorial perturbations occurs between stable states. More importantly, an optimal combinatorial perturbation strategy can be determined by investigating the relationship between the bifurcation curve of a synergistic perturbation pair and the level set of a specific objective function. The approach is applied to two models, i.e., a theoretical multistable decision model and a biologically realistic CREB model, to show its validity, although the approach holds for a general class of biological systems.
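
    The role of a bifurcation curve in forcing a transition between stable states can be illustrated on the canonical saddle-node normal form dx/dt = p + x - x³ (a toy model, not the paper's CREB network): sweeping the perturbation parameter p locates the bistable window outside of which a switch between branches is inevitable.

```python
import numpy as np

def equilibrium_count(p):
    """Number of real equilibria of dx/dt = p + x - x**3 at parameter value p."""
    roots = np.roots([-1.0, 0.0, 1.0, p])
    return int(np.sum(np.abs(roots.imag) < 1e-7))

# Saddle-node bifurcations occur at p = +/- 2/(3*sqrt(3)) ~ +/- 0.385: three
# equilibria (bistability) inside that window, a single equilibrium outside.
ps = np.linspace(-0.6, 0.6, 121)
bistable = [p for p in ps if equilibrium_count(p) == 3]
print(f"bistable window: [{min(bistable):.2f}, {max(bistable):.2f}]")
```

    For a combinatorial perturbation, the analogous computation traces the bifurcation curve in two parameters and compares it against the level set of the objective, as described above.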

  6. City transport monitoring and routes optimal management system

    Directory of Open Access Journals (Sweden)

    V. Gargasas

    2008-06-01

    The article analyses the problem of further development of geographic information systems with a traffic monitoring channel (GIS-TMC) in order to present road users with effective information about the fastest (shortest in terms of time) routes and thus to improve the use of the existing city transport infrastructure. To solve this task it is suggested to create a dynamic (automatically updated in real time) street passing duration base, supported by a city transport monitoring system operating in real time and consisting of a network of sensors, a data collection communications system and a data processing system. The article shows that to predict the street passing duration it is enough to measure the speed of transport at the characteristic points of the street; measurements of traffic density do not significantly improve the accuracy of forecasting the street passing time. Analytical formulas for forecasting the street passing time are presented.

  7. Optimizing communication satellites payload configuration with exact approaches

    Science.gov (United States)

    Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi

    2015-12-01

    The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.

  8. Portfolio optimization in enhanced index tracking with goal programming approach

    Science.gov (United States)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. Enhanced index tracking aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio to maximize the mean return and minimize the risk. The objective of this paper is to determine the portfolio composition and performance using a goal programming approach in enhanced index tracking and to compare it to the market index. Goal programming is a branch of multi-objective optimization which can handle decision problems that involve two different goals in enhanced index tracking, a trade-off between maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio with the goal programming approach is able to outperform the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, because of higher mean return and lower risk without purchasing all the stocks in the market index.
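
    A minimal goal-programming sketch of this trade-off (with invented data, not the FTSE Bursa Malaysia constituents) penalizes only the unwanted deviations: the shortfall below a return goal and the excess above a risk goal. A linear risk proxy is assumed here so that the problem stays a pure LP:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for three stocks: expected returns and a linear risk proxy.
mu = np.array([0.12, 0.08, 0.05])
beta = np.array([1.40, 1.00, 0.60])
target_return, target_risk = 0.10, 0.90

# Decision vector: [w1, w2, w3, n_ret, p_ret, p_risk, n_risk];
# only the return shortfall n_ret and the risk excess p_risk are penalized.
c = np.array([0, 0, 0, 1, 0, 1, 0])
A_eq = np.array([
    [1, 1, 1, 0, 0, 0, 0],    # budget: weights sum to one
    [*mu, 1, -1, 0, 0],       # mu.w + n_ret - p_ret = target_return
    [*beta, 0, 0, -1, 1],     # beta.w - p_risk + n_risk = target_risk
])
b_eq = np.array([1.0, target_return, target_risk])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 7, method="highs")
print("weights:", res.x[:3].round(3), "total unwanted deviation:", round(res.fun, 5))
```

    With these numbers the optimum holds the risk goal exactly and accepts a small return shortfall, which is precisely the kind of compromise between the two goals the abstract refers to.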

  9. Spatiotemporal radiotherapy planning using a global optimization approach

    Science.gov (United States)

    Adibi, Ali; Salari, Ehsan

    2018-02-01

    This paper aims at quantifying the extent of potential therapeutic gain, measured using biologically effective dose (BED), that can be achieved by altering the radiation dose distribution over treatment sessions in fractionated radiotherapy. To that end, a spatiotemporally integrated planning approach is developed, where the spatial and temporal dose modulations are optimized simultaneously. The concept of equivalent uniform BED (EUBED) is used to quantify and compare the clinical quality of spatiotemporally heterogeneous dose distributions in target and critical structures. This gives rise to a large-scale non-convex treatment-plan optimization problem, which is solved using global optimization techniques. The proposed spatiotemporal planning approach is tested on two stylized cancer cases resembling two different tumor sites and sensitivity analysis is performed for radio-biological and EUBED parameters. Numerical results validate that spatiotemporal plans are capable of delivering a larger BED to the target volume without increasing the BED in critical structures compared to conventional time-invariant plans. In particular, this additional gain is attributed to the irradiation of different regions of the target volume at different treatment sessions. Additionally, the trade-off between the potential therapeutic gain and the number of distinct dose distributions is quantified, which suggests a diminishing marginal gain as the number of dose distributions increases.
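
    The BED arithmetic underlying such comparisons follows the standard linear-quadratic form BED = n·d·(1 + d/(α/β)); the schedules and α/β ratios below are illustrative, not taken from the paper:

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose (Gy) of n fractions of d Gy each."""
    return n * d * (1 + d / alpha_beta)

tumour, late_tissue = 10.0, 3.0  # assumed alpha/beta ratios in Gy

# Conventional 30 x 2 Gy vs. a hypofractionated 20 x 3 Gy schedule.
for n, d in ((30, 2.0), (20, 3.0)):
    print(f"{n} x {d} Gy: tumour BED ~{bed(n, d, tumour):.0f} Gy, "
          f"late-tissue BED ~{bed(n, d, late_tissue):.0f} Gy")
```

    The nonlinearity of BED in the dose per fraction is what makes redistributing dose across treatment sessions potentially beneficial, which is the effect the spatiotemporal plans exploit.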

  10. A case study of optimization in the decision process: Siting groundwater monitoring wells

    International Nuclear Information System (INIS)

    Cardwell, H.; Huff, D.; Douthitt, J.; Sale, M.

    1993-12-01

    Optimization is one of the tools available to assist decision makers in balancing multiple objectives and concerns. In a case study of the siting decision for groundwater monitoring wells, we look at the influence of optimization models on the decisions made by the responsible groundwater specialist. This paper presents a multi-objective integer programming model for determining the location of monitoring wells associated with a groundwater pump-and-treat remediation. After presenting the initial optimization results, we analyze the actual decision and revise the model to incorporate elements of the problem that were later identified as important in the decision-making process. The results of the revised model are compared to the actual siting plans, the recommendations from the initial optimization runs, and the initial monitoring network proposed by the decision maker

  11. Optimal spatio-temporal design of water quality monitoring networks for reservoirs: Application of the concept of value of information

    Science.gov (United States)

    Maymandi, Nahal; Kerachian, Reza; Nikoo, Mohammad Reza

    2018-03-01

    This paper presents a new methodology for optimizing Water Quality Monitoring (WQM) networks of reservoirs and lakes using the concept of the value of information (VOI) and utilizing the results of a calibrated numerical water quality simulation model. With reference to value of information theory, the water quality at every checkpoint, with a specific prior probability, differs in time. After analyzing water quality samples taken from potential monitoring points, the posterior probabilities are updated using Bayes' theorem, and the VOI of the samples is calculated. In the next step, the stations with maximum VOI are selected as optimal stations. This process is repeated for each sampling interval to obtain optimal monitoring network locations for each interval. The results of the proposed VOI-based methodology are compared with those obtained using an entropy-theoretic approach. As the results of the two methodologies can be partially different, in the next step the results are combined using a weighting method. Finally, the optimal sampling interval and locations of the WQM stations are chosen using the Evidential Reasoning (ER) decision-making method. The efficiency and applicability of the methodology are evaluated using available water quantity and quality data for the Karkheh Reservoir in the southwestern part of Iran.
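
    The Bayes/VOI step at a single checkpoint can be sketched with a two-state, two-action toy example (the priors, likelihoods and losses below are all invented): VOI is the drop in expected loss of the best action once the sample outcome can be observed.

```python
import numpy as np

prior = np.array([0.7, 0.3])         # assumed P(compliant), P(polluted)
likelihood = np.array([[0.9, 0.1],   # P(sample reads low/high | compliant)
                       [0.2, 0.8]])  # P(sample reads low/high | polluted)
loss = np.array([[0.0, 10.0],        # action "do nothing" under each state
                 [2.0, 2.0]])        # action "intervene" under each state

def best_action_loss(p):
    return float((loss @ p).min())

loss_prior = best_action_loss(prior)

# Preposterior analysis: average the best-action loss over possible outcomes,
# updating the state probabilities with Bayes' theorem for each outcome.
evidence = likelihood.T @ prior  # P(outcome)
loss_post = sum(
    evidence[k] * best_action_loss(likelihood[:, k] * prior / evidence[k])
    for k in range(2)
)

voi = loss_prior - loss_post
print("VOI of one sample:", round(voi, 3))
```

    Candidate stations are then ranked by this quantity, and those with maximum VOI are retained in the network.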

  12. Optimization of the monitoring of landfill gas and leachate in closed methanogenic landfills.

    Science.gov (United States)

    Jovanov, Dejan; Vujić, Bogdana; Vujić, Goran

    2018-06-15

    Monitoring of the gas and leachate parameters in a closed landfill is a long-term activity defined by national legislation worldwide. The Serbian Waste Disposal Law requires monitoring of a landfill for at least 30 years after its closure, but the definition of the monitoring extent (number and type of parameters) is incomplete. In order to resolve these uncertainties, this research focuses on the process of monitoring optimization, using the closed landfill in Zrenjanin, Serbia, as the experimental model. The aim of the optimization was to find representative parameters which would define the physical, chemical and biological processes in the closed methanogenic landfill and to make the monitoring less expensive. The research included the development of five monitoring models with different numbers of gas and leachate parameters, and each model was processed in the open-source software GeoGebra, which is often used for solving optimization problems. The results of the optimization process identified the most favorable monitoring model, which fulfills all the defined criteria not only from the point of view of mathematical analysis but also from the point of view of environmental protection. The final outcome of this research, the minimal set of parameters which should be included in landfill monitoring, is precisely defined. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Does intense monitoring matter? A quantile regression approach

    Directory of Open Access Journals (Sweden)

    Fekri Ali Shawtari

    2017-06-01

    Corporate governance has become a centre of attention in corporate management at both micro and macro levels due to the adverse consequences and repercussions of insufficient accountability. In this study, we use the Malaysian stock market as a sample to explore the impact of intense monitoring on the relationship between intellectual capital performance and market valuation. The objectives of the paper are threefold: (i) to investigate whether intense monitoring affects the intellectual capital performance of listed companies; (ii) to explore the impact of intense monitoring on firm value; and (iii) to examine the extent to which directors serving on more than two board committees affect the linkage between intellectual capital performance and firm value. We employ two approaches, namely, Ordinary Least Squares (OLS) and the quantile regression approach. The purpose of the latter is to estimate and draw inference about conditional quantile functions; this method is useful when the conditional distribution does not have a standard shape, such as an asymmetric, fat-tailed, or truncated distribution. In terms of variables, intellectual capital is measured using the value added intellectual coefficient (VAIC), while market valuation is proxied by the firm's market capitalization. The findings of the quantile regression show that some of the results do not coincide with those of OLS. We found that the intensity of monitoring does not influence the intellectual capital of all firms, and it is also evident that it does not influence market valuation. However, to some extent, it moderates the relationship between intellectual capital performance and market valuation. This paper contributes to the existing literature by presenting new empirical evidence on the moderating effects of the intensity of monitoring by board committees on the relationship between performance and intellectual capital.
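
    The defining property of quantile regression, that the q-th conditional quantile minimizes the asymmetric "pinball" (check) loss, can be verified numerically on a skewed sample (synthetic data below; the study itself uses VAIC and market-capitalization variables):

```python
import numpy as np

def pinball_loss(y, pred, q):
    """Check (pinball) loss; its minimizer over constants is the q-th quantile."""
    e = y - pred
    return float(np.mean(np.maximum(q * e, (q - 1) * e)))

rng = np.random.default_rng(1)
y = rng.exponential(scale=2.0, size=20_000)  # asymmetric, fat-tailed sample

grid = np.linspace(0.0, 10.0, 1001)
best = {}
for q in (0.25, 0.5, 0.9):
    best[q] = float(grid[np.argmin([pinball_loss(y, c, q) for c in grid])])
    print(f"q={q}: loss minimizer {best[q]:.2f} vs empirical quantile {np.quantile(y, q):.2f}")
```

    Fitting a linear model under this loss for several values of q is what allows effects to be reported across the whole conditional distribution of firm value rather than only at its mean.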

  14. Multiobjective design of aquifer monitoring networks for optimal spatial prediction and geostatistical parameter estimation

    Science.gov (United States)

    Alzraiee, Ayman H.; Bau, Domenico A.; Garcia, Luis A.

    2013-06-01

    Effective sampling of hydrogeological systems is essential in guiding groundwater management practices. Optimal sampling of groundwater systems has previously been formulated based on the assumption that heterogeneous subsurface properties can be modeled using a geostatistical approach. Therefore, the monitoring schemes have been developed to concurrently minimize the uncertainty in the spatial distribution of systems' states and parameters, such as the hydraulic conductivity K and the hydraulic head H, and the uncertainty in the geostatistical model of system parameters using a single objective function that aggregates all objectives. However, it has been shown that the aggregation of possibly conflicting objective functions is sensitive to the adopted aggregation scheme and may lead to distorted results. In addition, the uncertainties in geostatistical parameters affect the uncertainty in the spatial prediction of K and H according to a complex nonlinear relationship, which has often been ineffectively evaluated using a first-order approximation. In this study, we propose a multiobjective optimization framework to assist the design of monitoring networks of K and H with the goal of optimizing their spatial predictions and estimating the geostatistical parameters of the K field. The framework stems from the combination of a data assimilation (DA) algorithm and a multiobjective evolutionary algorithm (MOEA). The DA algorithm is based on the ensemble Kalman filter, a Monte-Carlo-based Bayesian update scheme for nonlinear systems, which is employed to approximate the posterior uncertainty in K, H, and the geostatistical parameters of K obtained by collecting new measurements. Multiple MOEA experiments are used to investigate the trade-off among design objectives and identify the corresponding monitoring schemes. The methodology is applied to design a sampling network for a shallow unconfined groundwater system located in Rocky Ford, Colorado. Results indicate that
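
    The ensemble Kalman filter update at the core of the DA step can be sketched in a few lines of linear algebra (the dimensions, the synthetic "truth" and the observation setup below are all invented):

```python
import numpy as np

rng = np.random.default_rng(42)
n_ens, n_state = 200, 5

# Prior ensemble around a synthetic truth (e.g. log-conductivity at 5 locations).
truth = np.array([1.0, 0.5, -0.2, 0.8, 0.1])
ens = truth + rng.standard_normal((n_ens, n_state))

# Observe state components 0 and 3 with noise standard deviation 0.1.
H = np.zeros((2, n_state))
H[0, 0] = H[1, 3] = 1.0
obs_std = 0.1
obs = H @ truth + obs_std * rng.standard_normal(2)

# Analysis step: Kalman gain built from the ensemble covariance.
A = ens - ens.mean(axis=0)
P = A.T @ A / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_std**2 * np.eye(2))

# Perturbed-observation update applied to every ensemble member.
perturbed = obs + obs_std * rng.standard_normal((n_ens, 2))
ens_a = ens + (perturbed - ens @ H.T) @ K.T

print("prior spread:    ", ens.std(axis=0).round(2))
print("posterior spread:", ens_a.std(axis=0).round(2))
```

    The collapse of the posterior spread at observed locations is exactly the uncertainty-reduction signal the MOEA uses when comparing candidate monitoring schemes.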

  15. Numerical Optimization Design of Dynamic Quantizer via Matrix Uncertainty Approach

    Directory of Open Access Journals (Sweden)

    Kenji Sawada

    2013-01-01

    Full Text Available In networked control systems, continuous-valued signals are compressed to discrete-valued signals via quantizers and then transmitted/received through communication channels. Such quantization often degrades the control performance; a quantizer must therefore be designed to minimize the difference between the system outputs before and after it is inserted. In terms of the broadbandization and the robustness of the networked control systems, we consider the continuous-time quantizer design problem. In particular, this paper describes a numerical optimization method for a continuous-time dynamic quantizer considering the switching speed. Using a matrix uncertainty approach of sampled-data control, we clarify that both the temporal and spatial resolution constraints can be considered in analysis and synthesis simultaneously. Finally, for slow switching, we compare the proposed and the existing methods through numerical examples. From the examples, a new insight is presented for the two-step design of the existing continuous-time optimal quantizer.

  16. Optimal trading strategies—a time series approach

    Science.gov (United States)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.
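
    For a single asset, the time-domain mean-variance problem has a closed form once the auto-covariance matrix of price increments is estimated. The AR(1)-style covariance and constant expected increments below are synthetic illustrations, not the paper's data.

```python
import numpy as np

def min_variance_strategy(C, mu, target):
    """Trading schedule w minimizing the variance w @ C @ w subject to the
    expected-return constraint mu @ w = target.  Single-constraint Lagrange
    solution: the optimal w is proportional to C^{-1} mu."""
    Cinv_mu = np.linalg.solve(C, mu)
    return target * Cinv_mu / (mu @ Cinv_mu)

# Toy second-order-stationary increments with AR(1)-style auto-covariance
T = 5
C = 0.5 ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
mu = np.full(T, 0.1)                 # expected price increment per period
w = min_variance_strategy(C, mu, target=1.0)
print(w, w @ C @ w)
```

Any other schedule meeting the same return target (e.g. equal positions) has at least this variance, which is the "minimally exposed to market price fluctuations" property.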

  17. Micro-scale NMR Experiments for Monitoring the Optimization of Membrane Protein Solutions for Structural Biology.

    Science.gov (United States)

    Horst, Reto; Wüthrich, Kurt

    2015-07-20

    Reconstitution of integral membrane proteins (IMP) in aqueous solutions of detergent micelles has been extensively used in structural biology, using either X-ray crystallography or NMR in solution. Further progress could be achieved by establishing a rational basis for the selection of detergent and buffer conditions, since the stringent bottleneck that slows down the structural biology of IMPs is the preparation of diffracting crystals or concentrated solutions of stable isotope labeled IMPs. Here, we describe procedures to monitor the quality of aqueous solutions of [2H,15N]-labeled IMPs reconstituted in detergent micelles. This approach has been developed for studies of β-barrel IMPs, where it was successfully applied for numerous NMR structure determinations, and it has also been adapted for use with α-helical IMPs, in particular GPCRs, in guiding crystallization trials and optimizing samples for NMR studies (Horst et al., 2013). 2D [15N,1H]-correlation maps are used as "fingerprints" to assess the foldedness of the IMP in solution. For promising samples, these "inexpensive" data are then supplemented with measurements of the translational and rotational diffusion coefficients, which give information on the shape and size of the IMP/detergent mixed micelles. Using microcoil equipment for these NMR experiments enables data collection with only micrograms of protein and detergent. This makes serial screens of variable solution conditions viable, enabling the optimization of parameters such as the detergent concentration, sample temperature, pH and the composition of the buffer.

  18. Optimal and Approximate Approaches for Deployment of Heterogeneous Sensing Devices

    Directory of Open Access Journals (Sweden)

    Rabie Ramadan

    2007-04-01

    Full Text Available A modeling framework for the problem of deploying a set of heterogeneous sensors in a field with time-varying differential surveillance requirements is presented. The problem is formulated as a mixed integer mathematical program with the objective to maximize coverage of a given field. Two metaheuristics are used to solve this problem. The first heuristic adopts a genetic algorithm (GA) approach while the second heuristic implements a simulated annealing (SA) algorithm. A set of experiments is used to illustrate the capabilities of the developed models and to compare their performance. The experiments investigate the effect of parameters related to the size of the sensor deployment problem including the number of deployed sensors, the size of the monitored field, and the length of the monitoring horizon. They also examine several endogenous parameters related to the developed GA and SA algorithms.
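
    A minimal sketch of the simulated-annealing heuristic applied to a (much simplified) coverage-maximizing deployment: sensors on a grid, square sensing footprints, geometric cooling. The grid size, sensing radius and cooling schedule are illustrative assumptions, not the paper's model.

```python
import math, random

def coverage(sensors, n, r):
    """Number of cells of an n x n grid within Chebyshev radius r of any sensor."""
    covered = set()
    for (sx, sy) in sensors:
        for x in range(max(0, sx - r), min(n, sx + r + 1)):
            for y in range(max(0, sy - r), min(n, sy + r + 1)):
                covered.add((x, y))
    return len(covered)

def anneal(n=10, k=3, r=2, steps=2000, seed=1):
    rnd = random.Random(seed)
    sensors = [(rnd.randrange(n), rnd.randrange(n)) for _ in range(k)]
    cur = coverage(sensors, n, r)
    best, temp = cur, 2.0
    for _ in range(steps):
        cand = list(sensors)
        cand[rnd.randrange(k)] = (rnd.randrange(n), rnd.randrange(n))  # move one sensor
        c = coverage(cand, n, r)
        # Accept improvements always; worse moves with Boltzmann probability
        if c >= cur or rnd.random() < math.exp((c - cur) / temp):
            sensors, cur = cand, c
            best = max(best, cur)
        temp *= 0.999   # geometric cooling
    return best

print(anneal())
```

With three sensors of footprint 5x5, the best attainable coverage on this grid is 75 cells; the annealer approaches that bound by occasionally accepting worse layouts early on.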

  19. A risk-based approach to liquid effluent monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Hull, L.C.

    1995-10-01

    DOE Order 5400.1 identifies six objectives of a liquid effluent monitoring program. A strategy is proposed that meets these objectives in one of two ways: (1) by showing that effluent concentrations are below concentration limits set by permits or are below concentrations that could cause environmental problems or (2) by showing that concentrations in effluent have not changed from a period when treatment processes were in control and there were no unplanned releases. The intensity of liquid effluent monitoring should be graded to the importance of the source being monitored. This can be accomplished by determining the risk posed by the source. A definition of risk is presented that defines risk in terms of the statistical probability of exceeding a release limit and the time available to recover from an exceedance of a release limit. Three examples are presented that show this approach to grading an effluent monitoring program can be implemented at the Idaho National Engineering Laboratory and will reduce monitoring requirements.

  20. A risk-based approach to liquid effluent monitoring

    International Nuclear Information System (INIS)

    Hull, L.C.

    1995-10-01

    DOE Order 5400.1 identifies six objectives of a liquid effluent monitoring program. A strategy is proposed that meets these objectives in one of two ways: (1) by showing that effluent concentrations are below concentration limits set by permits or are below concentrations that could cause environmental problems or (2) by showing that concentrations in effluent have not changed from a period when treatment processes were in control and there were no unplanned releases. The intensity of liquid effluent monitoring should be graded to the importance of the source being monitored. This can be accomplished by determining the risk posed by the source. A definition of risk is presented that defines risk in terms of the statistical probability of exceeding a release limit and the time available to recover from an exceedance of a release limit. Three examples are presented that show this approach to grading an effluent monitoring program can be implemented at the Idaho National Engineering Laboratory and will reduce monitoring requirements.

  1. Deterministic network interdiction optimization via an evolutionary approach

    International Nuclear Information System (INIS)

    Rocco S, Claudio M.; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    This paper introduces an evolutionary optimization approach that can be readily applied to solve deterministic network interdiction problems. The network interdiction problem solved considers the minimization of the maximum flow that can be transmitted between a source node and a sink node for a fixed network design when there is a limited amount of resources available to interdict network links. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with its interdiction can change from link to link. For this problem, the solution approach developed is based on three steps that use: (1) Monte Carlo simulation, to generate potential network interdiction strategies, (2) the Ford-Fulkerson algorithm for maximum s-t flow, to analyze each strategy's maximum source-sink flow, and (3) an evolutionary optimization technique to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different sizes of networks and network behavior are used throughout the paper to illustrate the approach. In terms of computational effort, the results illustrate that solutions are obtained from a significantly restricted solution search space. Finally, the authors discuss the need for a reliability perspective to network interdiction, so that solutions developed address more realistic scenarios of such problems.
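
    Step (2) above, scoring a candidate interdiction strategy by the residual maximum s-t flow, can be sketched with Edmonds-Karp (a BFS-based Ford-Fulkerson variant). The small four-node network below is hypothetical, not one of the paper's examples.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum s-t flow on a dict-of-dicts capacity graph."""
    flow = 0
    residual = {u: dict(vs) for u, vs in cap.items()}
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in residual.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Bottleneck capacity along the path, then augment
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= aug
            residual.setdefault(v, {}).setdefault(u, 0)
            residual[v][u] += aug
        flow += aug

def interdicted_flow(cap, strategy, s, t):
    """Max flow remaining after the links in an interdiction strategy are cut."""
    reduced = {u: {v: (0 if (u, v) in strategy else c) for v, c in vs.items()}
               for u, vs in cap.items()}
    return max_flow(reduced, s, t)

cap = {'s': {'a': 4, 'b': 3}, 'a': {'t': 4}, 'b': {'t': 3}, 't': {}}
print(max_flow(cap, 's', 't'))                        # 7
print(interdicted_flow(cap, {('s', 'a')}, 's', 't'))  # 3
```

An evolutionary search would call `interdicted_flow` as the fitness of each budget-feasible strategy it generates.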

  2. A Swarm Optimization approach for clinical knowledge mining.

    Science.gov (United States)

    Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A

    2015-10-01

    Rule-based classification is a typical data mining task that is used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal ruleset that satisfies the requirement of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA and Decision Tables, is analyzed. The efficiency of WSO is also compared with traditional Particle Swarm Optimization. Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for accuracy of PSO and WSO are 0.5921 and 0.5846 respectively. WSO provides accurate and concise rulesets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule base optimization. The trade-off between the prediction accuracy and the size of the rule base is optimized during the design and development of rule-based clinical decision support systems. The efficiency of a decision support system relies on the content of the rule base and classification accuracy.

  3. Compliance Groundwater Monitoring of Nonpoint Sources - Emerging Approaches

    Science.gov (United States)

    Harter, T.

    2008-12-01

    Groundwater monitoring networks are typically designed for regulatory compliance of discharges from industrial sites. There, the quality of first encountered (shallow-most) groundwater is of key importance. Network design criteria have been developed for purposes of determining whether an actual or potential, permitted or incidental waste discharge has had or will have a degrading effect on groundwater quality. The fundamental underlying paradigm is that such discharge (if it occurs) will form a distinct contamination plume. Networks that guide (post-contamination) mitigation efforts are designed to capture the shape and dynamics of existing, finite-scale plumes. In general, these networks extend over areas less than one to ten hectare. In recent years, regulatory programs such as the EU Nitrate Directive and the U.S. Clean Water Act have forced regulatory agencies to also control groundwater contamination from non-incidental, recharging, non-point sources, particularly agricultural sources (fertilizer, pesticides, animal waste application, biosolids application). Sources and contamination from these sources can stretch over several tens, hundreds, or even thousands of square kilometers with no distinct plumes. A key question in implementing monitoring programs at the local, regional, and national level is, whether groundwater monitoring can be effectively used as a landowner compliance tool, as is currently done at point-source sites. We compare the efficiency of such traditional site-specific compliance networks in nonpoint source regulation with various designs of regional nonpoint source monitoring networks that could be used for compliance monitoring. We discuss advantages and disadvantages of the site vs. regional monitoring approaches with respect to effectively protecting groundwater resources impacted by nonpoint sources: Site-networks provide a tool to enforce compliance by an individual landowner. But the nonpoint source character of the contamination

  4. Primal and dual approaches to adjustable robust optimization

    NARCIS (Netherlands)

    de Ruiter, Frans

    2018-01-01

    Robust optimization has become an important paradigm to deal with optimization under uncertainty. Adjustable robust optimization is an extension that deals with multistage problems. This thesis starts with a short but comprehensive introduction to adjustable robust optimization. Then the two

  5. An approach to maintenance optimization where safety issues are important

    Energy Technology Data Exchange (ETDEWEB)

    Vatn, Jorn, E-mail: jorn.vatn@ntnu.n [NTNU, Production and Quality Engineering, 7491 Trondheim (Norway); Aven, Terje [University of Stavanger (Norway)

    2010-01-15

    The starting point for this paper is a traditional approach to maintenance optimization where an object function is used for optimizing maintenance intervals. The object function reflects maintenance cost, cost of loss of production/services, as well as safety costs, and is based on a classical cost-benefit analysis approach where a value of prevented fatality (VPF) is used to weight the importance of safety. However, the rationale for such an approach could be questioned. What is the meaning of such a VPF figure, and is it sufficient to reflect the importance of safety by calculating the expected fatality loss VPF and potential loss of lives (PLL), as is done in the cost-benefit analyses? Should the VPF be the same number for all types of accidents, or should it be increased in the case of multiple-fatality accidents to reflect gross accident aversion? In this paper, these issues are discussed. We conclude that we have to see beyond the expected values in situations with high safety impacts. A framework is presented which opens up a broader decision basis, covering considerations on the potential for gross accidents, the type of uncertainties and lack of knowledge of important risk influencing factors. Decisions with a high safety impact are moved from the maintenance department to the 'Safety Board' for a broader discussion. In this way, we avoid the object function being used in a mechanical way to optimize the maintenance, with important safety-related decisions made implicitly and outside the normal arena for safety decisions, e.g. outside the traditional 'Safety Board'. A case study from the Norwegian railways is used to illustrate the discussions.

  6. An approach to maintenance optimization where safety issues are important

    International Nuclear Information System (INIS)

    Vatn, Jorn; Aven, Terje

    2010-01-01

    The starting point for this paper is a traditional approach to maintenance optimization where an object function is used for optimizing maintenance intervals. The object function reflects maintenance cost, cost of loss of production/services, as well as safety costs, and is based on a classical cost-benefit analysis approach where a value of prevented fatality (VPF) is used to weight the importance of safety. However, the rationale for such an approach could be questioned. What is the meaning of such a VPF figure, and is it sufficient to reflect the importance of safety by calculating the expected fatality loss VPF and potential loss of lives (PLL), as is done in the cost-benefit analyses? Should the VPF be the same number for all types of accidents, or should it be increased in the case of multiple-fatality accidents to reflect gross accident aversion? In this paper, these issues are discussed. We conclude that we have to see beyond the expected values in situations with high safety impacts. A framework is presented which opens up a broader decision basis, covering considerations on the potential for gross accidents, the type of uncertainties and lack of knowledge of important risk influencing factors. Decisions with a high safety impact are moved from the maintenance department to the 'Safety Board' for a broader discussion. In this way, we avoid the object function being used in a mechanical way to optimize the maintenance, with important safety-related decisions made implicitly and outside the normal arena for safety decisions, e.g. outside the traditional 'Safety Board'. A case study from the Norwegian railways is used to illustrate the discussions.

  7. A Clustering Based Approach for Observability and Controllability Analysis for Optimal Placement of PMU

    Science.gov (United States)

    Murthy, Ch; MIEEE; Mohanta, D. K.; SMIEE; Meher, Mahendra

    2017-08-01

    Continuous monitoring and control of the power system is essential for its healthy operation. This can be achieved by making the system observable as well as controllable. Many efforts have been made by several researchers to make the system observable by placing Phasor Measurement Units (PMUs) at optimal locations, but so far the idea of controllability with PMUs has not been considered. This paper shows how to check whether the system is controllable and, if it is not, how to make it controllable using a clustering approach. The IEEE 14-bus system is used to illustrate the concept of controllability.
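
    The standard way to check controllability of a linear system (A, B) is the Kalman rank test on the controllability matrix. The two-state double-integrator example below is illustrative only, not the IEEE 14-bus system considered in the paper.

```python
import numpy as np

def is_controllable(A, B):
    """Kalman rank test: (A, B) is controllable iff the controllability
    matrix [B, AB, ..., A^(n-1) B] has full row rank n."""
    n = A.shape[0]
    blocks, M = [], B
    for _ in range(n):
        blocks.append(M)
        M = A @ M          # next block A^k B
    ctrb = np.hstack(blocks)
    return bool(np.linalg.matrix_rank(ctrb) == n)

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B1 = np.array([[0.0], [1.0]])            # actuator on the second state
B2 = np.array([[1.0], [0.0]])            # actuator on the first state only
print(is_controllable(A, B1), is_controllable(A, B2))
```

Acting on the second state steers both states through the integration chain, so (A, B1) passes the test while (A, B2) fails it; a clustering-based placement would be aimed at restoring full rank with few devices.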

  8. Approaching direct optimization of as-built lens performance

    Science.gov (United States)

    McGuire, James P.; Kuper, Thomas G.

    2012-10-01

    We describe a method approaching direct optimization of the rms wavefront error of a lens including tolerances. By including the effect of tolerances in the error function, the designer can choose to improve the as-built performance with a fixed set of tolerances and/or reduce the cost of production lenses with looser tolerances. The method relies on the speed of differential tolerance analysis and has recently become practical due to the combination of continuing increases in computer hardware speed and multiple-core processing. We illustrate the method's use on a Cooke triplet, a double Gauss, and two plastic mobile phone camera lenses.
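
    The idea of folding tolerances into the error function can be illustrated with a toy merit: nominal rms error plus a root-sum-square of per-tolerance first-order error contributions (the kind of quantity differential tolerance analysis supplies cheaply). The two designs and their numbers below are hypothetical, not the paper's lenses.

```python
import math

def as_built_merit(nominal_rms, sensitivities):
    """Estimated as-built rms wavefront error: nominal performance plus a
    root-sum-square of per-tolerance error contributions at the given
    (fixed) tolerance set."""
    return nominal_rms + math.sqrt(sum(s * s for s in sensitivities))

# Hypothetical designs: A is better nominally, B is less sensitive to
# the same fixed tolerance set.
merit_A = as_built_merit(0.10, [0.08, 0.06])   # 0.10 nominal + 0.10 RSS
merit_B = as_built_merit(0.12, [0.01, 0.01])
print(merit_A, merit_B)
```

An optimizer driven by this merit would prefer design B even though its nominal error is worse, which is exactly the trade the abstract describes.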

  9. A Hybrid Approach to the Optimization of Multiechelon Systems

    Directory of Open Access Journals (Sweden)

    Paweł Sitek

    2015-01-01

    Full Text Available In freight transportation there are two main distribution strategies: direct shipping and multiechelon distribution. In direct shipping, vehicles, starting from a depot, bring their freight directly to the destination, while in multiechelon systems, freight is delivered from the depot to the customers through intermediate points. Multiechelon systems are particularly useful for logistic issues in a competitive environment. The paper presents a concept and application of a hybrid approach to modeling and optimization of the Multi-Echelon Capacitated Vehicle Routing Problem. Two ways of mathematical programming (MP) and constraint logic programming (CLP) are integrated in one environment. MP and CLP, which treat constraints in different ways and implement different solution methods, are combined to exploit the strengths of both. The proposed approach is particularly important for discrete decision models with an objective function and many discrete decision variables added up in multiple constraints. An implementation of the hybrid approach in the ECLiPSe system using the Eplex library is presented. The Two-Echelon Capacitated Vehicle Routing Problem (2E-CVRP) and its variants are shown as an illustrative example of the hybrid approach. The presented hybrid approach is compared with classical mathematical programming on the same benchmark data sets.

  10. Optimization of signal processing algorithm for digital beam position monitor

    International Nuclear Information System (INIS)

    Lai Longwei; Yi Xing; Leng Yongbin; Yan Yingbing; Chen Zhichu

    2013-01-01

    Based on turn-by-turn (TBT) signal processing, the paper emphasizes the optimization of system timing and the implementation of digital automatic gain control and slow application (SA) modules. Beam position data including TBT, fast application (FA) and SA data can be acquired. On-line evaluation at the Shanghai Synchrotron Radiation Facility (SSRF) shows that the processor is able to acquire multi-rate position data which contain true beam movements. When the storage ring is filled with 500 bunches at 174 mA, the resolutions of TBT, FA and SA data reach 0.84, 0.44 and 0.23 μm respectively. The above results prove that the design meets the performance requirements. (authors)

  11. Optimizing nitrogen fertilizer use: Current approaches and simulation models

    International Nuclear Information System (INIS)

    Baethgen, W.E.

    2000-01-01

    Nitrogen (N) is the most common limiting nutrient in agricultural systems throughout the world. Crops need sufficient available N to achieve optimum yields and adequate grain-protein content. Consequently, sub-optimal rates of N fertilizers typically cause lower economical benefits for farmers. On the other hand, excessive N fertilizer use may result in environmental problems such as nitrate contamination of groundwater and emission of N2O and NO. In spite of the economical and environmental importance of good N fertilizer management, the development of optimum fertilizer recommendations is still a major challenge in most agricultural systems. This article reviews the approaches most commonly used for making N recommendations: expected yield level, soil testing and plant analysis (including quick tests). The paper introduces the application of simulation models that complement traditional approaches, and includes some examples of current applications in Africa and South America. (author)

  12. Taxes, subsidies and unemployment - a unified optimization approach

    Directory of Open Access Journals (Sweden)

    Erik Bajalinov

    2010-12-01

    Full Text Available Like a linear programming (LP) problem, a linear-fractional programming (LFP) problem can be usefully applied in a wide range of real-world applications. In the last few decades a lot of research papers and monographs were published throughout the world in which authors (mainly mathematicians) investigated different theoretical and algorithmic aspects of LFP problems in various forms. In this paper we consider these two approaches to optimization (based on linear and linear-fractional objective functions) on the same feasible set, compare the results they lead to and give an interpretation in terms of taxes, subsidies and manpower requirements. We show that in certain cases both approaches are closely connected with one another and may be fruitfully utilized simultaneously.

  13. Reliability optimization using multiobjective ant colony system approaches

    International Nuclear Information System (INIS)

    Zhao Jianhua; Liu Zhaoheng; Dao, M.-T.

    2007-01-01

    The multiobjective ant colony system (ACS) meta-heuristic has been developed to provide solutions for the reliability optimization problem of series-parallel systems. This type of problem involves selection of components with multiple choices and redundancy levels that produce maximum benefits, and is subject to cost and weight constraints at the system level. These are very common and realistic problems encountered in the conceptual design of many engineering systems. It is becoming increasingly important to develop efficient solutions to these problems because many mechanical and electrical systems are becoming more complex, even as development schedules get shorter and reliability requirements become very stringent. The multiobjective ACS algorithm offers distinct advantages for these problems compared with alternative optimization methods, and can be applied to a more diverse problem domain with respect to the type or size of the problems. Through the combination of probabilistic search, multiobjective formulation of local moves and the dynamic penalty method, the multiobjective ACSRAP allows us to obtain an optimal design solution very frequently and more quickly than with some other heuristic approaches. The proposed algorithm was successfully applied to an engineering design problem of a gearbox with multiple stages.

  14. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha

    2013-02-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows sequential optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure R(T) which is the number of unordered pairs of rows with different decisions in the decision table T. For a nonnegative real number β, we consider β-decision rules that localize rows in subtables of T with uncertainty at most β. Our algorithm constructs a directed acyclic graph Δβ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most β. The graph Δβ(T) allows us to describe the whole set of so-called irredundant β-decision rules. We can describe all irredundant β-decision rules with minimum length, and after that among these rules describe all rules with maximum coverage. We can also change the order of optimization. The consideration of irredundant rules only does not change the results of optimization. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2012 Elsevier Inc. All rights reserved.
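
    The uncertainty measure R(T) defined in the abstract, the number of unordered row pairs with different decisions, can be computed directly from the decision counts; a minimal sketch on a hypothetical decision column:

```python
from collections import Counter

def uncertainty(decisions):
    """R(T): number of unordered pairs of rows with different decisions,
    computed as (all pairs) minus (same-decision pairs)."""
    n = len(decisions)
    same = sum(c * (c - 1) // 2 for c in Counter(decisions).values())
    return n * (n - 1) // 2 - same

# 5 rows -> 10 pairs; two 'yes'/'yes' and 'no'/'no' pairs agree, so R(T) = 8
print(uncertainty(['yes', 'yes', 'no', 'no', 'maybe']))
```

A subtable with R(T) = 0 is pure (all rows share one decision), and the algorithm stops partitioning a subtable as soon as its R(T) drops to the threshold β or below.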

  15. A robust optimization approach for energy generation scheduling in microgrids

    International Nuclear Information System (INIS)

    Wang, Ran; Wang, Ping; Xiao, Gaoxi

    2015-01-01

    Highlights: • A new uncertainty model is proposed for better describing unstable energy demands. • An optimization problem is formulated to minimize the cost of microgrid operations. • Robust optimization algorithms are developed to transform and solve the problem. • The proposed scheme can prominently reduce energy expenses. • Numerical results provide useful insights for future investment policy making. - Abstract: In this paper, a cost minimization problem is formulated to intelligently schedule energy generation for microgrids equipped with unstable renewable sources and combined heat and power (CHP) generators. In such systems, the fluctuating net demands (i.e., the electricity demands not balanced by renewable energies) and heat demands impose unprecedented challenges. To cope with the uncertain nature of the net demand and heat demand, a new flexible uncertainty model is developed. Specifically, we introduce reference distributions according to predictions and field measurements and then define uncertainty sets to confine net and heat demands. The model allows the net demand and heat demand distributions to fluctuate around their reference distributions. Another difficulty in this problem is the indeterminate electricity market prices. We develop chance constraint approximations and robust optimization approaches to first transform and then solve the original problem. Numerical results based on real-world data evaluate the impacts of different parameters. It is shown that our energy generation scheduling strategy performs well and that the integration of combined heat and power (CHP) generators effectively reduces the system expenditure. Our research also helps shed light on investment policy making for microgrids.
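
    The min-max flavor of robust scheduling can be illustrated with a one-period toy: commit a generation level against a net-demand interval and minimize the worst-case cost. The prices, demand bounds and piecewise-linear cost are invented for illustration and are unrelated to the paper's model.

```python
def worst_case_cost(g, d_low, d_high, c_gen=2.0, c_buy=5.0, c_spill=1.0):
    """Worst-case cost of committing generation g when net demand may land
    anywhere in [d_low, d_high]: shortfall is bought at the spot price,
    surplus is spilled.  The cost is convex piecewise-linear in demand,
    so the worst case sits at an interval endpoint."""
    def cost(d):
        return c_gen * g + c_buy * max(d - g, 0.0) + c_spill * max(g - d, 0.0)
    return max(cost(d_low), cost(d_high))

# Robust commitment: minimize the worst case over a coarse grid of 0..20 MW
candidates = [x / 10.0 for x in range(0, 201)]
g_star = min(candidates, key=lambda g: worst_case_cost(g, 8.0, 12.0))
print(g_star)   # commitment sits between the demand extremes
```

The robust commitment lands strictly inside the demand interval, balancing the penalty for buying shortfall against the cheaper penalty for spilling surplus; the paper's uncertainty sets play the role of this interval in many periods at once.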

  16. Evolutionary algorithms approach for integrated bioenergy supply chains optimization

    International Nuclear Information System (INIS)

    Ayoub, Nasser; Elmoshi, Elsayed; Seki, Hiroya; Naka, Yuji

    2009-01-01

    In this paper, we propose an optimization model and solution approach for designing and evaluating an integrated system of bioenergy production supply chains (SC) at the local level. Designing SCs that simultaneously utilize a set of bio-resources together is the complicated task considered here. The complication arises from the different natures and sources of the bio-resources used in bioenergy production, i.e., wet, dry, agricultural, industrial, etc. Moreover, the different concerns that decision makers should take into account to overcome the trade-off anxieties of the socialists and investors, i.e., social, environmental and economical factors, were considered through the options of multi-criteria optimization. The first part of this research was introduced in earlier work explaining the general Bioenergy Decision System gBEDS [Ayoub N, Martins R, Wang K, Seki H, Naka Y. Two levels decision system for efficient planning and implementation of bioenergy production. Energy Convers Manage 2007;48:709-23]. In this paper, a brief introduction to gBEDS is given; the optimization model is presented and followed by a case study on designing a supply chain of nine bio-resources at Iida city in the middle part of Japan.

  17. An optimization approach for fitting canonical tensor decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M. (Sandia National Laboratories, Albuquerque, NM); Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
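
    The ALS baseline that the gradient-based methods are compared against can be sketched in a few lines: with two factor matrices fixed, the third is a linear least-squares solve against the matching unfolding of the tensor. The unfolding convention and the synthetic rank-2 tensor below are illustrative choices.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product, rows indexed with V's index fastest."""
    return np.einsum('ir,jr->ijr', U, V).reshape(U.shape[0] * V.shape[0], -1)

def cp_als(X, rank, iters=200, seed=0):
    """Rank-R CANDECOMP/PARAFAC fit via alternating least squares: each
    factor matrix is the least-squares solution with the other two fixed."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = [rng.standard_normal((d, rank)) for d in (I, J, K)]
    X1 = X.reshape(I, J * K)                      # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)   # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(iters):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = X2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Recover a synthetic noiseless rank-2 tensor
rng = np.random.default_rng(42)
At, Bt, Ct = (rng.standard_normal((d, 2)) for d in (4, 5, 3))
X = np.einsum('ir,jr,kr->ijk', At, Bt, Ct)
A, B, C = cp_als(X, rank=2)
err = np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X)
print(err)
```

Each of the three updates costs one least-squares solve, which is the per-iteration cost the abstract says a gradient evaluation can match.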

  18. Optimization of minoxidil microemulsions using fractional factorial design approach.

    Science.gov (United States)

    Jaipakdee, Napaphak; Limpongsa, Ekapol; Pongjanyakul, Thaned

    2016-01-01

    The objective of this study was to apply fractional factorial and multi-response optimization designs using a desirability function approach for developing topical microemulsions. Minoxidil (MX) was used as a model drug. Limonene was used as the oil phase. Based on solubility, Tween 20 and caprylocaproyl polyoxyl-8 glycerides were selected as surfactants, and propylene glycol and ethanol were selected as co-solvents in the aqueous phase. Experiments were performed according to a two-level fractional factorial design to evaluate the effects of the independent variables Tween 20 concentration in the surfactant system (X1), surfactant concentration (X2), ethanol concentration in the co-solvent system (X3) and limonene concentration (X4) on the MX solubility (Y1), permeation flux (Y2), lag time (Y3) and deposition (Y4) of MX microemulsions. It was found that Y1 increased with increasing X3 and decreasing X2 and X4, whereas Y2 increased with decreasing X1, X2 and increasing X3. While Y3 was not affected by these variables, Y4 increased with decreasing X1 and X2. Three regression equations were obtained and used to calculate predicted values of the responses Y1, Y2 and Y4. The predicted values matched the experimental values reasonably well, with high determination coefficients. Using the optimal desirability function, the optimized microemulsion demonstrating the highest MX solubility, permeation flux and skin deposition was confirmed at low levels of X1, X2 and X4 and a high level of X3.
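
    The multi-response desirability step can be illustrated with the standard Derringer-Suich "larger is better" transform and a geometric-mean overall score. The response values and acceptable ranges below are hypothetical stand-ins, not the study's measurements.

```python
def desirability_larger_is_better(y, low, high, weight=1.0):
    """Derringer-Suich 'larger is better' desirability in [0, 1]:
    0 at or below `low`, 1 at or above `high`, power-scaled in between."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** weight

def overall_desirability(ds):
    """Geometric mean of individual desirabilities: one poor response
    drags the whole formulation's score down."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical responses for one candidate formulation, each maximized
d1 = desirability_larger_is_better(18.0, 5.0, 25.0)   # solubility
d2 = desirability_larger_is_better(40.0, 10.0, 60.0)  # permeation flux
d3 = desirability_larger_is_better(3.0, 1.0, 5.0)     # skin deposition
print(overall_desirability([d1, d2, d3]))
```

The optimizer then searches the factor levels (X1..X4, via the fitted regression equations) for the setting that maximizes this overall score.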

  19. Renal function monitoring in heart failure - what is the optimal frequency? A narrative review.

    Science.gov (United States)

    Al-Naher, Ahmed; Wright, David; Devonald, Mark Alexander John; Pirmohamed, Munir

    2018-01-01

    The second most common cause of hospitalization due to adverse drug reactions in the UK is renal dysfunction due to diuretics, particularly in patients with heart failure, where diuretic therapy is a mainstay of treatment regimens. Therefore, the optimal frequency for monitoring renal function in these patients is an important consideration for preventing renal failure and hospitalization. This review looks at the current evidence for optimal monitoring practices of renal function in patients with heart failure according to national and international guidelines on the management of heart failure (AHA/NICE/ESC/SIGN). Current guidance on renal function monitoring is largely based on expert opinion, with a lack of clinical studies that have specifically evaluated the optimal frequency of renal function monitoring in patients with heart failure. Furthermore, there is variability between guidelines, and recommendations are typically nonspecific. Safer prescribing of diuretics in combination with other anti-heart-failure treatments requires better evidence on the frequency of renal function monitoring. We suggest a move toward more personalized monitoring rather than the current medication-based guidance. Such flexible clinical guidelines could be implemented using intelligent clinical decision support systems. Personalized renal function monitoring would be more effective in preventing renal decline, rather than reacting to it. © 2017 The Authors. British Journal of Clinical Pharmacology published by John Wiley & Sons Ltd on behalf of British Pharmacological Society.

  20. Optimal Integration of Intermittent Renewables: A System LCOE Stochastic Approach

    Directory of Open Access Journals (Sweden)

    Carlo Lucheroni

    2018-03-01

    Full Text Available We propose a system-level approach to value the impact on costs of the integration of intermittent renewable generation in a power system, based on expected breakeven cost and breakeven cost risk. To do this, we carefully reconsider the definition of the Levelized Cost of Electricity (LCOE) when extended to non-dispatchable generation, by examining extra costs and gains originated by the costly management of random power injections. We are thus led to define a ‘system LCOE’ as a system-dependent LCOE that properly takes intermittent generation into account. In order to include breakeven cost risk we further extend this deterministic approach to a stochastic setting, by introducing a ‘stochastic system LCOE’. This extension allows us to discuss the optimal integration of intermittent renewables from a broad, system-level point of view. This paper thus aims to provide power producers and policy makers with a new methodological scheme, still based on the LCOE, which updates this valuation technique to current energy system configurations characterized by a large share of non-dispatchable production. Quantifying and optimizing the impact of intermittent renewables integration on power system costs, risk and CO2 emissions, the proposed methodology can be used as a powerful tool of analysis for assessing environmental and energy policies.
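    The deterministic LCOE that the 'system LCOE' extends has the standard discounted-cost-over-discounted-energy form; a sketch with hypothetical cash flows (the paper's system-level and stochastic corrections are not reproduced here):

```python
def lcoe(costs, energy, rate):
    """Classical LCOE: discounted lifetime costs divided by discounted energy.

    costs[t] and energy[t] are per-year totals for t = 0..T; rate is the
    discount rate. Result is in cost per unit of energy.
    """
    disc_cost = sum(c / (1.0 + rate) ** t for t, c in enumerate(costs))
    disc_energy = sum(e / (1.0 + rate) ** t for t, e in enumerate(energy))
    return disc_cost / disc_energy

# Hypothetical plant: capital cost in year 0, then constant O&M and output.
value = lcoe([1000.0, 20.0, 20.0, 20.0], [0.0, 100.0, 100.0, 100.0], 0.05)
```

    The 'system LCOE' augments the numerator with system-dependent integration costs of non-dispatchable generation; the stochastic version then treats the resulting breakeven cost as a random quantity.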

  1. Optimal Subinterval Selection Approach for Power System Transient Stability Simulation

    Directory of Open Access Journals (Sweden)

    Soobae Kim

    2015-10-01

    Full Text Available Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because an analysis of the system dynamics might be required. This selection is usually made from engineering experience and, perhaps, trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used with a focus on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. The performance of the proposed method is demonstrated with the GSO 37-bus system.

  2. A statistical approach to optimizing concrete mixture design.

    Science.gov (United States)

    Ahmad, Shamsad; Alghamdi, Saeid A

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.
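    The 3³ design and regression fit described above can be sketched as follows; the strength data and coefficients are synthetic stand-ins, not the paper's measurements (the paper also fits higher-order polynomial terms).

```python
import itertools
import numpy as np

# Three factors at the three levels reported in the abstract (3^3 = 27 mixtures).
wcm_levels = [0.38, 0.43, 0.48]   # water/cementitious materials ratio
cm_levels = [350, 375, 400]       # cementitious materials content, kg/m^3
fa_levels = [0.35, 0.40, 0.45]    # fine/total aggregate ratio
design = list(itertools.product(wcm_levels, cm_levels, fa_levels))

# Least-squares fit of y = b0 + b1*x1 + b2*x2 + b3*x3 to hypothetical strengths.
rng = np.random.default_rng(1)
X = np.array([[1.0, w, c, f] for (w, c, f) in design])
y = 80.0 - 60.0 * X[:, 1] + 0.05 * X[:, 2] + 5.0 * X[:, 3] \
    + rng.normal(0.0, 0.1, len(design))
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

    With the fitted model in hand, mixture optimization reduces to searching the factor region for the proportions that maximize predicted strength under cost or workability constraints.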

  3. A Statistical Approach to Optimizing Concrete Mixture Design

    Directory of Open Access Journals (Sweden)

    Shamsad Ahmad

    2014-01-01

    Full Text Available A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.

  4. Transfer of European Approach to Groundwater Monitoring in China

    Science.gov (United States)

    Zhou, Y.

    2007-12-01

    Major groundwater development in North China has been a key factor in the huge economic growth and the achievement of self-sufficiency in food production. Groundwater accounts for more than 70 percent of urban water supply and provides an important source of irrigation water during dry periods. This has, however, caused continuous groundwater level decline and many associated problems: hundreds of thousands of dry wells, dry river beds, land subsidence, seawater intrusion and groundwater quality deterioration. Groundwater levels in the shallow unconfined aquifers have fallen 10 m to 50 m, at an average rate of 1 m/year. In the deep confined aquifers groundwater levels have commonly fallen 30 m to 90 m, at an average rate of 3 to 5 m/year. Furthermore, elevated nitrate concentrations have been found in shallow groundwater on a large scale. Pesticides have been detected in vulnerable aquifers. Urgent actions are necessary for aquifer recovery and mitigating groundwater pollution. Groundwater quantity and quality monitoring plays a very important role in formulating cost-effective groundwater protection strategies. In 2000 the European Union initiated the Water Framework Directive (2000/60/EC) to protect all waters in Europe. The objective is to achieve good water and ecological status by 2015 across all member states. The Directive requires monitoring of surface water and groundwater in all river basins. A guidance document for monitoring was developed and published in 2003. Groundwater monitoring programs are distinguished into groundwater level monitoring and groundwater quality monitoring. Groundwater quality monitoring is further divided into surveillance monitoring and operational monitoring. The monitoring guidance specifies key principles for the design and operation of monitoring networks. A Sino-Dutch cooperation project was developed to transfer the European approach to groundwater monitoring to China. The project aims at building a China Groundwater Information Centre. 
Case studies

  5. Decentralized DC Microgrid Monitoring and Optimization via Primary Control Perturbations

    Science.gov (United States)

    Angjelichinoski, Marko; Scaglione, Anna; Popovski, Petar; Stefanovic, Cedomir

    2018-06-01

    We treat the emerging power systems with direct current (DC) MicroGrids, characterized by high penetration of power electronic converters. We rely on the power electronics to propose a decentralized solution for autonomous learning of and adaptation to the operating conditions of DC MicroGrids; the goal is to eliminate the need to rely on an external communication system for this purpose. The solution works within the primary droop control loops and uses only local bus voltage measurements. Each controller is able to estimate (i) the generation capacities of power sources, (ii) the load demands, and (iii) the conductances of the distribution lines. To define a well-conditioned estimation problem, we employ a decentralized strategy where the primary droop controllers temporarily switch between operating points in a coordinated manner, following amplitude-modulated training sequences. We study the use of the estimator in a decentralized solution of the Optimal Economic Dispatch problem. The evaluations confirm the usefulness of the proposed solution for autonomous MicroGrid operation.

  6. Monitoring and optimization of thermal recovery wells at Nexen's Long Lake project

    Energy Technology Data Exchange (ETDEWEB)

    Furtado, S.; Howe, A.; Wozney, G.; Zaffar, S. [Nexen Inc. (Canada); Nelson, A. [Matrikon Inc. (Canada)

    2011-07-01

    The Long Lake project, operated by Nexen and situated in the Athabasca Oil Sands area in Alberta, Canada, is a steam-assisted gravity drainage scheme. In such thermal recovery processes, access to real-time information is crucial. Nexen used specific tools to optimize monitoring in its Long Lake project, and the aim of this paper is to present those customized well and facilities dashboards and reservoir trends. Real-time and historical data on pressure, temperature, injection and production rates are used in a Honeywell PHD Historian connected to a Delta-V DCS system to optimize recovery from the deposit. Results showed that these enhanced monitoring capabilities provided Nexen the ability to react rapidly to abnormal conditions, which resulted in significant financial benefits. The implementation of dashboards and reservoir trends in its Long Lake project helped Nexen to better monitor the reservoir and thus to optimize bitumen recovery.

  7. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, referred to as the ‘failure probability function (FPF)’. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology
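    The core trick, reusing one set of weighted reliability samples to evaluate the failure probability across many designs, can be illustrated with a toy one-dimensional limit state (the limit-state function, densities and shift below are hypothetical; the paper's construction is more general):

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x, d):
    """Hypothetical limit-state function: failure when g(x, d) < 0."""
    return d - x

# Draw once from an importance density h = N(3, 1), shifted toward the failure
# region, while the true input density is f = N(0, 1); w = f/h are the weights.
N = 100_000
shift = 3.0
z = rng.standard_normal(N) + shift
w = np.exp(-0.5 * z ** 2) / np.exp(-0.5 * (z - shift) ** 2)

def failure_prob(d):
    """FPF-style estimate of P[g(X, d) < 0], reusing the same weighted samples."""
    return float(np.mean((g(z, d) < 0) * w))
```

    Changing the design variable d requires no new samples, which is the 'single reliability analysis per iteration' property the abstract highlights.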

  8. Optimization of rootkit revealing system resources – A game theoretic approach

    Directory of Open Access Journals (Sweden)

    K. Muthumanickam

    2015-10-01

    Full Text Available A malicious rootkit is a collection of programs designed with the intent of infecting and monitoring the victim computer without the user’s permission. After the victim has been compromised, the remote attacker can easily cause further damage. In order to infect, compromise and monitor, rootkits adopt the Native Application Programming Interface (API) hooking technique. To reveal hidden rootkits, current rootkit detection techniques check different data structures which hold references to Native APIs. To verify these data structures, a large amount of system resources is required, because the number of APIs in these data structures is quite large. The game theoretic approach is a useful mathematical tool to simulate network attacks. In this paper, a mathematical model is framed to optimize resource consumption using game theory. To the best of our knowledge, this is the first work proposed for optimizing resource consumption while revealing rootkit presence using game theory. A non-cooperative game model is used to discuss the problem. Analysis and simulation results show that our game theoretic model can effectively reduce resource consumption by selectively monitoring the number of APIs on the Windows platform.

  9. Optimization of in-vivo monitoring program for radiation emergency response

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Wi Ho; Kim, Jong Kyung [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of)

    2016-12-15

    In case of radiation emergencies, internal exposure monitoring of members of the public will be required to confirm the internal contamination of each individual. The in-vivo monitoring technique using a portable gamma spectrometer can be easily applied for internal exposure monitoring in the vicinity of the on-site area. In this study, minimum detectable doses (MDDs) for {sup 134}Cs, {sup 137}Cs, and {sup 131}I were calculated by adjusting minimum detectable activities (MDAs) from 50 to 1,000 Bq to find the optimal in-vivo counting condition. DCAL software was used to derive the retention fractions of Cs and I isotopes in the whole body and thyroid, respectively. The minimum detectable level was determined against a committed effective dose of 0.1 mSv for emergency response. We found that MDDs at each MDA increased with the elapsed time. 1,000 Bq for {sup 134}Cs and {sup 137}Cs, and 100 Bq for {sup 131}I, were suggested as optimal MDAs for providing an in-vivo monitoring service in case of radiation emergencies. An in-vivo monitoring program for emergency response should be designed to achieve the optimal MDA suggested by the present work. We expect that the reduction of counting time compared with a routine monitoring program can achieve a high-throughput system in case of radiation emergencies.
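    The MDA-to-MDD conversion behind this analysis is simple arithmetic once the biokinetic quantities are fixed; the sketch below uses hypothetical retention fractions and dose coefficients (real values come from biokinetic models such as DCAL/ICRP, as in the study):

```python
def mdd_from_mda(mda_bq, retention_fraction, dose_coeff_sv_per_bq):
    """Minimum detectable dose (Sv) from a minimum detectable activity (Bq).

    intake = MDA / m(t); dose = intake * e(50), where m(t) is the fraction of
    the intake retained at the measurement time and e(50) is the committed-
    effective-dose coefficient. All numbers used below are placeholders.
    """
    intake_bq = mda_bq / retention_fraction
    return intake_bq * dose_coeff_sv_per_bq

# E.g., MDA = 1,000 Bq, 50% retained at measurement, e(50) = 1e-8 Sv/Bq:
mdd_sv = mdd_from_mda(1000.0, 0.5, 1.0e-8)   # 2e-5 Sv = 0.02 mSv
```

    Because m(t) decreases with time after intake, the MDD for a fixed MDA grows with elapsed time, which is the trend reported above.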

  10. The Transcranial Doppler Sonography for Optimal Monitoring and Optimization of Cerebral Perfusion in Aortic Arch Surgery: A Case Series.

    Science.gov (United States)

    Ghazy, Tamer; Darwisch, Ayham; Schmidt, Torsten; Nguyen, Phong; Elmihy, Sohaila; Fajfrova, Zuzana; Zickmüller, Claudia; Matschke, Klaus; Kappert, Utz

    2017-06-16

    To analyze the feasibility and advantages of transcranial Doppler sonography (TCD) for monitoring and optimization of selective cerebral perfusion (SCP) in aortic arch surgery. From April 2013 to April 2014, nine patients with extensive aortic pathology underwent surgery under moderate hypothermic cardiac arrest with unilateral antegrade SCP under TCD monitoring in our institution. An adequate sonographic window and visualization of the circle of Willis were to be confirmed. Intraoperatively, cerebral cross-filling of the contralateral cerebral arteries on unilateral SCP was to be confirmed with TCD. If no cross-filling was confirmed, an optimization of the SCP was performed by increasing cerebral flow and increasing PCO2. If not successful, the SCP was to be switched to bilateral perfusion. Air bubble hits were recorded at the termination of SCP. A sonographic window was confirmed in all patients. Procedural success was 100%. The mean operative time was 298 ± 89 minutes. Adequate cross-filling was confirmed in 8 patients. In 1 patient, inadequate cross-filling was detected by TCD and an optimization of cerebral flow was necessary, which was successfully confirmed by TCD. There was no conversion to bilateral perfusion. Extensive air bubble hits were confirmed in 1 patient, who suffered a postoperative stroke. The 30-day mortality rate was 0%. In conclusion, TCD is feasible for cerebral perfusion monitoring in aortic surgery. It enables confirmation of the adequacy of the cerebral perfusion strategy or the need for its optimization. Documentation of calcific or air bubble hits might add insight into patients suffering postoperative neurological deficits.

  11. Measuring and monitoring IT using a balanced scorecard approach.

    Science.gov (United States)

    Gash, Deborah J; Hatton, Todd

    2007-01-01

    Ensuring that the information technology department is aligned with the overall health system strategy and is performing at a consistently high level is a priority at Saint Luke's Health System in Kansas City, Mo. The information technology department of Saint Luke's Health System has been using the balanced scorecard approach described in this article to measure and monitor its performance for four years. This article will review the structure of the IT department's scorecard; the categories and measures used; how benchmarks are determined; how linkage to the organizational scorecard is made; how results are reported; how changes are made to the scorecard; and tips for using a scorecard in other IT departments.

  12. Contaminated Land Remediation on decommissioned nuclear facilities: an optimized approach

    International Nuclear Information System (INIS)

    Sauer, Emilie

    2016-01-01

    The site of the Monts d'Arree located in Brennilis in the region of Brittany in France is a former 70 MWe heavy water reactor. EDF is now in charge of its decommissioning. The effluent treatment facility (STE) is currently being dismantled. As the future use of the site will exclude any nuclear activity, EDF is taking site release into consideration. Therefore a land management strategy for the land and soil is needed. An optimized approach for the STE is being proposed to the French Regulator. In France, there is no specific regulation related to contaminated land (either radiologically or chemically contaminated). The French Nuclear Safety Authority's doctrine for radioactively contaminated land is a reference approach which involves complete clean-up, removing any trace of artificial radioactivity in the ground. If technical difficulties are encountered or the quantity of radioactive waste produced is too voluminous, an optimised clean-up can be implemented. EDF has been engaged since 2008 in drawing up a common guideline with the other French nuclear operators (CEA and AREVA). The operators' guideline proposed the first steps to define how to optimise radioactive waste volumes and to carry out a cost-benefit analysis. This is in accordance with the IAEA's prescriptions. Historically, various incidents involving effluent drum spills caused radiological contamination in the building platform and the underlying soil. While conducting the decontamination works in 2004/2005, it was impossible to remove all contamination (which went deeper than expected). A large characterization campaign was carried out in order to map the contamination. For the site investigation, 34 boreholes were drilled from 2 to 5 m under the building platform and 98 samples were analyzed for gamma, beta and alpha emitters. With the results, the contamination was mapped using a geostatistical approach developed by Geovariances™. Main results were: - Soils are

  13. Using Geoscience and Geostatistics to Optimize Groundwater Monitoring Networks at the Savannah River Site

    International Nuclear Information System (INIS)

    Tuckfield, R.C.

    2001-01-01

    A team of scientists, engineers, and statisticians was assembled to review the operational efficiency of groundwater monitoring networks at the US Department of Energy Savannah River Site (SRS). Subsequent to a feasibility study, this team selected and conducted an analysis of the A/M area groundwater monitoring well network. The purpose was to optimize the number of groundwater wells requisite for monitoring the plumes of the principal constituent of concern, viz., trichloroethylene (TCE). The project gathered technical expertise from the Savannah River Technology Center (SRTC), the Environmental Restoration Division (ERD), and the Environmental Protection Department (EPD) of SRS

  14. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha

    2012-10-04

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to length or coverage. We introduce an uncertainty measure J(T) which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.
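    The uncertainty measure J(T) defined above is easy to state in code; representing a table's rows as (attributes, decision) pairs is a choice made here for illustration:

```python
from collections import Counter

def uncertainty(rows):
    """J(T): number of rows of T minus the count of the most common decision.

    `rows` is a list of (attribute_tuple, decision) pairs; only the decisions
    matter for J(T). J(T) = 0 means every row carries the same decision, so
    the partitioning of a subtable stops once J(T) <= gamma.
    """
    decisions = [d for _, d in rows]
    return len(decisions) - Counter(decisions).most_common(1)[0][1]
```

    A subtable with J(T) ≤ γ can be covered by a single γ-decision rule ending in the most common decision, which is what the DAG construction exploits.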

  15. Design optimization for cost and quality: The robust design approach

    Science.gov (United States)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
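    The signal-to-noise ratios referred to above have standard Taguchi forms; two common variants are sketched below (which one applies depends on whether the response should be maximized or minimized):

```python
import math

def sn_larger_the_better(ys):
    """Taguchi S/N ratio (dB) when larger response values are better."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def sn_smaller_the_better(ys):
    """Taguchi S/N ratio (dB) when smaller response values are better."""
    return -10.0 * math.log10(sum(y ** 2 for y in ys) / len(ys))
```

    Across the orthogonal-array runs, the design parameter combination that maximizes the relevant S/N ratio is selected, which is how robustness to noise factors is quantified.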

  16. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2012-01-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to length or coverage. We introduce an uncertainty measure J(T) which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.

  17. Multipurpose Water Reservoir Management: An Evolutionary Multiobjective Optimization Approach

    Directory of Open Access Journals (Sweden)

    Luís A. Scola

    2014-01-01

    Full Text Available The reservoirs that feed large hydropower plants should be managed in order to provide other uses for the water resources. Those uses include, for instance, flood control and avoidance, irrigation, navigability in the rivers, and other ones. This work presents an evolutionary multiobjective optimization approach for the study of multiple water usages in multiple interlinked reservoirs, including both power generation objectives and other objectives not related to energy generation. The classical evolutionary algorithm NSGA-II is employed as the basic multiobjective optimization machinery, being modified in order to cope with specific problem features. The case studies, which include the analysis of a problem which involves an objective of navigability on the river, are tailored in order to illustrate the usefulness of the data generated by the proposed methodology for decision-making on the problem of operation planning of multiple reservoirs with multiple usages. It is shown that it is even possible to use the generated data in order to determine the cost of any new usage of the water, in terms of the opportunity cost that can be measured on the revenues related to electric energy sales.

  18. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

    Full Text Available The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations, regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose to use the low-cost, globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
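    To illustrate the spectral-gradient machinery (not the paper's specific non-quadratic merit function), here is a Barzilai-Borwein iteration applied to the plain least-squares surrogate f(x) = ½‖Ax − b‖²; the paper's actual convex function and projection steps differ:

```python
import numpy as np

def spg_least_squares(A, b, n_iter=500):
    """Spectral (Barzilai-Borwein) gradient iteration on f(x) = 0.5*||Ax - b||^2.

    Unconstrained, so the 'projection' is the identity; this shows only the
    spectral step-length mechanics used by SPG-type methods.
    """
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)            # gradient of f at x
    alpha = 1.0
    for _ in range(n_iter):
        x_new = x - alpha * g
        g_new = A.T @ (A @ x_new - b)
        s, yv = x_new - x, g_new - g
        sy = s @ yv
        alpha = (s @ s) / sy if sy > 0 else 1.0   # BB1 spectral step length
        x, g = x_new, g_new
    return x
```

    The spectral step adapts to the local curvature without a Hessian, which is what makes SPG-type iterations low-cost per step.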

  19. Optimization of decision rules based on dynamic programming approach

    KAUST Repository

    Zielosko, Beata

    2014-01-14

    This chapter is devoted to the study of an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure that is the difference between the number of rows in a given decision table and the number of rows labeled with the most common decision for this table, divided by the number of rows in the decision table. We fix a threshold γ, such that 0 ≤ γ < 1, and study so-called γ-decision rules (approximate decision rules) that localize rows in subtables whose uncertainty is at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by pairs "attribute = value". The algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The chapter also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2014 Springer International Publishing Switzerland.

  20. An analytic approach to optimize tidal turbine fields

    Science.gov (United States)

    Pelz, P.; Metzler, M.

    2013-12-01

    Motivated by global warming due to CO2 emissions, various technologies for harvesting energy from renewable sources are being developed. Hydrokinetic turbines are applied to surface watercourses or tidal flows to gain electrical energy. Since the available power for hydrokinetic turbines is proportional to the projected cross-section area, fields of turbines are installed to scale shaft power. Each hydrokinetic turbine of a field can be considered as a disk actuator. In [1], the first author derives the optimal operation point for hydropower in an open channel. The present paper concerns a 0-dimensional model of a disk actuator in an open-channel flow with bypass, as a special case of [1]. Based on the energy equation, the continuity equation and the momentum balance, an analytical approach is taken to calculate the coefficient of performance for hydrokinetic turbines with bypass flow as a function of the turbine head and the ratio of turbine width to channel width.
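    For orientation, the classical open-flow actuator-disk result (no bypass or channel blockage) gives the power coefficient C_p = 4a(1 − a)², maximized at the Betz limit of 16/27; the paper's open-channel model with bypass modifies this picture, so the sketch below is only the textbook baseline:

```python
def power_coefficient(a):
    """Classical actuator-disk power coefficient C_p = 4*a*(1 - a)^2,
    where a is the axial induction factor (open flow, no blockage)."""
    return 4.0 * a * (1.0 - a) ** 2

# Scan the induction factor; the maximum sits at a = 1/3 (Betz limit 16/27).
best = max(power_coefficient(i / 1000.0) for i in range(1001))
```

    Confinement by channel walls and a free surface generally allows coefficients above this open-flow value, which is one reason a dedicated open-channel analysis is needed.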

  1. Approaches of Russian oil companies to optimal capital structure

    Science.gov (United States)

    Ishuk, T.; Ulyanova, O.; Savchitz, V.

    2015-11-01

    Oil companies play a vital role in the Russian economy. Demand for hydrocarbon products will increase over the coming decades along with population growth and social needs. A shift away from the raw-material orientation of the Russian economy and the transition to an innovative path of development do not preclude the further development of the oil industry. Moreover, society believes that this sector must bring the Russian economy onto the road of innovative development through neo-industrialization. To achieve this, government support as well as effective capital management by companies is required. To form an optimal capital structure, it is necessary to minimize the cost of capital, decrease specific risks within existing limits, and maximize profitability. The capital structure analysis of Russian and foreign oil companies shows different approaches, reasons and conditions and, consequently, different equity-to-debt ratios and costs of capital, which demands an effective capital management strategy.
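    The 'minimize the capital cost' criterion mentioned above is usually operationalized as the weighted average cost of capital (WACC); a sketch with hypothetical figures:

```python
def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    """Weighted average cost of capital with the standard debt tax shield."""
    total = equity + debt
    return (equity / total) * cost_equity \
         + (debt / total) * cost_debt * (1.0 - tax_rate)

# Hypothetical firm: 50/50 equity-debt mix, 12% cost of equity,
# 6% pre-tax cost of debt, 20% tax rate.
rate = wacc(50.0, 50.0, 0.12, 0.06, 0.20)   # 0.084
```

    Choosing the equity/debt mix that minimizes this rate, subject to risk limits, is the textbook formalization of the optimal-capital-structure problem discussed in the abstract.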

  2. Optimal extraction of petroleum resources: an empirical approach

    International Nuclear Information System (INIS)

    Helmi-Oskoui, B.; Narayanan, R.; Glover, T.; Lyon, K.S.; Sinha, M.

    1992-01-01

    Petroleum reservoir behaviour at different levels of reservoir pressure is estimated from actual well data and reservoir characteristics. Using the pressure at the bottom of producing wells as the control variable, the time paths of profit-maximizing joint production of oil and natural gas under various tax policies are obtained with a dynamic optimization approach. The results emerge from numerical maximization of estimated future expected revenues net of variable costs in the presence of taxation. A higher discount rate shifts production forward in time and prolongs the production plan. The analysis of state and corporate income taxes and the depletion allowance reveals the changes in revenues to the firm, the state, and the federal government. 18 refs., 3 figs., 4 tabs

  3. [Optimization of organizational approaches to management of patients with atherosclerosis].

    Science.gov (United States)

    Barbarash, L S; Barbarash, O L; Artamonova, G V; Sumin, A N

    2014-01-01

    Despite undoubted achievements of modern cardiology in the prevention and treatment of atherosclerosis, cardiologists, neurologists, and vascular surgeons still face severe stenotic atherosclerotic lesions in different vascular regions, both symptomatic and asymptomatic. As a rule, hemodynamically significant stenoses of different locations are found only after an acute vascular event has occurred. In this regard, active detection of arterial stenoses in various vascular territories at the patient's first contact with care providers, when presenting with symptoms of ischemia of any location, appears crucial, as does further monitoring of these stenoses. The article is dedicated to innovative organizational approaches to the provision of healthcare to patients suffering from circulatory system diseases that have contributed to the improvement of the demographic situation in Kuzbass.

  4. A Dynamic Programming Model for Optimizing Frequency of Time-Lapse Seismic Monitoring in Geological CO2 Storage

    Science.gov (United States)

    Bhattacharjya, D.; Mukerji, T.; Mascarenhas, O.; Weyant, J.

    2005-12-01

    programming to optimize over the entire finite time horizon. We use a Monte Carlo approach to explore trade-offs between survey costs, remediation costs, and survey frequency, and to analyze the sensitivity to leakage probabilities and the carbon tax. The model can be useful in determining a monitoring regime appropriate to a specific site's risk and set of remediation options, rather than a generic one based on a maximum downside-risk threshold for CO2 storage as a whole. This may have implications for the overall costs associated with deploying carbon capture and storage on a large scale.
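
    The survey-cost versus remediation-cost trade-off described above can be illustrated with a toy Monte Carlo sketch; all probabilities and costs below are invented for illustration and do not come from the study.

```python
import random

# Toy Monte Carlo sketch of the survey-frequency trade-off: more frequent
# seismic surveys cost more but detect a leak sooner, reducing remediation
# cost. Every number here is a hypothetical illustration value.
SURVEY_COST = 1.0            # cost per seismic survey
REMEDIATION_PER_YEAR = 20.0  # extra remediation cost per year a leak goes undetected
P_LEAK_PER_YEAR = 0.02       # assumed annual leakage probability
HORIZON = 50                 # project horizon in years

def expected_cost(interval: int, n_trials: int = 5000) -> float:
    rng = random.Random(42)  # fixed seed for a reproducible sketch
    total = 0.0
    for _ in range(n_trials):
        cost = (HORIZON // interval) * SURVEY_COST
        for year in range(HORIZON):
            if rng.random() < P_LEAK_PER_YEAR:
                # leak is only detected at the next scheduled survey
                detection = ((year // interval) + 1) * interval
                cost += (min(detection, HORIZON) - year) * REMEDIATION_PER_YEAR
                break
        total += cost
    return total / n_trials

costs = {k: expected_cost(k) for k in (1, 2, 5, 10, 25)}
best_interval = min(costs, key=costs.get)
```

Under these invented parameters, an intermediate survey interval minimizes expected total cost, mirroring the trade-off the abstract describes.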

  5. Tapping and listening: a new approach to bolt looseness monitoring

    Science.gov (United States)

    Kong, Qingzhao; Zhu, Junxiao; Ho, Siu Chun Michael; Song, Gangbing

    2018-07-01

    Bolted joints are among the most common building blocks used across different types of structures, and are often the key components that sew all other structural parts together. Monitoring and assessment of looseness in bolted structures is one of the most attractive topics in mechanical, aerospace, and civil engineering. This paper presents a new percussion-based non-destructive approach to determine the health condition of bolted joints with the help of machine learning. The proposed method is very similar to the percussive diagnostic techniques used in clinical examinations to diagnose the health of patients. Due to the different interfacial properties among the bolts, nuts and the host structure, bolted joints generate unique sounds when they are excited by impacts, such as from tapping. Power spectrum density, as a signal feature, was used to recognize and classify recorded tapping data. A machine learning model using the decision tree method was employed to identify the bolt looseness level. Experiments demonstrated that the newly proposed method for bolt looseness detection is very easy to implement by ‘listening to tapping’ and that its monitoring accuracy is very high. With rapid advances in robotics, the proposed approach has great potential to be combined with robotics and machine learning to produce a cyber-physical system that can automatically inspect and determine the health of a structure.
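
    The tap-and-listen pipeline (spectral features of a tap response feeding a tree-type classifier) can be sketched in miniature; the synthetic signals, frequency bands, and the one-rule stump below are assumptions standing in for the paper's recorded data and decision-tree model.

```python
import math
import cmath

# Toy sketch of the 'tap and listen' idea: band energies of a tap signal as
# spectral features, plus a one-rule classifier. The frequencies, the assumed
# effect of looseness, and the decision rule are invented for illustration.
def band_energy(signal, k_lo, k_hi):
    """Crude power-spectrum feature: summed energy of DFT bins k_lo..k_hi."""
    n = len(signal)
    total = 0.0
    for k in range(k_lo, k_hi + 1):
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        total += abs(coeff) ** 2 / n
    return total

def synth_tap(freq_bin, n=128):
    """Synthetic tap response dominated by a single frequency bin."""
    return [math.sin(2 * math.pi * freq_bin * t / n) for t in range(n)]

# Assumption: a loose bolt shifts tap energy toward lower frequencies.
tight = synth_tap(30)  # high-frequency ring
loose = synth_tap(8)   # duller, low-frequency response

def classify(signal):
    # Decision-stump stand-in for the paper's decision-tree classifier.
    return "loose" if band_energy(signal, 0, 15) > band_energy(signal, 16, 63) else "tight"
```

A real system would replace the synthetic taps with recorded audio and the stump with a trained decision tree over many PSD features.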

  6. Sampling optimization trade-offs for long-term monitoring of gamma dose rates

    NARCIS (Netherlands)

    Melles, S.J.; Heuvelink, G.B.M.; Twenhöfel, C.J.W.; Stöhlker, U.

    2008-01-01

    This paper applies a recently developed optimization method to examine the design of networks that monitor radiation under routine conditions. Annual gamma dose rates were modelled by combining regression with interpolation of the regression residuals using spatially exhaustive predictors and an
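
    The two-step mapping idea referenced above (a regression on spatially exhaustive predictors followed by interpolation of the regression residuals) can be sketched as follows; inverse-distance weighting stands in for the geostatistical interpolation step, and all station data are invented.

```python
# Minimal regression-plus-residual-interpolation sketch. A linear regression
# on a spatially exhaustive predictor supplies the trend; inverse distance
# weighting (IDW) of the residuals stands in for the geostatistical step.
# Station coordinates, predictor values, and dose rates are invented.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

# (x-coordinate, predictor value, observed annual gamma dose rate) per station
stations = [(0.0, 1.0, 2.1), (1.0, 2.0, 3.9), (2.0, 3.0, 6.2), (3.0, 4.0, 7.8)]
a, b = fit_line([p for _, p, _ in stations], [d for _, _, d in stations])
residuals = [(x, d - (a + b * p)) for x, p, d in stations]

def predict(x, predictor):
    # regression trend plus IDW-interpolated residual
    weights = [(1.0 / ((x - xs) ** 2 + 1e-9), r) for xs, r in residuals]
    resid = sum(w * r for w, r in weights) / sum(w for w, _ in weights)
    return a + b * predictor + resid
```

At the station locations the prediction honours the observations, which is the property the residual-interpolation step provides.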

  7. Health technology assessment to optimize health technology utilization: using implementation initiatives and monitoring processes.

    Science.gov (United States)

    Frønsdal, Katrine B; Facey, Karen; Klemp, Marianne; Norderhaug, Inger Natvig; Mørland, Berit; Røttingen, John-Arne

    2010-07-01

    The way in which a health technology is used in any particular health system depends on the decisions and actions of a variety of stakeholders, the local culture, and context. In 2009, the HTAi Policy Forum considered how health technology assessment (HTA) could be improved to optimize the use of technologies (in terms of uptake, change in use, or disinvestment) in such complex systems. In scoping, it was agreed to focus on initiatives to implement evidence-based guidance and on monitoring activities. A review identified systematic reviews of implementation initiatives and monitoring activities. A two-day deliberative workshop was held to discuss key papers, members' experiences, and collectively address key questions. This consensus paper was developed by email and finalized at a post-workshop meeting. Evidence suggests that the impact and use of HTA could be increased by ensuring timely delivery of relevant reports to clearly determined policy receptor (decision-making) points. To achieve this, the breadth of assessment and implementation initiatives, such as incentives and targeted, intelligent dissemination of HTA results, need to be considered. HTA stakeholders undertake a variety of monitoring activities that could inform the optimal use of a technology. However, the quality of these data varies and they are often not submitted to an HTA. Monitoring data should be sufficiently robust that they can be used in HTA to inform the optimal use of technology. Evidence-based implementation initiatives should be developed for HTA, to better inform decision makers at all levels in a health system about the optimal use of technology.

  8. A Risk-Based Multi-Objective Optimization Concept for Early-Warning Monitoring Networks

    Science.gov (United States)

    Bode, F.; Loschko, M.; Nowak, W.

    2014-12-01

    Groundwater is a resource for drinking water and hence needs to be protected from contaminations. However, many well catchments include an inventory of known and unknown risk sources which cannot be eliminated, especially in urban regions. As a matter of risk control, all these risk sources should be monitored. A one-to-one monitoring situation for each risk source would lead to a cost explosion and is even impossible for unknown risk sources. However, smart optimization concepts could help to find promising low-cost monitoring network designs. In this work we develop a concept to plan monitoring networks using multi-objective optimization. Our considered objectives are to maximize the probability of detecting all contaminations and the early warning time, and to minimize the installation and operating costs of the monitoring network. A qualitative risk ranking is used to prioritize the known risk sources for monitoring. The unknown risk sources can neither be located nor ranked. Instead, we represent them by a virtual line of risk sources surrounding the production well. We classify risk sources into four different categories: severe, medium and tolerable for known risk sources, and an extra category for the unknown ones. With that, early warning time and detection probability become individual objectives for each risk class. Thus, decision makers can identify monitoring networks which are valid for controlling the top risk sources, and evaluate the capabilities (or search for a least-cost upgrade) to also cover moderate, tolerable and unknown risk sources. Monitoring networks which are valid for the remaining risk also cover all other risk sources, but the early-warning time suffers. The data provided for the optimization algorithm are calculated in a preprocessing step by a flow and transport model. Uncertainties due to hydro(geo)logical phenomena are taken into account by Monte-Carlo simulations. To avoid numerical dispersion during the transport simulations we use the
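
    The multi-objective selection step described above can be illustrated by filtering candidate monitoring networks down to the non-dominated (Pareto) set; the designs and objective values below are hypothetical.

```python
# Toy Pareto-filtering sketch for monitoring network designs. Objectives:
# detection probability (maximize), early-warning time in days (maximize),
# and cost (minimize). All candidate values are invented.
designs = {
    "A": (0.95, 30, 12.0),
    "B": (0.90, 45, 10.0),
    "C": (0.80, 20, 11.0),  # dominated by B
    "D": (0.99, 25, 20.0),
    "E": (0.90, 40, 10.5),  # dominated by B
}

def dominates(u, v):
    """u dominates v: no worse in every objective, strictly better in one."""
    no_worse = (u[0] >= v[0], u[1] >= v[1], u[2] <= v[2])
    strictly = (u[0] > v[0], u[1] > v[1], u[2] < v[2])
    return all(no_worse) and any(strictly)

pareto = {name for name, obj in designs.items()
          if not any(dominates(o, obj)
                     for n, o in designs.items() if n != name)}
```

Decision makers would then pick from the surviving designs according to the risk class they must control, as the abstract describes.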

  9. Supplemental Assessment of the Y-12 Groundwater Protection Program Using Monitoring and Remediation Optimization System Software

    Energy Technology Data Exchange (ETDEWEB)

    Elvado Environmental LLC; GSI Environmental LLC

    2009-01-01

    A supplemental quantitative assessment of the Groundwater Protection Program (GWPP) at the Y-12 National Security Complex (Y-12) in Oak Ridge, TN was performed using the Monitoring and Remediation Optimization System (MAROS) software. This application was previously used as part of a similar quantitative assessment of the GWPP completed in December 2005, hereafter referenced as the 'baseline' MAROS assessment (BWXT Y-12 L.L.C. [BWXT] 2005). The MAROS software contains modules that apply statistical analysis techniques to an existing GWPP analytical database in conjunction with hydrogeologic factors, regulatory framework, and the location of potential receptors, to recommend an improved groundwater monitoring network and optimum sampling frequency for individual monitoring locations. The goal of this supplemental MAROS assessment of the Y-12 GWPP is to review and update monitoring network optimization recommendations resulting from the 2005 baseline report using data collected through December 2007. The supplemental MAROS assessment is based on the findings of the baseline MAROS assessment and includes only the groundwater sampling locations (wells and natural springs) currently granted 'Active' status in accordance with the Y-12 GWPP Monitoring Optimization Plan (MOP). The results of the baseline MAROS assessment provided technical rationale regarding the 'Active' status designations defined in the MOP (BWXT 2006). One objective of the current report is to provide a quantitative review of data collected from Active but infrequently sampled wells to confirm concentrations at these locations. This supplemental MAROS assessment does not include the extensive qualitative evaluations similar to those presented in the baseline report.

  10. Workforce Optimization for Bank Operation Centers: A Machine Learning Approach

    Directory of Open Access Journals (Sweden)

    Sefik Ilkin Serengil

    2017-12-01

    Full Text Available Online banking systems have evolved and improved in recent years with the use of mobile and online technologies; money transfer transactions on these channels can be performed without delay or human interaction. Commercial customers, however, still tend to transfer money at bank branches due to several concerns. Bank operation centers serve to reduce the operational workload of branches. Centralized management also offers personalized service by appointed expert employees in these centers. Inherently, the workload volume of money transfer transactions changes dramatically from hour to hour. Therefore, the workforce should be planned instantly or in advance to save labor and increase operational efficiency. This paper introduces a hybrid multi-stage approach for workforce planning in bank operation centers by the application of supervised and unsupervised learning algorithms. The expected workload is predicted with supervised learning, whereas employees are clustered into different skill groups with unsupervised learning to match transactions to suitable employees. Finally, workforce optimization is analyzed for the proposed approach on production data.
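
    A minimal sketch of the staffing step that follows the workload prediction: convert a predicted hourly transaction volume into required staff per hour. The per-employee processing rate and the volumes are assumed values, not figures from the paper.

```python
import math

# Back-of-envelope staffing sketch: the supervised-learning model's predicted
# hourly transaction volume is turned into a per-hour headcount. The
# processing rate and volumes are hypothetical illustration values.
RATE = 25  # transactions one employee can process per hour (assumed)

predicted_volume = {9: 120, 10: 260, 11: 480, 12: 150}  # hour -> transactions

staff_needed = {h: math.ceil(v / RATE) for h, v in predicted_volume.items()}
peak_hour = max(staff_needed, key=staff_needed.get)
```

In the paper's setting, the clustering step would further split this headcount across skill groups matched to transaction types.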

  11. Optimal Control Approaches to the Aggregate Production Planning Problem

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2015-12-01

    Full Text Available In the area of production planning and control, the aggregate production planning (APP) problem represents a great challenge for decision makers in production-inventory systems. The trade-off between inventory and capacity is known as the APP problem. To address it, static and dynamic models have been proposed, which in general have several shortcomings. It is the premise of this paper that the main drawback of these proposals is that they do not take into account the dynamic nature of the APP. For this reason, we propose an Optimal Control (OC) formulation via energy-based and Hamiltonian-present-value approaches. The main contribution of this paper is a mathematical model which integrates a second-order dynamical system coupled with a first-order system, incorporating production rate, inventory level, and capacity, as well as the associated workforce cost, in the same formulation. A novel result is that the Hamiltonian-present-value OC formulation reduces the inventory level compared with the pure energy-based approach for APP. A set of simulations is provided which verifies the theoretical contribution of this work.

  12. Optimizing Concurrent M3-Transactions: A Fuzzy Constraint Satisfaction Approach

    Directory of Open Access Journals (Sweden)

    Peng LI

    2004-10-01

    Full Text Available Due to their high connectivity and great convenience, many E-commerce application systems have a high transaction volume. Consequently, the system state changes rapidly, and customers are likely to issue transactions based on out-of-date state information. Thus, the potential for transaction abortion increases greatly. To address this problem, we proposed an M3-transaction model. An M3-transaction is a generalized transaction in which users can express their preferences by specifying multiple criteria and optional data resources simultaneously within one request. In this paper, we introduce transaction grouping and group evaluation techniques. We consider evaluating together a group of M3-transactions that arrive at the system within a short duration. The system makes optimal decisions in allocating data to transactions to achieve better customer satisfaction and a lower transaction failure rate. We apply a fuzzy constraint satisfaction approach for decision-making. We also conduct experimental studies to evaluate the performance of our approach. The results show that the M3-transaction model with group evaluation is more resilient to failure and yields much better performance than the traditional transaction model.
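
    The fuzzy-constraint decision step can be sketched as follows: each candidate data allocation receives a satisfaction degree per user criterion, the degrees are combined with the min operator (a common fuzzy conjunction), and the best allocation is selected. All degrees are invented.

```python
# Toy fuzzy constraint satisfaction sketch: satisfaction degrees in [0, 1]
# per criterion, aggregated with min (fuzzy conjunction). The allocations
# and degrees are invented; the paper's actual model is richer.
allocations = {
    "alloc1": {"price": 0.9, "delivery": 0.4, "quantity": 1.0},
    "alloc2": {"price": 0.7, "delivery": 0.8, "quantity": 0.9},
    "alloc3": {"price": 0.5, "delivery": 0.9, "quantity": 0.6},
}

def overall(degrees):
    """Overall satisfaction: the weakest criterion dominates."""
    return min(degrees.values())

best = max(allocations, key=lambda a: overall(allocations[a]))
```

Here `alloc1` scores highest on two criteria but is dragged down by its weak delivery degree, so the min-aggregation prefers the balanced `alloc2`.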

  13. Treatment of chronic myeloid leukemia: assessing risk, monitoring response, and optimizing outcome.

    Science.gov (United States)

    Shanmuganathan, Naranie; Hiwase, Devendra Keshaorao; Ross, David Morrall

    2017-12-01

    Over the past two decades, tyrosine kinase inhibitors have become the foundation of chronic myeloid leukemia (CML) treatment. The choice between imatinib and newer tyrosine kinase inhibitors (TKIs) needs to be balanced against the known toxicity and efficacy data for each drug, the therapeutic goal being to maximize molecular response assessed by BCR-ABL RQ-PCR assay. There is accumulating evidence that the early achievement of molecular targets is a strong predictor of superior long-term outcomes. Early response assessment provides the opportunity to intervene early with the aim of ensuring an optimal response. Failure to achieve milestones or loss of response can have diverse causes. We describe how clinical and laboratory monitoring can be used to ensure that each patient is achieving an optimal response and, in patients who do not reach optimal response milestones, how the monitoring results can be used to detect resistance and understand its origins.

  14. Ant colony optimization and neural networks applied to nuclear power plant monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Gean Ribeiro dos; Andrade, Delvonei Alves de; Pereira, Iraci Martinez, E-mail: gean@usp.br, E-mail: delvonei@ipen.br, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    A recurring challenge in production processes is the development of monitoring and diagnosis systems. Such systems help detect unexpected changes and interruptions, preventing losses and mitigating risks. Artificial Neural Networks (ANNs) have been extensively used in creating monitoring systems. Usually the ANNs created to solve this kind of problem take into account only parameters such as the number of inputs, outputs, and hidden layers. The resulting networks are generally fully connected and their topology is not improved. This work uses an Ant Colony Optimization (ACO) algorithm to create a tuned neural network. The ACO search algorithm uses Back Error Propagation (BP) to optimize the network topology by suggesting the best neuron connections. The resulting ANN is applied to monitoring the IEA-R1 research reactor at IPEN. (author)

  15. Ant colony optimization and neural networks applied to nuclear power plant monitoring

    International Nuclear Information System (INIS)

    Santos, Gean Ribeiro dos; Andrade, Delvonei Alves de; Pereira, Iraci Martinez

    2015-01-01

    A recurring challenge in production processes is the development of monitoring and diagnosis systems. Such systems help detect unexpected changes and interruptions, preventing losses and mitigating risks. Artificial Neural Networks (ANNs) have been extensively used in creating monitoring systems. Usually the ANNs created to solve this kind of problem take into account only parameters such as the number of inputs, outputs, and hidden layers. The resulting networks are generally fully connected and their topology is not improved. This work uses an Ant Colony Optimization (ACO) algorithm to create a tuned neural network. The ACO search algorithm uses Back Error Propagation (BP) to optimize the network topology by suggesting the best neuron connections. The resulting ANN is applied to monitoring the IEA-R1 research reactor at IPEN. (author)

  16. Optimization-Based Approaches to Control of Probabilistic Boolean Networks

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2017-02-01

    Full Text Available Control of gene regulatory networks is one of the fundamental topics in systems biology. In the last decade, control theory of Boolean networks (BNs, which is well known as a model of gene regulatory networks, has been widely studied. In this review paper, our previously proposed methods on optimal control of probabilistic Boolean networks (PBNs are introduced. First, the outline of PBNs is explained. Next, an optimal control method using polynomial optimization is explained. The finite-time optimal control problem is reduced to a polynomial optimization problem. Furthermore, another finite-time optimal control problem, which can be reduced to an integer programming problem, is also explained.
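
    A brute-force finite-horizon sketch of Boolean-network control, using a deterministic two-gene toy model rather than the probabilistic model or the polynomial/integer-programming reductions of the paper; the update rules, target state, and costs are assumptions.

```python
from itertools import product

# Tiny brute-force sketch of finite-time optimal control of a Boolean
# network: enumerate all binary control sequences over the horizon and pick
# the cheapest one that drives the state to a target. The two-gene update
# rules, target, and cost weights are invented for illustration.
def step(x1, x2, u):
    # Invented update rules: x1' = x2 OR u, x2' = x1 AND NOT u
    return (x2 or u), (x1 and not u)

TARGET = (1, 0)  # desired terminal state
HORIZON = 3

def cost(seq, x0=(0, 0)):
    x = x0
    for u in seq:
        x = step(*x, u)
    # control effort plus a large penalty for missing the target
    return sum(seq) + (0 if x == TARGET else 10)

best_seq = min(product((0, 1), repeat=HORIZON), key=cost)
best_cost = cost(best_seq)
```

For a probabilistic Boolean network, the inner simulation would become an expectation over context switches, which is exactly what the polynomial and integer-programming formulations in the review make tractable.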

  17. A combined stochastic programming and optimal control approach to personal finance and pensions

    DEFF Research Database (Denmark)

    Konicz, Agnieszka Karolina; Pisinger, David; Rasmussen, Kourosh Marjani

    2015-01-01

    The paper presents a model that combines a dynamic programming (stochastic optimal control) approach and a multi-stage stochastic linear programming approach (SLP), integrated into one SLP formulation. Stochastic optimal control produces an optimal policy that is easy to understand and implement....

  18. Defending against the Advanced Persistent Threat: An Optimal Control Approach

    Directory of Open Access Journals (Sweden)

    Pengdeng Li

    2018-01-01

    Full Text Available The new cyberattack pattern of advanced persistent threat (APT has posed a serious threat to modern society. This paper addresses the APT defense problem, that is, the problem of how to effectively defend against an APT campaign. Based on a novel APT attack-defense model, the effectiveness of an APT defense strategy is quantified. Thereby, the APT defense problem is modeled as an optimal control problem, in which an optimal control stands for a most effective APT defense strategy. The existence of an optimal control is proved, and an optimality system is derived. Consequently, an optimal control can be figured out by solving the optimality system. Some examples of the optimal control are given. Finally, the influence of some factors on the effectiveness of an optimal control is examined through computer experiments. These findings help organizations to work out policies of defending against APTs.

  19. MVMO-based approach for optimal placement and tuning of ...

    African Journals Online (AJOL)

    bus (New England) test system. Numerical results include performance comparisons with other metaheuristic optimization techniques, namely, comprehensive learning particle swarm optimization (CLPSO), genetic algorithm with multi-parent ...

  20. New Approach to Monitor Transboundary Particulate Pollution over Northeast Asia

    Science.gov (United States)

    Park, M. E.; Song, C. H.; Park, R. S.; Lee, Jaehwa; Kim, J.; Lee, S.; Woo, J. H.; Carmichael, G. R.; Eck, Thomas F.; Holben, Brent N.

    2014-01-01

    A new approach to more accurately monitor and evaluate transboundary particulate matter (PM) pollution is introduced, based on aerosol optical products from Korea's Geostationary Ocean Color Imager (GOCI). The area studied is Northeast Asia (including the eastern parts of China, the Korean peninsula and Japan), where GOCI has been monitoring since June 2010. The hourly multi-spectral aerosol optical data retrieved from the GOCI sensor onboard the geostationary satellite COMS (Communication, Ocean, and Meteorology Satellite) through the Yonsei aerosol retrieval algorithm are first presented and used in this study. The GOCI-retrieved aerosol optical data are integrated with estimated aerosol distributions from US EPA Models-3/CMAQ (Community Multi-scale Air Quality) v4.5.1 model simulations via a data assimilation technique, thereby making the aerosol data spatially continuous and available even for cloud-contaminated cells. The assimilated aerosol optical data are utilized to provide quantitative estimates of transboundary PM pollution from China to the Korean peninsula and Japan. For the period of 1 April to 31 May 2011, this analysis estimates that AOD (aerosol optical depth), as a proxy for PM2.5 or PM10, increased during long-range transport events by 117-265% compared to the background average at the four AERONET sites in Korea, and by 121% on average over the entire Korean peninsula. This paper demonstrates that the use of multi-spectral AOD retrievals from geostationary satellites can improve estimates of transboundary PM pollution. Such data will become more widely available later this decade when new sensors such as the GEMS (Geostationary Environment Monitoring Spectrometer) and GOCI-2 are scheduled to be launched.

  1. Rational risk-based decision support for drinking water well managers by optimized monitoring designs

    Science.gov (United States)

    Enzenhöfer, R.; Geiges, A.; Nowak, W.

    2011-12-01

    Advection-based well-head protection zones are commonly used to manage the contamination risk of drinking water wells. Considering the insufficient knowledge about hazards and transport properties within the catchment, current Water Safety Plans recommend that catchment managers and stakeholders know, control and monitor all possible hazards within the catchments and perform rational risk-based decisions. Our goal is to supply catchment managers with the required probabilistic risk information, and to generate tools that allow for optimal and rational allocation of resources between improved monitoring versus extended safety margins and risk mitigation measures. To support risk managers with the indispensable information, we address the epistemic uncertainty of advective-dispersive solute transport and well vulnerability (Enzenhoefer et al., 2011) within a stochastic simulation framework. Our framework can separate between uncertainty of contaminant location and actual dilution of peak concentrations by resolving heterogeneity with high-resolution Monte-Carlo simulation. To keep computational costs low, we solve the reverse temporal moment transport equation. Only in post-processing do we recover the time-dependent solute breakthrough curves and the deduced well vulnerability criteria from temporal moments by non-linear optimization. Our first step towards optimal risk management is optimal positioning of sampling locations and optimal choice of data types to best reduce the epistemic prediction uncertainty for well-head delineation, using the cross-bred Likelihood Uncertainty Estimator (CLUE, Leube et al., 2011) for optimal sampling design. Better monitoring leads to more reliable and realistic protection zones and thus helps catchment managers to better justify smaller, yet conservative safety margins. In order to allow an optimal choice in sampling strategies, we compare the trade-off in monitoring versus the delineation costs by accounting for ill
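
    The temporal-moment idea mentioned above can be illustrated on a synthetic breakthrough curve: the mean solute arrival time is the first temporal moment normalized by the zeroth. The curve below is invented.

```python
# Small numeric sketch of temporal moments of a breakthrough curve c(t):
# m0 = integral of c dt, m1 = integral of t*c dt, and the mean arrival time
# is m1/m0. The synthetic triangular curve is for illustration only.
def moments(times, conc):
    """Trapezoidal zeroth and first temporal moments of c(t)."""
    m0 = m1 = 0.0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        m0 += 0.5 * (conc[i] + conc[i + 1]) * dt
        m1 += 0.5 * (times[i] * conc[i] + times[i + 1] * conc[i + 1]) * dt
    return m0, m1

times = [float(t) for t in range(0, 11)]
# symmetric triangular breakthrough peaking at t = 5
conc = [max(0.0, 5.0 - abs(t - 5.0)) for t in times]
m0, m1 = moments(times, conc)
mean_arrival = m1 / m0
```

Solving for moments directly (as the framework above does with the reverse moment equation) avoids time-stepping the full transport simulation.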

  2. Remotely Sensed Monitoring of Small Reservoir Dynamics: A Bayesian Approach

    Directory of Open Access Journals (Sweden)

    Dirk Eilander

    2014-01-01

    Full Text Available Multipurpose small reservoirs are important for livelihoods in rural semi-arid regions. To manage and plan these reservoirs and to assess their hydrological impact at a river basin scale, it is important to monitor their water storage dynamics. This paper introduces a Bayesian approach for monitoring small reservoirs with radar satellite images. The newly developed growing Bayesian classifier has a high degree of automation, can readily be extended with auxiliary information and reduces the confusion error to the land-water boundary pixels. A case study has been performed in the Upper East Region of Ghana, based on Radarsat-2 data from November 2012 until April 2013. Results show that the growing Bayesian classifier can deal with the spatial and temporal variability in synthetic aperture radar (SAR backscatter intensities from small reservoirs. Due to its ability to incorporate auxiliary information, the algorithm is able to delineate open water from SAR imagery with a low land-water contrast in the case of wind-induced Bragg scattering or limited vegetation on the land surrounding a small reservoir.
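
    The Bayesian pixel-classification core of such an approach can be sketched with Gaussian class likelihoods on SAR backscatter and a class prior; this toy stands in for the paper's growing Bayesian classifier, and all parameters are invented.

```python
import math

# Bare-bones Bayesian pixel classification sketch: Gaussian class likelihoods
# on SAR backscatter (in dB) combined with a prior. The class means, spreads,
# and priors are invented stand-ins for values learned from imagery.
CLASSES = {
    # class: (mean backscatter dB, std dev, prior probability)
    "water": (-18.0, 2.0, 0.3),
    "land": (-8.0, 3.0, 0.7),
}

def posterior(db):
    scores = {}
    for name, (mu, sigma, prior) in CLASSES.items():
        lik = math.exp(-0.5 * ((db - mu) / sigma) ** 2) / \
              (sigma * math.sqrt(2 * math.pi))
        scores[name] = prior * lik
    z = sum(scores.values())
    return {k: v / z for k, v in scores.items()}

p = posterior(-17.0)          # a dark pixel, typical of smooth open water
label = max(p, key=p.get)
```

The 'growing' aspect of the paper's classifier would update these class parameters as confidently labelled pixels accumulate.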

  3. A Google Trends-based approach for monitoring NSSI

    Directory of Open Access Journals (Sweden)

    Bragazzi NL

    2013-12-01

    Full Text Available Nicola Luigi Bragazzi DINOGMI, Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health, Section of Psychiatry, University of Genoa, Genoa, Italy Abstract: Non-suicidal self-injury (NSSI is an intentional, direct, and socially unacceptable behavior resulting in the destruction of one's own body tissues with no intention of dying or committing suicide, even though it is associated with a higher risk of attempted, planned, or just considered suicide. In this preliminary report, we introduce the concept of “NSSI 2.0”; that is to say, the study of the Internet usage by subjects with NSSI, and we introduce a Google Trends-based approach for monitoring NSSI, called NSSI infodemiology and infoveillance. Despite some limitations, Google Trends has already proven to be reliable for infectious diseases monitoring, and here we extend its application and potentiality in the field of suicidology. Ad hoc web portals and surveys could be designed in light of the reported results for helping people with NSSI. Keywords: infodemiology, infoveillance, Internet, non-suicidal self-injury

  4. [Research and design for optimal position of electrocardio-electrodes in monitoring clothing for men].

    Science.gov (United States)

    Liang, Lijun; Hu, Yao; Liu, Hao; Li, Xiaojiu; Li, Jin; He, Yin

    2017-04-01

    In order to effectively reduce the mortality rate of patients with cardiovascular disease, improve the accuracy of electrocardiogram (ECG) signal acquisition, and reduce motion artifacts caused by electrodes placed in inappropriate locations in clothing for ECG measurement, this article investigates the optimal placement of ECG electrodes in male monitoring clothing using a three-lead monitoring method. Test points were selected in the 3-lead ECG monitoring clothing for men. By comparing the ECG traces and the power spectra of the signals acquired at each group of points, we determined the best locations for the ECG electrodes in the male monitoring clothing. Motion artifacts caused by improper electrode location were significantly reduced when the electrodes were placed at the optimal positions. Electrode position is crucial for ECG monitoring clothing: the stability of the acquired ECG signal improves significantly when electrodes are placed at optimal locations.

  5. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    International Nuclear Information System (INIS)

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

    This paper proposes a comprehensive framework including a multi-objective interval optimization model and an evidential reasoning (ER) approach to solve the unit sizing problem of small-scale integrated energy systems with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. Aiming to simultaneously consider the cost and risk of a business investment, the average and deviation of the life cycle cost (LCC) of the integrated energy system are formulated. In order to solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing an adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, the ER approach is applied to deal with the multiple interests of an investor at the business decision-making stage and to determine the final unit sizing solution from the Pareto-optimal solutions. This paper reports on the simulation results obtained using a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and ER approach in tackling the unit sizing problem of integrated energy systems considering the integration of uncertain wind and solar energies. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.

  6. A multiscale optimization approach to detect exudates in the macula.

    Science.gov (United States)

    Agurto, Carla; Murray, Victor; Yu, Honggang; Wigdahl, Jeffrey; Pattichis, Marios; Nemeth, Sheila; Barriga, E Simon; Soliz, Peter

    2014-07-01

    Pathologies that occur on or near the fovea, such as clinically significant macular edema (CSME), represent a high risk of vision loss. The presence of exudates, lipid residues of serous leakage from damaged capillaries, has been associated with CSME, in particular if they are located within one optic disc-diameter of the fovea. In this paper, we present an automatic system to detect exudates in the macula. Our approach uses optimal thresholding of instantaneous amplitude (IA) components that are extracted from multiple frequency scales to generate candidate exudate regions. For each candidate region, we extract color, shape, and texture features that are used for classification. Classification is performed using partial least squares (PLS). We tested the performance of the system on two different databases of 652 and 400 images. The system achieved an area under the receiver operating characteristic curve (AUC) of 0.96 for the combination of both databases and an AUC of 0.97 for each of them when they were evaluated independently.

  7. A nonlinear optimal control approach for chaotic finance dynamics

    Science.gov (United States)

    Rigatos, G.; Siano, P.; Loia, V.; Tommasetti, A.; Troisi, O.

    2017-11-01

    A new nonlinear optimal control approach is proposed for stabilization of the dynamics of a chaotic finance model. The dynamic model of the financial system, which expresses the interaction between the interest rate, the investment demand, the price exponent and the profit margin, undergoes approximate linearization around local operating points. These local equilibria are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and the last value of the control inputs vector that was exerted on it. The approximate linearization makes use of Taylor series expansion and of the computation of the associated Jacobian matrices. The truncation of higher order terms in the Taylor series expansion is considered to be a modelling error that is compensated by the robustness of the control loop. As the control algorithm runs, the temporary equilibrium is shifted towards the reference trajectory and finally converges to it. The control method needs to compute an H-infinity feedback control law at each iteration, and requires the repetitive solution of an algebraic Riccati equation. Through Lyapunov stability analysis it is shown that an H-infinity tracking performance criterion holds for the control loop. This implies elevated robustness against model approximations and external perturbations. Moreover, under moderate conditions the global asymptotic stability of the control loop is proven.
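
    The core computational step in such approaches is solving a Riccati equation for a stabilizing feedback gain at each linearization point. A minimal sketch, assuming a hypothetical 2-state discrete-time linearization (the matrices A, B, Q, R below are illustrative, not taken from the finance model):

```python
import numpy as np

# Illustrative local linearization (discretized) of a 2-state system.
A = np.array([[1.0, 0.1], [0.0, 0.95]])   # local Jacobian
B = np.array([[0.0], [0.1]])              # input matrix
Q = np.eye(2)                             # state weighting
R = np.array([[1.0]])                     # control weighting

def dare_gain(A, B, Q, R, iters=500):
    """Solve the discrete algebraic Riccati equation by fixed-point
    iteration and return the stabilizing feedback gain K."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = dare_gain(A, B, Q, R)
closed_loop = A - B @ K
# With the feedback gain applied, all closed-loop eigenvalues lie
# strictly inside the unit circle, i.e. the local equilibrium is stabilized.
```

    In the paper's scheme, this solve would be repeated at every iteration as the operating point shifts toward the reference trajectory.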

  8. A participatory approach to design monitoring indicators of production diseases in organic dairy farms.

    Science.gov (United States)

    Duval, J E; Fourichon, C; Madouasse, A; Sjöström, K; Emanuelson, U; Bareille, N

    2016-06-01

    Production diseases have an important negative effect on the health and welfare of dairy cows. Although organic animal production systems aim for high animal health levels, compliance with European organic farming regulations does not guarantee that this is achieved. Herd health and production management (HHPM) programs aim at optimizing herd health by preventing disease and production problems, but as yet they have not been consistently implemented by farmers. We hypothesize that one reason is the mismatch between what scientists propose as indicators for herd health monitoring and what farmers would like to use. Herd health monitoring is a key element in HHPM programs as it permits a regular assessment of the functioning of the different components of the production process. Planned observations or measurements of these components are indispensable for this monitoring. In this study, a participatory approach was used to create an environment in which farmers could adapt the indicators proposed by scientists for monitoring the five main production diseases on dairy cattle farms. The adaptations of the indicators were characterized and the farmers' explanations for the changes made were described. The study was conducted in France and Sweden, which differ in terms of their national organic regulations and existing advisory services. In both countries, twenty certified organic dairy farmers and their animal health management advisors participated in the study. All of the farmers adapted the initial monitoring plan proposed by scientists to the specific production and animal health situation on their farm. This resulted in forty unique and farm-specific combinations of indicators for herd health monitoring. All but three farmers intended to monitor five health topics simultaneously using the constructed indicators. The qualitative analysis of the explanations given by farmers for their choices enabled an understanding of farmers' reasons for selecting and adapting the proposed indicators.

  9. The application of entropy weight TOPSIS method to optimal points in monitoring the Xinjiang radiation environment

    International Nuclear Information System (INIS)

    Feng Guangwen; Hu Youhua; Liu Qian

    2009-01-01

    In this paper, the application of the entropy weight TOPSIS method to the optimal layout of points for monitoring the Xinjiang radiation environment is introduced. With the help of SAS software, the method was found to be feasible and effective. It can provide a reference for further radiation environment monitoring in similar regions. As the method brings great convenience and greatly reduces the inspection workload, it is simple, flexible and effective for comprehensive evaluation. (authors)
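
    The entropy weight TOPSIS method itself is standard and can be sketched compactly. A minimal example, assuming a hypothetical 4-candidate, 3-criterion decision matrix (all criteria treated as benefit-type; the numbers are purely illustrative, not from the paper):

```python
import numpy as np

# Rows: candidate monitoring points; columns: evaluation criteria.
X = np.array([[0.8, 120., 3.2],
              [0.6, 150., 2.9],
              [0.9,  90., 3.8],
              [0.7, 110., 3.0]])

# Entropy weighting: criteria with more dispersion get more weight.
P = X / X.sum(axis=0)                       # column-normalized proportions
k = 1.0 / np.log(X.shape[0])
E = -k * (P * np.log(P)).sum(axis=0)        # entropy per criterion
w = (1 - E) / (1 - E).sum()                 # entropy weights, sum to 1

# TOPSIS: rank by closeness to the ideal solution.
V = w * X / np.sqrt((X ** 2).sum(axis=0))   # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)  # ideal / anti-ideal points
d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
closeness = d_neg / (d_pos + d_neg)         # higher = closer to ideal
ranking = np.argsort(-closeness)            # best candidate first
```

    Cost-type criteria would first be inverted (or the min/max roles swapped) before the ideal-point step.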

  10. Optimized Scheduling of Smart Meter Data Access for Real-time Voltage Quality Monitoring

    DEFF Research Database (Denmark)

    Kemal, Mohammed Seifu; Olsen, Rasmus Løvenstein; Schwefel, Hans-Peter

    2018-01-01

    Active low-voltage distribution grids that support high integration of distributed generation, such as photovoltaics and wind turbines, require real-time voltage monitoring. At the same time, countries in Europe such as Denmark have close to 100% rollout of smart metering infrastructure… The metering infrastructure has limitations in providing real-time measurements with fine time granularity. This paper presents an algorithm for optimized scheduling of smart meter data access to provide real-time voltage quality monitoring. The algorithm is analyzed using a real distribution grid in Denmark…

  11. A modern diagnostic approach for automobile systems condition monitoring

    Science.gov (United States)

    Selig, M.; Shi, Z.; Ball, A.; Schmidt, K.

    2012-05-01

    An important topic in automotive research and development is the area of active and passive safety systems. In general, these are grouped into active safety systems, which prevent accidents, and passive systems, which reduce the impact of a crash. An example of an active system is ABS, while a seat belt tensioner represents the group of passive systems. Current developments in the automotive industry try to link active with passive system components to enable a complete event sequence, beginning with warning the driver about a critical situation up to the automatic emergency call after an accident. This cross-linking has an impact on the current diagnostic approach, which is described in this paper. Therefore, this contribution introduces a new diagnostic approach for automotive mechatronic systems. The concept is based on monitoring the messages which are exchanged via the automotive communication systems, e.g. the CAN bus. According to the authors' assumption, the messages on the bus change between the faultless and faulty vehicle conditions. The transmitted messages of the sensors and control units differ depending on the condition of the car. First experiments were carried out and, in addition, the hardware design of a suitable diagnostic interface is presented. Finally, first results are presented and discussed.

  12. A modern diagnostic approach for automobile systems condition monitoring

    International Nuclear Information System (INIS)

    Selig, M; Ball, A; Shi, Z; Schmidt, K

    2012-01-01

    An important topic in automotive research and development is the area of active and passive safety systems. In general, these are grouped into active safety systems, which prevent accidents, and passive systems, which reduce the impact of a crash. An example of an active system is ABS, while a seat belt tensioner represents the group of passive systems. Current developments in the automotive industry try to link active with passive system components to enable a complete event sequence, beginning with warning the driver about a critical situation up to the automatic emergency call after an accident. This cross-linking has an impact on the current diagnostic approach, which is described in this paper. Therefore, this contribution introduces a new diagnostic approach for automotive mechatronic systems. The concept is based on monitoring the messages which are exchanged via the automotive communication systems, e.g. the CAN bus. According to the authors' assumption, the messages on the bus change between the faultless and faulty vehicle conditions. The transmitted messages of the sensors and control units differ depending on the condition of the car. First experiments were carried out and, in addition, the hardware design of a suitable diagnostic interface is presented. Finally, first results are presented and discussed.

  13. Soft computing approach for reliability optimization: State-of-the-art survey

    International Nuclear Information System (INIS)

    Gen, Mitsuo; Yun, Young Su

    2006-01-01

    In the broadest sense, reliability is a measure of performance of systems. As systems have grown more complex, the consequences of their unreliable behavior have become severe in terms of cost, effort, lives, etc., and the interest in assessing system reliability and the need for improving the reliability of products and systems have become very important. Most solution methods for reliability optimization assume that systems have redundancy components in series and/or parallel systems and that alternative designs are available. Reliability optimization problems concentrate on optimal allocation of redundancy components and optimal selection of alternative designs to meet system requirements. In the past two decades, numerous reliability optimization techniques have been proposed. Generally, these techniques can be classified as linear programming, dynamic programming, integer programming, geometric programming, heuristic methods, the Lagrangean multiplier method and so on. A Genetic Algorithm (GA), as a soft computing approach, is a powerful tool for solving various reliability optimization problems. In this paper, we briefly survey GA-based approaches for various reliability optimization problems, such as reliability optimization of redundant systems, reliability optimization with alternative design, reliability optimization with time-dependent reliability, reliability optimization with interval coefficients, bicriteria reliability optimization, and reliability optimization with fuzzy goals. We also introduce hybrid approaches for combining GA with fuzzy logic, neural networks and other conventional search techniques. Finally, we present experiments on examples of various reliability optimization problems using the hybrid GA approach.
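
    The canonical redundancy-allocation problem the survey refers to can be sketched with a toy GA: choose the number of parallel components per stage of a series system to maximize reliability under a cost budget. All reliabilities, costs, budget, and GA parameters below are illustrative assumptions, not values from the survey:

```python
import random

r = [0.80, 0.85, 0.90]        # component reliability per stage (assumed)
c = [2.0, 3.0, 1.5]           # component cost per stage (assumed)
BUDGET = 25.0

def reliability(n):
    """Series system of parallel-redundant stages: R = prod(1-(1-r_i)^n_i)."""
    out = 1.0
    for ri, ni in zip(r, n):
        out *= 1.0 - (1.0 - ri) ** ni
    return out

def cost(n):
    return sum(ci * ni for ci, ni in zip(c, n))

def fitness(n):
    return reliability(n) if cost(n) <= BUDGET else 0.0   # penalize infeasibility

random.seed(0)
pop = [[random.randint(1, 4) for _ in r] for _ in range(30)]
for _ in range(100):                               # generations
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                           # elitist selection
    children = []
    while len(children) < 20:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, len(r))
        child = a[:cut] + b[cut:]                  # one-point crossover
        if random.random() < 0.3:                  # mutation
            i = random.randrange(len(r))
            child[i] = max(1, child[i] + random.choice((-1, 1)))
        children.append(child)
    pop = survivors + children
best = max(pop, key=fitness)
```

    Real GA formulations in the survey add alternative designs, interval coefficients, or fuzzy goals on top of this basic structure.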

  14. A New Approach to Monitoring Coastal Marshes for Persistent Flooding

    Science.gov (United States)

    Kalcic, M. T.; Undersood, Lauren W.; Fletcher, Rose

    2012-01-01

    …compute the NDWI indices and also the Normalized Difference Soil Index (NDSI). Coastwide Reference Monitoring System (CRMS) water levels from various hydrologic monitoring stations and aerial photography were used to optimize thresholds for MODIS-derived time series of NDWI and to validate the resulting flood maps. In most of the profiles produced for post-hurricane assessment, the increase in the NDWI index (from storm surge) is accompanied by a decrease in the vegetation index (NDVI) and then a period of declining water. The NDSI index represents non-green or dead vegetation and increases after the hurricane's destruction of the marsh vegetation. The behavior of these indices over time is indicative of which areas remain flooded, which areas recover to their former levels of vegetative vigor, and which areas are stressed or in transition. Tracking these indices over time shows the recovery rate of vegetation and the relative behavior to inundation persistence. The results from this study demonstrated that identification of persistent marsh flooding, utilizing the tools developed in this study, provided an approximate 70-80 percent accuracy rate when compared to the actual days flooded at the CRMS stations.
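
    The index computations themselves are simple band ratios. A minimal sketch using one common NDWI formulation (McFeeters, green vs. NIR); the band reflectances and the zero flood threshold below are illustrative, whereas the study tuned its thresholds against CRMS water-level records:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index: (G - NIR) / (G + NIR)."""
    return (green - nir) / (green + nir)

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    return (nir - red) / (nir + red)

# Three hypothetical pixels: open water, healthy marsh, open water.
green = np.array([0.10, 0.08, 0.12])
nir = np.array([0.05, 0.30, 0.06])

# Water reflects more green than NIR, so a positive NDWI flags flooding.
flooded = ndwi(green, nir) > 0.0
```

    In the study, persistence would then be assessed by tracking such flags across the MODIS time series rather than from a single date.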

  15. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Science.gov (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows a reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can similarly be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).
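
    The risk measure being minimized is straightforward to estimate from simulated costs. A hedged sketch of the Monte Carlo CVaR evaluation step, using an invented lognormal cost model purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical simulated execution costs (the real approach would simulate
# costs induced by a parametrized strategy under a price-impact model).
costs = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

def var_cvar(costs, alpha=0.95):
    """VaR = alpha-quantile of cost; CVaR = mean cost beyond the VaR."""
    var = np.quantile(costs, alpha)
    cvar = costs[costs >= var].mean()
    return var, cvar

var95, cvar95 = var_cvar(costs)
```

    In the parametric approach, an outer optimizer would adjust the strategy coefficients so as to minimize an objective built from such estimates (e.g. expected cost plus a CVaR term).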

  16. Collaboration pathway(s) using new tools for optimizing `operational' climate monitoring from space

    Science.gov (United States)

    Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.

    2015-09-01

    Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a long term solution requires transforming scientific missions into an optimized robust `operational' constellation that addresses the collective needs of policy makers, scientific communities and global academic users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent rule-based expert system (RBES) optimization modeling of the intended NPOESS architecture becomes a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures, by comparison/evaluation of alternatives and the sheer range of trade space explored. Optimization of climate monitoring architecture(s) for a partial list of ECV (essential climate variables) is explored and described in detail with dialogue on appropriate rule-based valuations. These optimization tool(s) suggest global collaboration advantages and elicit responses from the audience and climate science community. This paper will focus on recent research exploring joint requirement implications of the high profile NPOESS architecture and extends the research and tools to optimization for a climate centric case study. This reflects work from SPIE RS Conferences 2013 and 2014, abridged for simplification [30, 32]. First, the heavily scrutinized NPOESS architecture inspired the recent research question: was complexity (as a cost/risk factor) overlooked when considering the benefits of aggregating different missions onto a single platform? Now, years later, there has been a complete reversal: should agencies consider disaggregation as the answer? We'll discuss what some academic research suggests.
    Second, using the GCOS requirements of earth climate observations via ECV (essential climate variables), many collected from space-based sensors; and accepting their

  17. A neuro-fuzzy inference system tuned by particle swarm optimization algorithm for sensor monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Mauro Vitor de [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil). Div. de Instrumentacao e Confiabilidade Humana]. E-mail: mvitor@ien.gov.br; Schirru, Roberto [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Lab. de Monitoracao de Processos

    2005-07-01

    A neuro-fuzzy inference system (ANFIS) tuned by a particle swarm optimization (PSO) algorithm has been developed to monitor a relevant sensor in a nuclear plant using the information of other sensors. The antecedent parameters of the ANFIS that estimates the relevant sensor signal are optimized by a PSO algorithm, and the consequent parameters are obtained by a least-squares algorithm. The proposed sensor-monitoring algorithm was demonstrated through the estimation of the nuclear power value in a pressurized water reactor using as input to the ANFIS six other correlated signals. The obtained results are compared to those of two similar ANFIS models that use gradient descent (GD) and a genetic algorithm (GA), respectively, as the antecedent-parameter training algorithm. (author)
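
    The PSO component used for antecedent-parameter tuning follows the standard velocity/position update rule. A minimal self-contained PSO sketch minimizing a toy sphere objective (a stand-in for the ANFIS training error; the swarm size, inertia, and acceleration coefficients below are generic choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n, dim))          # particle positions
    v = np.zeros((n, dim))                    # velocities
    pbest = x.copy()                          # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()      # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best_x, best_val = pso(lambda p: float(np.sum(p ** 2)))   # sphere function
```

    In the paper's hybrid scheme, `f` would evaluate the ANFIS estimation error for a given antecedent-parameter vector, with the consequent parameters solved by least squares inside each evaluation.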

  18. A neuro-fuzzy inference system tuned by particle swarm optimization algorithm for sensor monitoring

    International Nuclear Information System (INIS)

    Oliveira, Mauro Vitor de; Schirru, Roberto

    2005-01-01

    A neuro-fuzzy inference system (ANFIS) tuned by a particle swarm optimization (PSO) algorithm has been developed to monitor a relevant sensor in a nuclear plant using the information of other sensors. The antecedent parameters of the ANFIS that estimates the relevant sensor signal are optimized by a PSO algorithm, and the consequent parameters are obtained by a least-squares algorithm. The proposed sensor-monitoring algorithm was demonstrated through the estimation of the nuclear power value in a pressurized water reactor using as input to the ANFIS six other correlated signals. The obtained results are compared to those of two similar ANFIS models that use gradient descent (GD) and a genetic algorithm (GA), respectively, as the antecedent-parameter training algorithm. (author)

  19. A New Approach for Optimal Sizing of Standalone Photovoltaic Systems

    OpenAIRE

    Khatib, Tamer; Mohamed, Azah; Sopian, K.; Mahmoud, M.

    2012-01-01

    This paper presents a new method for determining the optimal sizing of a standalone photovoltaic (PV) system in terms of the optimal sizing of the PV array and battery storage. A standalone PV system energy flow is first analysed, and the MATLAB fitting tool is used to fit the resultant sizing curves in order to derive general formulas for optimal sizing of the PV array and battery. In deriving the formulas for optimal sizing of the PV array and battery, the data considered are based on five sites in Malaysia, which are Kuala Lumpur, Johor Bharu, Ipoh, Kuching, and Alor Setar...

  20. An interactive and flexible approach to stamping design and optimization

    International Nuclear Information System (INIS)

    Roy, Subir; Kunju, Ravi; Kirby, David

    2004-01-01

    This paper describes an efficient method that integrates finite element analysis (FEA), mesh morphing and response surface based optimization in order to implement an automated and flexible software tool to optimize stamping tool and process design. For FEA, a robust and extremely fast inverse solver is chosen. For morphing, a state-of-the-art mesh morpher that interactively generates shape variables for optimization studies is used. The optimization algorithm utilized in this study enables a global search over a multitude of parameters and is highly flexible with regard to the choice of objective functions. A quality function that minimizes formability defects resulting from stretching and compression is implemented.

  1. Scientific and technological basis for maintenance optimization, planning, testing and monitoring for NPP with WWER

    International Nuclear Information System (INIS)

    Kovrizhkin, Yu.L.; Skalozubov, V.I.; Kochneva, V.Yu.

    2009-01-01

    The main results of developments aimed at increasing the production efficiency of NPPs with WWER through optimization of maintenance planning, testing and monitoring of equipment and systems are shown. Attention is paid to metal inspection during the maintenance period of a power unit. Methods for implementing the concept of transition to condition-based repair are presented.

  2. Two-Layer Hierarchy Optimization Model for Communication Protocol in Railway Wireless Monitoring Networks

    Directory of Open Access Journals (Sweden)

    Xiaoping Ma

    2018-01-01

    Full Text Available Wireless monitoring systems in railways are often disabled by insufficient sensor energy. Hence, optimizing the communication protocol and extending the system lifetime are crucial to ensuring system stability. However, existing studies focused primarily on cluster-based or multihop protocols individually, which are ineffective in coping with the complex communication scenarios in the railway wireless monitoring system (RWMS). This study proposes a hybrid protocol which combines the cluster-based and multihop protocols (CMCP) to minimize and balance the energy consumption in different sections of the RWMS. In the first hierarchy, the total energy consumption is minimized by optimizing the cluster quantities in the cluster-based protocol and the number of hops and the corresponding hop distances in the multihop protocol. In the second hierarchy, the energy consumption is balanced by rotating the cluster head (CH) in the subnetworks and further optimizing the hops and the corresponding hop distances in the backbone network. On this basis, the system lifetime is maximized with minimum and balanced energy consumption among the sensors. Furthermore, a hybrid particle swarm optimization and genetic algorithm (PSO-GA) is adopted to optimize the energy consumption in the two-layer hierarchy. Finally, the effectiveness of the proposed CMCP is verified in simulation. The performance of the proposed CMCP in system lifetime, residual energy, and the corresponding variance is superior to the LEACH protocol widely applied in previous research. The effective protocol proposed in this study can facilitate the application of wireless monitoring networks in railway systems and enhance the safe operation of the railway.

  3. Optimizing urine drug testing for monitoring medication compliance in pain management.

    Science.gov (United States)

    Melanson, Stacy E F; Ptolemy, Adam S; Wasan, Ajay D

    2013-12-01

    It can be challenging to successfully monitor medication compliance in pain management. Clinicians and laboratorians need to collaborate to optimize patient care and maximize operational efficiency. The test menu, assay cutoffs, and testing algorithms utilized in the urine drug testing panels should be periodically reviewed and tailored to the patient population to effectively assess compliance and avoid unnecessary testing and cost to the patient. Pain management and pathology collaborated on an important quality improvement initiative to optimize urine drug testing for monitoring medication compliance in pain management. We retrospectively reviewed 18 months of data from our pain management center. We gathered data on test volumes, positivity rates, and the frequency of false positive results. We also reviewed the clinical utility of our testing algorithms, assay cutoffs, and adulterant panel. In addition, the cost of each component was calculated. The positivity rates for ethanol and 3,4-methylenedioxymethamphetamine were low; these findings enabled us to optimize our testing panel for monitoring medication compliance in pain management and reduce cost.

  4. A Polynomial Optimization Approach to Constant Rebalanced Portfolio Selection

    NARCIS (Netherlands)

    Takano, Y.; Sotirov, R.

    2010-01-01

    We address the multi-period portfolio optimization problem with the constant rebalancing strategy. This problem is formulated as a polynomial optimization problem (POP) by using a mean-variance criterion. In order to solve the POPs of high degree, we develop a cutting-plane algorithm based on

  5. Hybrid Metaheuristic Approach for Nonlocal Optimization of Molecular Systems.

    Science.gov (United States)

    Dresselhaus, Thomas; Yang, Jack; Kumbhar, Sadhana; Waller, Mark P

    2013-04-09

    Accurate modeling of molecular systems requires a good knowledge of the structure; therefore, conformation searching/optimization is a routine necessity in computational chemistry. Here we present a hybrid metaheuristic optimization (HMO) algorithm, which combines ant colony optimization (ACO) and particle swarm optimization (PSO) for the optimization of molecular systems. The HMO implementation meta-optimizes the parameters of the ACO algorithm on-the-fly by the coupled PSO algorithm. The ACO parameters were optimized on a set of small difluorinated polyenes where the parameters exhibited small variance as the size of the molecule increased. The HMO algorithm was validated by searching for the closed form of around 100 molecular balances. Compared to the gradient-based optimized molecular balance structures, the HMO algorithm was able to find low-energy conformations with an 87% success rate. Finally, the computational effort for generating low-energy conformation(s) for the phenylalanyl-glycyl-glycine tripeptide was approximately 60 CPU hours with the ACO algorithm, in comparison to 4 CPU years required for an exhaustive brute-force calculation.

  6. Optimal angle reduction - a behavioral approach to linear system approximation

    NARCIS (Netherlands)

    Roorda, B.; Weiland, S.

    2001-01-01

    We investigate the problem of optimal state reduction under minimization of the angle between system behaviors. The angle is defined in a worst-case sense, as the largest angle that can occur between a system trajectory and its optimal approximation in the reduced-order model. This problem is

  7. A polynomial optimization approach to constant rebalanced portfolio selection

    NARCIS (Netherlands)

    Takano, Y.; Sotirov, R.

    2012-01-01

    We address the multi-period portfolio optimization problem with the constant rebalancing strategy. This problem is formulated as a polynomial optimization problem (POP) by using a mean-variance criterion. In order to solve the POPs of high degree, we develop a cutting-plane algorithm based on

  8. A New Approach for Optimal Sizing of Standalone Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Tamer Khatib

    2012-01-01

    Full Text Available This paper presents a new method for determining the optimal sizing of a standalone photovoltaic (PV) system in terms of the optimal sizing of the PV array and battery storage. A standalone PV system energy flow is first analysed, and the MATLAB fitting tool is used to fit the resultant sizing curves in order to derive general formulas for optimal sizing of the PV array and battery. In deriving the formulas for optimal sizing of the PV array and battery, the data considered are based on five sites in Malaysia, which are Kuala Lumpur, Johor Bharu, Ipoh, Kuching, and Alor Setar. Based on the results of the designed example for a PV system installed in Kuala Lumpur, the proposed method gives satisfactory optimal sizing results.
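
    The curve-fitting step can be sketched in a few lines. The (loss-of-load probability, required PV capacity) pairs below are invented for illustration, while the paper fits curves derived from Malaysian meteorological data with the MATLAB fitting tool; a low-order polynomial in log(LLP) is one plausible functional form:

```python
import numpy as np

llp = np.array([0.01, 0.02, 0.05, 0.10])   # loss-of-load probability (assumed)
cpv = np.array([1.60, 1.35, 1.10, 0.95])   # required PV capacity factor (assumed)

# Fit a quadratic sizing curve in log(LLP), then evaluate it for design.
coeffs = np.polyfit(np.log(llp), cpv, deg=2)
fit = np.poly1d(coeffs)

# A designer would pick a target LLP and read off the PV array size.
required_capacity = fit(np.log(0.03))
```

    The battery-sizing formula in the paper would be derived the same way from its own sizing curve.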

  9. A Bayesian maximum entropy-based methodology for optimal spatiotemporal design of groundwater monitoring networks.

    Science.gov (United States)

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

    This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern, and a new method is proposed to assign a removal priority number to each pre-existing station. To design temporal sampling, a new approach is also applied to consider uncertainty caused by lack of information. In this approach, different time lag values are tested against another source of information, which is the simulation result of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in available monitoring data, the flexibility of the BME interpolation technique is taken into account in applying soft data and improving the accuracy of the calculations. To examine the methodology, it is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations for a regular hexagonal grid of side length 3600 m is proposed, in which the time lag between samples is equal to 5 weeks. Since the variance estimation errors of the BME method are almost identical for the redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing monitoring network with 52 stations and monthly sampling frequency.
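
    The hexagonal gridding pattern used for spatial sampling is easy to generate: rows are spaced s*sqrt(3)/2 apart and alternate rows are shifted by s/2, which makes every station's nearest neighbour exactly the side length s away. A minimal sketch (the 20 km x 20 km study-area extent is a made-up placeholder, not the Dehgolan plain geometry):

```python
import numpy as np

def hex_grid(s, width, height):
    """Generate hexagonal-lattice sampling points with spacing s
    inside a [0, width] x [0, height] rectangle."""
    pts = []
    dy = s * np.sqrt(3) / 2          # vertical row spacing
    row, y = 0, 0.0
    while y <= height:
        x = (s / 2) if row % 2 else 0.0   # offset every other row
        while x <= width:
            pts.append((x, y))
            x += s
        y += dy
        row += 1
    return np.array(pts)

grid = hex_grid(3600.0, 20_000.0, 20_000.0)

# Verify the lattice property: minimum pairwise distance equals s.
d = np.linalg.norm(grid[:, None, :] - grid[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
min_spacing = d.min()
```

    In the paper, candidate stations on such a grid would then be scored (e.g. by BME variance reduction) before assigning removal priorities.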

  10. Optimal investment for enhancing social concern about biodiversity conservation: a dynamic approach.

    Science.gov (United States)

    Lee, Joung Hun; Iwasa, Yoh

    2012-11-01

    To maintain biodiversity conservation areas, we need to invest in activities, such as monitoring the condition of the ecosystem, preventing illegal exploitation, and removing harmful alien species. These require a constant supply of resources, the level of which is determined by the concern of the society about biodiversity conservation. In this paper, we study the optimal fraction of the resources to invest in activities for enhancing the social concern y(t) by environmental education, museum displays, publications, and media exposure. We search for the strategy that maximizes the time-integral of the quality of the conservation area x(t) with temporal discounting. Analyses based on dynamic programming and Pontryagin's maximum principle show that the optimal control consists of two phases: (1) in the first phase, the social concern level approaches the final optimal value y*; (2) in the second phase, resources are allocated to both activities, and the social concern level is kept constant, y(t) = y*. If the social concern starts from a low initial level, the optimal path includes a period in which the quality of the conservation area declines temporarily, because all the resources are invested to enhance the social concern. When the support rate increases with the quality of the conservation area itself x(t) as well as with the level of social concern y(t), both variables may increase simultaneously in the second phase. We discuss the implication of the results for good management of biodiversity conservation areas.

  11. New approach to radiation monitoring: citizen based radiation measurement

    International Nuclear Information System (INIS)

    Kuca, P.; Helebrant, J.

    2016-01-01

    Both the Fukushima Dai-ichi NPP accident in Japan in 2011 and, similarly, the Chernobyl NPP accident in the USSR in 1986 have shown the necessity of finding a way to improve public confidence in official authorities. This is especially important in the case of severe accidents with significant consequences for large inhabited areas around the damaged NPP. The lack of public confidence in officials was caused mostly by rather poor communication between official authorities and the public, as well as by restricted public access to information. This may have extremely negative impacts on the public understanding of the actual situation and its possible risks, on public acceptance of necessary protective measures, and on participation of the public in remediation of the affected areas. One possible way to improve the situation is the implementation of citizen radiation monitoring on a voluntary basis. If the public can verify that official results are compatible with its own self-measured ones, it will probably have more confidence in them. In the Czech Republic, the implementation of such an approach is being tested in the framework of security research funded by the Czech Ministry of the Interior - the research project RAMESIS carried out by SURO. (authors)

  12. A practical approach for electron monitor unit calculation

    International Nuclear Information System (INIS)

    Choi, David; Patyal, Baldev; Cho, Jongmin; Cheng, Ing Y; Nookala, Prashanth

    2009-01-01

    Electron monitor unit (MU) calculation requires measured beam data such as the relative output factor (ROF) of a cone, insert correction factor (ICF) and effective source-to-surface distance (ESD). Measuring the beam data to cover all possible clinical cases is not practical for a busy clinic because it takes tremendous time and labor. In this study, we propose a practical approach to reduce the number of data measurements without affecting accuracy. It is based on two findings of dosimetric properties of electron beams. One is that the output ratio of two inserts is independent of the cone used, and the other is that ESD is a function of field size but independent of cone and jaw opening. For the measurements to prove the findings, a parallel plate ion chamber (Markus, PTW 23343) with an electrometer (Cardinal Health 35040) was used. We measured the outputs to determine ROF, ICF and ESD of different energies (5-21 MeV). Measurements were made in a Plastic Water(TM) phantom or in water. Three linear accelerators were used: Siemens MD2 (S/N 2689), Siemens Primus (S/N 3305) and Varian Clinic 21-EX (S/N 1495). With these findings, the number of data sets to be measured can be reduced to less than 20% of the data points. (note)
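
    The quantities named in this abstract suggest a monitor-unit formula of the general form MU = dose / (reference output × ROF × ICF × inverse-square factor), with the inverse-square correction referenced to the effective SSD. The sketch below is a hypothetical illustration of that form, not the study's protocol; the 1 cGy/MU reference calibration, the 100 cm nominal SSD, and all numbers are assumptions.

```python
def electron_mu(dose_cgy, rof, icf, esd_cm, ssd_cm, dmax_cm,
                ref_output_cgy_per_mu=1.0):
    """Monitor units for an electron field, possibly at an extended SSD.

    The inverse-square correction uses the effective SSD (ESD), which the
    abstract reports is a function of field size but independent of cone
    and jaw opening. The 100 cm nominal SSD is an assumption.
    """
    gap = ssd_cm - 100.0   # distance beyond the assumed nominal SSD
    isf = ((esd_cm + dmax_cm) / (esd_cm + gap + dmax_cm)) ** 2
    output = ref_output_cgy_per_mu * rof * icf * isf
    return dose_cgy / output

# At the nominal SSD the inverse-square factor is 1, so MU = dose/(ROF*ICF).
mu = electron_mu(200.0, rof=0.98, icf=1.01, esd_cm=85.0,
                 ssd_cm=100.0, dmax_cm=2.1)
```

    Extending the SSD shrinks the inverse-square factor, so the same prescribed dose requires more monitor units.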

  13. Echelon approach to areas of concern in synoptic regional monitoring

    Science.gov (United States)

    Myers, Wayne; Patil, Ganapati P.; Joly, Kyle

    1997-01-01

    Echelons provide an objective approach to prospecting for areas of potential concern in synoptic regional monitoring of a surface variable. Echelons can be regarded informally as stacked hill forms. The strategy is to identify regions of the surface which are elevated relative to surroundings (Relative ELEVATIONS or RELEVATIONS). These are areas which would continue to expand as islands with receding (virtual) floodwaters. Levels where islands would merge are critical elevations which delimit echelons in the vertical dimension. Families of echelons consist of surface sectors constituting separate islands for deeper waters that merge as water level declines. Pits which would hold water are disregarded in such a progression, but a complementary analysis of pits is obtained using the surface as a virtual mould to cast a counter-surface (bathymetric analysis). An echelon tree is a family tree of echelons with peaks as terminals and the lowest level as root. An echelon tree thus provides a dendrogram representation of surface topology which enables graph theoretic analysis and comparison of surface structures. Echelon top view maps show echelon cover sectors on the base plane. An echelon table summarizes characteristics of echelons as instances or cases of hill form surface structure. Determination of echelons requires only ordinal strength for the surface variable, and is thus appropriate for environmental indices as well as measurements. Since echelons are inherent in a surface rather than perceptual, they provide a basis for computer-intelligent understanding of surfaces. Echelons are given for broad-scale mammalian species richness in Pennsylvania.
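
    The "receding floodwater" picture lends itself to a compact union-find sketch. The 1-D profile and implementation details below are our own illustration, not the authors' algorithm: cells surface in order of decreasing elevation, each newly isolated cell starts an island (a peak echelon), and the level at which two islands join is a merge level delimiting echelons in the vertical dimension.

```python
# Illustrative 1-D echelon delimitation via union-find (profile values made up).
def echelon_merges(profile):
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    peaks, merge_levels = 0, []
    # Activate cells from highest to lowest elevation (receding water).
    for i in sorted(range(len(profile)), key=lambda k: -profile[k]):
        parent[i] = i
        roots = {find(j) for j in (i - 1, i + 1) if j in parent}
        if not roots:
            peaks += 1                       # a new island emerges: a peak
        if len(roots) == 2:
            merge_levels.append(profile[i])  # two islands join at this level
        for r in roots:
            parent[r] = i
    return peaks, merge_levels

peaks, merges = echelon_merges([1, 3, 2, 5, 1, 4, 2])
```

    For this profile the sketch finds three peaks and two merge levels; with peaks as terminals and merges as internal nodes, the corresponding echelon tree has 2·3 − 1 = 5 echelons.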

  14. Geometrical Optimization Approach to Isomerization: Models and Limitations.

    Science.gov (United States)

    Chang, Bo Y; Shin, Seokmin; Engel, Volker; Sola, Ignacio R

    2017-11-02

    We study laser-driven isomerization reactions through an excited electronic state using the recently developed Geometrical Optimization procedure. Our goal is to analyze whether an initial wave packet in the ground state, with optimized amplitudes and phases, can be used to enhance the yield of the reaction at faster rates, driven by a single picosecond pulse or a pair of femtosecond pulses resonant with the electronic transition. We show that the symmetry of the system imposes limitations on the optimization procedure, such that the method rediscovers the pump-dump mechanism.

  15. Y-12 Groundwater Protection Program Monitoring Optimization Plan for Groundwater Monitoring Wells at the U.S. Department of Energy Y-12 National Security Complex, Oak Ridge, Tennessee

    Energy Technology Data Exchange (ETDEWEB)

    None

    2003-09-30

    This document is the monitoring optimization plan for groundwater monitoring wells associated with the U.S. Department of Energy (DOE) Y-12 National Security Complex (Y-12) in Oak Ridge, Tennessee (Figure 1). The plan describes the technical approach that will be implemented under the Y-12 Groundwater Protection Program (GWPP) to focus available resources on the monitoring wells at Y-12 which provide the most useful hydrologic and water-quality monitoring data. The technical approach is based on the GWPP status designation for each well (Section 2.0). Under this approach, wells granted "active" status are used by the GWPP for hydrologic monitoring and/or groundwater sampling (Section 3.0), whereas wells granted "inactive" status are not used for either purpose. The status designation also determines the frequency at which the GWPP will inspect applicable wells, the scope of these well inspections, and the extent of any maintenance actions initiated by the GWPP (Section 4.0). Details regarding the ancillary activities associated with implementation of this plan (e.g., well inspection) are deferred to the referenced GWPP plans and procedures (Section 5.0). This plan applies to groundwater monitoring wells associated with Y-12 and related waste management facilities located within three hydrogeologic regimes (Figure 1): the Bear Creek Hydrogeologic Regime (Bear Creek Regime), the Upper East Fork Poplar Creek Hydrogeologic Regime (East Fork Regime), and the Chestnut Ridge Hydrogeologic Regime (Chestnut Ridge Regime). The Bear Creek Regime encompasses a section of Bear Creek Valley (BCV) immediately west of Y-12. The East Fork Regime encompasses most of the Y-12 process, operations, and support facilities in BCV and, for the purposes of this plan, includes a section of Union Valley east of the DOE Oak Ridge Reservation (ORR) boundary along Scarboro Road.
The Chestnut Ridge Regime is directly south of Y-12 and encompasses a section of Chestnut Ridge that is bound to the

  16. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    Directory of Open Access Journals (Sweden)

    Akemi Gálvez

    2013-01-01

    Full Text Available Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor’s method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
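
    The metaheuristic component can be sketched independently of the spline machinery. Below is a minimal firefly algorithm in its standard form (attraction decaying with squared distance, plus a gradually cooled random walk), applied to a toy sphere function rather than the paper's data-parameterization objective; all parameter values are illustrative defaults, not the authors' settings.

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=300,
                     alpha=0.2, beta0=1.0, gamma=1.0, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        vals = [f(x) for x in xs]
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:          # j glows brighter (lower cost)
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]
        alpha *= 0.97                          # cool the random walk
    return min(xs, key=f)

# Minimize a 2-D sphere function as a stand-in objective.
best = firefly_minimize(lambda x: sum(v * v for v in x), dim=2)
```

    In the paper this kind of search optimizes the data parameterization, after which the remaining linear least-squares problem is solved by singular value decomposition.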

  17. Potential and challenges in home care service process optimization : a route optimization approach

    OpenAIRE

    Nakari, Pentti J. E.

    2016-01-01

    Aging of the population is an increasing problem in many countries, including Finland, and it poses a challenge to public services such as home care. Vehicle routing problem (VRP) type optimization solutions are one possible way to decrease the time required for planning home visits and driving to customer addresses, as well as decreasing transportation costs. Although VRP optimization is widely and successfully applied to commercial and industrial logistics, the home care ...

  18. Novel approach for optimization of fermentative condition for ...

    African Journals Online (AJOL)

    Jane

    2011-07-20

    Jul 20, 2011 ... School of Biochemical Engineering, Institute of Technology, Banaras Hindu University ... been applied for the optimization of a few biochemical ..... Methanol by Methylo bacterium Extorquens DSMZ. 1340. Iran J. Chem. Chem.

  19. The future of monitoring in clinical research - a holistic approach: linking risk-based monitoring with quality management principles.

    Science.gov (United States)

    Ansmann, Eva B; Hecht, Arthur; Henn, Doris K; Leptien, Sabine; Stelzer, Hans Günther

    2013-01-01

    For several years, risk-based monitoring has been the new "magic bullet" for improvement in clinical research. Many authors in clinical research, ranging from industry and academia to authorities, are keen on demonstrating better monitoring efficiency by reducing monitoring visits, monitoring time on site, monitoring costs and so on, always arguing from risk-based monitoring principles. Mostly forgotten is the fact that the use of risk-based monitoring is only adequate if all mandatory prerequisites at the site, for the monitor and for the sponsor are fulfilled. Based on the relevant chapter in ICH GCP (International Conference on Harmonisation of technical requirements for registration of pharmaceuticals for human use - Good Clinical Practice), this publication takes a holistic approach by identifying and describing the requirements for future monitoring and the use of risk-based monitoring. As the authors are operational managers as well as QA (Quality Assurance) experts, both aspects are represented to come up with efficient and high-quality ways of future monitoring according to ICH GCP.

  20. Optical biosensor optimized for continuous in-line glucose monitoring in animal cell culture.

    Science.gov (United States)

    Tric, Mircea; Lederle, Mario; Neuner, Lisa; Dolgowjasow, Igor; Wiedemann, Philipp; Wölfl, Stefan; Werner, Tobias

    2017-09-01

    Biosensors for continuous glucose monitoring in bioreactors could provide a valuable tool for optimizing culture conditions in biotechnological applications. We have developed an optical biosensor for long-term continuous glucose monitoring and demonstrated tight glucose level control during cell culture in disposable bioreactors. The in-line sensor is based on a commercially available oxygen sensor that is coated with cross-linked glucose oxidase (GOD). The dynamic range of the sensor was tuned by a hydrophilic perforated diffusion membrane with an optimized permeability for glucose and oxygen. The biosensor was thoroughly characterized by experimental data and numerical simulations, which enabled insights into the internal concentration profile of the deactivating by-product hydrogen peroxide. The simulations were carried out with a one-dimensional biosensor model and revealed that, in addition to the internal hydrogen peroxide concentration, the turnover rate of the enzyme GOD plays a crucial role in biosensor stability. In the light of this finding, the glucose sensor was optimized to reach a long functional stability (>52 days) under continuous glucose monitoring conditions with a dynamic range of 0-20 mM and a response time of t90 ≤ 10 min. In addition, we demonstrated that the sensor was sterilizable with beta and UV irradiation and subject to only minor cross-sensitivity to oxygen when an oxygen reference sensor was applied. Graphical abstract: Measuring setup of a glucose biosensor in a shake flask for continuous glucose monitoring in mammalian cell culture.

  1. A new approach of optimization procedure for superconducting integrated circuits

    International Nuclear Information System (INIS)

    Saitoh, K.; Soutome, Y.; Tarutani, Y.; Takagi, K.

    1999-01-01

    We have developed and tested a new circuit simulation procedure for superconducting integrated circuits which can be used to optimize circuit parameters. This method reveals a stable operation region in the circuit parameter space in connection with the global bias margin by means of a contour plot of the global bias margin versus the circuit parameters. An optimal set of parameters with margins larger than those of the initial values has been found in the stable region. (author)

  2. A Practical Approach to Governance and Optimization of Structured Data Elements.

    Science.gov (United States)

    Collins, Sarah A; Gesner, Emily; Morgan, Steven; Mar, Perry; Maviglia, Saverio; Colburn, Doreen; Tierney, Diana; Rocha, Roberto

    2015-01-01

    Definition and configuration of clinical content in an enterprise-wide electronic health record (EHR) implementation is highly complex. Sharing of data definitions across applications within an EHR implementation project may be constrained by practical limitations, including time, tools, and expertise. However, maintaining rigor in an approach to data governance is important for sustainability and consistency. With this understanding, we have defined a practical approach for governance of structured data elements to optimize data definitions given limited resources. This approach includes a 10-step process: 1) identification of clinical topics, 2) creation of draft reference models for clinical topics, 3) scoring of downstream data needs for clinical topics, 4) prioritization of clinical topics, 5) validation of reference models for clinical topics, 6) calculation of gap analyses of the EHR compared against the reference model, 7) communication of validated reference models across project members, 8) requested revisions to the EHR based on the gap analysis, 9) evaluation of usage of reference models across the project, and 10) monitoring for new evidence requiring revisions to the reference model.

  3. An adaptive multi-agent-based approach to smart grids control and optimization

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Marco [Florida Institute of Technology, Melbourne, FL (United States); Perez, Carlos; Granados, Adrian [Institute for Human and Machine Cognition, Ocala, FL (United States)

    2012-03-15

    In this paper, we describe a reinforcement learning-based approach to power management in smart grids. The scenarios we consider are smart grid settings where renewable power sources (e.g. Photovoltaic panels) have unpredictable variations in power output due, for example, to weather or cloud transient effects. Our approach builds on a multi-agent system (MAS)-based infrastructure for the monitoring and coordination of smart grid environments with renewable power sources and configurable energy storage devices (battery banks). Software agents are responsible for tracking and reporting power flow variations at different points in the grid, and to optimally coordinate the engagement of battery banks (i.e. charge/idle/discharge modes) to maintain energy requirements to end-users. Agents are able to share information and coordinate control actions through a parallel communications infrastructure, and are also capable of learning, from experience, how to improve their response strategies for different operational conditions. In this paper we describe our approach and address some of the challenges associated with the communications infrastructure for distributed coordination. We also present some preliminary results of our first simulations using the GridLAB-D simulation environment, created by the US Department of Energy (DoE) at Pacific Northwest National Laboratory (PNNL). (orig.)
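
    The reinforcement-learning idea can be shown in miniature. The two-state world below (renewable surplus vs. deficit), the reward values, and the tabular Q-learning agent are all invented for illustration; the paper's agents coordinate battery banks inside a full GridLAB-D simulation rather than this toy.

```python
import random

CHARGE, IDLE, DISCHARGE = 0, 1, 2
SURPLUS, DEFICIT = 0, 1

def reward(state, action):
    # Invented shaping: cover demand during deficits, store during surpluses.
    if state == DEFICIT:
        return 1.0 if action == DISCHARGE else -1.0
    return 1.0 if action == CHARGE else (0.0 if action == IDLE else -1.0)

def train(steps=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * 3 for _ in range(2)]   # Q-table: 2 states x 3 actions
    state = SURPLUS
    for _ in range(steps):
        # epsilon-greedy action selection
        action = (rng.randrange(3) if rng.random() < eps
                  else max(range(3), key=lambda a: q[state][a]))
        nxt = rng.choice([SURPLUS, DEFICIT])   # cloud transients flip the state
        q[state][action] += alpha * (reward(state, action)
                                     + gamma * max(q[nxt]) - q[state][action])
        state = nxt
    return q

q = train()
policy = [max(range(3), key=lambda a: q[s][a]) for s in (SURPLUS, DEFICIT)]
```

    After training, the greedy policy charges the bank during surpluses and discharges it during deficits, learned purely from reward feedback.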

  4. Revisiting support optimization at the Driskos tunnel using a quantitative risk approach

    Directory of Open Access Journals (Sweden)

    J. Connor Langford

    2016-04-01

    Full Text Available With the scale and cost of geotechnical engineering projects increasing rapidly over the past few decades, there is a clear need for the careful consideration of calculated risks in design. While risk is typically dealt with subjectively through the use of conservative design parameters, with the advent of reliability-based methods, this no longer needs to be the case. Instead, a quantitative risk approach can be considered that incorporates uncertainty in ground conditions directly into the design process to determine the variable ground response and support loads. This allows for the optimization of support on the basis of both worker safety and economic risk. This paper presents the application of such an approach to review the design of the initial lining system along a section of the Driskos twin tunnels as part of the Egnatia Odos highway in northern Greece. Along this section of tunnel, weak rock masses were encountered as well as high in situ stress conditions, which led to excessive deformations and failure of the as built temporary support. Monitoring data were used to validate the rock mass parameters selected in this area and a risk approach was used to determine, in hindsight, the most appropriate support category with respect to the cost of installation and expected cost of failure. Different construction sequences were also considered in the context of both convenience and risk cost.

  5. MVMO-based approach for optimal placement and tuning of supplementary damping controller

    NARCIS (Netherlands)

    Rueda Torres, J.L.; Gonzalez-Longatt, F.

    2015-01-01

    This paper introduces an approach based on the Swarm Variant of the Mean-Variance Mapping Optimization (MVMO-S) to solve the multi-scenario formulation of the optimal placement and coordinated tuning of power system supplementary damping controllers (POCDCs). The effectiveness of the approach is

  6. Geometry optimization of molecules within an LCGTO local-density functional approach

    International Nuclear Information System (INIS)

    Mintmire, J.W.

    1990-01-01

    We describe our implementation of geometry optimization techniques within the linear combination of Gaussian-type orbitals (LCGTO) approach to local-density functional theory. The algorithm for geometry optimization is based on the evaluation of the gradient of the total energy with respect to internal coordinates within the local-density functional scheme. We present optimization results for a range of small molecules which serve as test cases for our approach

  7. Assessing and optimizing infra-sound networks to monitor volcanic eruptions

    International Nuclear Information System (INIS)

    Tailpied, Dorianne

    2016-01-01

    Understanding infra-sound signals is essential to monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty, and also to demonstrate the potential of the global infra-sound monitoring network for civil and scientific applications. The main objective of this thesis is to develop a robust tool to estimate and optimize the performance of any infra-sound network in monitoring explosive sources such as volcanic eruptions. Unlike previous studies, the developed method has the advantage of considering realistic atmospheric specifications along the propagation path, the source frequency, and noise levels at the stations. It makes it possible to predict the attenuation and the minimum detectable source amplitude. By simulating the performance of any infra-sound network, it is then possible to define the optimal configuration of the network to monitor a specific region during a given period. When a station is carefully added to the existing network, performance can be improved by a factor of 2. However, it is not always possible to complete the network. A good knowledge of detection capabilities at large distances is thus essential. To provide a more realistic picture of the performance, we integrate the longitudinal variability of the atmosphere along the infra-sound propagation path in our simulations. This thesis also contributes by providing a confidence index taking into account the uncertainties related to propagation and atmospheric models. At high frequencies, the error can reach 40 dB. Volcanic eruptions are natural, powerful and valuable calibrating sources of infra-sound, detected worldwide. In this study, the well-instrumented volcanoes Yasur, in Vanuatu, and Etna, in Italy, offer a unique opportunity to validate our attenuation model. In particular, accurate comparisons between near-field recordings and far-field detections of these volcanoes have helped to highlight the potential of our simulation tool to remotely monitor volcanoes. 
Such work could significantly help to prevent

  8. An optimized strategy for real-time hemorrhage monitoring with electrical impedance tomography

    International Nuclear Information System (INIS)

    Xu, Canhua; Dai, Meng; You, Fusheng; Shi, Xuetao; Fu, Feng; Liu, Ruigang; Dong, Xiuzhen

    2011-01-01

    Delayed detection of an internal hemorrhage may result in serious disabilities and possibly death for a patient. Currently, there are no portable medical imaging instruments that are suitable for long-term monitoring of patients at risk of internal hemorrhage. Electrical impedance tomography (EIT) has the potential to monitor patients continuously as a novel functional image modality and instantly detect the occurrence of an internal hemorrhage. However, the low spatial resolution and high sensitivity to noise of this technique have limited its application in clinics. In addition, due to the circular boundary display mode used in current EIT images, it is difficult for clinicians to identify precisely which organ is bleeding using this technique. The aim of this study was to propose an optimized strategy for EIT reconstruction to promote the use of EIT for clinical studies, which mainly includes the use of anatomically accurate boundary shapes, rapid selection of optimal regularization parameters and image fusion of EIT and computed tomography images. The method was evaluated on retroperitoneal and intraperitoneal bleeding piglet data. Both traditional backprojection images and optimized images among different boundary shapes were reconstructed and compared. The experimental results demonstrated that EIT images with precise anatomical information can be reconstructed in which the image resolution and resistance to noise can be improved effectively

  9. Optimizing computed tomography pulmonary angiography using right atrium bolus monitoring combined with spontaneous respiration

    Energy Technology Data Exchange (ETDEWEB)

    Min, Wang; Jian, Li; Rui, Zhai [Jining No. 1 People' s Hospital, Department of Computed Tomography, Jining City, ShanDong Province (China); Wen, Li [Jining No. 1 People' s Hospital, Department of Gastroenterology, Jining, ShanDong (China); Dai, Lun-Hou [Shandong Chest Hospital, Department of Radiology, Jinan, ShanDong (China)

    2015-09-15

    CT pulmonary angiography (CTPA) aims to provide pulmonary arterial opacification in the absence of significant pulmonary venous filling. This requires accurate timing of the imaging acquisition to ensure synchronization with the peak pulmonary artery contrast concentration. This study was designed to test the utility of right atrium (RA) monitoring in ensuring optimal timing of CTPA acquisition. Sixty patients referred for CTPA were divided into two groups. Group A (n = 30): CTPA was performed using bolus triggering from the pulmonary trunk, suspended respiration and 70 ml of contrast agent (CA). Group B (n = 30): CTPA image acquisition was triggered using RA monitoring with spontaneous respiration and 40 ml of CA. Image quality was compared. Subjective image quality, average CT values of pulmonary arteries and density difference between artery and vein pairs were significantly higher whereas CT values of pulmonary veins were significantly lower in group B (all P < 0.05). There was no significant difference between the groups in the proportion of subjects where sixth grade pulmonary arteries were opacified (P > 0.05). RA monitoring combined with spontaneous respiration to trigger image acquisition in CTPA produces optimal contrast enhancement in pulmonary arterial structures with minimal venous filling even with reduced doses of CA. (orig.)

  10. Optimized autonomous space in-situ sensor web for volcano monitoring

    Science.gov (United States)

    Song, W.-Z.; Shirazi, B.; Huang, R.; Xu, M.; Peterson, N.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.; Kedar, S.; Chien, S.; Webb, F.; Kiely, A.; Doubleday, J.; Davies, A.; Pieri, D.

    2010-01-01

    In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)) has developed a prototype of a dynamic and scalable hazard monitoring sensor-web and applied it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) has two-way communication capability between ground and space assets, uses both space and ground data for optimal allocation of limited bandwidth resources on the ground, and uses smart management of competing demands for limited space assets. It also enables scalability and seamless infusion of future space and in-situ assets into the sensor-web. The space and in-situ control components of the system are integrated such that each element is capable of autonomously tasking the other. The ground in-situ network was deployed into the craters and around the flanks of Mount St. Helens in July 2009, and linked to the command and control of the Earth Observing One (EO-1) satellite. © 2010 IEEE.

  11. Y-12 Groundwater Protection Program Monitoring Optimization Plan for Groundwater Monitoring Wells at the U.S. Department of Energy Y-12 National Security Complex

    International Nuclear Information System (INIS)

    2006-01-01

    This document is the monitoring optimization plan for groundwater monitoring wells associated with the U.S. Department of Energy (DOE) Y-12 National Security Complex (Y-12) in Oak Ridge, Tennessee (Figure A.1). The plan describes the technical approach that will be implemented under the Y-12 Groundwater Protection Program (GWPP) to focus available resources on the monitoring wells at Y-12 that provide the most useful hydrologic and water-quality monitoring data. The technical approach is based on the GWPP status designation for each well (Section 2.0). Under this approach, wells granted "active" status are used by the GWPP for hydrologic monitoring and/or groundwater quality sampling (Section 3.0), whereas wells granted "inactive" status are not used for either purpose. The status designation also defines the frequency at which the GWPP will inspect applicable wells, the scope of these well inspections, and the extent of any maintenance actions initiated by the GWPP (Section 3.0). Details regarding the ancillary activities associated with implementation of this plan (e.g., well inspection) are deferred to the referenced GWPP plans and procedures (Section 4.0). This plan applies to groundwater wells associated with Y-12 and related waste management areas and facilities located within three hydrogeologic regimes (Figure A.1): the Bear Creek Hydrogeologic Regime (Bear Creek Regime), the Upper East Fork Poplar Creek Hydrogeologic Regime (East Fork Regime), and the Chestnut Ridge Hydrogeologic Regime (Chestnut Ridge Regime). The Bear Creek Regime encompasses a section of Bear Creek Valley (BCV) immediately west of Y-12. The East Fork Regime encompasses most of the Y-12 process, operations, and support facilities in BCV and, for the purposes of this plan, includes a section of Union Valley east of the DOE Oak Ridge Reservation (ORR) boundary along Scarboro Road. 
The Chestnut Ridge Regime encompasses a section of Chestnut Ridge directly south of Y-12 that is bound on the

  12. Hierarchical Swarm Model: A New Approach to Optimization

    Directory of Open Access Journals (Sweden)

    Hanning Chen

    2010-01-01

    Full Text Available This paper presents a novel optimization model called hierarchical swarm optimization (HSO, which simulates the natural hierarchical complex system from where more complex intelligence can emerge for complex problems solving. This proposed model is intended to suggest ways that the performance of HSO-based algorithms on complex optimization problems can be significantly improved. This performance improvement is obtained by constructing the HSO hierarchies, which means that an agent in a higher level swarm can be composed of swarms of other agents from lower level and different swarms of different levels evolve on different spatiotemporal scale. A novel optimization algorithm (named PS2O, based on the HSO model, is instantiated and tested to illustrate the ideas of HSO model clearly. Experiments were conducted on a set of 17 benchmark optimization problems including both continuous and discrete cases. The results demonstrate remarkable performance of the PS2O algorithm on all chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms.

  13. Optimization of a Coastal Environmental Monitoring Network Based on the Kriging Method: A Case Study of Quanzhou Bay, China

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2016-01-01

    Full Text Available Environmental monitoring is fundamental to assessing environmental quality and to fulfilling protection and management measures under permit conditions. However, coastal environmental monitoring work faces many problems and challenges, including the fact that monitoring information cannot be linked up with evaluation, monitoring data cannot well reflect the current coastal environmental condition, and monitoring activities are limited by cost constraints. For these reasons, protection and management measures cannot be developed and implemented well by policy makers who intend to solve this issue. In this paper, Quanzhou Bay in southeastern China was selected as a case study, and the Kriging method and a geographic information system were employed to evaluate and optimize the existing monitoring network in a semienclosed bay. This study used coastal environmental monitoring data from 15 sites (including COD, DIN, and PO4-P) to adequately analyze the water quality from 2009 to 2012 by applying the Trophic State Index. The monitoring network in Quanzhou Bay was evaluated and optimized, with the number of sites increased from 15 to 24 and the monitoring precision improved by 32.9%. The results demonstrated that the proposed advanced monitoring network optimization was appropriate for environmental monitoring in Quanzhou Bay. It might provide technical support for coastal management and pollutant reduction in similar areas.
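
    A common kriging-based criterion for this kind of network design is to place new sites where the ordinary-kriging prediction variance is largest (and to treat sites as redundant where removing them barely raises it). The sketch below, with an assumed Gaussian covariance model and made-up coordinates, is our illustration of that criterion, not the paper's implementation.

```python
import math

def cov(h, sill=1.0, rng=3.0):
    return sill * math.exp(-(h / rng) ** 2)   # assumed Gaussian covariance model

def solve(a, b):
    """Plain Gauss-Jordan elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(n):
            if r != c and m[c][c]:
                f = m[r][c] / m[c][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def ok_variance(stations, x):
    """Ordinary-kriging prediction variance at location x."""
    n = len(stations)
    d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    # OK system: [C 1; 1' 0] [w; mu] = [c0; 1]
    a = [[cov(d(stations[i], stations[j])) for j in range(n)] + [1.0]
         for i in range(n)] + [[1.0] * n + [0.0]]
    b = [cov(d(s, x)) for s in stations] + [1.0]
    sol = solve(a, b)
    w, mu = sol[:n], sol[n]
    return cov(0.0) - sum(wi * bi for wi, bi in zip(w, b[:n])) - mu

stations = [(0, 0), (4, 0), (0, 4), (4, 4)]       # illustrative coordinates
near = ok_variance(stations, (0.5, 0.5))          # well covered by the network
far = ok_variance(stations, (10, 10))             # poorly covered location
```

    The variance vanishes at an existing station and grows with distance from the network, so ranking candidate locations by `ok_variance` points to where an added monitoring site buys the most precision.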

  14. Optimization of a Coastal Environmental Monitoring Network Based on the Kriging Method: A Case Study of Quanzhou Bay, China

    Science.gov (United States)

    Chen, Kai; Ni, Minjie; Wang, Jun; Huang, Dongren; Chen, Huorong; Wang, Xiao; Liu, Mengyang

    2016-01-01

    Environmental monitoring is fundamental to assessing environmental quality and to fulfilling protection and management measures under permit conditions. However, coastal environmental monitoring work faces many problems and challenges, including the fact that monitoring information cannot be linked up with evaluation, monitoring data cannot well reflect the current coastal environmental condition, and monitoring activities are limited by cost constraints. For these reasons, protection and management measures cannot be developed and implemented well by policy makers who intend to solve this issue. In this paper, Quanzhou Bay in southeastern China was selected as a case study, and the Kriging method and a geographic information system were employed to evaluate and optimize the existing monitoring network in a semienclosed bay. This study used coastal environmental monitoring data from 15 sites (including COD, DIN, and PO4-P) to adequately analyze the water quality from 2009 to 2012 by applying the Trophic State Index. The monitoring network in Quanzhou Bay was evaluated and optimized, with the number of sites increased from 15 to 24 and the monitoring precision improved by 32.9%. The results demonstrated that the proposed advanced monitoring network optimization was appropriate for environmental monitoring in Quanzhou Bay. It might provide technical support for coastal management and pollutant reduction in similar areas. PMID:27777951

  15. An opinion formation based binary optimization approach for feature selection

    Science.gov (United States)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed technique mimics the human-human interaction mechanism using a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact over an underlying interaction network and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets show that the proposed algorithm outperforms the others.
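    A toy rendition of the mechanism (not the authors' model): each agent holds a binary opinion vector, adopts bits from a better-performing neighbour on a ring interaction network, and occasional random flips help avoid local minima. The objective here is a fabricated bit-matching score standing in for feature-subset quality:

```python
import numpy as np

rng = np.random.default_rng(5)
n_agents, n_bits, iters = 30, 12, 60
target = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1])  # fabricated optimum

def quality(opinion):
    """Toy objective standing in for feature-subset quality."""
    return int((opinion == target).sum())

# Each agent's "opinion" is a candidate bit string; agents sit on a ring network.
ops = rng.integers(0, 2, size=(n_agents, n_bits))
neighbours = [((i - 1) % n_agents, (i + 1) % n_agents) for i in range(n_agents)]

for _ in range(iters):
    q = np.array([quality(o) for o in ops])
    new = ops.copy()
    for i, (left, right) in enumerate(neighbours):
        j = left if q[left] >= q[right] else right
        if q[j] > q[i]:                 # adopt part of a better neighbour's opinion
            m = rng.random(n_bits) < 0.5
            new[i, m] = ops[j, m]
        if rng.random() < 0.05:         # random opinion flip avoids local minima
            k = rng.integers(n_bits)
            new[i, k] ^= 1
    ops = new

best = ops[np.array([quality(o) for o in ops]).argmax()]
```

    As the agents converge toward consensus, the shared opinion drifts toward high-quality bit patterns; the mutation step keeps some diversity in the population.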

  16. Deterministic global optimization an introduction to the diagonal approach

    CERN Document Server

    Sergeyev, Yaroslav D

    2017-01-01

    This book begins with a concentrated introduction into deterministic global optimization and moves forward to present new original results from the authors who are well known experts in the field. Multiextremal continuous problems that have an unknown structure with Lipschitz objective functions and functions having the first Lipschitz derivatives defined over hyperintervals are examined. A class of algorithms using several Lipschitz constants is introduced which has its origins in the DIRECT (DIviding RECTangles) method. This new class is based on an efficient strategy that is applied for the search domain partitioning. In addition a survey on derivative free methods and methods using the first derivatives is given for both one-dimensional and multi-dimensional cases. Non-smooth and smooth minorants and acceleration techniques that can speed up several classes of global optimization methods with examples of applications and problems arising in numerical testing of global optimization algorithms are discussed...

  17. Hybrid Quantum-Classical Approach to Quantum Optimal Control.

    Science.gov (United States)

    Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu

    2017-04-14

    A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.

  18. Replica approach to mean-variance portfolio optimization

    Science.gov (United States)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. The optimal in-sample variance is found to vanish at the critical point, inversely proportionally to the divergent estimation error.
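    For a given nonsingular covariance matrix, the mean-variance problem with budget and expected-return constraints has a closed-form solution via its KKT system; the replica analysis concerns the statistics of this solution when the covariance is estimated from data. A numerical sketch with invented numbers:

```python
import numpy as np

def mean_variance_weights(Sigma, mu, target_return):
    """Minimize w' Sigma w subject to 1'w = 1 and mu'w = r by solving the
    KKT linear system directly (a sketch; the numbers below are invented)."""
    n = len(mu)
    ones = np.ones(n)
    K = np.zeros((n + 2, n + 2))
    K[:n, :n] = 2.0 * Sigma              # gradient of the quadratic objective
    K[:n, n] = ones
    K[n, :n] = ones                      # budget constraint 1'w = 1
    K[:n, n + 1] = mu
    K[n + 1, :n] = mu                    # expected-return constraint mu'w = r
    rhs = np.concatenate([np.zeros(n), [1.0, target_return]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                       # optimal weights (multipliers discarded)

Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
mu = np.array([0.05, 0.08, 0.12])
w = mean_variance_weights(Sigma, mu, target_return=0.08)
```

    Both constraints are satisfied exactly by construction, and the portfolio variance w' Sigma w is the in-sample quantity whose behaviour near r = N/T = 1 the paper studies.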

  19. A Simulation Approach to Statistical Estimation of Multiperiod Optimal Portfolios

    Directory of Open Access Journals (Sweden)

    Hiroshi Shiraishi

    2012-01-01

    Full Text Available This paper discusses a simulation-based method for solving discrete-time multiperiod portfolio choice problems under an AR(1) process. The method is applicable even if the distributions of the return processes are unknown. We first generate simulated sample paths of the random returns by using the AR bootstrap. Then, for each sample path and each investment time, we obtain an optimal portfolio estimator, which optimizes a constant relative risk aversion (CRRA) utility function. When an investor considers an optimal investment strategy with portfolio rebalancing, it is convenient to introduce a value function. The most important difference between single-period portfolio choice problems and multiperiod ones is that the value function is time dependent. Our method takes care of this time dependency by using bootstrapped sample paths. Numerical studies are provided to examine the validity of our method. The results show the necessity of accounting for the time dependency of the value function.
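    The AR bootstrap step can be sketched as: fit an AR(1) model by least squares, resample its residuals, and roll the recursion forward to generate sample paths. The return history below is fabricated purely for illustration:

```python
import numpy as np

def ar1_bootstrap_paths(returns, n_paths, horizon, seed=0):
    """AR bootstrap sketch: fit r_t = c + phi * r_{t-1} + e_t by OLS,
    then resample the residuals to roll out future sample paths."""
    r = np.asarray(returns, dtype=float)
    x, y = r[:-1], r[1:]
    phi, c = np.polyfit(x, y, 1)          # slope, intercept
    resid = y - (c + phi * x)
    rng = np.random.default_rng(seed)
    paths = np.empty((n_paths, horizon))
    for p in range(n_paths):
        prev = r[-1]
        for t in range(horizon):
            prev = c + phi * prev + rng.choice(resid)   # resampled shock
            paths[p, t] = prev
    return paths

# Fabricated return history purely for illustration.
hist = 0.01 + 0.3 * np.sin(np.arange(60.0))
paths = ar1_bootstrap_paths(hist, n_paths=200, horizon=12)
```

    Each row is one simulated future; evaluating the candidate portfolio rule on every path at every horizon step is what makes the time-dependent value function tractable.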

  20. Integrated radiobioecological monitoring of Semipalatinsk test site: general approach

    International Nuclear Information System (INIS)

    Sejsebaev, A.T.; Shenal', K.; Bakhtin, M.M.; Kadyrova, N.Zh.

    2001-01-01

    This paper presents major research directions and general methodology for establishment of an integrated radio-bio-ecological monitoring system at the territory of the former Semipalatinsk nuclear test site. Also, it briefly provides the first results of monitoring the natural plant and animal populations at STS. (author)

  1. Approaches to monitoring changes in carbon stocks for REDD+

    Science.gov (United States)

    Richard Birdsey; Gregorio Angeles-Perez; Werner A Kurz; Andrew Lister; Marcela Olguin; Yude Pan; Craig Wayson; Barry Wilson; Kristofer Johnson

    2013-01-01

    Reducing emissions from deforestation and forest degradation plus improving forest-management (REDD+) is a mechanism to facilitate tropical countries' participation in climate change mitigation. In this review we focus on the current state of monitoring systems to support implementing REDD+. The main elements of current monitoring systems - Landsat satellites and...

  2. An optimal control approach to manpower planning problem

    Directory of Open Access Journals (Sweden)

    H. W. J. Lee

    2001-01-01

    Full Text Available A manpower planning problem is studied in this paper. The model includes scheduling different types of workers over different tasks, employing and terminating different types of workers, and assigning different types of workers to various training programmes. The aim is to find an optimal way to do all of this while keeping the time-varying demand for the minimum number of workers on each task satisfied. The problem is posed as an optimal discrete-valued control problem in discrete time. A novel numerical scheme is proposed to solve the problem, and an illustrative example is provided.

  3. Optimally eating a stochastic cake. A recursive utility approach

    International Nuclear Information System (INIS)

    Epaulard, Anne; Pommeret, Aude

    2003-01-01

    In this short paper, uncertainties in the resource stock and in technical progress are introduced into an intertemporal equilibrium model of optimal extraction of a non-renewable resource. The representative consumer maximizes a recursive utility function which disentangles the intertemporal elasticity of substitution from risk aversion. A closed-form solution is derived for both the optimal extraction and price paths. The value of the intertemporal elasticity of substitution relative to unity is then crucial in understanding extraction. Moreover, this model leads to a non-renewable resource price following a geometric Brownian motion.

  4. An approach of optimal sensitivity applied in the tertiary loop of the automatic generation control

    Energy Technology Data Exchange (ETDEWEB)

    Belati, Edmarcio A. [CIMATEC - SENAI, Salvador, BA (Brazil); Alves, Dilson A. [Electrical Engineering Department, FEIS, UNESP - Sao Paulo State University (Brazil); da Costa, Geraldo R.M. [Electrical Engineering Department, EESC, USP - Sao Paulo University (Brazil)

    2008-09-15

    This paper proposes an approach of optimal sensitivity applied in the tertiary loop of the automatic generation control. The approach is based on the theorem of non-linear perturbation. From an optimal operation point obtained by an optimal power flow, a new optimal operation point is directly determined after a perturbation, i.e., without the need for an iterative process. This new optimal operation point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the technique of optimal sensitivity, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of the automatic generation control, referred to as the power sensitivity mode. Test results are presented to show the good performance of this approach. (author)

  5. Particle Swarm Optimization approach to defect detection in armour ceramics.

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2017-03-01

    In this research, various extracted features were used in the development of an automated ultrasonic sensor based inspection system that enables defect classification in each ceramic component prior to despatch to the field. Classification is an important task, and the large number of irrelevant, redundant features commonly introduced to a dataset reduces the classifier's performance. Feature selection aims to reduce the dimensionality of the dataset while improving the performance of a classification system. In the context of a multi-criteria optimization problem (i.e. minimizing the classification error rate while reducing the number of features) such as the one discussed in this research, the literature suggests that evolutionary algorithms offer good results. Moreover, Particle Swarm Optimization (PSO) has not been widely explored for the classification of high frequency ultrasonic signals. Hence, a binary coded Particle Swarm Optimization (BPSO) technique is investigated for feature subset selection and to optimize the classification error rate. In the proposed method, the population data is used as input to an Artificial Neural Network (ANN) based classification system to obtain the error rate, as the ANN serves as the evaluator of the PSO fitness function.
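    The BPSO mechanics can be sketched as follows. Real-valued velocities are squashed through a sigmoid into bit probabilities, and each particle's bit string encodes a feature subset. The ANN evaluator is replaced here by a cheap correlation-based stand-in, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: the response depends only on features 0 and 1.
X = rng.normal(size=(120, 8))
y = X[:, 0] + X[:, 1]

def fitness(mask):
    """Cheap stand-in for the paper's ANN evaluator: correlation of the
    masked feature average with the target, minus a subset-size penalty."""
    if mask.sum() == 0:
        return 0.0
    z = X[:, mask.astype(bool)].mean(axis=1)
    return abs(np.corrcoef(z, y)[0, 1]) - 0.01 * mask.sum()

n_particles, n_feat, iters = 20, X.shape[1], 40
pos = rng.integers(0, 2, size=(n_particles, n_feat)).astype(float)
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))          # sigmoid transfer -> bit probabilities
    pos = (rng.random(pos.shape) < prob).astype(float)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()

selected = np.flatnonzero(gbest)               # indices of chosen features
```

    The inertia and acceleration coefficients (0.7, 1.5, 1.5) are common textbook defaults, not values from the paper.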

  6. MVMO-based approach for optimal placement and tuning of ...

    African Journals Online (AJOL)

    DR OKE

    differential evolution DE algorithm with adaptive crossover operator, .... x are assigned by using a sequential scheme which accounts for mean and ... the representative scenarios from probabilistic model based Monte Carlo ... Comparison of average convergence of MVMO-S with other metaheuristic optimization methods.

  7. A compensatory approach to optimal selection with mastery scores

    NARCIS (Netherlands)

    van der Linden, Willem J.; Vos, Hendrik J.

    1994-01-01

    This paper presents some Bayesian theories of simultaneous optimization of decision rules for test-based decisions. Simultaneous decision making arises when an institution has to make a series of selection, placement, or mastery decisions with respect to subjects from a population. An obvious

  8. Design of experiment approach for the process optimization of ...

    African Journals Online (AJOL)

    Mulberry is considered a food-medicine herb with specific nutritional and medicinal values. In this study, response surface methodology (RSM) was employed to optimize the ultrasonic-assisted extraction of total polysaccharide from mulberry using a Box-Behnken design (BBD). Based on single factor experiments, a three ...

  9. Multi-objective optimization approach for air traffic flow management

    Directory of Open Access Journals (Sweden)

    Fadil Rabie

    2017-01-01

    The decision-making stage was then performed with the aid of data clustering techniques to reduce the size of the Pareto-optimal set and obtain a smaller representation of the multi-objective design space, thereby making it easier for the decision-maker to find satisfactory and meaningful trade-offs, and to select a preferred final design solution.

  10. Optimal pricing in retail: a Cox regression approach

    NARCIS (Netherlands)

    Meijer, R.; Bhulai, S.

    2013-01-01

    Purpose: The purpose of this paper is to study the optimal pricing problem that retailers are challenged with when dealing with seasonal products. The friction between expected demand and realized demand creates a risk that supply during the season is not cleared, thus forcing the retailer to

  11. Taxing Strategies for Carbon Emissions: A Bilevel Optimization Approach

    Directory of Open Access Journals (Sweden)

    Wei Wei

    2014-04-01

    Full Text Available This paper presents a quantitative and computational method to determine the optimal tax rate among generating units. To strike a balance between the reduction of carbon emissions and the profit of the energy sectors, the proposed bilevel optimization model can be regarded as a Stackelberg game between the government agency and the generation companies. The upper level, which represents the government agency, aims to limit total carbon emissions within a certain level by setting optimal tax rates among generators according to their emission performances. The lower level, which represents the decision behavior of the grid operator, tries to minimize the total production cost under the tax rates set by the government. The bilevel optimization model is finally reformulated into a mixed integer linear program (MILP) which can be solved by off-the-shelf MILP solvers. Case studies on a 10-unit system as well as a provincial power grid in China demonstrate the validity of the proposed method and its capability in practical applications.
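    On a toy two-unit system, the Stackelberg structure can be emulated by brute force instead of the MILP reformulation: the lower level dispatches by tax-adjusted merit order, and the upper level searches for the smallest tax whose induced dispatch meets the emission cap. All unit data here are invented:

```python
# Hypothetical 2-unit system: (marginal cost $/MWh, emission t/MWh, capacity MW).
units = [(20.0, 1.0, 60.0),    # cheap but dirty
         (35.0, 0.2, 60.0)]    # expensive but clean
demand, emission_cap = 100.0, 60.0

def dispatch(tax):
    """Lower level: the operator stacks units by tax-adjusted marginal cost."""
    order = sorted(units, key=lambda u: u[0] + tax * u[1])
    left, cost, emissions = demand, 0.0, 0.0
    for c, e, cap in order:
        g = min(cap, left)      # generate up to capacity, then move down the stack
        left -= g
        cost += c * g
        emissions += e * g
    return cost, emissions

# Upper level: smallest tax on a coarse grid whose induced dispatch meets the cap
# (a brute-force stand-in for the paper's MILP reformulation).
best_tax = next(t for t in map(float, range(51)) if dispatch(t)[1] <= emission_cap)
```

    With these numbers the merit order flips once the tax exceeds the cost gap divided by the emission gap (15 / 0.8 = 18.75 $/t), so the grid search stops at a tax of 19.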

  12. Allomorphs in the Igbo Language: An Optimality Theory Approach ...

    African Journals Online (AJOL)

    Allomorphs are any two or more morphemes that have different forms but perform the same grammatical functions in different linguistic environments. The optimality theory claims that the Universal Grammar is a set of violable constraints and that language-specific grammars rank these constraints in language-specific ways.

  13. Optimal control of quantum systems: a projection approach

    International Nuclear Information System (INIS)

    Cheng, C.-J.; Hwang, C.-C.; Liao, T.-L.; Chou, G.-L.

    2005-01-01

    This paper considers the optimal control of quantum systems. The controlled quantum systems are described by the probability-density-matrix-based Liouville-von Neumann equation. Using projection operators, the states of the quantum system are decomposed into two sub-spaces, namely the 'main state' space and the 'remaining state' space. Since the control energy is limited, a solution for optimizing the external control force is proposed in which the main state is brought to the desired main state at a certain target time, while the population of the remaining state is simultaneously suppressed in order to diminish its effects on the final population of the main state. The optimization problem is formulated by maximizing a general cost functional of states and control force. An efficient algorithm is developed to solve the optimization problem. Finally, using the hydrogen fluoride (HF) molecular population transfer problem as an illustrative example, the effectiveness of the proposed scheme for a quantum system initially in a mixed state or in a pure state is investigated through numerical simulations

  14. TRACKING AND MONITORING OF TAGGED OBJECTS EMPLOYING PARTICLE SWARM OPTIMIZATION ALGORITHM IN A DEPARTMENTAL STORE

    Directory of Open Access Journals (Sweden)

    Indrajit Bhattacharya

    2011-05-01

    Full Text Available The present paper proposes a departmental store automation system based on Radio Frequency Identification (RFID) technology and the Particle Swarm Optimization (PSO) algorithm. The items in the departmental store, spanning different sections and multiple floors, are tagged with passive RFID tags. Each floor is divided into a number of zones depending on the different types of items placed in their respective racks. Each zone is equipped with one RFID reader, which constantly monitors the items in its zone and periodically sends that information to the application. The problem of systematic periodic monitoring of the store is addressed in this application, so that the locations, distributions and demands of every item in the store can be monitored intelligently. The proposed application is successfully demonstrated on a simulated case study.

  15. TH-E-209-02: Dose Monitoring and Protocol Optimization: The Pediatric Perspective

    International Nuclear Information System (INIS)

    MacDougall, R.

    2016-01-01

    Radiation dose monitoring solutions have opened up new opportunities for medical physicists to be more involved in modern clinical radiology practices. In particular, with the help of comprehensive radiation dose data, data-driven protocol management and informed case follow-up are now feasible. Significant challenges remain, however, and the problems faced by medical physicists are highly heterogeneous. Imaging systems from multiple vendors and a wide range of vintages co-exist in the same department and employ data communication protocols that are not fully standardized or implemented, making harmonization complex. Many different solutions for radiation dose monitoring have been implemented by imaging facilities over the past few years. Such systems are based on commercial software, home-grown IT solutions, manual PACS data dumping, etc., and diverse pathways can be used to bring the data to bear on clinical practice. The speakers will share their experiences with creating or tailoring radiation dose monitoring/management systems and procedures over the past few years, which vary significantly in design and scope. Topics to cover: (1) fluoroscopic dose monitoring and high radiation event handling at a large academic hospital; (2) dose monitoring and protocol optimization in pediatric radiology; and (3) development of a home-grown IT solution and dose data analysis framework. Learning Objectives: Describe the scope and range of radiation dose monitoring and protocol management in a modern radiology practice. Review examples of data available from a variety of systems and how the data is managed and conveyed. Reflect on the role of the physicist in radiation dose awareness.

  17. Non-intrusive long-term monitoring approaches

    International Nuclear Information System (INIS)

    Smathers, D.; Mangan, D.

    1998-01-01

    In order to promote international confidence that the US and Russia are disarming per their commitments under Article 6 of the Non-Proliferation Treaty, an international verification regime may be applied to US and Russian excess fissile materials. Initially, it is envisioned that this verification regime would be applied at storage facilities; however, it should be anticipated that the verification regime would continue throughout any material disposition activities, should such activities be pursued. Once the materials are accepted into the verification regime, it is assumed that long-term monitoring will be used to maintain continuity of knowledge. The requirements for long-term storage monitoring include unattended operation for extended periods of time, minimal intrusiveness on the host nation's safety and security activities, data collection incorporating data authentication, and monitoring redundancy to allow resolution of anomalies and continued coverage in the event of equipment failures. Additional requirements include effective data review and analysis processes, operation during storage facility loading, procedures for removal of inventory items for safety-related surveillance, and low cost, reliable equipment. A monitoring system might include both continuous monitoring of storage containers and continuous area monitoring. These would be complemented with periodic on-site inspections. A fissile material storage facility is not a static operation: the initial studies have shown there are a number of valid reasons why a host nation may need to remove material from the storage facility. A practical monitoring system must be able to accommodate necessary material movements.

  18. A Robust Optimization Approach for Improving Service Quality

    OpenAIRE

    Andreas C. Soteriou; Richard B. Chase

    2000-01-01

    Delivering high quality service during the service encounter is central to competitive advantage in service organizations. However, achieving such high quality while controlling for costs is a major challenge for service managers. The purpose of this paper is to present an approach for addressing this challenge. The approach entails developing a model linking service process operational variables to service quality metrics to provide guidelines for service resource allocation. The approach en...

  19. Optimization of monitoring sewage with radionuclide contaminants. Optimizatsiya kontroya stochnykh vod, zagryaznennykh radionuklidami

    Energy Technology Data Exchange (ETDEWEB)

    Egorov, V N [Vsesoyuznyj Nauchno-Issledovatel' skij Inst. Neorganicheskikh Materialov, Moscow (Russian Federation)

    1991-03-01

    Recommendations are presented on optimizing the monitoring of contaminated sewage, aimed at protecting the environment against radioactive contamination at minimum cost. The choice of water sampling technique depends on the stability of the water composition and on the flow rate. Depending on the type of radionuclide distribution in the sewage, one can estimate the minimum sampling frequency, or the number of samples sufficient to assure the reliability of the conclusion on whether permissible radioactive contamination levels are exceeded, as well as the assigned accuracy of the analysis. Where contaminated sewage is discharged irregularly and short-term releases of various forms and durations are possible, sampling should be performed by automatic devices operating continuously or periodically.
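    Under a normal approximation, the "number of samples sufficient for the assigned accuracy" idea reduces to the classic sample-size formula n = (z·sigma / E)^2. A sketch with invented effluent figures:

```python
import math

def min_samples(sigma, margin, z=1.96):
    """Minimum number of samples so that the mean activity concentration
    is known to within +/- margin at ~95% confidence (normal approximation).
    sigma and margin values below are invented for illustration."""
    return math.ceil((z * sigma / margin) ** 2)

# Invented figures: standard deviation 12 Bq/L, target margin +/- 5 Bq/L.
n = min_samples(sigma=12.0, margin=5.0)   # -> 23 samples
interval_days = 365 // n                  # roughly even spacing over a year
```

    The quartile-style sampling intervals reported in such studies come from the empirical spread of the data rather than this idealized formula, but the trade-off (tighter margin, more samples) is the same.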

  20. Plug-and-play monitoring and performance optimization for industrial automation processes

    CERN Document Server

    Luo, Hao

    2017-01-01

    Dr.-Ing. Hao Luo demonstrates the development of advanced plug-and-play (PnP) process monitoring and control systems for industrial automation processes. With the aid of the so-called Youla parameterization, a novel PnP process monitoring and control architecture (PnP-PMCA) with modularized components is proposed. To validate the developments, a case study on an industrial rolling mill benchmark is performed, and a real-time implementation on a laboratory brushless DC motor is presented. Contents: PnP Process Monitoring and Control Architecture; Real-Time Configuration Techniques for PnP Process Monitoring; Real-Time Configuration Techniques for PnP Performance Optimization; Benchmark Study and Real-Time Implementation. Target Groups: Researchers and students of Automation and Control Engineering; Practitioners in the area of Industrial and Production Engineering. The Author: Hao Luo received the Ph.D. degree at the Institute for Automatic Control and Complex Systems (AKS) at the University of Duisburg-Essen, Germany, ...

  1. Ozone Measurements Monitoring Using Data-Based Approach

    KAUST Repository

    Harrou, Fouzi; Kadri, Farid; Khadraoui, Sofiane; Sun, Ying

    2016-01-01

    The complexity of ozone (O3) formation mechanisms in the troposphere makes fast and accurate modeling of ozone very challenging. In the absence of a process model, principal component analysis (PCA) has been extensively used as a data-based monitoring technique for highly correlated process variables; however, conventional PCA-based detection indices often fail to detect small or moderate anomalies. In this work, we propose an innovative method for detecting small anomalies in highly correlated multivariate data. The developed method combines the multivariate exponentially weighted moving average (MEWMA) monitoring scheme with PCA modelling in order to enhance anomaly detection performance. Such a choice is mainly motivated by the greater ability of the MEWMA monitoring scheme to detect small changes in the process mean. The proposed PCA-based MEWMA monitoring scheme is successfully applied to ozone measurement data collected from the Upper Normandy region, France, via the network of air quality monitoring stations. The detection results of the proposed method are compared with those reported by the Air Normand air monitoring association.
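    A minimal sketch of the PCA-plus-MEWMA idea on synthetic data, with one retained component and an assumed control limit (not the paper's tuned configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "in control" training data: three strongly correlated variables.
base = rng.normal(size=(300, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(300, 1)) for _ in range(3)])
mean, std = X.mean(axis=0), X.std(axis=0)
Xs = (X - mean) / std

# PCA model: keep the first principal component.
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
P = Vt[:1].T                                   # loadings, shape (3, 1)
score_var = np.var(Xs @ P, axis=0, ddof=1)

def mewma_on_scores(Y, lam=0.2):
    """z_t = lam * t_t + (1 - lam) * z_{t-1} on the PCA scores; the charted
    statistic is z_t^2 scaled by the exact EWMA variance at step t."""
    scores = ((Y - mean) / std) @ P
    z = np.zeros(P.shape[1])
    stats = []
    for i, t in enumerate(scores, start=1):
        z = lam * t + (1 - lam) * z
        w = lam * (1 - (1 - lam) ** (2 * i)) / (2 - lam)
        stats.append(float((z ** 2 / (score_var * w)).sum()))
    return np.array(stats)

# New data with a sustained mean shift along the correlation structure.
Y = np.hstack([base[:50] + 0.1 * rng.normal(size=(50, 1)) for _ in range(3)]) + 1.5
stats = mewma_on_scores(Y)
alarm = bool(stats.max() > 10.8)               # assumed chi-square-style limit
```

    Because the EWMA accumulates the shift over successive samples, the statistic crosses the limit even when the shift at any single sample would look unremarkable on a memoryless chart.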

  3. Optimization based tuning approach for offset free MPC

    DEFF Research Database (Denmark)

    Olesen, Daniel Haugård; Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp

    2012-01-01

    We present an optimization based tuning procedure with certain robustness properties for an offset free Model Predictive Controller (MPC). The MPC is designed for multivariate processes that can be represented by an ARX model. The advantage of ARX model representations is that standard system identification techniques using convex optimization can be used to identify such models from input-output data. The stochastic part of the ARX model identified from input-output data is modified with an ARMA model designed as part of the MPC design procedure to ensure offset-free control. The ARMAX model description resulting from this extension can be realized as a state space model in innovation form. The MPC is designed and implemented based on this state space model in innovation form. Expressions for the closed-loop dynamics of the unconstrained system are used to derive the sensitivity...
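    The ARX identification step is ordinary least squares on lagged inputs and outputs. A sketch that recovers the parameters of a known, noise-free second-order system (the system coefficients are made up for the demonstration):

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares ARX fit for
    y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]."""
    k0 = max(na, nb)
    rows, target = [], []
    for k in range(k0, len(y)):
        # regressor: [-y[k-1], ..., -y[k-na], u[k-1], ..., u[k-nb]]
        rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
        target.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(target), rcond=None)
    return theta[:na], theta[na:]

# Simulate a known, noise-free system and recover its parameters.
rng = np.random.default_rng(3)
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 1.2 * y[k - 1] - 0.5 * y[k - 2] + 0.8 * u[k - 1] + 0.3 * u[k - 2]
a, b = fit_arx(u, y)    # expect a close to [-1.2, 0.5], b close to [0.8, 0.3]
```

    This is an unweighted least-squares fit; it says nothing about the ARMA noise-model extension or the closed-loop tuning the paper builds on top of the identified model.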

  4. Handbook of Optimization From Classical to Modern Approach

    CERN Document Server

    Snášel, Václav; Abraham, Ajith

    2013-01-01

    Optimization problems were and still are the focus of mathematics from antiquity to the present. Since the beginning of our civilization, the human race has had to confront numerous technological challenges, such as finding the optimal solution of various problems including control technologies, power source construction, applications in economy, mechanical engineering and energy distribution amongst others. These examples encompass both ancient as well as modern technologies like the first electrical energy distribution network in the USA etc. Some of the key principles were formulated by Johannes Kepler (problem of the wine barrels), Johann Bernoulli (brachistochrone problem), Leonhard Euler (calculus of variations) and Lagrange (principle of multipliers), problems that are primarily of a geometric nature. At the beginning of the modern era, the works of L.V. Kantorovich and G.B. Dantzig (so-called linear programming) can be considered among others. This book disc...

  5. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    Science.gov (United States)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator, while assuming samples with a heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing the shrinkage intensity online. Our portfolio optimization method is shown via simulations to outperform existing methods on both synthetic and real market data.
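    The flavour of the estimator can be illustrated with plain linear shrinkage toward a scaled identity. Note the simplifications: the paper combines Tyler's M-estimator with Ledoit-Wolf shrinkage and tunes the intensity online, whereas here the intensity is fixed and arbitrary:

```python
import numpy as np

def shrunk_gmv_weights(returns, alpha=0.3):
    """Global-minimum-variance weights with the sample covariance shrunk
    toward a scaled identity. alpha is a fixed, arbitrary intensity here;
    the paper's method is robust (Tyler + Ledoit-Wolf) and data-driven."""
    S = np.cov(returns, rowvar=False)
    p = S.shape[0]
    target = (np.trace(S) / p) * np.eye(p)
    C = (1.0 - alpha) * S + alpha * target    # linear shrinkage estimator
    w = np.linalg.solve(C, np.ones(p))        # unnormalized GMV direction
    return w / w.sum()                        # enforce the budget constraint

# Heavy-tailed synthetic returns with N of the same order as T.
rng = np.random.default_rng(7)
R = 0.01 * rng.standard_t(df=3, size=(60, 10))
w = shrunk_gmv_weights(R)
```

    Shrinking toward the identity regularizes the ill-conditioned sample covariance when T is close to N, which is exactly the regime the random-matrix analysis addresses.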

  6. A New Interpolation Approach for Linearly Constrained Convex Optimization

    KAUST Repository

    Espinoza, Francisco

    2012-08-01

    In this thesis we propose a new class of Linearly Constrained Convex Optimization methods based on the use of a generalization of Shepard's interpolation formula. We prove the properties of the surface, such as the interpolation property at the boundary of the feasible region and the convergence of the gradient to the null space of the constraints at the boundary. We explore several descent techniques, such as steepest descent, two quasi-Newton methods and Newton's method. Moreover, we implement several versions of the method in the Matlab language, particularly for the case of Quadratic Programming with bounded variables. Finally, we carry out performance tests against Matlab Optimization Toolbox methods for convex optimization and implementations of the standard log-barrier and active-set methods. We conclude that the steepest descent technique seems to be the best choice so far for our method and that it is competitive with other standard methods both in performance and empirical growth order.
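    The Shepard interpolation that the thesis generalizes is an inverse-distance weighted average; a minimal 1-D sketch (the thesis works with a multivariate generalization over the feasible region, which is not reproduced here) illustrates the interpolation property at the data points:

```python
def shepard(x, nodes, values, p=2):
    """Shepard inverse-distance-weighted interpolation in 1-D (sketch).

    f(x) = sum_i f_i / d_i^p  /  sum_i 1 / d_i^p, with d_i = |x - x_i|.
    At a data node the weight of that node dominates, so the formula
    reproduces the data exactly (interpolation property).
    """
    num = den = 0.0
    for xi, fi in zip(nodes, values):
        d = abs(x - xi)
        if d == 0.0:
            return fi            # exact interpolation at the nodes
        w = 1.0 / d ** p
        num += w * fi
        den += w
    return num / den
```

    Between nodes the value is a convex combination of the data, so it stays within the data range.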

  7. Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method

    Science.gov (United States)

    Zhang, Xiangnan

    2018-03-01

    A lot of uncertain factors lie in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions, boundary conditions, etc. Reliability methods measure the structural safety condition and determine the optimal combination of design parameters based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory with optimization, is the most commonly used approach to minimize the structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach is utilized to incorporate this kind of incomplete information in its uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are introduced.

  8. Optimization of PZT ceramic IDT sensors for health monitoring of structures.

    Science.gov (United States)

    Takpara, Rafatou; Duquennoy, Marc; Ouaftouh, Mohammadi; Courtois, Christian; Jenot, Frédéric; Rguiti, Mohamed

    2017-08-01

    Surface acoustic waves (SAW) are particularly suited to effectively monitoring and characterizing structural surfaces (condition of the surface, coatings, thin layers, micro-cracks…) as their energy is localized on the surface, within approximately one wavelength. Conventionally, in non-destructive testing, wedge sensors are used to generate guided waves, but they are especially suited to flat surfaces and sized for a given material type (angle of refraction). Additionally, these sensors are quite expensive, so it is difficult to leave them permanently on the structure for health monitoring. In this study we therefore consider another type of ultrasonic sensor able to generate SAW: interdigital, or IDT (InterDigital Transducer), sensors. This paper focuses on the optimization of IDT sensors for non-destructive structural testing using PZT ceramics. The challenge was to optimize the dimensional parameters of the IDT sensors in order to generate surface waves efficiently. Acoustic tests then confirmed these parameters.

  9. Online total organic carbon (TOC) monitoring for water and wastewater treatment plants processes and operations optimization

    Science.gov (United States)

    Assmann, Céline; Scott, Amanda; Biller, Dondra

    2017-08-01

    Organic measurements, such as biological oxygen demand (BOD) and chemical oxygen demand (COD), were developed decades ago in order to measure organics in water. Today, these time-consuming measurements are still used as parameters to check water treatment quality; however, the time required to generate a result, ranging from hours to days, does not allow COD or BOD to be useful process control parameters - see (1) Standard Method 5210 B; 5-day BOD Test, 1997, and (2) ASTM D1252; COD Test, 2012. Online organic carbon monitoring allows for effective process control because results are generated every few minutes. Though it does not replace the BOD or COD measurements still required for compliance reporting, it allows for smart, data-driven and rapid decision-making to improve process control and optimization or to meet compliance requirements. Thanks to smart interpretation of the generated data and the capability to take real-time actions, municipal drinking water and wastewater treatment facility operators can positively impact their OPEX (operational expenditure) efficiency and their ability to meet regulatory requirements. This paper describes how three municipal wastewater and drinking water plants gained process insights and identified optimization opportunities through the implementation of online total organic carbon (TOC) monitoring.

  10. Observer variability and optimal criteria of transient ischemia during ST monitoring with continuous 12-lead ECG.

    Science.gov (United States)

    Jernberg, Tomas; Cronblad, Jörgen; Lindahl, Bertil; Wallentin, Lars

    2002-07-01

    ST monitoring with continuous 12-lead ECG is a well-established method in patients with unstable coronary artery disease (CAD). However, the method lacks documentation on optimal criteria for episodes of transient ischemia and on observer variability. Observer variability was evaluated in 24-hour recordings from 100 patients with unstable CAD monitored in the coronary care unit. The influence of variations in body position on ST changes was evaluated by monitoring 50 patients in different body positions. Different criteria for transient ischemia and their predictive importance were evaluated in 630 patients with unstable CAD who underwent 12 hours of monitoring and were thereafter followed for 1 to 13 months. Two sets of criteria were tested: (1) ST deviation ≥ 0.1 mV for at least 1 minute, and (2) ST depression ≥ 0.05 mV or elevation ≥ 0.1 mV for at least 1 minute. When the first set of criteria was used, the interobserver agreement was good (kappa = 0.72) and 8 patients (16%) had significant ST changes in at least one body position. Of 100 patients with symptoms suggestive of unstable CAD and such ischemia, 24 (24%) had a cardiac event during follow-up. When the second set of criteria was used, the interobserver agreement was poor (kappa = 0.32) and 21 patients (42%) had significant ST changes in at least one body position. Patients fulfilling the second but not the first set of criteria did not have a higher risk of cardiac events than those without transient ischemia (5.3 vs 4.3%). During 12-lead ECG monitoring, transient ischemic episodes should therefore be defined as ST deviations ≥ 0.1 mV for at least 1 minute, based on low observer variability, minor problems with postural ST changes and an important predictive value.
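    The recommended criterion — ST deviation of at least 0.1 mV sustained for at least 1 minute — is straightforward to apply to a sampled ST-deviation trend. A hypothetical sketch (the sampling interval and data layout are assumptions for illustration, not taken from the study):

```python
def ischemic_episodes(st_mv, dt_s, threshold_mv=0.1, min_dur_s=60):
    """Flag transient-ischemia episodes: |ST deviation| >= threshold_mv
    sustained for at least min_dur_s seconds (the study's first criterion).

    st_mv: ST-deviation samples in mV, one value every dt_s seconds.
    Returns a list of (start_index, end_index_exclusive) pairs.
    """
    episodes, start = [], None
    for i, v in enumerate(st_mv):
        if abs(v) >= threshold_mv:
            if start is None:
                start = i                       # episode candidate begins
        else:
            if start is not None and (i - start) * dt_s >= min_dur_s:
                episodes.append((start, i))     # long enough: keep it
            start = None
    if start is not None and (len(st_mv) - start) * dt_s >= min_dur_s:
        episodes.append((start, len(st_mv)))    # episode runs to end of record
    return episodes
```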

  11. An Evolutionary Multi-objective Approach for Speed Tuning Optimization with Energy Saving in Railway Management

    OpenAIRE

    Chevrier , Rémy

    2010-01-01

    An approach for speed tuning in railway management is presented for optimizing both travel duration and energy saving. This approach is based on a state-of-the-art evolutionary algorithm with a Pareto approach. The algorithm provides a set of diversified non-dominated solutions to the decision-maker. A case study on the Gonesse connection (France) is also reported and analyzed.

  12. Condition monitoring and thermo economic optimization of operation for a hybrid plant using artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Assadi, Mohsen; Fast, Magnus (Lund University, Dept. of Energy Sciences, Lund (Sweden))

    2008-05-15

    The project aim is to model the hybrid plant at Vaesthamnsverket in Helsingborg using artificial neural networks (ANN) and integrating the ANN models, for online condition monitoring and thermo economic optimization, on site. The definition of a hybrid plant is that it uses more than one fuel, in this case a natural gas fuelled gas turbine with heat recovery steam generator (HRSG) and a biomass fuelled steam boiler with steam turbine. The thermo economic optimization takes into account current electricity prices, taxes, fuel prices etc. and calculates the current production cost along with the 'predicted' production cost. The tool also has a built in feature of predicting when a compressor wash is economically beneficial. The user interface is developed together with co-workers at Vaesthamnsverket to ensure its usefulness. The user interface includes functions for warnings and alarms when possible deviations in operation occur and also includes a feature for plotting parameter trends (both measured and predicted values) in selected time intervals. The target group is the plant owners and the original equipment manufacturers (OEM). The power plant owners want to acquire a product for condition monitoring and thermo economic optimization of e.g. maintenance. The OEMs main interest lies in investigating the possibilities of delivering ANN models, for condition monitoring, along with their new gas turbines. The project has been carried out at Lund University, Department of Energy Sciences, with support from Vaesthamnsverket AB and Siemens Industrial Turbomachinery AB. Vaesthamnsverket has contributed with operational data from the plant as well as support in plant related questions. They have also been involved in the implementation of the ANN models in their computer system and the development of the user interface. Siemens have contributed with expert knowledge about their SGT800 gas turbine. The implementation of the ANN models, and the accompanying user

  13. RF cavity design exploiting a new derivative-free trust region optimization approach

    Directory of Open Access Journals (Sweden)

    Abdel-Karim S.O. Hassan

    2015-11-01

    Full Text Available In this article, a novel derivative-free (DF) surrogate-based trust region optimization approach is proposed. In the proposed approach, quadratic surrogate models are constructed and successively updated. The generated surrogate model is then optimized, instead of the underlying objective function, over trust regions. Truncated conjugate gradients are employed to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using few data points, of order O(n), where n is the number of design variables. The proposed approach adopts weighted least-squares fitting for updating the surrogate model instead of the interpolation commonly used in DF optimization. This makes the approach more suitable for stochastic optimization and for functions subject to numerical error. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical benchmark test problems. It is also employed to find the optimal design of an RF cavity linear accelerator, with a comparative analysis against a recent optimization technique.
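    The weighted least-squares surrogate fit can be illustrated in two variables. The exponential weight form and the decay parameter below are assumptions for illustration, not the article's exact weighting scheme:

```python
import numpy as np

def fit_quadratic_surrogate(X, f, center, decay=1.0):
    """Weighted least-squares quadratic surrogate in 2-D (illustrative
    sketch, not the article's exact update rule). Points close to `center`
    receive larger weights, as in the approach described above.

    Model: m(x) = c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x1*x2 + c5*x2^2
    Returns the six coefficients c0..c5.
    """
    X = np.asarray(X, float)
    f = np.asarray(f, float)
    basis = np.column_stack([
        np.ones(len(X)), X[:, 0], X[:, 1],
        X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2,
    ])
    d = np.linalg.norm(X - np.asarray(center, float), axis=1)
    w = np.exp(-decay * d)                    # assumed distance-decay weights
    coef, *_ = np.linalg.lstsq(basis * w[:, None], w * f, rcond=None)
    return coef
```

    On exactly quadratic data the weighted fit recovers the true coefficients regardless of the weights, which is a convenient sanity check.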

  14. A Gradient Optimization Approach to Adaptive Multi-Robot Control

    Science.gov (United States)

    2009-09-01


  15. Optimization approach for saddling cost of medical cyclotrons with fuzziness

    International Nuclear Information System (INIS)

    Abass, S.A.; Massoud, E.M.A.

    2007-01-01

    Most radiation fields are combinations of different kinds of radiation. The radiations of most significance are fast neutrons, thermal neutrons, primary gammas and secondary gammas. Thermos's composite shielding materials are designed to attenuate these types of radiation. The shielding design requires an accurate cost-benefit analysis based on an uncertainty optimization technique. The theory of fuzzy sets has been employed to formulate and solve the problem of cost-benefit analysis of a medical cyclotron. This medical radioisotope production cyclotron is based in Sydney, Australia.

  16. Fuel failure monitoring system design approach for KALIMER

    International Nuclear Information System (INIS)

    Song, Soon Ja; Hwang, I. K.; Kwon, Kee Choon

    1998-01-01

    The Fuel Failure Monitoring System (FFMS) detects fission gas and locates failed fuels in a liquid metal reactor. The system comprises three subsystems: delayed neutron monitoring, cover gas monitoring, and gas tagging. The purpose of the system is to improve the integrity and availability of the liquid metal plant. In this paper, the FFMS is analyzed with respect to detection method and compared with various existing liquid metal plants. Sampling and detection methods were classified by specific plant type. Several of these technologies are well established and used in most liquid metal reactors. Detection technology and analysis performance, however, must be improved to incorporate new technology when a liquid metal plant is built, although the overall FFMS design scheme will not change. This paper therefore suggests a design for the FFMS of KALIMER (Korea Advanced LIquid MEtal Reactor).

  17. Multiagent Task Coordination Using a Distributed Optimization Approach

    Science.gov (United States)

    2015-09-01


  18. A Study on the Optimal Positions of ECG Electrodes in a Garment for the Design of ECG-Monitoring Clothing for Male.

    Science.gov (United States)

    Cho, Hakyung; Lee, Joo Hyeon

    2015-09-01

    Smart clothing is a sort of wearable device used for ubiquitous health monitoring. It provides comfort and efficiency in vital sign measurements and has been studied and developed in various types of monitoring platforms such as T-shirts and sports bras. However, despite these previous approaches, smart clothing for electrocardiography (ECG) monitoring has encountered a serious shortcoming related to motion artifacts caused by wearer movement. In effect, motion artifacts are one of the major problems in the practical implementation of most wearable health-monitoring devices. In the ECG measurements collected by a garment, motion artifacts are usually caused by improper location of the electrodes, leading to a lack of contact between electrode and skin during body motion. The aim of this study was to suggest a design for ECG-monitoring clothing that contributes to the reduction of motion artifacts. Based on clothing science theory, it was assumed in this study that the stability of an electrode in a dynamic state differs depending on its location in an ECG-monitoring garment. Founded on this assumption, the effects of 56 electrode positions were determined by sectioning the surface of the garment into grids with 6 cm intervals in the front and back of the bodice. In order to determine the optimal locations of the ECG electrodes among the 56 positions, ECG measurements were collected from 10 participants at every electrode position in the garment while the wearer was in motion. The electrode locations showing both an ECG measurement rate higher than 80.0% and a large amplitude during motion were selected as the optimal electrode locations. The results of this analysis show four electrode locations with consistently higher ECG measurement rates and larger amplitudes among the 56 locations. These four locations were determined to be the least affected by wearer movement in this research. Based on this result, a design of the garment-formed ECG monitoring platform

  19. An iterative approach for the optimization of pavement maintenance management at the network level.

    Science.gov (United States)

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and in how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply the available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios defining the suitability of each approach are presented. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.
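    The "selection based on ranking" class of methods mentioned above can be sketched as a greedy benefit-cost ranking under a budget. This is a generic illustration of that method class, not the paper's formulation:

```python
def rank_and_select(sections, budget):
    """'Selection based on ranking' sketch: order candidate treatments by
    benefit-to-cost ratio and fund them until the budget is exhausted.
    Fast and intuitive but, as noted above, possibly far from the
    network-level optimum.

    sections: list of (name, cost, benefit) tuples.
    Returns the names of the funded treatments, in selection order.
    """
    ranked = sorted(sections, key=lambda s: s[2] / s[1], reverse=True)
    chosen, remaining = [], budget
    for name, cost, benefit in ranked:
        if cost <= remaining:       # fund it if it still fits the budget
            chosen.append(name)
            remaining -= cost
    return chosen
```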

  20. An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level

    Directory of Open Access Journals (Sweden)

    Cristina Torres-Machí

    2014-01-01

    Full Text Available Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and in how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply the available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios defining the suitability of each approach are presented. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.

  1. A triaxial accelerometer monkey algorithm for optimal sensor placement in structural health monitoring

    Science.gov (United States)

    Jia, Jingqing; Feng, Shuo; Liu, Wei

    2015-06-01

    The optimal sensor placement (OSP) technique is a vital part of the field of structural health monitoring (SHM). Triaxial accelerometers have been widely used in the SHM of large-scale structures in recent years. Triaxial accelerometers must be placed in such a way that all of the important dynamic information is obtained. At the same time, the sensor configuration must be optimal, so that test resources are conserved. The recommended practice is to select proper degrees of freedom (DOFs) based upon several criteria and to place the triaxial accelerometers at the nodes corresponding to these DOFs. This results in non-optimal placement of many accelerometers. A ‘triaxial accelerometer monkey algorithm’ (TAMA) is presented in this paper to solve OSP problems for triaxial accelerometers. The EFI3 measurement theory is modified and incorporated into the objective function to make it more adaptable to the OSP of triaxial accelerometers. A method of calculating the threshold value based on probability theory is proposed to improve the healthy rate of monkeys in the troop generation process. Meanwhile, the processes of harmony ladder climb and scanning watch jump are proposed and given in detail. Finally, the Xinghai No. 1 Bridge in Dalian is used to demonstrate the effectiveness of TAMA. The final results obtained by TAMA are compared with those of the original monkey algorithm and the EFI3 measurement, which show that TAMA can improve computational efficiency and obtain a better sensor configuration.
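    The Effective Independence criterion that EFI3 builds on can be sketched as follows. This is plain EFI on a generic mode-shape matrix, not the triaxial EFI3 variant or TAMA itself:

```python
import numpy as np

def efi_select(phi, n_sensors):
    """Effective Independence (EFI) sensor selection sketch.

    phi: mode-shape matrix, one row per candidate DOF, one column per mode.
    Iteratively deletes the DOF contributing least to the Fisher
    information until n_sensors rows remain; returns kept row indices.
    """
    keep = list(range(phi.shape[0]))
    while len(keep) > n_sensors:
        P = phi[keep]
        # ED_i = diag(P (P'P)^{-1} P'): each DOF's independence contribution
        ed = np.einsum('ij,ji->i', P, np.linalg.solve(P.T @ P, P.T))
        keep.pop(int(np.argmin(ed)))   # drop the least informative DOF
    return keep
```

    DOFs whose mode-shape rows carry almost no modal information get near-zero ED values and are removed first.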

  2. Optimal wind power deployment in Europe. A portfolio approach

    International Nuclear Information System (INIS)

    Roques, Fabien; Hiroux, Celine; Saguan, Marcelo

    2010-01-01

    Geographic diversification of wind farms can smooth out the fluctuations in wind power generation and reduce the associated system balancing and reliability costs. The paper uses historical wind production data from five European countries (Austria, Denmark, France, Germany, and Spain) and applies the Mean-Variance Portfolio theory to identify cross-country portfolios that minimise the total variance of wind production for a given level of production. Theoretical unconstrained portfolios show that countries (Spain and Denmark) with the best wind resource or whose size contributes to smoothing out the country output variability dominate optimal portfolios. The methodology is then elaborated to derive optimal constrained portfolios taking into account national wind resource potential and transmission constraints and compare them with the projected portfolios for 2020. Such constraints limit the theoretical potential efficiency gains from geographical diversification, but there is still considerable room to improve performance from actual or projected portfolios. These results highlight the need for more cross-border interconnection capacity, for greater coordination of European renewable support policies, and for renewable support mechanisms and electricity market designs providing locational incentives. Under these conditions, a mechanism for renewables credits trading could help aligning wind power portfolios with the theoretically efficient geographic dispersion. (author)
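    For two regions, the variance-minimising capacity split used in Mean-Variance Portfolio theory has a closed form. A sketch with illustrative inputs, not the paper's historical production data:

```python
def min_variance_split(sd1, sd2, rho):
    """Share of wind capacity in region 1 that minimises the variance of
    combined output, given each region's production standard deviation and
    their correlation (rho < 1 assumed so the denominator is nonzero).
    Clipped to [0, 1] for a long-only share.
    """
    cov = rho * sd1 * sd2
    w1 = (sd2 ** 2 - cov) / (sd1 ** 2 + sd2 ** 2 - 2 * cov)
    return min(max(w1, 0.0), 1.0)

def portfolio_sd(w1, sd1, sd2, rho):
    """Standard deviation of the combined output for a given split."""
    var = (w1 * sd1) ** 2 + ((1 - w1) * sd2) ** 2 \
        + 2 * w1 * (1 - w1) * rho * sd1 * sd2
    return var ** 0.5
```

    With two uncorrelated regions of equal variability, the optimal split is 50/50 and the combined output is noticeably smoother than either region alone, which is the smoothing effect the paper quantifies across five countries.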

  3. Multiobjective Optimization Modeling Approach for Multipurpose Single Reservoir Operation

    Directory of Open Access Journals (Sweden)

    Iosvany Recio Villa

    2018-04-01

    Full Text Available The water resources planning and management discipline recognizes the importance of a reservoir's carryover storage. However, mathematical models for reservoir operation that include carryover storage are scarce. This paper presents a novel multiobjective optimization modeling framework that uses the ε-constraint method and genetic algorithms as optimization techniques for the operation of multipurpose single reservoirs, including carryover storage. The carryover storage was conceived by modifying Kritsky and Menkel's method for reservoir design at the operational stage. The main objective function minimizes the cost of the total annual water shortage for irrigation areas connected to the reservoir, while the secondary one maximizes its energy production. The model includes operational constraints for the reservoir, Kritsky and Menkel's method, the irrigation areas, and the hydropower plant. The study is applied to the Carlos Manuel de Céspedes reservoir, establishing a 12-month planning horizon and an annual reliability of 75%. The results clearly demonstrate the applicability of the model, yielding monthly releases from the reservoir that include the carryover storage, the degree of reservoir inflow regulation, water shortages in irrigation areas, and the energy generated by the hydroelectric plant. The main product is an operational graph that includes zones as well as rule and guide curves, which are used as triggers for long-term reservoir operation.
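    The ε-constraint idea — optimise the primary objective while bounding the secondary one, then sweep the bound to trace the Pareto front — can be sketched on a made-up single-period water-allocation toy. The model below is purely illustrative, not the Kritsky and Menkel formulation or the paper's reservoir model:

```python
def eps_constraint_front(total_water=10.0, demand=8.0, k=1.0, steps=21):
    """Toy epsilon-constraint sweep: minimise irrigation-shortage cost
    subject to hydropower energy >= eps, for a range of eps targets.

    Water x goes to irrigation, the remainder to the turbine (energy
    k * (total_water - x)); shortage cost is (demand - x)^2 when short.
    Returns a list of (eps, minimal shortage cost) pairs: the Pareto front.
    """
    grid = [total_water * i / (steps - 1) for i in range(steps)]
    front = []
    for eps in [k * total_water * j / 10 for j in range(11)]:
        feasible = [x for x in grid if k * (total_water - x) >= eps]
        if not feasible:
            continue                      # energy target unattainable
        best = min(feasible, key=lambda x: max(0.0, demand - x) ** 2)
        front.append((eps, max(0.0, demand - best) ** 2))
    return front
```

    As the energy target eps tightens, less water is available for irrigation, so the minimal shortage cost is non-decreasing along the front — the trade-off a decision-maker reads off the Pareto set.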

  4. Compound light ion fuel cycles: An approach to optimization

    International Nuclear Information System (INIS)

    Kernbichler, W.; Heindler, M.

    1985-01-01

    The relatively high complexity and the low power density anticipated for fusion reactors have produced differing attitudes towards the long-term perspective of fusion as a commercial energy source. The favourite pathway is to trust in optimization, aiming at low tritium inventory, the availability of low-activation structural materials, increased redundancy, etc. In contrast, a respectable minority suggests turning away from d-t fusion, or envisaging fusion as a powerful neutron source rather than an energy source (fusion as a fissile-fuel or synfuel factory). We here intend to investigate the potential of fusion based on alternatives to d-t fuel. Such so-called ''advanced fuels'' require higher burn temperatures and advanced reactor concepts (high-beta confinement schemes) to compensate for their inherently lower reactivities. The experience gained in fusion-oriented plasma research admittedly justifies optimism for advanced fuels to a still lesser extent than for d-t. It can however be argued that it may pay off to choose a developmental direction with a higher risk of failure but aiming at a more desirable end product. In order to explore this eventual desirability of advanced fuel fusion, we assume, as has been done in the case of d-t, that the first category of problems can be successfully handled. Our goal is thus to examine the potential of advanced fuels with respect to the second category of problems, which largely determines the attractiveness of their utilization in fusion reactors.

  5. A New Reversible Database Watermarking Approach with Firefly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Mustafa Bilgehan Imamoglu

    2017-01-01

    Full Text Available Up-to-date information is crucial in many fields such as medicine, science, and the stock market, where data should be distributed to clients from a centralized database. Shared databases are usually stored in data centers, where they are distributed over an insecure public access network, the Internet. Sharing may result in a number of problems such as unauthorized copies, alteration of data, and distribution to unauthorized people for reuse. Researchers have proposed using watermarking to prevent these problems and claim digital rights. Many methods have been proposed recently to watermark databases to protect the digital rights of owners. In particular, optimization-based watermarking techniques draw attention, as they result in lower distortion and improved watermark capacity. Difference expansion watermarking (DEW) with the Firefly Algorithm (FFA), a bioinspired optimization technique, is proposed in this work to embed watermarks into relational databases. The attribute values that yield lower distortion and increased watermark capacity are selected efficiently by the FFA. Experimental results indicate that the FFA has reduced complexity and results in less distortion and improved watermark capacity compared to similar works reported in the literature.
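    The difference-expansion primitive that DEW builds on is reversible integer arithmetic on value pairs. A sketch of Tian-style embedding and extraction (the FFA attribute-selection step, which is the paper's contribution, is omitted):

```python
def de_embed(x, y, bit):
    """Difference expansion on one integer pair: expand the difference
    h = x - y to h' = 2h + bit while keeping the integer average l
    unchanged, so the operation is exactly reversible.
    """
    l, h = (x + y) // 2, x - y
    h2 = 2 * h + bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the embedded bit and the original pair from a
    watermarked pair produced by de_embed."""
    l, h2 = (x2 + y2) // 2, x2 - y2
    bit, h = h2 % 2, h2 // 2
    return bit, (l + (h + 1) // 2, l - h // 2)
```

    Floor division keeps the round trip exact for negative differences as well, which is why the integer-average/difference decomposition is used rather than a plain halving.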

  6. Optimal PID settings for first and second-order processes - Comparison with different controller tuning approaches

    OpenAIRE

    Pappas, Iosif

    2016-01-01

    PID controllers are extensively used in industry. Although many tuning methodologies exist, finding good controller settings is not an easy task, and optimization-based design is frequently preferred to satisfy more complex criteria. In this thesis, the focus was to find which tuning approaches, if any, exhibit close-to-optimal behavior. Pareto-optimal controllers were found for different first and second-order processes with time delay. Performance was quantified in terms of the integrat...

  7. Ultra low power signal oriented approach for wireless health monitoring.

    Science.gov (United States)

    Marinkovic, Stevan; Popovici, Emanuel

    2012-01-01

    In recent years there has been growing pressure on the medical sector to reduce costs while maintaining or even improving the quality of care. A potential solution to this problem is real-time and/or remote patient monitoring using mobile devices. To achieve this, medical sensors with wireless communication, computational and energy-harvesting capabilities are networked on, or in, the human body, forming what is commonly called a Wireless Body Area Network (WBAN). We present the implementation of a novel Wake Up Receiver (WUR) in the context of standardised wireless protocols, in a signal-oriented WBAN environment, and present a novel protocol intended for wireless health monitoring (WhMAC). WhMAC is a TDMA-based protocol with very low power consumption. It utilises WBAN-specific features and a novel ultra-low-power wake up receiver technology to achieve flexible and at the same time very low power wireless transfer of physiological signals. As the main application is in the medical domain, or personal health monitoring, the protocol caters for different types of medical sensors. We define four sensor modes, in which the sensors can transmit data depending on the sensor type and emergency level. A full power dissipation model is provided for the protocol, with individual hardware and application parameters. Finally, an example application shows the reduction in power consumption for different data monitoring scenarios.
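    A duty-cycle average-current model of the kind used to evaluate such low-power MAC protocols can be sketched as follows. The parameters are generic placeholders, not the paper's hardware figures or its full per-mode model:

```python
def avg_current_ma(i_active_ma, t_active_ms, i_sleep_ma, period_ms):
    """Duty-cycle average current of a node that is active for
    t_active_ms out of every period_ms and sleeps the rest of the time.
    All currents in mA, times in ms.
    """
    t_sleep = period_ms - t_active_ms
    return (i_active_ma * t_active_ms + i_sleep_ma * t_sleep) / period_ms

def battery_life_h(capacity_mah, avg_ma):
    """Idealized battery life in hours for a given average draw."""
    return capacity_mah / avg_ma
```

    For example, a radio drawing 20 mA for 10 ms per second and 0.01 mA asleep averages about 0.21 mA, which is why aggressive duty cycling (and wake-up receivers that shrink the active window) dominates WBAN energy budgets.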

  8. A Log Mining Approach for Process Monitoring in SCADA

    NARCIS (Netherlands)

    Hadziosmanovic, D.; Bolzoni, D.; Hartel, Pieter H.

    2012-01-01

    SCADA (Supervisory Control and Data Acquisition) systems are used for controlling and monitoring industrial processes. We propose a methodology to systematically identify potential process-related threats in SCADA. Process-related threats take place when an attacker gains user access rights and

  9. A Log Mining Approach for Process Monitoring in SCADA

    NARCIS (Netherlands)

    Hadziosmanovic, D.; Bolzoni, D.; Hartel, Pieter H.

    2010-01-01

    SCADA (Supervisory Control and Data Acquisition) systems are used for controlling and monitoring industrial processes. We propose a methodology to systematically identify potential process-related threats in SCADA. Process-related threats take place when an attacker gains user access rights and

  10. The highly reintegrative approach of electronic monitoring in the Netherlands

    NARCIS (Netherlands)

    Boone, M.M.; Kooij, van der M.; Rap, S.E.

    2017-01-01

    This contribution describes the way electronic monitoring (EM) is organized and implemented in the Netherlands. It will become clear that the situation in the Netherlands is characterized in particular by two features. The application of EM is highly interwoven with the Probation Service and its

  11. Mass Optimization of Battery/Supercapacitors Hybrid Systems Based on a Linear Programming Approach

    Science.gov (United States)

    Fleury, Benoit; Labbe, Julien

    2014-08-01

    The objective of this paper is to show that, on a specific launcher-type mission profile, a 40% mass gain is expected when using a battery/supercapacitor active hybridization instead of a single-battery solution. This result is based on a linear programming optimization approach used to perform the mass optimization of the hybrid power supply solution.
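    The kind of sizing problem solved here by linear programming can be illustrated with a toy two-variable version: minimize total mass subject to an energy constraint (battery) and a peak-power constraint (battery plus supercapacitors). All specific energies, powers and demands below are invented for illustration, and a coarse grid search stands in for a proper LP solver:

    ```python
    # Toy battery/supercapacitor mass sizing (hypothetical numbers, not
    # the paper's mission profile; grid search stands in for an LP solver).

    E_REQ_WH = 200.0    # mission energy demand, Wh (assumed)
    P_PEAK_W = 5000.0   # peak power demand, W (assumed)
    E_BATT = 150.0      # battery specific energy, Wh/kg (assumed)
    P_BATT = 300.0      # battery specific power, W/kg (assumed)
    P_SC = 4000.0       # supercap specific power, W/kg (assumed)

    best = None
    for mb10 in range(0, 5001):          # battery mass in 0.1 kg steps
        m_b = mb10 / 10.0
        if E_BATT * m_b < E_REQ_WH:
            continue                     # energy constraint violated
        p_short = max(0.0, P_PEAK_W - P_BATT * m_b)
        m_s = p_short / P_SC             # supercaps cover the power gap
        if best is None or m_b + m_s < best[0]:
            best = (m_b + m_s, m_b, m_s)

    hybrid_mass, m_batt_opt, m_sc_opt = best
    # A single battery must satisfy both constraints on its own:
    battery_only_mass = max(E_REQ_WH / E_BATT, P_PEAK_W / P_BATT)
    ```

    With these assumed numbers the single battery is sized by its power constraint, so offloading the peak to supercapacitors shrinks the total mass substantially; the paper's 40% figure applies to its own mission data, not this toy.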

  12. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    Science.gov (United States)

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

    Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach iterates until the difference between subsequent solutions satisfies a pre-determined termination criterion. Its effectiveness is illustrated by an example, which yields results closer to optimal, with much faster solving times, than those obtained from a conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing real supply chain networks. It also makes it possible to model and solve more realistic problems that incorporate dynamism and uncertainty.

  13. Optimization of experimental conditions for the monitoring of nucleation and growth of racemic Diprophylline from the supercooled melt

    Science.gov (United States)

    Lemercier, Aurélien; Viel, Quentin; Brandel, Clément; Cartigny, Yohann; Dargent, Eric; Petit, Samuel; Coquerel, Gérard

    2017-08-01

    Since more and more pharmaceutical substances are developed as amorphous forms, it is nowadays of major relevance to get insights into the nucleation and growth mechanisms from supercooled melts (SCM). A step-by-step approach to recrystallization from a SCM is presented here, designed to elucidate the impact of various experimental parameters. Using the bronchodilator agent Diprophylline (DPL) as a model compound, it is shown that optimal conditions for informative observations of the crystallization behaviour from supercooled racemic DPL require placing samples between two cover slides with a maximum sample thickness of 20 μm, and monitoring recrystallization during an annealing step of 30 min at 70 °C, i.e. about 33 °C above the glass transition temperature. Under these optimized conditions, it could be established that DPL crystallization proceeds in two steps: spontaneous nucleation and growth of large and well-faceted particles of a new crystal form (primary crystals: PC), and subsequent crystallization of a previously known form (RII) that develops from specific surfaces of PC. The formation of PC particles therefore constitutes the key step of the crystallization events and is shown to be favoured by the presence of at least 2.33 wt% of the major chemical impurity, Theophylline.

  14. The Odyssey Approach for Optimizing Federated SPARQL Queries

    DEFF Research Database (Denmark)

    Montoya, Gabriela; Skaf-Molli, Hala; Hose, Katja

    2017-01-01

    . Nevertheless, these plans may still exhibit a high number of intermediate results or high execution times because of heuristics and inaccurate cost estimations. In this paper, we present Odyssey, an approach that uses statistics that allow for a more accurate cost estimation for federated queries and therefore...

  15. A Quasi-Robust Optimization Approach for Crew Rescheduling

    NARCIS (Netherlands)

    Veelenturf, L.P.; Potthoff, D.; Huisman, D.; Kroon, L.G.; Maroti, G.; Wagelmans, A.P.M.

    2016-01-01

    This paper studies the real-time crew rescheduling problem in case of large-scale disruptions. One of the greatest challenges of real-time disruption management is the unknown duration of the disruption. In this paper we present a novel approach for crew rescheduling where we deal with this

  16. Reactive Robustness and Integrated Approaches for Railway Optimization Problems

    DEFF Research Database (Denmark)

    Haahr, Jørgen Thorlund

    journeys helps the driver to drive efficiently and enhances robustness in a realistic (dynamic) environment. Four international scientific prizes have been awarded for distinct parts of the research during the course of this PhD project. The first prize was awarded for work during the "2014 RAS Problem...... to absorb or withstand unexpected events such as delays. Making robust plans is central in order to maintain a safe and timely railway operation. This thesis focuses on reactive robustness, i.e., the ability to react once a plan is rendered infeasible in operation due to disruptions. In such time...... Solving Competition", where a freight yard optimization problem was considered. The second junior (PhD) prize was awarded for the work performed in the "ROADEF/EURO Challenge 2014: Trains don't vanish!", where the planning of rolling stock movements at a large station was considered. An honorable mention...

  17. Particle Swarm Optimization Approach in a Consignment Inventory System

    Science.gov (United States)

    Sharifyazdi, Mehdi; Jafari, Azizollah; Molamohamadi, Zohreh; Rezaeiahari, Mandana; Arshizadeh, Rahman

    2009-09-01

    Consignment Inventory (CI) is a kind of inventory which is in the possession of the customer, but is still owned by the supplier. This creates a condition of shared risk whereby the supplier risks the capital investment associated with the inventory while the customer risks dedicating retail space to the product. This paper considers both the vendor's and the retailers' costs in an integrated model. The vendor here is a warehouse which stores one type of product and supplies it at the same wholesale price to multiple retailers, who then sell the product in independent markets at retail prices. Our main aim is to design a CI system which generates minimum costs for the two parties. A Particle Swarm Optimization (PSO) algorithm is developed to calculate the proper values of the decision variables. Finally, a sensitivity analysis is performed to examine the effect of each parameter on the decision variables, and PSO performance is compared with that of a genetic algorithm.
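    A bare-bones PSO of the kind the authors develop can be sketched as follows. The consignment-inventory cost model itself is not reproduced, so a simple quadratic stands in for the objective:

    ```python
    import random

    # Minimal particle swarm optimizer (global-best variant) minimizing a
    # stand-in objective; the real inventory cost model is not reproduced.

    def pso(cost, dim, bounds, n_particles=30, iters=200, seed=1):
        rng = random.Random(seed)
        lo, hi = bounds
        X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        V = [[0.0] * dim for _ in range(n_particles)]
        pbest = [x[:] for x in X]                 # per-particle best positions
        pbest_f = [cost(x) for x in X]
        g = min(range(n_particles), key=lambda i: pbest_f[i])
        gbest, gbest_f = pbest[g][:], pbest_f[g]  # swarm-wide best
        w, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive, social
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    V[i][d] = (w * V[i][d]
                               + c1 * rng.random() * (pbest[i][d] - X[i][d])
                               + c2 * rng.random() * (gbest[d] - X[i][d]))
                    X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
                f = cost(X[i])
                if f < pbest_f[i]:
                    pbest[i], pbest_f[i] = X[i][:], f
                    if f < gbest_f:
                        gbest, gbest_f = X[i][:], f
        return gbest, gbest_f

    # Toy stand-in objective: quadratic bowl with its optimum at (3, 7).
    best_x, best_f = pso(lambda x: (x[0] - 3) ** 2 + (x[1] - 7) ** 2,
                         dim=2, bounds=(0.0, 10.0))
    ```

    In the paper's setting, the particle coordinates would encode the inventory decision variables and `cost` would evaluate the integrated vendor/retailer cost.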

  18. Heuristic versus statistical physics approach to optimization problems

    International Nuclear Information System (INIS)

    Jedrzejek, C.; Cieplinski, L.

    1995-01-01

    Optimization is a crucial ingredient of many calculation schemes in science and engineering. In this paper we assess several classes of methods: heuristic algorithms; methods directly relying on statistical physics, such as the mean-field method and simulated annealing; and Hopfield-type neural networks and genetic algorithms, which are partly related to statistical physics. We perform the analysis for three types of problems: (1) the Travelling Salesman Problem, (2) vector quantization, and (3) the traffic control problem in a multistage interconnection network. In general, heuristic algorithms perform better (except for genetic algorithms) and much faster, but have to be tailored to every problem. The key to improving performance could be to include heuristic features in general-purpose statistical physics methods. (author)

  19. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    Science.gov (United States)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.

  20. Probabilistic safety assessment and optimal control of hazardous technological systems. A marked point process approach

    International Nuclear Information System (INIS)

    Holmberg, J.

    1997-04-01

    The thesis models risk management as an optimal control problem for a stochastic process. The approach classes the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used for optimization of plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant

  1. Probabilistic safety assessment and optimal control of hazardous technological systems. A marked point process approach

    Energy Technology Data Exchange (ETDEWEB)

    Holmberg, J [VTT Automation, Espoo (Finland)

    1997-04-01

    The thesis models risk management as an optimal control problem for a stochastic process. The approach classes the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used for optimization of plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant. 62 refs. The thesis also includes five previous publications by the author.

  2. A penalty guided stochastic fractal search approach for system reliability optimization

    International Nuclear Information System (INIS)

    Mellal, Mohamed Arezki; Zio, Enrico

    2016-01-01

    Modern industry requires components and systems with high reliability levels. In this paper, we address the system reliability optimization problem. A penalty-guided stochastic fractal search approach is developed for solving reliability allocation, redundancy allocation, and reliability–redundancy allocation problems. Numerical results for ten benchmark case studies are presented, highlighting the superiority of the proposed approach over others from the literature. - Highlights: • System reliability optimization is investigated. • A penalty guided stochastic fractal search approach is developed. • Results of ten case studies are compared with previously published methods. • Performance of the approach is demonstrated.
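    The penalty-guided idea (steering an otherwise unconstrained stochastic search away from constraint violations) can be illustrated with a toy series-system reliability allocation under a cost budget. A plain random-perturbation hill climb stands in for the stochastic fractal search, and the cost coefficients and budget are invented:

    ```python
    import math
    import random

    # Toy penalty-guided stochastic search: maximize the reliability of a
    # 3-component series system under a cost budget. A simple hill climb
    # stands in for stochastic fractal search; all numbers are assumed.

    COST_COEF = [2.0, 3.0, 4.0]   # cost per unit of -ln(1 - r), assumed
    BUDGET = 25.0

    def cost(r):
        return sum(c * -math.log(1.0 - ri) for c, ri in zip(COST_COEF, r))

    def penalized_objective(r):
        sys_rel = 1.0
        for ri in r:
            sys_rel *= ri                     # series system reliability
        over = max(0.0, cost(r) - BUDGET)
        return sys_rel - 10.0 * over          # penalty guides the search back

    rng = random.Random(7)
    best = [0.5, 0.5, 0.5]
    best_f = penalized_objective(best)
    for _ in range(20000):
        cand = [min(0.999, max(0.5, ri + rng.gauss(0.0, 0.05))) for ri in best]
        f = penalized_objective(cand)
        if f > best_f:
            best, best_f = cand, f

    system_reliability = best[0] * best[1] * best[2]
    ```

    The penalty weight trades off constraint satisfaction against objective value; the accepted solution ends up pressed against the budget rather than rejected outright, which is the point of penalty guidance.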

  3. Approaches to monitoring biological outcomes for HPV vaccination: challenges of early adopter countries

    DEFF Research Database (Denmark)

    Wong, Charlene A; Saraiya, Mona; Hariri, Susan

    2011-01-01

    In this review, we describe plans to monitor the impact of human papillomavirus (HPV) vaccine on biologic outcomes in selected international areas (Australia, Canada, Mexico, the Nordic countries, Scotland, and the United States) that have adopted this vaccine. This summary of monitoring plans...... provides a background for discussing the challenges of vaccine monitoring in settings where resources and capacity may vary. A variety of approaches that depend on existing infrastructure and resources are planned or underway for monitoring HPV vaccine impact. Monitoring HPV vaccine impact on biologic...

  4. Geometry Optimization Approaches of Inductively Coupled Printed Spiral Coils for Remote Powering of Implantable Biomedical Sensors

    Directory of Open Access Journals (Sweden)

    Sondos Mehri

    2016-01-01

    Full Text Available Electronic biomedical implantable sensors need power to perform. Among the main reported approaches, the inductive link is the most commonly used method for remote powering of such devices. Power efficiency is the most important characteristic to be considered when designing inductive links to transfer energy to implantable biomedical sensors. The maximum power efficiency is obtained for maximum coupling and quality factors of the coils, and is generally limited because the coupling between the inductors is usually very small. This paper deals with geometry optimization of inductively coupled printed spiral coils for powering a given implantable sensor system. To this end, Iterative Procedure (IP) and Genetic Algorithm (GA) analytic optimization approaches are proposed. Both approaches implement simple mathematical models that approximate the coil parameters and link efficiency values. Using numerical simulations based on the Finite Element Method (FEM), and with experimental validation, the proposed analytic approaches are shown to yield more accurate performance than a reference design case. The analytical GA and IP optimization methods are also compared to a purely FEM-based numerical optimization approach (GA-FEM). Numerical and experimental validations confirmed the accuracy and effectiveness of the analytical optimization approaches for designing optimal coil geometries with the best efficiency values.
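    The efficiency-driven geometry search can be illustrated with crude textbook models: a magnetic-dipole approximation for the mutual inductance of coaxial loops, single-turn loop self-inductances, and the classic maximum-link-efficiency expression x/(1+sqrt(1+x))^2 with x = k^2*Q1*Q2. These are stand-ins for the printed-spiral models used in the paper, and all dimensions and quality factors are assumed:

    ```python
    import math

    # Brute-force search for the external coil radius that maximizes
    # inductive-link efficiency at a fixed implant depth. Coupling and
    # inductance models are textbook approximations, not the paper's
    # printed-spiral models; all dimensions and Q factors are assumed.

    MU0 = 4e-7 * math.pi
    D = 0.01               # implant depth, m (assumed)
    R2 = 0.005             # implant coil radius, m (assumed)
    A = 0.0005             # wire radius, m (assumed)
    Q1, Q2 = 100.0, 40.0   # coil quality factors (assumed)

    def loop_L(r):  # self-inductance of a circular wire loop
        return MU0 * r * (math.log(8.0 * r / A) - 2.0)

    def mutual_M(r1, r2, d):  # coaxial loops, magnetic-dipole approximation
        return MU0 * math.pi * r1 ** 2 * r2 ** 2 / (2.0 * (d * d + r1 * r1) ** 1.5)

    def link_efficiency(r1):
        k = mutual_M(r1, R2, D) / math.sqrt(loop_L(r1) * loop_L(R2))
        x = k * k * Q1 * Q2
        return x / (1.0 + math.sqrt(1.0 + x)) ** 2  # classic max-efficiency form

    radii = [0.002 + 0.001 * i for i in range(40)]  # 2 mm .. 41 mm
    best_r1 = max(radii, key=link_efficiency)
    best_eff = link_efficiency(best_r1)
    ```

    Even this crude model reproduces the qualitative result that an interior optimum exists: too small an external coil couples weakly, while too large a coil spreads its field and grows its own inductance faster than the coupling.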

  5. Participative approach to elicit water quality monitoring needs from stakeholder groups - An application of integrated watershed management.

    Science.gov (United States)

    Behmel, S; Damour, M; Ludwig, R; Rodriguez, M J

    2018-07-15

    Water quality monitoring programs (WQMPs) must be based on monitoring objectives originating from the real knowledge needs of all stakeholders in a watershed and users of the resource. This paper proposes a participative approach to elicit knowledge needs and preferred modes of communication from citizens and representatives of organized stakeholders (ROS) on water quality and quantity issues. The participative approach comprises six steps and is adaptable and transferable to different types of watersheds. These steps are: (1) perform a stakeholder analysis; (2) conduct an adaptable survey accompanied by a user-friendly public participation geographical information system (PPGIS); (3) hold workshops with ROS to inform them of the results of the survey and PPGIS, discuss attainment of past monitoring objectives, and exchange views on new knowledge needs and concerns about water quality and quantity; (4) meet with citizens to obtain the same type of input as from ROS; (5) analyze the data and information collected to identify new knowledge needs and modes of communication; and (6) identify, in collaboration with the individuals in charge of the WQMPs, the short-, medium- and long-term monitoring objectives and communication strategies to be pursued. The participative approach was tested on two distinct watersheds in the province of Quebec, Canada. It resulted in a series of optimization objectives for the existing WQMPs, new monitoring objectives and recommendations regarding communication strategies for the WQMPs' results. The results of this study show that the proposed methodology is appreciated by all parties and that the outcomes and monitoring objectives are acceptable. We also conclude that successful integrated watershed management is a question of scale, and that every aspect of integrated watershed management needs to be adapted to the surface watershed, the groundwater watershed (aquifers) and the human catchment area. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Bioaccumulation in aquatic systems: methodological approaches, monitoring and assessment

    DEFF Research Database (Denmark)

    Schäfer, Sabine; Buchmeier, Georgia; Claus, Evelyn

    2015-01-01

    , various scientific and regulatory aspects of bioaccumulation in aquatic systems and the relevant critical issues are discussed. Monitoring chemical concentrations in biota can be used for compliance checking with regulatory directives, for identification of chemical sources or event-related environmental...... temporal and geographical range. Bioaccumulation is also assessed for regulation of chemicals of environmental concern whereby mainly data from laboratory studies on fish bioaccumulation are used. Field data can, however, provide additional important information for regulators. Strategies...... for bioaccumulation assessment still need to be harmonised for different regulations and groups of chemicals. To create awareness for critical issues and to mutually benefit from technical expertise and scientific findings, communication between risk assessment and monitoring communities needs to be improved...

  7. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    Science.gov (United States)

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.
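    The two-dimensional efficacy-versus-hardware-cost scoring reduces, in its simplest form, to ranking candidate detection features by efficacy per unit of estimated power. The feature names and numbers below are invented placeholders, not the paper's measured values:

    ```python
    # Toy sketch of the two-axis feature scoring idea: rank candidate
    # seizure-detection features by detection efficacy per microwatt of
    # estimated hardware power (all values are invented placeholders).

    features = {
        "line_length":   {"efficacy": 0.92, "power_uw": 40.0},
        "mean_energy":   {"efficacy": 0.88, "power_uw": 25.0},
        "spectral_band": {"efficacy": 0.95, "power_uw": 160.0},
    }

    def score(f):
        return f["efficacy"] / f["power_uw"]   # efficacy per microwatt

    ranked = sorted(features, key=lambda name: score(features[name]),
                    reverse=True)
    best_feature = ranked[0]
    ```

    The point of the two-dimensional view is visible even in this toy: the most accurate feature is not the best choice once its hardware cost is accounted for.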

  8. Optimized Autonomous Space In-situ Sensor-Web for volcano monitoring

    Science.gov (United States)

    Song, W.-Z.; Shirazi, B.; Kedar, S.; Chien, S.; Webb, F.; Tran, D.; Davis, A.; Pieri, D.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.

    2008-01-01

    In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)) is developing a prototype dynamic and scalable hazard monitoring sensor-web and applying it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) will have two-way communication capability between ground and space assets, use both space and ground data for optimal allocation of limited power and bandwidth resources on the ground, and use smart management of competing demands for limited space assets. It will also enable scalability and seamless infusion of future space and in-situ assets into the sensor-web. The prototype will be focused on volcano hazard monitoring at Mount St. Helens, which has been active since October 2004. The system is designed to be flexible and easily configurable for many other applications as well. The primary goals of the project are: 1) integrating complementary space (i.e., Earth Observing One (EO-1) satellite) and in-situ (ground-based) elements into an interactive, autonomous sensor-web; 2) advancing sensor-web power and communication resource management technology; and 3) enabling scalability for seamless infusion of future space and in-situ assets into the sensor-web. To meet these goals, we are developing: 1) a test-bed in-situ array with smart sensor nodes capable of making autonomous data acquisition decisions; 2) an efficient self-organization algorithm for sensor-web topology to support efficient data communication and command control; 3) smart bandwidth allocation algorithms in which sensor nodes autonomously determine packet priorities based on mission needs and local bandwidth information in real-time; and 4) remote network management and reprogramming tools. The space and in-situ control components of the system will be

  9. Integrated approach to monitor water dynamics with drones

    Science.gov (United States)

    Raymaekers, Dries; De Keukelaere, Liesbeth; Knaeps, Els; Strackx, Gert; Decrop, Boudewijn; Bollen, Mark

    2017-04-01

    Remote sensing has been used for more than 20 years to estimate water quality in the open ocean and study the evolution of vegetation on land. More recently, big improvements have been made to extend these practices to coastal and inland waters, opening new monitoring opportunities, e.g. monitoring the impact of dredging activities on the aquatic environment. While satellite sensors can provide complete coverage and historical information about the study area, they are limited in their temporal revisit time and spatial resolution. Therefore, deployment of drones can create added value and, in combination with satellite information, increase insight into the dynamics and actors of coastal and aquatic systems. Drones have the advantages of monitoring at high spatial detail (cm scale) and high frequency, and of being flexible. One of the important water quality parameters is the suspended sediment concentration. However, retrieving sediment concentrations from unmanned systems is a challenging task. The sediment dynamics in the port of Breskens, the Netherlands, were investigated by combining information retrieved from different data sources: satellite, drone and in-situ data were collected, analysed and fed into sediment models. As such, historical (satellite), near-real-time (drone) and predictive (sediment model) information, integrated in a spatial data infrastructure, allows data analysis and can support decision makers.

  10. SI:FatiguePro 4 Advanced Approach for Fatigue Monitoring

    International Nuclear Information System (INIS)

    Evon, Keith; Gilman, Tim; Carney, Curt

    2012-01-01

    Many nuclear plants are making commitments to implement fatigue monitoring systems in support of license renewal. Current fatigue monitoring systems use the methodology of ASME Code Subarticle NB-3200, which is a design code intended to compute a bounding cumulative usage factor (CUF). The first generation of fatigue monitoring software utilized a simplified, single stress term assumption and classical stress cycle-counting methods that take order into account such as Rainflow or Ordered Overall Range counting. Recently, the NRC has indicated in Regulatory Issue Summary 2008-30 that any fatigue analyses in support of License Renewal should use ASME Code Section III methodologies considering all six stress components. In addition, fatigue calculations for the license renewal term are required to consider the effects of environment. The implementation of a six stress term NB-3200 fatigue calculation to a Boiling Water Reactor (BWR) feedwater nozzle, including environmental effects, is the topic of this paper. Differences in results between the advanced methodology and the simplified methodology are discussed. (author)

  11. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications.

    Science.gov (United States)

    Costa, Daniel G; Duran-Faundez, Cristian; Andrade, Daniel C; Rocha-Junior, João B; Peixoto, João Paulo Just

    2018-04-03

    Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events.

  12. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications

    Directory of Open Access Journals (Sweden)

    Daniel G. Costa

    2018-04-01

    Full Text Available Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events.

  13. On the equivalent static loads approach for dynamic response structural optimization

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    2014-01-01

    The equivalent static loads algorithm is an increasingly popular approach to solve dynamic response structural optimization problems. The algorithm is based on solving a sequence of related static response structural optimization problems with the same objective and constraint functions...... as the original problem. The optimization theoretical foundation of the algorithm is mainly developed in Park and Kang (J Optim Theory Appl 118(1):191–200, 2003). In that article it is shown, for a certain class of problems, that if the equivalent static loads algorithm terminates then the KKT conditions...

  14. Method of transient identification based on a possibilistic approach, optimized by genetic algorithm

    International Nuclear Information System (INIS)

    Almeida, Jose Carlos Soares de

    2001-02-01

    This work develops a method for transient identification based on a possibilistic approach, optimized by a genetic algorithm that optimizes the number of centroids of the classes representing the transients. The basic idea of the proposed method is to optimize the partition of the search space, generating subsets within the classes of a partition, defined as subclasses, whose centroids are able to distinguish the classes with the maximum number of correct classifications. The interpretation of the subclasses as fuzzy sets and the possibilistic approach provided a heuristic to establish influence zones for the centroids, making it possible to give a 'don't know' answer for unknown transients, that is, those outside the training set. (author)

  15. Multi-proxy monitoring approaches at Kangaroo Island, South Australia

    Science.gov (United States)

    Dixon, Bronwyn; Drysdale, Russell; Tyler, Jonathan; Goodwin, Ian

    2017-04-01

    Interpretations of geochemical signals preserved in young speleothems are greatly enhanced by comprehensive cave-site monitoring. In the light of this, a cave monitoring project is being conducted concurrently with the development of a new palaeoclimate record from Kelly Hill Cave (Kangaroo Island, South Australia). The site is strategically located because it is situated between longer-lived monitoring sites in southeastern and southwestern Australia, as well as being climatically 'upstream' from major population and agricultural centres. This study aims to understand possible controls on speleothem δ18O in Kelly Hill Cave through i. identification of local and regional δ18O drivers in precipitation; and ii. preservation and modification of climatic signals within the epikarst as indicated by dripwater δ18O. These aims are achieved through analysis of a five-year daily rainfall (amount and δ18O) dataset in conjunction with in-cave drip monitoring. Drivers of precipitation δ18O were identified through linear regression between δ18O values and local meteorological variables, air-parcel back trajectories, and synoptic-typing. Synoptically driven moisture sources were identified through the use of NCEP/NCAR climate reanalysis sea-level pressure, precipitable moisture, and outgoing longwave radiation data in order to trace moisture sources and travel mechanisms from surrounding ocean basins. Local controls on δ18O at Kelly Hill Cave are consistent with published interpretations of southern Australia sites, with oxygen isotopes primarily controlled by rainfall amount on both daily and monthly time scales. Back-trajectory analysis also supports previous observations that the Southern Ocean is the major source for moisture-bearing cold-front systems. However, synoptic typing of daily rainfall δ18O and amount extremes reveals a previously unreported tropical connection and moisture source. This tropical connection appears to be strongest in summer and autumn, but

  16. Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach

    Science.gov (United States)

    Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar

    2013-06-01

    We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, where these extensions are pointed out in the paper. As predicted from our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
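    The threshold-rebalancing policy is easy to sketch in simulation: trade back to the target allocation only when the actual allocation drifts past a band, paying proportional transaction costs on the traded amount. The price dynamics and parameters below are illustrative stand-ins for the paper's Black-Scholes setting:

    ```python
    import random

    # Minimal two-asset (stock/cash) threshold-rebalanced portfolio:
    # rebalance to the target split only when the stock fraction drifts
    # past a threshold, paying proportional costs. Illustrative only;
    # not the paper's market model or optimal threshold computation.

    def run(threshold, target=0.5, cost_rate=0.002, steps=2000, seed=3):
        rng = random.Random(seed)
        stock, cash = target, 1.0 - target        # start with wealth 1.0
        for _ in range(steps):
            stock *= 1.0 + rng.gauss(0.0005, 0.01)  # risky-asset return
            wealth = stock + cash
            frac = stock / wealth
            if abs(frac - target) > threshold:      # rebalance trigger
                trade = abs(frac - target) * wealth # amount traded
                fee = cost_rate * trade
                stock = target * (wealth - fee)
                cash = (1.0 - target) * (wealth - fee)
        return stock + cash

    always = run(threshold=0.0)   # rebalance at every step
    never = run(threshold=1.0)    # never rebalance
    banded = run(threshold=0.05)  # threshold policy
    ```

    Comparing the three policies on the same return path shows the trade-off the paper optimizes: frequent rebalancing bleeds wealth through fees, while never rebalancing lets the allocation drift away from the target.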

  17. A systemic approach for optimal cooling tower operation

    International Nuclear Information System (INIS)

    Cortinovis, Giorgia F.; Paiva, Jose L.; Song, Tah W.; Pinto, Jose M.

    2009-01-01

    The thermal performance of a cooling tower and its cooling water system is critical for industrial plants, and small deviations from the design conditions may cause severe instability in the operation and economics of the process. External disturbances such as variation in the thermal demand of the process or oscillations in atmospheric conditions may be suppressed in multiple ways. Nevertheless, such alternatives are hardly ever implemented in the industrial operation due to the poor coordination between the utility and process sectors. The complexity of the operation increases because of the strong interaction among the process variables. In the present work, an integrated model for the minimization of the operating costs of a cooling water system is developed. The system is composed of a cooling tower as well as a network of heat exchangers. After the model is verified, several cases are studied with the objective of determining the optimal operation. It is observed that the most important operational resources to mitigate disturbances in the thermal demand of the process are, in this order: the increase in recycle water flow rate, the increase in air flow rate and finally the forced removal of a portion of the water flow rate that enters the cooling tower with the corresponding make-up flow rate.

  18. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    Energy Technology Data Exchange (ETDEWEB)

    AlRashidi, M.R., E-mail: malrash2002@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait); AlHajri, M.F., E-mail: mfalhajri@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait)

    2011-10-15

    Highlights: → A new hybrid PSO for optimal DG placement and sizing. → Statistical analysis to fine-tune PSO parameters. → Novel constraint handling mechanism to handle different constraint types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiments is used to fine-tune the proposed approach via proper analysis of PSO parameter interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach's consistency in detecting optimal or near-optimal solutions. Results are compared with those of Sequential Quadratic Programming.
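The mixed continuous/discrete search the abstract describes can be sketched with a plain PSO on a toy loss function (a generic textbook PSO, not the authors' enhanced algorithm, and no real power-flow model; the inertia and acceleration constants are common default values):

```python
import random

def pso_dg_planning(loss_fn, n_buses, size_bounds, n_particles=20, iters=100, seed=1):
    """Toy PSO over a mixed search space: a discrete bus location
    (kept continuous internally, rounded on evaluation) and a
    continuous DG size, minimizing loss_fn(bus, size)."""
    rng = random.Random(seed)
    lo, hi = size_bounds
    pos = [[rng.uniform(1, n_buses), rng.uniform(lo, hi)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss_fn(round(p[0]), p[1]) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            pos[i][0] = min(max(pos[i][0], 1.0), float(n_buses))  # clamp bus index
            pos[i][1] = min(max(pos[i][1], lo), hi)               # clamp DG size
            val = loss_fn(round(pos[i][0]), pos[i][1])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return round(gbest[0]), gbest[1], gbest_val
```

Rounding the bus coordinate only at evaluation time is one simple way to let a continuous optimizer handle the discrete siting sub-problem in the same run as the sizing sub-problem.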

  19. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    International Nuclear Information System (INIS)

    AlRashidi, M.R.; AlHajri, M.F.

    2011-01-01

    Highlights: → A new hybrid PSO for optimal DG placement and sizing. → Statistical analysis to fine-tune PSO parameters. → Novel constraint handling mechanism to handle different constraint types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiments is used to fine-tune the proposed approach via proper analysis of PSO parameter interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach's consistency in detecting optimal or near-optimal solutions. Results are compared with those of Sequential Quadratic Programming.

  20. Synthesis of biorefinery networks using a superstructure optimization based approach

    DEFF Research Database (Denmark)

    Bertran, Maria-Ona; Anaya-Reza, Omar; Lopez-Arenas, Maria Teresa

    Petroleum is currently the primary raw material for the production of fuels and chemicals. Consequently, our society is highly dependent on fossil non-renewable resources. However, renewable raw materials are recently receiving increasing interest for the production of chemicals and fuels, so a n...... of the proposed approach is shown through a practical case study for the production of valuable products (i.e. lysine and lactic acid) from sugarcane molasses; these alternatives are considered with respect to availability and demands in Mexico [4]....

  1. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    Science.gov (United States)

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.
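The paper's move from a single cost-minimizing command to a distribution over commands can be caricatured in a few lines (a generic Boltzmann/softmax sampler, not the actual GEPPETO or Bayesian vocal-tract model; the precision parameter `beta` is an assumption of this sketch):

```python
import math
import random

def sample_command(costs, beta=5.0, rng=random.Random(0)):
    """Draw a motor-command index with p(u) proportional to exp(-beta * cost(u)).

    Deterministic optimal control is the beta -> infinity limit; a finite
    beta reproduces token-to-token variability while still favouring
    low-cost commands.
    """
    weights = [math.exp(-beta * c) for c in costs]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(costs) - 1
```

Repeated calls return mostly, but not always, the cheapest command, which is exactly the principled account of intra-speaker variability the abstract argues for.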

  2. Using a water-food-energy nexus approach for optimal irrigation management during drought events in Nebraska

    Science.gov (United States)

    Campana, P. E.; Zhang, J.; Yao, T.; Melton, F. S.; Yan, J.

    2017-12-01

    Climate change and drought have severe impacts on the agricultural sector affecting crop yields, water availability, and energy consumption for irrigation. Monitoring, assessing and mitigating the effects of climate change and drought on the agricultural and energy sectors are fundamental challenges that require investigation for water, food, and energy security issues. Using an integrated water-food-energy nexus approach, this study is developing a comprehensive drought management system through integration of real-time drought monitoring with real-time irrigation management. The spatially explicit model developed, GIS-OptiCE, can be used for simulation, multi-criteria optimization and generation of forecasts to support irrigation management. To demonstrate the value of the approach, the model has been applied to one major corn region in Nebraska to study the effects of the 2012 drought on crop yield and irrigation water/energy requirements as compared to a wet year such as 2009. The water-food-energy interrelationships evaluated show that significant water volumes and energy are required to halt the negative effects of drought on the crop yield. The multi-criteria optimization problem applied in this study indicates that the optimal solutions of irrigation do not necessarily correspond to those that would produce the maximum crop yields, depending on both water and economic constraints. In particular, crop pricing forecasts are extremely important to define the optimal irrigation management strategy. The model developed shows great potential in precision agriculture by providing near real-time data products including information on evapotranspiration, irrigation volumes, energy requirements, predicted crop growth, and nutrient requirements.

  3. Optimizing Treatment with TNF Inhibitors in Inflammatory Bowel Disease by Monitoring Drug Levels and Antidrug Antibodies

    DEFF Research Database (Denmark)

    Steenholdt, Casper; Bendtzen, Klaus; Brynskov, Jørn

    2016-01-01

    costs. The objective is to review optimization of anti-TNF therapy by use of personalized treatment strategies based on circulating drug levels and antidrug antibodies (Abs), i.e. therapeutic drug monitoring (TDM). Furthermore, to outline TDM-related pitfalls and their prevention. METHODS: Literature...... inflammatory phenotype influencing the pharmacodynamic (PD) responses to TNF inhibitors also affect treatment outcomes. As an alternative to handling anti-TNF-treated patients by empiric strategies, TDM identifies underlying PK and PD-related reasons for treatment failure and aids decision making to secure...... of chronology between changes in PK versus symptomatic and objective disease activity manifestations. Biases can be accommodated by knowledgeable interpretation of results obtained by validated assays with clinically established thresholds, and by repeated assessments over time using complementary techniques...

  4. Development and optimization of the LHC and the SPS beam diagnostics based on synchrotron radiation monitoring

    International Nuclear Information System (INIS)

    Trad, Georges

    2015-01-01

    Measuring the beam transverse emittance is fundamental in every accelerator, in particular for colliders, where its precise determination is essential to maximize the luminosity and thus the performance of the colliding beams. Synchrotron Radiation (SR) is a versatile tool for non-destructive beam diagnostics, since its characteristics are closely related to those of the source beam. At CERN, being the only diagnostics available at high beam intensity and energy, SR monitors are exploited as the proton beam size monitors of the two higher-energy machines, the Super Proton Synchrotron (SPS) and the Large Hadron Collider (LHC). The thesis work documented in this report focused on the design, development, characterization and optimization of these beam size monitors. These studies were based on a comprehensive set of theoretical calculations, numerical simulations and experiments. A powerful simulation tool has been developed combining conventional software for SR simulation and optics design, thus allowing the description of an SR monitor from its source up to the detector. The simulations were confirmed by direct observations and by detailed performance studies of the operational SR imaging monitor in the LHC, where different techniques for experimentally validating the system were applied, such as cross-calibrations with the wire scanners at low intensity (which are considered as a reference) and direct comparison with beam sizes de-convoluted from the LHC luminosity measurements. In 2015, with the further increase of the LHC beam energy to 7 TeV, the beam sizes to be measured will decrease down to ∼190 μm. In these conditions, the SR imaging technique was found to be at the limits of its applicability, since the error on the beam size determination is proportional to the ratio of the system resolution to the measured beam size. Therefore, various solutions were probed to improve the system's performance, such as the choice of one light polarization, the reduction of

  5. Reliability-redundancy optimization by means of a chaotic differential evolution approach

    International Nuclear Information System (INIS)

    Coelho, Leandro dos Santos

    2009-01-01

    Reliability design is related to the performance analysis of many engineering systems. Reliability-redundancy optimization problems involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, subject to cost, weight, and volume constraints. Classical mathematical methods have failed in handling nonconvexities and nonsmoothness in such optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have received much attention from many researchers due to their ability to find a near-global optimal solution in reliability-redundancy optimization problems. Evolutionary algorithms (EAs) - paradigms of the evolutionary computation field - are stochastic and robust meta-heuristics useful for solving reliability-redundancy optimization problems. EAs such as genetic algorithms, evolutionary programming, evolution strategies and differential evolution are being used to find global or near-global optimal solutions. A differential evolution approach based on chaotic sequences using Lozi's map is proposed in this paper for reliability-redundancy optimization problems. The proposed method not only has a fast convergence rate but also maintains the diversity of the population so as to escape from local optima. An application example in reliability-redundancy optimization based on the overspeed protection system of a gas turbine is given to show its usefulness and efficiency. Simulation results show that the application of deterministic chaotic sequences instead of random sequences is a possible strategy for improving the performance of differential evolution.
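The core idea, replacing the pseudo-random generator in differential evolution with a Lozi-map stream, can be sketched as follows (a plain DE/rand/1/bin on a toy objective; the rescaling of the Lozi attractor into [0, 1) and all control parameters are illustrative assumptions, not the paper's exact settings):

```python
def lozi(a=1.7, b=0.5, x=0.1, y=0.1):
    """Infinite chaotic sequence from the Lozi map
    x' = 1 - a|x| + y, y' = b x, crudely rescaled into [0, 1)."""
    while True:
        x, y = 1.0 - a * abs(x) + y, b * x
        yield min(max((x + 1.0) / 2.4, 0.0), 0.999999)

def chaotic_de(f, bounds, pop_size=20, iters=200, F=0.8, CR=0.9):
    """DE/rand/1/bin where every random draw comes from the Lozi stream."""
    dim = len(bounds)
    chaos = lozi()
    r = lambda: next(chaos)
    pop = [[lo + r() * (hi - lo) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            ia, ib, ic = (int(r() * pop_size) for _ in range(3))  # parent indices
            jrand = int(r() * dim)        # dimension forced to cross over
            trial = []
            for j in range(dim):
                if r() < CR or j == jrand:
                    v = pop[ia][j] + F * (pop[ib][j] - pop[ic][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))
            tf = f(trial)
            if tf <= fit[i]:              # greedy selection
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=lambda k: fit[k])
    return pop[best], fit[best]
```

The greedy selection guarantees monotone improvement, and because the chaotic stream is deterministic, every run is exactly reproducible, which is one practical appeal of chaos-driven meta-heuristics.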

  6. Optimizing denominator data estimation through a multimodel approach

    Directory of Open Access Journals (Sweden)

    Ward Bryssinckx

    2014-05-01

    To assess the risk of (zoonotic) disease transmission in developing countries, decision makers generally rely on distribution estimates of animals from survey records or projections of historical enumeration results. Given the high cost of large-scale surveys, the sample size is often restricted and the accuracy of estimates is therefore low, especially when high spatial resolution is applied. This study explores possibilities for improving the accuracy of livestock distribution maps without additional samples, using spatial modelling based on regression tree forest models developed from subsets of the Uganda 2008 Livestock Census data and several covariates. The accuracy of these spatial models, as well as the accuracy of an ensemble of a spatial model and direct estimate, was compared to direct estimates and “true” livestock figures based on the entire dataset. The new approach is shown to effectively increase livestock estimate accuracy (median relative error decrease of 0.166-0.037 for total sample sizes of 80-1,600 animals, respectively). This outcome suggests that the accuracy levels obtained with direct estimates can indeed be achieved with lower sample sizes using the multimodel approach presented here, indicating a more efficient use of financial resources.
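One common way to ensemble a direct (design-based) estimate with a model prediction, as the abstract describes, is inverse-variance weighting (a generic illustration with invented numbers; the study's actual ensemble rule may differ):

```python
def inverse_variance_ensemble(estimates, variances):
    """Combine independent estimates of the same quantity, weighting
    each by the reciprocal of its variance. The combined variance is
    1 / sum(1/v_i), never larger than the best single input's."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    return combined, 1.0 / total

# e.g. a direct survey estimate (low variance) and a model prediction:
count, var = inverse_variance_ensemble([100.0, 120.0], [1.0, 4.0])
```

The ensemble leans toward the more precise input (here the direct estimate), which mirrors how combining a small-sample direct estimate with a covariate-driven model can recover accuracy lost to reduced sample sizes.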

  7. A comparison of two closely-related approaches to aerodynamic design optimization

    Science.gov (United States)

    Shubin, G. R.; Frank, P. D.

    1991-01-01

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.
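The cost both methods try to avoid is easy to see in code: a black-box finite-difference gradient needs one extra objective evaluation per design variable, and its accuracy depends on the step size (the toy quadratic below is my own, not the paper's duct-flow problem):

```python
def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient: len(x) + 1 evaluations of f.

    For an expensive flow solver this per-variable cost is exactly what
    makes the black-box approach unattractive."""
    fx = f(x)
    grad = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        grad.append((f(xp) - fx) / h)
    return grad

# Toy objective with a known analytic gradient for comparison.
def f(x):
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2

def grad_f(x):
    return [2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)]
```

The implicit-gradient and variational approaches both compute the analytic-style gradient at roughly the cost of one extra solve total, rather than one per variable, but, as the abstract notes, discretization choices can make the variational gradient inaccurate.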

  8. An approach to routine individual internal dose monitoring of Object 'Shelter' personnel considering uncertainties

    International Nuclear Information System (INIS)

    Mel'nichuk, D.V.; Bondarenko, O.O.; Medvedjev, S.Yu.

    2002-01-01

    An approach to organising routine individual internal dose monitoring of Object 'Shelter' personnel is presented in this work, which takes individualised uncertainties into account. In this respect, two methods of bioassay-based effective dose assessment are considered: (1) a traditional indirect method, in which the results of workplace monitoring are not taken into account, and (2) a combined method, in which both the results of bioassay measurements and workplace monitoring are considered.

  9. Generating evidence on a risk-based monitoring approach in the academic setting – lessons learned

    Directory of Open Access Journals (Sweden)

    Belinda von Niederhäusern

    2017-02-01

    Abstract Background In spite of efforts to employ risk-based strategies to increase monitoring efficiency in the academic setting, empirical evidence on their effectiveness remains sparse. This mixed-methods study aimed to evaluate the risk-based on-site monitoring approach currently followed at our academic institution. Methods We selected all studies monitored by the Clinical Trial Unit (CTU) according to Risk ADApted MONitoring (ADAMON) at the University Hospital Basel, Switzerland, between 01.01.2012 and 31.12.2014. We extracted study characteristics and monitoring information from the CTU Enterprise Resource Management system and from the monitoring reports of all selected studies. We summarized the data descriptively. Additionally, we conducted semi-structured interviews with the three current CTU monitors. Results During the observation period, a total of 214 monitoring visits were conducted in 43 studies, resulting in 2961 documented monitoring findings. Our risk-based approach predominantly identified administrative (46.2%) and patient-rights findings (49.1%). We identified observational study design, high ADAMON risk category, industry sponsorship, the presence of an electronic database, experienced site staff, and inclusion of a vulnerable study population as factors associated with lower numbers of findings. The monitors understand the positive aspects of a risk-based approach but fear missing systematic errors due to the low frequency of visits. Conclusions We show that the factors most strongly increasing the risk of on-site monitoring findings are underrepresented in the current risk analysis scheme. Our risk-based on-site approach should further be complemented by centralized data checks, allowing monitors to transform their role towards partners for overall trial quality and success.

  10. A practical approach: in-situ continuous emission monitoring analysers

    Energy Technology Data Exchange (ETDEWEB)

    C.B. Daw; A.J. Bowers [Procal Analytics Ltd, Peterborough (United Kingdom)

    2004-07-01

    Advances in the design and construction of stack-mounted analyzers have resulted in a large demand for this technology for continuous emission monitoring (CEM) of air pollutants from fossil-fuel power plants. The paper looks at some difficulties encountered in the use of on-stack CEMs and how to overcome them. Examples are given of installations of in-situ CEM systems at three coal-fired power plants: Drax (UK), Powerton (United States) and TVA Paradise (United States). 12 figs., 1 tab.

  11. A risk management process for reinforced concrete structures by coupling modelling, monitoring and Bayesian approaches

    International Nuclear Information System (INIS)

    Capra, Bruno; Li, Kefei; Wolff, Valentin; Bernard, Olivier; Gerard, Bruno

    2004-01-01

    The impact of steel corrosion on the durability of reinforced concrete structures has long been a major concern in civil engineering. The main electrochemical mechanisms of steel corrosion are now well known. The material and structure degradation is attributed to the progressive formation of an expansive corrosion product at the steel-concrete interface. To assess structure lifetime quantitatively, a two-stage service life model has been widely accepted. So far, research attention has mainly been given to corrosion in un-cracked concrete. In practice, however, one is often confronted with reinforcement corrosion in already cracked concrete. How to quantify the corrosion risk is of great interest for the long-term durability of these cracked structures. To this end, this paper proposes a service life model for the corrosion process by carbonation in cracked or un-cracked concrete, depending on the observation or monitoring data available. Some recent experimental investigations are used to calibrate the models. The models are then applied to a shell structure to quantify the corrosion process and determine the optimal maintenance strategy. As corrosion processes are very difficult to model and are subject to material and environmental random variations, an example of structure reassessment is presented that takes in-situ information into account by means of Bayesian approaches. The coupling of monitoring, modelling and updating leads to a new global maintenance strategy for infrastructure. In conclusion: this paper presents a unified methodology coupling predictive models, observations and Bayesian approaches in order to assess the degradation degree of an ageing structure. The particular case of corrosion is treated in an innovative way through the development of a service life model taking into account cracking effects on the kinetics of the phenomena. At the material level, the dominant factors are the crack opening and the crack nature.

  12. Optimization approaches for treating nuclear power plant problems

    International Nuclear Information System (INIS)

    Abdelgoad, A.S.A.

    2012-01-01

    Electricity generation is the process of generating electric energy from other forms of energy. Many technologies can be and are used to generate electricity; one of them is nuclear power. A nuclear power plant (NPP) is a thermal power station in which the heat source is one or more nuclear reactors. As in a conventional thermal power station, the heat is used to generate steam that drives a steam turbine connected to a generator, which produces electricity. As of February 2nd, 2012, there were 439 nuclear power plants in operation throughout the world. NPPs are usually considered to be base load stations, which are best suited to constant power output. The thesis consists of five chapters. Chapter I presents a survey of some important concepts in NPP problems. Chapter II introduces the economic future of nuclear power: nuclear energy scenarios beyond 2015, market potential for electricity generation to 2030, and the economics of new plant construction. Chapter III presents a reliability-centered problem of power plant preventive maintenance scheduling. An NPP preventive maintenance scheduling problem with fuzzy parameters in the constraints is solved; a case study is provided to demonstrate the efficiency of the proposed model, and a comparison is carried out between the deterministic and fuzzy cases for the problem of concern. Chapter IV introduces a fuzzy approach to the generation expansion planning (GEP) problem in a multiobjective environment. The GEP problem is formulated as an integer programming model with fuzzy parameters in the constraints, and a parametric study is carried out. A case study demonstrates the efficiency of the proposed model, and a comparison is made between this approach and the deterministic one. Chapter V presents the conclusions arrived at in carrying out this thesis and gives some suggestions for further research.

  13. Availability analysis of mechanical systems with condition-based maintenance using semi-Markov and evaluation of optimal condition monitoring interval

    Science.gov (United States)

    Kumar, Girish; Jain, Vipul; Gandhi, O. P.

    2018-03-01

    Maintenance helps to extend equipment life by improving its condition and avoiding catastrophic failures. An appropriate model or mechanism is thus needed to quantify system availability vis-a-vis a given maintenance strategy, which will assist in decision-making for optimal utilization of maintenance resources. This paper deals with semi-Markov process (SMP) modeling for steady-state availability analysis of mechanical systems that follow condition-based maintenance (CBM), and with evaluation of the optimal condition monitoring interval. The developed SMP model is solved using a two-stage analytical approach for steady-state availability analysis of the system. The CBM interval is then chosen to maximize system availability using a genetic algorithm approach. The main contribution of the paper is a predictive tool for system availability that helps in deciding the optimum CBM policy. The proposed methodology is demonstrated for a centrifugal pump.
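The steady-state availability of a semi-Markov process reduces to weighting the embedded chain's stationary distribution by mean sojourn times. A small sketch (the four states and all numbers below are invented for illustration, not the paper's pump model):

```python
def smp_availability(P, tau, up_states):
    """Steady-state availability of a semi-Markov process.

    P         : embedded-chain transition matrix (rows sum to 1)
    tau       : mean sojourn time in each state
    up_states : indices of operational states

    A = sum_{i in up} pi_i * tau_i / sum_j pi_j * tau_j, where pi is the
    stationary distribution of the embedded chain, found here by damped
    power iteration (the damping handles periodic chains).
    """
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(5000):
        pi = [0.5 * pi[j] + 0.5 * sum(pi[i] * P[i][j] for i in range(n))
              for j in range(n)]
    total = sum(pi[j] * tau[j] for j in range(n))
    return sum(pi[i] * tau[i] for i in up_states) / total

# States: 0 = healthy (up), 1 = degraded (up), 2 = preventive repair,
# 3 = corrective repair. A monitored degradation is caught with
# probability 0.8 and repaired quickly; otherwise it runs to failure
# and needs a long corrective repair.
P = [[0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 0.8, 0.2],
     [1.0, 0.0, 0.0, 0.0],
     [1.0, 0.0, 0.0, 0.0]]
tau = [100.0, 20.0, 5.0, 50.0]   # mean sojourn times, hours
A = smp_availability(P, tau, up_states=[0, 1])
```

The monitoring interval enters such a model through the catch probability and the sojourn times; the paper searches that interval for maximum availability with a genetic algorithm, where a plain grid search would also do for a sketch.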

  14. Optical modeling toward optimizing monitoring of intestinal perfusion in trauma patients

    Science.gov (United States)

    Akl, Tony J.; Wilson, Mark A.; Ericson, M. N.; Coté, Gerard L.

    2013-02-01

    Trauma is the number one cause of death for people between the ages of 1 and 44 years in the United States. In addition, according to the Centers for Disease Control and Prevention, injury results in over 31 million emergency department visits annually. Minimizing the resuscitation period in major abdominal injuries increases survival rates by correcting impaired tissue oxygen delivery. Optimization of resuscitation requires a monitoring method to determine sufficient tissue oxygenation, which can be assessed from the adequacy of tissue perfusion. In this work, we present the design of a wireless perfusion and oxygenation sensor based on photoplethysmography. Through optical modeling, the benefit of using the visible wavelengths 470, 525 and 590 nm (around the 525 nm hemoglobin isosbestic point) for intestinal perfusion monitoring is compared to the typical near-infrared (NIR) wavelengths (805 nm isosbestic point) used in such sensors. Specifically, NIR wavelengths penetrate through the thin intestinal wall (~4 mm), leading to high background signals, whereas the visible wavelengths have roughly half the penetration depth of the NIR wavelengths. Monte Carlo simulations show that the transmittance of the three selected wavelengths is lower by five orders of magnitude, depending on the perfusion state. Due to the high absorbance of hemoglobin in the visible range, the perfusion signal carried by diffusely reflected light is also enhanced by an order of magnitude while oxygenation signal levels are maintained. In addition, short source-detector separations proved to be beneficial for limiting the probing depth to the thickness of the intestinal wall.
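The order-of-magnitude argument can be reproduced with a one-line Beer-Lambert attenuation model (the effective attenuation coefficients below are round illustrative numbers of my choosing, not the tissue values used in the paper's Monte Carlo runs):

```python
import math

def transmittance(mu_eff, thickness):
    """Beer-Lambert-style attenuation: I/I0 = exp(-mu_eff * d)."""
    return math.exp(-mu_eff * thickness)

wall = 4.0        # mm, approximate intestinal wall thickness
mu_green = 3.0    # 1/mm, visible ~525 nm: strong hemoglobin absorption (assumed)
mu_nir = 0.3      # 1/mm, ~805 nm: weak absorption, deep penetration (assumed)

t_green = transmittance(mu_green, wall)   # fraction leaking through the wall
t_nir = transmittance(mu_nir, wall)
```

With these assumed coefficients the visible light transmitted through the wall is over four orders of magnitude weaker than the NIR, which is why the shorter wavelengths confine the probing volume to the wall itself and suppress background from tissue behind it.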

  15. Establishing an air pollution monitoring network for intra-urban population exposure assessment : a location-allocation approach

    Energy Technology Data Exchange (ETDEWEB)

    Kanaroglou, P.S. [McMaster Univ., Hamilton, ON (Canada). School of Geography and Geology; Jerrett, M.; Beckerman, B.; Arain, M.A. [McMaster Univ., Hamilton, ON (Canada). School of Geography and Geology]|[McMaster Univ., Hamilton, ON (Canada). McMaster Inst. of Environment and Health; Morrison, J. [Carleton Univ., Ottawa, ON (Canada). School of Computer Science; Gilbert, N.L. [Health Canada, Ottawa, ON (Canada). Air Health Effects Div; Brook, J.R. [Meteorological Service of Canada, Toronto, ON (Canada)

    2004-10-01

    A study was conducted to assess the relation between traffic-generated air pollution and health reactions ranging from childhood asthma to mortality from lung cancer. In particular, it developed a formal method of optimally locating a dense network of air pollution monitoring stations in order to derive an exposure assessment model based on the data obtained from the monitoring stations and related land use, population and biophysical information. The method for determining the locations of 100 nitrogen dioxide monitors in Toronto, Ontario, focused on land use, transportation infrastructure and the distribution of at-risk populations. The exposure assessment produced reasonable estimates at the intra-urban scale. This method for locating air pollution monitors effectively maximizes sampling coverage in relation to important socio-demographic characteristics and likely pollution variability. The location-allocation approach integrates many variables into the demand surface to reconfigure a monitoring network and is especially useful for measuring traffic pollutants with fine-scale spatial variability. The method also shows great promise for improving the assessment of exposure to ambient air pollution in epidemiologic studies. 19 refs., 3 tabs., 4 figs.
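Location-allocation problems of this kind are often approximated greedily: repeatedly place the next monitor where it covers the most still-uncovered demand. A minimal sketch (Euclidean coverage radius and toy coordinates of my own; the study's actual demand surface and solver are more elaborate):

```python
def greedy_monitor_placement(candidates, demand, k, radius):
    """Greedy maximal-covering location-allocation sketch.

    candidates: list of (x, y) potential monitor sites
    demand:     list of (x, y, weight) demand points (e.g. population)
    Picks up to k sites, each time choosing the site that covers the
    most still-uncovered demand weight within `radius`.
    """
    def covers(site, pt):
        return (site[0] - pt[0]) ** 2 + (site[1] - pt[1]) ** 2 <= radius ** 2
    uncovered = list(range(len(demand)))
    chosen = []
    for _ in range(k):
        best, best_gain = None, 0.0
        for s in candidates:
            if s in chosen:
                continue
            gain = sum(demand[i][2] for i in uncovered if covers(s, demand[i]))
            if gain > best_gain:
                best, best_gain = s, gain
        if best is None:           # nothing left to cover
            break
        chosen.append(best)
        uncovered = [i for i in uncovered if not covers(best, demand[i])]
    return chosen
```

Greedy selection carries the classic (1 - 1/e) approximation guarantee for maximal coverage, which is usually good enough for siting a dense urban network.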

  16. Optimizing cloud removal from satellite remotely sensed data for monitoring vegetation dynamics in humid tropical climate

    International Nuclear Information System (INIS)

    Hashim, M; Pour, A B; Onn, C H

    2014-01-01

    Remote sensing technology is an important tool for analyzing vegetation dynamics and quantifying the vegetation fraction of Earth's agricultural and natural vegetation. In optical remote sensing analysis, removing atmospheric interferences, particularly distributed cloud contamination, is always a critical task in the tropical climate. This paper suggests a fast alternative approach to remove cloud and shadow contamination from Landsat Enhanced Thematic Mapper Plus (ETM+) multi-temporal datasets. Bands 3 and 4 of the Landsat ETM+ datasets are the two spectral bands crucial to the cloud removal technique in this study. The normalised difference vegetation index (NDVI) and the normalised difference soil index (NDSI) are the two main derivatives computed from the datasets. Change vector analysis is used in this study to track vegetation dynamics. The cloud-removal approach developed in this study is broadly applicable to optical remote sensing satellite data that are seriously obscured by heavy cloud contamination in the tropical climate
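A sketch of the band arithmetic involved: NDVI from ETM+ bands 3 (red) and 4 (NIR), plus a per-pixel maximum-value composite across acquisition dates, a standard way to suppress cloud-depressed observations (the paper's actual masking procedure, which also uses NDSI and change vector analysis, is more elaborate than this):

```python
def ndvi(red, nir):
    """Normalised difference vegetation index per pixel:
    (NIR - red) / (NIR + red), from ETM+ bands 3 and 4 reflectances."""
    return [(n - r) / (n + r) if (n + r) != 0.0 else 0.0
            for r, n in zip(red, nir)]

def max_value_composite(ndvi_stacks):
    """Per-pixel maximum over a short multi-temporal NDVI stack:
    clouds depress NDVI, so the maximum tends to select the
    cloud-free acquisition at each pixel."""
    return [max(pixel) for pixel in zip(*ndvi_stacks)]
```

Given two dates where a pixel is cloudy in one scene and clear in the other, the composite recovers the clear-sky NDVI value for that pixel without any explicit cloud mask.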

  17. Near-Port Air Quality Assessment Utilizing a Mobile Monitoring Approach

    Data.gov (United States)

    U.S. Environmental Protection Agency — Near-Port Air Quality Assessment Utilizing a Mobile Monitoring Approach. This dataset is associated with the following publication: Steffens, J., S. Kimbrough, R....

  18. Robotic Spent Fuel Monitoring – It is time to improve old approaches and old techniques!

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, Stephen Joseph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dasari, Venkateswara Rao [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Trellue, Holly Renee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-12-13

    This report describes various approaches and techniques associated with robotic spent fuel monitoring. The purpose of this description is to improve the quality of measured signatures, reduce the inspection burden on the IAEA, and to provide frequent verification.

  19. Reliable fault detection and diagnosis of photovoltaic systems based on statistical monitoring approaches

    KAUST Repository

    Harrou, Fouzi; Sun, Ying; Taghezouit, Bilal; Saidi, Ahmed; Hamlati, Mohamed-Elkarim

    2017-01-01

    This study reports the development of an innovative fault detection and diagnosis scheme to monitor the direct current (DC) side of photovoltaic (PV) systems. Towards this end, we propose a statistical approach that exploits the advantages of one

  20. A citizen science approach to monitoring bleaching in the zoantharian Palythoa tuberculosa

    KAUST Repository

    Parkinson, John Everett; Yang, Sung-Yin; Kawamura, Iori; Byron, Gordon; Todd, Peter Alan; Reimer, James Davis

    2016-01-01

    in midwinter, as well as low sample size and brief training owing to the course structure. Despite certain limitations of P. tuberculosa as a focal organism, the citizen science approach to color monitoring has promise, and we

  1. A theoretical approach to calibrate radiation portal monitor (RPM) systems

    International Nuclear Information System (INIS)

    Nafee, Sherif S.; Abbas, Mahmoud I.

    2008-01-01

    Radiation portal monitor (RPM) systems are widely used at international border crossings, where they are applied to the task of detecting nuclear devices, special nuclear material, and radiation dispersal device materials that could appear at borders. The requirements and constraints on RPM systems deployed at high-volume border crossings are significantly different from those at weapons facilities or steel recycling plants, the former being required to rapidly detect localized sources of radiation with a very high detection probability and low false-alarm rate, while screening all of the traffic without impeding the flow of commerce [Chambers, W.H., Atwater, H.F., Fehlau, P.E., Hastings, R.D., Henry, C.N., Kunz, W.E., Sampson, T.E., Whittlesey, T.H., Worth, G.M., 1974. Portal Monitor for Diversion Safeguards. LA-5681, Los Alamos Scientific Laboratory, Los Alamos, NM]. In the present work, compact analytical formulae are derived and used to calibrate two RPM systems with isotropic radiating sources: (i) polyvinyltoluene (PVT) or plastic and (ii) thallium-doped crystalline sodium iodide, NaI(Tl), gamma-ray detector materials. The calculated efficiencies are compared to measured values reported in the literature, showing very good agreement

  2. New approach to airborne monitoring of radioactive pollution

    International Nuclear Information System (INIS)

    Hoeschl, V.; Jurza, P.; Pavlik, B.

    1997-01-01

    The use of remote sensing methods in the monitoring of an environment is increasing. The best results are obtained when various types of exploration methods are available. This paper presents the use of airborne gamma ray methods, which can be included in a wide scope of works related to environmental problems. It may concern uranium mining areas, areas surrounding various nuclear facilities or areas of Chernobyl fallout. Gamma ray spectrometry data can be combined with airborne magnetic, surface gravity and satellite imagery data to obtain maximum information in data output. Airborne geophysics is able to detect and delineate radioactive contamination and to find important geological trends defining the geological structure of the monitored area. Our company PICODAS Prague Ltd. introduces new sophisticated airborne instrumentation as well as up-to-date data processing and data presentation techniques. In the Czech Cretaceous, a long term project, ''The Structurally-tectonic Survey of the South-West Foreland of the Straz Deposit'' has been undertaken, concerning the ecological load on the environment, especially the pollution of the underground water level horizons due to uranium mining in that area. The major interest is the complicated tectonic structure which interferes heavily with the hydrogeological situation of the region. The paper presents the results of airborne surveys and the interpretation of other geophysical data from the surroundings of Straz pod Ralskem and from Karlovy Vary. (author)

  3. A Simple Approach for Monitoring Business Service Time Variation

    Directory of Open Access Journals (Sweden)

    Su-Fen Yang

    2014-01-01

    Full Text Available Control charts are effective tools for signal detection in both manufacturing processes and service processes. Much of the data in service industries comes from processes having nonnormal or unknown distributions. The commonly used Shewhart variable control charts, which depend heavily on the normality assumption, are not appropriately used here. In this paper, we propose a new asymmetric EWMA variance chart (EWMA-AV chart) and an asymmetric EWMA mean chart (EWMA-AM chart) based on two simple statistics to monitor process variance and mean shifts simultaneously. Further, we explore the sampling properties of the new monitoring statistics and calculate the average run lengths when using both the EWMA-AV chart and the EWMA-AM chart. The performance of the EWMA-AV and EWMA-AM charts and that of some existing variance and mean charts are compared. A numerical example involving nonnormal service times from the service system of a bank branch in Taiwan is used to illustrate the applications of the EWMA-AV and EWMA-AM charts and to compare them with the existing variance (or standard deviation) and mean charts. The proposed EWMA-AV chart and EWMA-AM charts show superior detection performance compared to the existing variance and mean charts. The EWMA-AV chart and EWMA-AM chart are thus recommended.

  4. A simple approach for monitoring business service time variation.

    Science.gov (United States)

    Yang, Su-Fen; Arnold, Barry C

    2014-01-01

    Control charts are effective tools for signal detection in both manufacturing processes and service processes. Much of the data in service industries comes from processes having nonnormal or unknown distributions. The commonly used Shewhart variable control charts, which depend heavily on the normality assumption, are not appropriately used here. In this paper, we propose a new asymmetric EWMA variance chart (EWMA-AV chart) and an asymmetric EWMA mean chart (EWMA-AM chart) based on two simple statistics to monitor process variance and mean shifts simultaneously. Further, we explore the sampling properties of the new monitoring statistics and calculate the average run lengths when using both the EWMA-AV chart and the EWMA-AM chart. The performance of the EWMA-AV and EWMA-AM charts and that of some existing variance and mean charts are compared. A numerical example involving nonnormal service times from the service system of a bank branch in Taiwan is used to illustrate the applications of the EWMA-AV and EWMA-AM charts and to compare them with the existing variance (or standard deviation) and mean charts. The proposed EWMA-AV chart and EWMA-AM charts show superior detection performance compared to the existing variance and mean charts. The EWMA-AV chart and EWMA-AM chart are thus recommended.
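The paper's asymmetric EWMA-AV and EWMA-AM statistics are not reproduced here, but the classical EWMA charting mechanism they build on can be sketched generically; the smoothing constant λ = 0.2 and width L = 3 below are conventional defaults, not the authors' choices:

```python
import math

def ewma_chart(data, mean, sigma, lam=0.2, L=3.0):
    """Classical EWMA chart with time-varying control limits.
    Returns (statistic, lower limit, upper limit, signal) per sample."""
    z, points = mean, []
    for t, x in enumerate(data, start=1):
        z = lam * x + (1 - lam) * z  # exponentially weighted moving average
        half = L * sigma * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        points.append((z, mean - half, mean + half, abs(z - mean) > half))
    return points
```

Because the statistic pools information across samples, a sustained small shift drives z outside the limits far sooner than a Shewhart chart would signal.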

  5. A general approach for optimal kinematic design of 6-DOF parallel ...

    Indian Academy of Sciences (India)

    Optimal kinematic design of parallel manipulators is a challenging problem. In this work, an attempt has been made to present a generalized approach of kinematic design for a 6-legged parallel manipulator, by considering only the minimally required design parameters. The same approach has been used to design a ...

  6. Tomographic Reconstruction from a Few Views: A Multi-Marginal Optimal Transport Approach

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, I., E-mail: isabelle.abraham@cea.fr [CEA Ile de France (France); Abraham, R., E-mail: romain.abraham@univ-orleans.fr; Bergounioux, M., E-mail: maitine.bergounioux@univ-orleans.fr [Université d’Orléans, UFR Sciences, MAPMO, UMR 7349 (France); Carlier, G., E-mail: carlier@ceremade.dauphine.fr [CEREMADE, UMR CNRS 7534, Université Paris IX Dauphine, Pl. de Lattre de Tassigny (France)

    2017-02-15

    In this article, we focus on tomographic reconstruction. The problem is to determine the shape of the interior interface using a tomographic approach while very few X-ray radiographs are performed. We use a multi-marginal optimal transport approach. Preliminary numerical results are presented.

  7. An Efficient Approach for Solving Mesh Optimization Problems Using Newton’s Method

    Directory of Open Access Journals (Sweden)

    Jibum Kim

    2014-01-01

    Full Text Available We present an efficient approach for solving various mesh optimization problems. Our approach is based on Newton’s method, which uses both first-order (gradient) and second-order (Hessian) derivatives of the nonlinear objective function. The volume and surface mesh optimization algorithms are developed such that mesh validity and surface constraints are satisfied. We also propose several Hessian modification methods for when the Hessian matrix is not positive definite. We demonstrate our approach by comparing our method with nonlinear conjugate gradient and steepest descent methods in terms of both efficiency and mesh quality.
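One widely used Hessian modification of the kind the abstract mentions is to add a multiple of the identity until a Cholesky factorization succeeds; the sketch below implements that generic scheme, not necessarily the authors' specific variants:

```python
import numpy as np

def modified_newton(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton's method with a simple Hessian modification: when the
    Hessian is not positive definite, add tau*I (doubling tau) until
    Cholesky succeeds, so each step is taken on a convex local model."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H, tau = hess(x), 0.0
        while True:
            try:
                np.linalg.cholesky(H + tau * np.eye(x.size))  # PD check
                break
            except np.linalg.LinAlgError:
                tau = max(2.0 * tau, 1e-3)
        x = x + np.linalg.solve(H + tau * np.eye(x.size), -g)
    return x
```

On a convex quadratic this reduces to plain Newton and converges in one step; the modification only activates in indefinite regions, e.g. near a local maximum of a mesh-quality objective.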

  8. Stochastic Real-Time Optimal Control: A Pseudospectral Approach for Bearing-Only Trajectory Optimization

    Science.gov (United States)

    2011-09-01


  9. A complex systems approach to planning, optimization and decision making for energy networks

    International Nuclear Information System (INIS)

    Beck, Jessica; Kempener, Ruud; Cohen, Brett; Petrie, Jim

    2008-01-01

    This paper explores a new approach to planning and optimization of energy networks, using a mix of global optimization and agent-based modeling tools. This approach takes account of techno-economic, environmental and social criteria, and engages explicitly with inherent network complexity in terms of the autonomous decision-making capability of individual agents within the network, who may choose not to act as economic rationalists. This is an important consideration from the standpoint of meeting sustainable development goals. The approach attempts to set targets for energy planning, by determining preferred network development pathways through multi-objective optimization. The viability of such plans is then explored through agent-based models. The combined approach is demonstrated for a case study of regional electricity generation in South Africa, with biomass as feedstock

  10. Surface laser marking optimization using an experimental design approach

    Science.gov (United States)

    Brihmat-Hamadi, F.; Amara, E. H.; Lavisse, L.; Jouvard, J. M.; Cicala, E.; Kellou, H.

    2017-04-01

    Laser surface marking is performed on a titanium substrate using a pulsed frequency doubled Nd:YAG laser (λ = 532 nm, τ_pulse = 5 ns) to process the substrate surface under normal atmospheric conditions. The aim of the work is to investigate, following experimental and statistical approaches, the correlation between the process parameters and the response variables (output), using Design of Experiment methods (DOE): the Taguchi methodology and a response surface methodology (RSM). A design is first created using the MINITAB program, and then the laser marking process is performed according to the planned design. The response variables, surface roughness and surface reflectance, were measured for each sample and incorporated into the design matrix. The results are then analyzed and the RSM model is developed and verified for predicting the process output for a given set of process parameter values. The analysis shows that the laser beam scanning speed is the most influential operating factor, followed by the laser pumping intensity during marking, while the other factors show complex influences on the objective functions.

  11. Gynecomastia associated with herniated nipples: an optimal surgical approach.

    Science.gov (United States)

    Jaiswal, Rohit; Pu, Lee L Q

    2012-04-01

    Gynecomastia is a common disorder observed in male plastic surgery patients. Treatment options may include observation, surgical excision, or liposuction techniques. Congenital herniated nipple is a more rare condition, especially in male patients. We present the case of a 12-year-old boy with bilateral gynecomastia and herniated nipple-areolar complexes. A staged repair was undertaken in this patient with grade 2 gynecomastia. The first operation was ultrasonic liposuction bilaterally, yielding 200 mL of aspirate from the left and 400 mL on the right, to correct the gynecomastia. The second procedure, performed 6 months later, was a bilateral periareolar mastopexy to repair the herniated nipple-areolar complexes. The result of the first procedure was flattened and symmetrical breast tissue bilaterally, essentially a correction of the gynecomastia. The herniated nipples were still present, however. Bilateral periareolar mastopexies were then performed with resulting reduction of the herniations. There were no complications with either procedure, and a good cosmetic result was achieved. A staged surgical approach was successful in correcting both conditions with an excellent aesthetic result and the advantage of decreased risk for nipple complications.

  12. An Informatics Approach to Demand Response Optimization in Smart Grids

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Aman, Saima; Cao, Baohua; Giakkoupis, Mike; Kumbhare, Alok; Zhou, Qunzhi; Paul, Donald; Fern, Carol; Sharma, Aditya; Prasanna, Viktor K

    2011-03-03

    Power utilities are increasingly rolling out “smart” grids with the ability to track consumer power usage in near real-time using smart meters that enable bidirectional communication. However, the true value of smart grids is unlocked only when the veritable explosion of data that will become available is ingested, processed, analyzed and translated into meaningful decisions. These include the ability to forecast electricity demand, respond to peak load events, and improve sustainable use of energy by consumers, and are made possible by energy informatics. Information and software system techniques for a smarter power grid include pattern mining and machine learning over complex events and integrated semantic information, distributed stream processing for low latency response, Cloud platforms for scalable operations and privacy policies to mitigate information leakage in an information rich environment. Such an informatics approach is being used in the DoE sponsored Los Angeles Smart Grid Demonstration Project, and the resulting software architecture will lead to an agile and adaptive Los Angeles Smart Grid.

  13. A unified modeling approach for physical experiment design and optimization in laser driven inertial confinement fusion

    Energy Technology Data Exchange (ETDEWEB)

    Li, Haiyan [Mechatronics Engineering School of Guangdong University of Technology, Guangzhou 510006 (China); Huang, Yunbao, E-mail: Huangyblhy@gmail.com [Mechatronics Engineering School of Guangdong University of Technology, Guangzhou 510006 (China); Jiang, Shaoen, E-mail: Jiangshn@vip.sina.com [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China); Jing, Longfei, E-mail: scmyking_2008@163.com [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China); Tianxuan, Huang; Ding, Yongkun [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China)

    2015-11-15

    Highlights: • A unified modeling approach for physical experiment design is presented. • Any laser facility can be flexibly defined and included with two scripts. • Complex targets and laser beams can be parametrically modeled for optimization. • Automatic mapping of laser beam energy facilitates target shape optimization. - Abstract: Physical experiment design and optimization is essential for laser driven inertial confinement fusion due to the high cost of each shot. However, only limited experiments with simple structures or shapes on several laser facilities can be designed and evaluated in available codes, and targets are usually defined by programming, which makes complex-shape target design and optimization on arbitrary laser facilities difficult. A unified modeling approach for physical experiment design and optimization on any laser facility is presented in this paper. Its core ideas include: (1) any laser facility can be flexibly defined and included with two scripts, (2) complex-shape targets and laser beams can be parametrically modeled based on features, (3) an automatic scheme for mapping laser beam energy onto discrete mesh elements of targets enables targets or laser beams to be optimized without any additional interactive modeling or programming, and (4) computation algorithms are additionally presented to efficiently evaluate radiation symmetry on the target. Finally, examples are demonstrated to validate the significance of such a unified modeling approach for physical experiment design and optimization in laser driven inertial confinement fusion.

  14. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    Science.gov (United States)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: the infiltration parameters are obtained in the first stage and the unit hydrograph ordinates are estimated in the second. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during subsequent generations of the genetic algorithm, required for searching optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated by using two example problems. The evaluation shows that the model is superior, simple in concept and also has the potential for field application.
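The core idea of an exterior penalty with a generation-dependent parameter can be sketched as follows; the initial weight and reduction factor are illustrative constants, not the values used in the paper:

```python
def penalized_fitness(objective, violations, r0=1000.0, reduction=0.9, generation=0):
    """Convert a constrained objective into an unconstrained fitness by
    adding a quadratic penalty on constraint violations (v > 0 means
    violated). The penalty weight shrinks by a reduction factor each GA
    generation, so parameter values fixed early in the search are not
    destroyed while later generations refine the remaining unknowns."""
    r = r0 * reduction ** generation
    return objective + r * sum(max(0.0, v) ** 2 for v in violations)
```

A GA minimizing this fitness needs no special constraint handling: feasible candidates are scored purely on the objective, and infeasible ones pay a cost that decays as the search progresses.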

  15. Topology Optimization of Constrained Layer Damping on Plates Using Method of Moving Asymptote (MMA) Approach

    Directory of Open Access Journals (Sweden)

    Zheng Ling

    2011-01-01

    Full Text Available Damping treatments have been extensively used as a powerful means to damp out structural resonant vibrations. Usually, damping materials are fully covered on the surface of plates. The drawbacks of this conventional treatment are also obvious due to added mass and excess material consumption. Therefore, it is not always economical and effective from an optimization design view. In this paper, a topology optimization approach is presented to maximize the modal damping ratio of the plate with constrained layer damping treatment. The governing equation of motion of the plate is derived on the basis of an energy approach. A finite element model to describe the dynamic performance of the plate is developed and used along with an optimization algorithm in order to determine the optimal topologies of the constrained layer damping layout on the plate. The damping of the visco-elastic layer is modeled by the complex modulus formula. Considering the vibration and energy dissipation mode of the plate with constrained layer damping treatment, damping material density and volume factor are considered as the design variable and constraint, respectively. Meanwhile, the modal damping ratio of the plate is assigned as the objective function in the topology optimization approach. The sensitivity of the modal damping ratio to the design variable is further derived and the Method of Moving Asymptote (MMA) is adopted to search the optimized topologies of the constrained layer damping layout on the plate. Numerical examples are used to demonstrate the effectiveness of the proposed topology optimization approach. The results show that vibration energy dissipation of the plates can be enhanced by the optimal constrained layer damping layout.
This optimal technology can be further extended to vibration attenuation of sandwich cylindrical shells which constitute the major building block of many critical structures such as cabins of aircrafts, hulls of submarines and bodies of rockets and missiles as an

  16. Monitoring of the radon exposure in workplaces: Regulatory approaches

    International Nuclear Information System (INIS)

    Ettenhuber, E.

    2002-01-01

    Germany has a reference level of 2×10^6 Bq·h/m^3 for radon in workplaces, corresponding to an annual dose of 6 mSv, and a limit of 6×10^6 Bq·h/m^3, corresponding to 10 mSv/y. If the reference level is exceeded, remedial action has to be taken and a new radon measurement should be carried out. If it is not possible to reduce the radon concentration below the reference level, the competent authority has to be notified and monitoring of the radon concentrations performed. Germany has performed a study to investigate the exposure to natural radionuclides in workplaces in a large number of industrial activities, with a dose assessment of the workers under normal circumstances. It categorized NORM activities in dose ranges up to 20 mSv/y. Most of the NORM activities fall in the category <1 mSv/y when normal occupational hygiene measures are taken

  17. Multi-objective approach in thermoenvironomic optimization of a benchmark cogeneration system

    International Nuclear Information System (INIS)

    Sayyaadi, Hoseyn

    2009-01-01

    Multi-objective optimization for the design of a benchmark cogeneration system known as the CGAM cogeneration system has been performed. In the optimization approach, the exergetic, economic and environmental aspects have been considered simultaneously. The thermodynamic modeling has been implemented comprehensively, while the economic analysis was conducted in accordance with the total revenue requirement (TRR) method. The results for the single objective thermoeconomic optimization have been compared with previous studies in optimization of the CGAM problem. In the multi-objective optimization of the CGAM problem, three objective functions including the exergetic efficiency, the total levelized cost rate of the system product and the cost rate of environmental impact have been considered. The environmental impact objective function has been defined and expressed in cost terms. This objective has been integrated with the thermoeconomic objective to form a new unique objective function known as a thermoenvironomic objective function. The thermoenvironomic objective has been minimized while the exergetic objective has been maximized. One of the most suitable optimization techniques, developed using a particular class of search algorithms known as multi-objective evolutionary algorithms (MOEAs), has been considered here. This approach, which is based on the genetic algorithm, has been applied to find the set of Pareto optimal solutions with respect to the aforementioned objective functions. An example of decision-making has been presented and a final optimal solution has been introduced. The sensitivity of the solutions to the interest rate and the fuel cost has been studied

  18. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    Science.gov (United States)

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important approach of the engineering optimization for the industrial dynamic processes. However, its major defect, the low optimization efficiency caused by calculating the relevant differential equations in the generated nonlinear programming (NLP) problem repeatedly, limits its wide application in the engineering optimization for the industrial dynamic processes. A novel highly effective control parameterization approach, fast-CVP, is first proposed to improve the optimization efficiency for industrial dynamic processes, where the costate gradient formula is employed and a fast approximate scheme is presented to solve the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems of the industrial dynamic processes are demonstrated as illustration. The research results show that the proposed fast approach achieves a fine performance in that at least 90% of the computation time can be saved in contrast to the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for the industrial dynamic processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
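The CVP idea itself, approximating the control by piecewise-constant segments so that the dynamic optimization becomes a finite-dimensional NLP, can be sketched on a toy first-order system; the plant dx/dt = -x + u, horizon, and target below are illustrative, not one of the paper's benchmark problems:

```python
import math

def simulate(u, x0=1.0, T=1.0, substeps=20):
    """Integrate dx/dt = -x + u(t) by explicit Euler, with u(t) held
    piecewise constant on len(u) equal intervals (the CVP discretization).
    Returns the final state x(T)."""
    dt = T / (len(u) * substeps)
    x = x0
    for uk in u:
        for _ in range(substeps):
            x += dt * (-x + uk)
    return x

def cvp_objective(u, target=0.5):
    """Terminal cost of the parameterized control; any NLP solver can
    now minimize this over the finite vector u."""
    return (simulate(u) - target) ** 2
```

The cost the abstract addresses is visible here: every evaluation of `cvp_objective` re-integrates the differential equation, which is exactly the expense fast-CVP reduces with its approximate simulation scheme.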

  19. A Hybrid Change Detection Approach for Damage Detection and Recovery Monitoring

    Science.gov (United States)

    de Alwis Pitts, Dilkushi; Wieland, Marc; Wang, Shifeng; So, Emily; Pittore, Massimiliano

    2014-05-01

    Following a disaster, change detection via pre- and post-event very high resolution remote sensing images is an essential technique for damage assessment and recovery monitoring over large areas in complex urban environments. Most assessments to date focus on the detection, destruction and recovery of man-made objects that facilitate shelter and accessibility, such as buildings, roads, bridges, etc., as indicators for assessment and better decision making. Moreover, many current change-detection mechanisms do not use all the data and knowledge which are often available for the pre-disaster state. Recognizing the continuous rather than dichotomous character of the data-rich/data-poor distinction permits the incorporation of ancillary data and existing knowledge into the processing flow. Such incorporation could improve the reliability of the results and thereby enhance the usability of robust methods for disaster management. This study proposes an application-specific and robust change detection method from multi-temporal very high resolution multi-spectral satellite images. This hybrid indicator-specific method uses readily available pre-disaster GIS data and integrates existing knowledge into the processing flow to optimize the change detection while offering the possibility to target specific types of changes to man-made objects. The indicator-specific information of the GIS objects is used as a series of masks so that GIS objects with similar characteristics are treated similarly for better accuracy. The proposed approach is based on the fusion of a multi-index change detection method using gradient, texture and edge similarity filters. The change detection index is flexible for disaster cases in which the pre-disaster and post-disaster images are not of the same resolution. The proposed automated method is evaluated with QuickBird and Ikonos datasets for abrupt changes soon after disaster. The method could also be extended in a semi-automated way for monitoring

  20. Inverse Reliability Task: Artificial Neural Networks and Reliability-Based Optimization Approaches

    OpenAIRE

    Lehký, David; Slowik, Ondřej; Novák, Drahomír

    2014-01-01

    Part 7: Genetic Algorithms; International audience; The paper presents two alternative approaches to solve inverse reliability task – to determine the design parameters to achieve desired target reliabilities. The first approach is based on utilization of artificial neural networks and small-sample simulation Latin hypercube sampling. The second approach considers inverse reliability task as reliability-based optimization task using double-loop method and also small-sample simulation. Efficie...

  1. Examining change detection approaches for tropical mangrove monitoring

    Science.gov (United States)

    Myint, Soe W.; Franklin, Janet; Buenemann, Michaela; Kim, Won; Giri, Chandra

    2014-01-01

    This study evaluated the effectiveness of different band combinations and classifiers (unsupervised, supervised, object-oriented nearest neighbor, and object-oriented decision rule) for quantifying mangrove forest change using multitemporal Landsat data. A discriminant analysis using spectra of different vegetation types determined that bands 2 (0.52 to 0.6 μm), 5 (1.55 to 1.75 μm), and 7 (2.08 to 2.35 μm) were the most effective bands for differentiating mangrove forests from surrounding land cover types. Thirty-six change maps, produced by the twelve change detection approaches, were ranked by comparing their classification accuracies. The object-based Nearest Neighbor classifier produced the highest mean overall accuracy (84 percent) regardless of band combination. The automated decision rule-based approach (mean overall accuracy of 88 percent), as well as a composite of bands 2, 5, and 7 used with the unsupervised classifier and the same composite or all-band difference used with the object-oriented Nearest Neighbor classifier, were the most effective approaches.

  2. Realizing an Optimization Approach Inspired from Piaget’s Theory on Cognitive Development

    Directory of Open Access Journals (Sweden)

    Utku Kose

    2015-09-01

    Full Text Available The objective of this paper is to introduce an artificial intelligence based optimization approach inspired by Piaget’s theory of cognitive development. The approach has been designed according to the essential processes that an individual may experience while learning something new or improving his or her knowledge. These processes are associated with Piaget’s ideas on an individual’s cognitive development. The approach expressed in this paper is a simple algorithm employing swarm intelligence oriented tasks in order to solve single-objective optimization problems. To evaluate the effectiveness of this early version of the algorithm, tests have been run on some benchmark functions. The obtained results show that the approach / algorithm can be an alternative to those in the literature for single-objective optimization. The authors have suggested the name Cognitive Development Optimization Algorithm (CoDOA) for the related intelligent optimization approach.

  3. A robust optimization based approach for microgrid operation in deregulated environment

    International Nuclear Information System (INIS)

    Gupta, R.A.; Gupta, Nand Kishor

    2015-01-01

    Highlights: • RO based approach developed for optimal MG operation in deregulated environment. • Wind uncertainty modeled by interval forecasting through ARIMA model. • Proposed approach evaluated using two realistic case studies. • Proposed approach evaluated the impact of degree of robustness. • Proposed approach gives a significant reduction in operation cost of microgrid. - Abstract: Micro Grids (MGs) are clusters of Distributed Energy Resource (DER) units and loads. MGs are self-sustainable and generally operated in two modes: (1) grid connected and (2) grid isolated. In a deregulated environment, the operation of the MG is managed by the Microgrid Operator (MO) with the objective of minimizing the total cost of operation. MG management is crucial in the deregulated power system due to (i) the integration of intermittent renewable sources such as wind and photovoltaic (PV) generation, and (ii) volatile grid prices. This paper presents a robust optimization based approach for optimal MG management considering wind power uncertainty. A time series based Autoregressive Integrated Moving Average (ARIMA) model is used to characterize the wind power uncertainty through interval forecasting. The proposed approach is illustrated through a case study having both dispatchable and non-dispatchable generators in different modes of operation. Further, the impact of the degree of robustness on the total cost of operation of the MG is analyzed in both cases. A comparative analysis between the results obtained using the proposed approach and another existing approach shows the strength of the proposed approach in cost minimization in MG management
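An interval forecast of the kind the ARIMA model supplies can be sketched with the simplest member of that family, an AR(1) fitted by least squares; the 95% multiplier z = 1.96 and the model order are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def ar1_interval_forecast(series, steps, z=1.96):
    """Fit y_t = c + phi*y_{t-1} + e_t by least squares and return a list
    of (lower, mean, upper) forecasts whose interval width grows with the
    horizon, the kind of bound a robust dispatch uses for wind power."""
    y = np.asarray(series, dtype=float)
    x, nxt = y[:-1], y[1:]
    phi = np.dot(x - x.mean(), nxt - nxt.mean()) / np.dot(x - x.mean(), x - x.mean())
    c = nxt.mean() - phi * x.mean()
    s2 = np.mean((nxt - (c + phi * x)) ** 2)  # residual variance
    mean, var, out = y[-1], 0.0, []
    for _ in range(steps):
        mean = c + phi * mean
        var = s2 + phi ** 2 * var  # forecast variance accumulates with horizon
        out.append((mean - z * np.sqrt(var), mean, mean + z * np.sqrt(var)))
    return out
```

The robust optimization then treats the realized wind power as free to lie anywhere inside these intervals, with the degree of robustness controlling how many periods may sit at their worst-case bound.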

  4. An approach for multi-objective optimization of vehicle suspension system

    Science.gov (United States)

    Koulocheris, D.; Papaioannou, G.; Christodoulou, D.

    2017-10-01

    In this paper, a half car model with nonlinear suspension systems is selected in order to study the vertical vibrations and to optimize its suspension system with respect to ride comfort and road holding. A road bump was used as the road profile. At first, the optimization problem is solved with the use of Genetic Algorithms with respect to 6 optimization targets. Then the k - ɛ optimality method was implemented to locate one optimum solution. Furthermore, an alternative approach is presented in this work: the previous optimization targets are separated into main and supplementary ones, depending on their importance in the analysis. The supplementary targets are not crucial to the optimization, but they could enhance the main objectives. Thus, the problem was solved again using Genetic Algorithms with respect to the 3 main targets of the optimization. Having obtained the Pareto set of solutions, the k - ɛ optimality method was implemented for the 3 main targets and the supplementary ones, evaluated by simulation of the vehicle model. The results of both cases are presented and discussed in terms of convergence of the optimization and computational time. The optimum solutions acquired from both cases are compared based on performance metrics as well.
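    The Pareto set mentioned in this record is simply the set of non-dominated objective vectors. A minimal sketch, assuming minimization in every objective (the design points below are made up for illustration):

    ```python
    def dominates(a, b):
        """True if objective vector a is no worse than b in every
        objective (minimization) and strictly better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        """Keep only the non-dominated points."""
        return [p for p in points
                if not any(dominates(q, p) for q in points if q is not p)]

    # e.g. (discomfort metric, road-holding metric) for candidate designs
    designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
    front = pareto_front(designs)  # (3.0, 4.0) is dominated by (2.0, 3.0)
    ```

    A k - ɛ style post-selection would then pick one compromise point from `front`.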

  5. Tissue viability monitoring: a multi-sensor wearable platform approach

    Science.gov (United States)

    Mathur, Neha; Davidson, Alan; Buis, Arjan; Glesk, Ivan

    2016-12-01

    Health services worldwide are seeking ways to improve patient care for amputees suffering from diabetes and at the same time reduce costs. The monitoring of residual limb temperature, interface pressure and gait can be a useful indicator of tissue viability in lower limb amputees, especially for predicting the occurrence of pressure ulcers. The risk is further exacerbated by the elevated temperature and humid micro-environment within the prosthesis, which encourage the growth of bacteria and skin breakdown. Wearable systems for prosthetic users have to be designed such that the sensors are minimally obtrusive and reliable enough to faithfully record movement and physiological signals. A mobile sensor platform has been developed for use by lower limb prosthetic users. This system uses an Arduino board that includes sensors for temperature, gait, orientation and pressure measurements. The platform transmits sensor data to a central health authority database server infrastructure over the Bluetooth protocol at a suitable sampling rate. The data-sets recorded by these systems are then processed using machine learning algorithms to extract clinically relevant information. Where a sensor threshold is reached, a warning signal can be sent wirelessly, together with the relevant data, to the patient and appropriate medical personnel. This knowledge is also useful in establishing biomarkers related to a possible deterioration in a patient's health or for assessing the impact of clinical interventions.

  6. Optimization of the choice of unmanned aerial vehicles used to monitor the implementation of selected construction projects

    Science.gov (United States)

    Skorupka, Dariusz; Duchaczek, Artur; Waniewska, Agnieszka; Kowacka, Magdalena

    2017-07-01

    Due to their properties, unmanned aerial vehicles offer a huge number of possible applications in construction engineering. The nature and extent of the construction works performed make the decision to purchase the right equipment significant for its further use in monitoring the implementation of these works. Technical factors, such as the accuracy and quality of the measurement instruments used, are especially important when monitoring the realization of construction projects. The paper presents the optimization of the choice of unmanned aerial vehicles using the Bellinger method. The decision-making analysis takes into account criteria that are particularly crucial given the range of monitoring of ongoing construction works.

  7. Optimal control approaches for aircraft conflict avoidance using speed regulation : a numerical study

    OpenAIRE

    Cellier , Loïc; Cafieri , Sonia; Messine , Frederic

    2013-01-01

    In this paper a numerical study is provided to solve the aircraft conflict avoidance problem through velocity regulation maneuvers. Starting from an optimal control-based model and approaches in which aircraft accelerations are the controls, and by applying the direct shooting technique, we propose to study two different large-scale nonlinear optimization problems. In order to compare different possibilities of implementation, two environments (AMPL and MATLAB) and determin...

  8. A geometric approach to multiperiod mean variance optimization of assets and liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2005-01-01

    We present a geometric approach to discrete-time multiperiod mean variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean variance optimal policies can be decomposed into an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multiperiod mean variance frontiers are spanned by an orthogonal basis of dynamic returns. Spec...

  9. A New Approach to Site Demand-Based Level Inventory Optimization

    Science.gov (United States)

    2016-06-01

    A New Approach to Site Demand-Based Level Inventory Optimization. Master’s thesis by Tacettin Ersoz, June 2016. Thesis Advisor: Javier Salmeron; Second Reader: Emily...

  10. Probability approaching method (PAM) and its application on fuel management optimization

    International Nuclear Information System (INIS)

    Liu, Z.; Hu, Y.; Shi, G.

    2004-01-01

    For the multi-cycle reloading optimization problem, a new solution scheme is presented. The multi-cycle problem is decoupled into a number of relatively independent mono-cycle problems; this non-linear programming problem with complex constraints is then solved by a new algorithm, the probability approaching method (PAM), which is based on probability theory. Results on a simplified core model show the effectiveness of this new multi-cycle optimization scheme. (authors)

  11. A trust region approach with multivariate Padé model for optimal circuit design

    Science.gov (United States)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models of ? and has features of quadratic models that need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. Minimax solution leads to a suitable initial point to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.
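    The trust-region mechanics this record relies on can be illustrated with a hedged one-dimensional sketch. It uses a local quadratic surrogate fitted from samples rather than the article's multivariate Padé model, and the test function and update thresholds are generic textbook choices, not the authors':

    ```python
    def trust_region_minimize(f, x0, delta=1.0, iters=30):
        """Derivative-free trust-region sketch: fit a local quadratic
        surrogate from three samples, step to its minimizer clipped to
        the trust radius, then grow/shrink the radius by the usual
        actual-vs-predicted reduction ratio."""
        x, fx = x0, f(x0)
        for _ in range(iters):
            h = delta / 2
            f_m, f_p = f(x - h), f(x + h)
            g = (f_p - f_m) / (2 * h)           # surrogate gradient
            H = (f_p - 2 * fx + f_m) / (h * h)  # surrogate curvature
            step = -g / H if H > 1e-12 else -delta * (1 if g > 0 else -1)
            step = max(-delta, min(delta, step))
            pred = -(g * step + 0.5 * H * step * step)
            x_new, f_new = x + step, f(x + step)
            rho = (fx - f_new) / pred if pred > 1e-12 else -1.0
            if rho > 0.1:            # accept the step, maybe expand the region
                x, fx = x_new, f_new
                if rho > 0.75:
                    delta *= 2.0
            else:                    # reject the step and shrink the region
                delta *= 0.5
        return x, fx

    x_opt, f_opt = trust_region_minimize(lambda x: (x - 3.0) ** 2 + 1.0, x0=0.0)
    ```

    Replacing the quadratic surrogate with a rational (Padé) model is what lets the article keep the cheap data requirements of linear models while capturing curvature.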

  12. PID control design for chaotic synchronization using a tribes optimization approach

    Energy Technology Data Exchange (ETDEWEB)

    Santos Coelho, Leandro dos [Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Pontifical Catholic University of Parana, PUCPR, Imaculada Conceicao, 1155, 80215-901 Curitiba, Parana (Brazil)], E-mail: leandro.coelho@pucpr.br; Andrade Bernert, Diego Luis de [Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Pontifical Catholic University of Parana, PUCPR, Imaculada Conceicao, 1155, 80215-901 Curitiba, Parana (Brazil)], E-mail: dbernert@gmail.com

    2009-10-15

    Recently, the investigation of synchronization and control problems for discrete chaotic systems has stimulated a wide range of research activity, including both theoretical studies and practical applications. This paper deals with the tuning of a proportional-integral-derivative (PID) controller using a modified Tribes optimization algorithm based on a truncated chaotic Zaslavskii map (MTribes) for the synchronization of two identical discrete chaotic systems subject to different initial conditions. The Tribes algorithm is inspired by the social behavior of bird flocking and is an adaptive optimization procedure that does not require sociometric or swarm size parameter tuning. Numerical simulations are given to show the effectiveness of the proposed synchronization method. In addition, some comparisons of the MTribes optimization algorithm with other continuous optimization methods, including the classical Tribes algorithm and particle swarm optimization approaches, are presented.
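    The core loop of optimization-based PID tuning can be sketched independently of the metaheuristic. Below, a plain random search stands in for the MTribes algorithm, and the plant is a generic first-order system tracking a unit step rather than the paper's chaotic systems; the gain ranges and cost are illustrative.

    ```python
    import random

    def simulate_itae(kp, ki, kd, steps=200, dt=0.05):
        """Closed-loop ITAE cost for a discrete first-order plant
        y' = -y + u tracking a unit step (illustrative plant only)."""
        y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
        for k in range(steps):
            err = 1.0 - y
            integ += err * dt
            deriv = (err - prev_err) / dt
            u = kp * err + ki * integ + kd * deriv
            prev_err = err
            y += dt * (-y + u)           # Euler step of the plant
            cost += (k * dt) * abs(err)  # time-weighted absolute error
        return cost

    # Random search as a stand-in for a swarm/tribes optimizer
    random.seed(1)
    best = min(
        ((random.uniform(0, 10), random.uniform(0, 5), random.uniform(0, 1))
         for _ in range(300)),
        key=lambda gains: simulate_itae(*gains),
    )
    ```

    A swarm-style method explores the same gain box but shares information between candidate solutions instead of sampling them independently.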

  13. PID control design for chaotic synchronization using a tribes optimization approach

    International Nuclear Information System (INIS)

    Santos Coelho, Leandro dos; Andrade Bernert, Diego Luis de

    2009-01-01

    Recently, the investigation of synchronization and control problems for discrete chaotic systems has stimulated a wide range of research activity, including both theoretical studies and practical applications. This paper deals with the tuning of a proportional-integral-derivative (PID) controller using a modified Tribes optimization algorithm based on a truncated chaotic Zaslavskii map (MTribes) for the synchronization of two identical discrete chaotic systems subject to different initial conditions. The Tribes algorithm is inspired by the social behavior of bird flocking and is an adaptive optimization procedure that does not require sociometric or swarm size parameter tuning. Numerical simulations are given to show the effectiveness of the proposed synchronization method. In addition, some comparisons of the MTribes optimization algorithm with other continuous optimization methods, including the classical Tribes algorithm and particle swarm optimization approaches, are presented.

  14. Establishment of a hydrological monitoring network in a tropical African catchment: An integrated participatory approach

    Science.gov (United States)

    Gomani, M. C.; Dietrich, O.; Lischeid, G.; Mahoo, H.; Mahay, F.; Mbilinyi, B.; Sarmett, J.

    Sound decision making for water resources management has to be based on good knowledge of the dominant hydrological processes of a catchment. This information can only be obtained through establishing suitable hydrological monitoring networks. Research catchments are typically established without involving the key stakeholders, which results in instruments being installed at inappropriate places as well as at high risk of theft and vandalism. This paper presents an integrated participatory approach for establishing a hydrological monitoring network. We propose a framework with six steps beginning with (i) inception of idea; (ii) stakeholder identification; (iii) defining the scope of the network; (iv) installation; (v) monitoring; and (vi) feedback mechanism integrated within the participatory framework. The approach is illustrated using an example of the Ngerengere catchment in Tanzania. In applying the approach, the concept of establishing the Ngerengere catchment monitoring network was initiated in 2008 within the Resilient Agro-landscapes to Climate Change in Tanzania (ReACCT) research program. The main stakeholders included: local communities; Sokoine University of Agriculture; Wami Ruvu Basin Water Office and the ReACCT Research team. The scope of the network was based on expert experience in similar projects and lessons learnt from literature review of similar projects from elsewhere integrated with local expert knowledge. The installations involved reconnaissance surveys, detailed surveys, and expert consultations to identify best sites. First, a Digital Elevation Model, land use, and soil maps were used to identify potential monitoring sites. Local and expert knowledge was collected on flow regimes, indicators of shallow groundwater plant species, precipitation pattern, vegetation, and soil types. This information was integrated and used to select sites for installation of an automatic weather station, automatic rain gauges, river flow gauging stations

  15. Minimization of the LCA impact of thermodynamic cycles using a combined simulation-optimization approach

    International Nuclear Information System (INIS)

    Brunet, Robert; Cortés, Daniel; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Boer, Dieter

    2012-01-01

    This work presents a computational approach for the simultaneous minimization of the total cost and environmental impact of thermodynamic cycles. Our method combines process simulation, multi-objective optimization and life cycle assessment (LCA) within a unified framework that identifies, in a systematic manner, optimal design and operating conditions according to several economic and LCA impacts. Our approach takes advantage of the complementary strengths of process simulation (in which mass and energy balances and thermodynamic calculations are implemented in an easy manner) and rigorous deterministic optimization tools. We demonstrate the capabilities of this strategy by means of two case studies in which we address the design of a 10 MW Rankine cycle modeled in Aspen Hysys, and a 90 kW ammonia-water absorption cooling cycle implemented in Aspen Plus. Numerical results show that it is possible to achieve environmental and cost savings using our rigorous approach. - Highlights: ► Novel framework for the optimal design of thermodynamic cycles. ► Combined use of simulation and optimization tools. ► Optimal design and operating conditions according to several economic and LCA impacts. ► Design of a 10 MW Rankine cycle in Aspen Hysys, and a 90 kW absorption cycle in Aspen Plus.

  16. Minimization of the LCA impact of thermodynamic cycles using a combined simulation-optimization approach

    Energy Technology Data Exchange (ETDEWEB)

    Brunet, Robert; Cortes, Daniel [Departament d' Enginyeria Quimica, Escola Tecnica Superior d' Enginyeria Quimica, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007 Tarragona (Spain); Guillen-Gosalbez, Gonzalo [Departament d' Enginyeria Quimica, Escola Tecnica Superior d' Enginyeria Quimica, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007 Tarragona (Spain); Jimenez, Laureano [Departament d' Enginyeria Quimica, Escola Tecnica Superior d' Enginyeria Quimica, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007 Tarragona (Spain); Boer, Dieter [Departament d' Enginyeria Mecanica, Escola Tecnica Superior d' Enginyeria, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007, Tarragona (Spain)

    2012-12-15

    This work presents a computational approach for the simultaneous minimization of the total cost and environmental impact of thermodynamic cycles. Our method combines process simulation, multi-objective optimization and life cycle assessment (LCA) within a unified framework that identifies, in a systematic manner, optimal design and operating conditions according to several economic and LCA impacts. Our approach takes advantage of the complementary strengths of process simulation (in which mass and energy balances and thermodynamic calculations are implemented in an easy manner) and rigorous deterministic optimization tools. We demonstrate the capabilities of this strategy by means of two case studies in which we address the design of a 10 MW Rankine cycle modeled in Aspen Hysys, and a 90 kW ammonia-water absorption cooling cycle implemented in Aspen Plus. Numerical results show that it is possible to achieve environmental and cost savings using our rigorous approach. - Highlights: ► Novel framework for the optimal design of thermodynamic cycles. ► Combined use of simulation and optimization tools. ► Optimal design and operating conditions according to several economic and LCA impacts. ► Design of a 10 MW Rankine cycle in Aspen Hysys, and a 90 kW absorption cycle in Aspen Plus.

  17. Dynamic optimization of maintenance and improvement planning for water main system: Periodic replacement approach

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Woo; Choi, Go Bong; Lee, Jong Min [Seoul National University, Seoul (Korea, Republic of); Suh, Jung Chul [Samchully Corporation, Seoul (Korea, Republic of)

    2016-01-15

    This paper proposes a Markov decision process (MDP) based approach to derive an optimal schedule of maintenance, rehabilitation and replacement of the water main system. The scheduling problem utilizes auxiliary information of a pipe such as the current state, cost, and deterioration model. The objective function and detailed algorithm of dynamic programming are modified to solve the periodic replacement problem. The optimal policy evaluated by the proposed algorithm is compared to several existing policies via Monte Carlo simulations. The proposed decision framework provides a systematic way to obtain an optimal policy.
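    The keep-or-replace decision described in this record is a classic finite-state MDP, solvable by dynamic programming. Below is a hedged sketch using value iteration over a four-state deterioration chain; the transition probability, costs, and discount factor are invented for illustration and are not the paper's calibrated deterioration model.

    ```python
    # States: pipe condition 0 (new) .. 3 (failed). Each year the pipe
    # deteriorates one state with probability P_DETERIORATE; actions are
    # "keep" (annual cost rises with wear) or "replace" (fixed cost,
    # back to state 0). All numbers are illustrative.
    P_DETERIORATE = 0.3
    KEEP_COST = [1.0, 3.0, 8.0, 25.0]  # annual cost by condition state
    REPLACE_COST = 12.0
    GAMMA = 0.9                         # annual discount factor

    def value_iteration(tol=1e-9):
        v = [0.0] * 4
        while True:
            new_v, policy = [], []
            for s in range(4):
                nxt = min(s + 1, 3)
                keep = KEEP_COST[s] + GAMMA * (
                    (1 - P_DETERIORATE) * v[s] + P_DETERIORATE * v[nxt])
                replace = REPLACE_COST + GAMMA * v[0]
                new_v.append(min(keep, replace))
                policy.append("keep" if keep <= replace else "replace")
            if max(abs(a - b) for a, b in zip(new_v, v)) < tol:
                return new_v, policy
            v = new_v

    values, policy = value_iteration()
    ```

    The optimal policy is a threshold rule: keep while the pipe is in good condition, replace once deterioration makes expected future maintenance costlier than renewal, which is the structure the Monte Carlo comparisons in the paper exploit.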

  18. The analytical approach to optimization of active region structure of quantum dot laser

    International Nuclear Information System (INIS)

    Korenev, V V; Savelyev, A V; Zhukov, A E; Omelchenko, A V; Maximov, M V

    2014-01-01

    Using the analytical approach introduced in our previous papers, we analyse the possibilities for optimizing the size and structure of the active region of semiconductor quantum dot lasers emitting via ground-state optical transitions. It is shown that there are an optimal dispersion and number of QD layers in the laser active region which allow one to obtain a lasing spectrum of a given width at minimum injection current. The laser efficiency corresponding to the injection current optimized by the cavity length is practically equal to its maximum value.

  19. The analytical approach to optimization of active region structure of quantum dot laser

    Science.gov (United States)

    Korenev, V. V.; Savelyev, A. V.; Zhukov, A. E.; Omelchenko, A. V.; Maximov, M. V.

    2014-10-01

    Using the analytical approach introduced in our previous papers, we analyse the possibilities for optimizing the size and structure of the active region of semiconductor quantum dot lasers emitting via ground-state optical transitions. It is shown that there are an optimal dispersion and number of QD layers in the laser active region which allow one to obtain a lasing spectrum of a given width at minimum injection current. The laser efficiency corresponding to the injection current optimized by the cavity length is practically equal to its maximum value.

  20. Tuning of PID controller for an automatic regulator voltage system using chaotic optimization approach

    International Nuclear Information System (INIS)

    Santos Coelho, Leandro dos

    2009-01-01

    Despite their popularity, the tuning of proportional-integral-derivative (PID) controllers remains a challenge for researchers and plant operators. Various controller tuning methodologies have been proposed in the literature, such as auto-tuning, self-tuning, pattern recognition, artificial intelligence, and optimization methods. Chaotic optimization algorithms, as an emergent method of global optimization, have attracted much attention in engineering applications. Chaotic optimization algorithms, which have the features of easy implementation, short execution time and robust mechanisms for escaping from local optima, are a promising tool for engineering applications. In this paper, a tuning method for determining the parameters of PID control for an automatic regulator voltage (AVR) system using a chaotic optimization approach based on the Lozi map is proposed. Since chaotic mapping enjoys certainty, ergodicity and the stochastic property, the proposed chaotic optimization introduces chaos using Lozi map chaotic sequences, which increases its convergence rate and resulting precision. Simulation results are promising and show the effectiveness of the proposed approach. Numerical simulations of the proposed PID control of an AVR system for nominal system parameters and a step reference voltage input demonstrate the good performance of chaotic optimization.
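    The chaotic-search idea can be made concrete with a bare-bones sketch: generate a Lozi map trajectory, rescale it onto the search interval, and keep the best sample. This omits the local refinement phase a practical chaotic optimizer (or the AVR tuning above) would add, and the map parameters, seed, and test function are illustrative.

    ```python
    def lozi_sequence(n, a=1.7, b=0.5, x=0.1, y=0.1):
        """Generate n values of the Lozi map, a piecewise-linear
        chaotic map: x' = 1 - a*|x| + y,  y' = b*x."""
        out = []
        for _ in range(n):
            x, y = 1.0 - a * abs(x) + y, b * x
            out.append(x)
        return out

    def chaotic_search(f, lo, hi, n=2000):
        """Chaotic optimization sketch: rescale the chaotic sequence
        onto [lo, hi] and keep the best sample (no local refinement)."""
        seq = lozi_sequence(n)
        s_min, s_max = min(seq), max(seq)
        best_x, best_f = None, float("inf")
        for s in seq:
            cand = lo + (s - s_min) / (s_max - s_min) * (hi - lo)
            fc = f(cand)
            if fc < best_f:
                best_x, best_f = cand, fc
        return best_x, best_f

    x_best, f_best = chaotic_search(lambda x: (x - 2.0) ** 2, lo=-5.0, hi=5.0)
    ```

    The ergodicity of the chaotic sequence is what lets it stand in for random sampling while remaining fully deterministic and easy to implement.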

  1. Radiation dose optimization research: Exposure technique approaches in CR imaging – A literature review

    International Nuclear Information System (INIS)

    Seeram, Euclid; Davidson, Rob; Bushong, Stewart; Swan, Hans

    2013-01-01

    The purpose of this paper is to review the literature on exposure technique approaches in Computed Radiography (CR) imaging as a means of radiation dose optimization in CR imaging. Specifically, the review assessed three approaches: optimization of kVp; optimization of mAs; and optimization of the Exposure Indicator (EI) in practice. Only papers published in 2005 or later were described in this review. The major themes, patterns, and common findings from the literature reviewed showed that important features of radiation dose management strategies for digital radiography include identification of the EI as a dose control mechanism and as a “surrogate for dose management”. In addition, the use of the EI has been viewed as an opportunity for dose optimization. Furthermore, optimization research has focussed mainly on optimizing the kVp in CR imaging as a means of implementing the ALARA philosophy, and studies have concentrated mainly on chest imaging using different CR systems, such as those commercially available from Fuji, Agfa, Kodak, and Konica-Minolta. These studies have produced “conflicting results”. In addition, a common pattern was the use of automatic exposure control (AEC), the measurement of constant effective dose, and the use of a dose-area product (DAP) meter.

  2. A HyperSpectral Imaging (HSI) approach for bio-digestate real time monitoring

    Science.gov (United States)

    Bonifazi, Giuseppe; Fabbri, Andrea; Serranti, Silvia

    2014-05-01

    One of the key issues in developing Good Agricultural Practices (GAP) is the optimal utilisation of fertilisers and herbicides to reduce the impact of nitrates on soils and the environment. In traditional agricultural practice, these substances were provided to the soils through the use of chemical products (inorganic/organic fertilizers, soil improvers/conditioners, etc.), usually associated with several major environmental problems, such as: water pollution and contamination, fertilizer dependency, soil acidification, trace mineral depletion, over-fertilization, high energy consumption, contribution to climate change, impacts on mycorrhizas, lack of long-term sustainability, etc. For this reason, the agricultural market is more and more interested in the utilisation of organic fertilisers and soil improvers. Among organic fertilizers, there is an emerging interest in digestate, a by-product of anaerobic digestion (AD) processes. Several studies confirm the high quality of digestate when used as an organic fertilizer and soil improver/conditioner. Digestate, in fact, is somewhat similar to compost: AD converts a major part of organic nitrogen to ammonia, which is then directly available to plants as nitrogen. In this paper, new analytical tools based on HyperSpectral Imaging (HSI) sensing devices, and related detection architectures, are presented and discussed in order to define and apply simple-to-use, reliable, robust and low-cost strategies for implementing innovative smart detection engines for digestate characterization and monitoring. This approach is aimed at utilizing this "waste product" as a valuable organic fertilizer and soil conditioner, with reduced impact and in an "ad hoc" soil fertilisation perspective. Furthermore, the possibility to simultaneously utilize the HSI approach to realize a real-time physical-chemical characterisation of agricultural soils (i.e. nitrogen, phosphorus, etc., detection) could

  3. Real-time risk monitoring in business processes : a sensor-based approach

    NARCIS (Netherlands)

    Conforti, R.; La Rosa, M.; Fortino, G.; Hofstede, ter A.H.M.; Recker, J.; Adams, M.

    2013-01-01

    This article proposes an approach for real-time monitoring of risks in executable business process models. The approach considers risks in all phases of the business process management lifecycle, from process design, where risks are defined on top of process models, through to process diagnosis,

  4. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    Science.gov (United States)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

    Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged-goods quality estimation for industrial quality monitoring applications. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer vision-based state-of-the-art feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray level co-occurrence texture (GLCM), and histogram of oriented gradient (HOG), has been done with respect to their efficient and differentiable feature vector generation capability for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, and diagonal crack, along with non-faulty tiles. Further, an independent algorithm validation was done demonstrating classification accuracy: 80, 86.67, 73.33, and 93.33 % for DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show good capability of the HOG feature extraction technique towards non-destructive quality inspection with appreciably low false alarm rates as compared to other techniques. Thereby, a robust and optimal image feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring for financially and commercially competent industrial growth.
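    The HOG descriptor that performed best in this record reduces to a magnitude-weighted histogram of gradient orientations. The sketch below shows only that core idea for a single cell on a toy patch; a full HOG adds cell grids and block normalization, and the 4x4 patch is invented for illustration.

    ```python
    import math

    def orientation_histogram(patch, bins=9):
        """HOG-style sketch for one cell: finite-difference gradients,
        then a magnitude-weighted histogram of unsigned orientations
        (0-180 degrees)."""
        h, w = len(patch), len(patch[0])
        hist = [0.0] * bins
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                gx = patch[i][j + 1] - patch[i][j - 1]
                gy = patch[i + 1][j] - patch[i - 1][j]
                mag = math.hypot(gx, gy)
                ang = math.degrees(math.atan2(gy, gx)) % 180.0
                hist[int(ang / 180.0 * bins) % bins] += mag
        return hist

    # A vertical edge: the intensity jump along each row gives a purely
    # horizontal gradient, so all mass lands in the 0-degree bin.
    patch = [[0, 0, 10, 10]] * 4
    hist = orientation_histogram(patch)
    ```

    Crack orientations (vertical, horizontal, diagonal) then show up as different dominant bins, which is what makes the feature separable for the ANN classifier.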

  5. An Optimized Autonomous Space In-situ Sensorweb (OASIS) for Volcano Monitoring

    Science.gov (United States)

    Song, W.; Shirazi, B.; Lahusen, R.; Chien, S.; Kedar, S.; Webb, F.

    2006-12-01

    In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, we are developing a prototype real-time Optimized Autonomous Space In-situ Sensorweb. The prototype will be focused on volcano hazard monitoring at Mount St. Helens, which has been in continuous eruption since October 2004. The system is designed to be flexible and easily configurable for many other applications as well. The primary goals of the project are: 1) integrating complementary space (i.e., Earth Observing One (EO- 1) satellite) and in-situ (ground-based) elements into an interactive, autonomous sensor-web; 2) advancing sensor-web power and communication resource management technology; and 3) enabling scalability for seamless infusion of future space and in-situ assets into the sensor-web. To meet these goals, we are developing: 1) a test-bed in-situ array with smart sensor nodes capable of making autonomous data acquisition decisions; 2) efficient self-organization algorithm of sensor-web topology to support efficient data communication and command control; 3) smart bandwidth allocation algorithms in which sensor nodes autonomously determine packet priorities based on mission needs and local bandwidth information in real- time; and 4) remote network management and reprogramming tools. The space and in-situ control components of the system will be integrated such that each element is capable of triggering the other. Sensor-web data acquisition and dissemination will be accomplished through the use of SensorML language standards for geospatial information. The three-year project will demonstrate end-to-end system performance with the in-situ test-bed at Mount St. Helens and NASA's EO-1 platform.
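    The smart bandwidth-allocation idea above (nodes autonomously assigning packet priorities based on mission needs) can be illustrated with a toy priority queue; the packet names, priorities, and sizes below are invented, not part of the OASIS design.

    ```python
    import heapq

    def drain_by_priority(packets, budget):
        """Bandwidth-allocation sketch: each node tags packets with a
        mission priority (lower number = more urgent); under a byte
        budget, urgent packets are transmitted first and packets that
        do not fit in the remaining budget are deferred."""
        heap = [(prio, size, name) for name, prio, size in packets]
        heapq.heapify(heap)
        sent, deferred = [], []
        while heap:
            prio, size, name = heapq.heappop(heap)
            if size <= budget:
                budget -= size
                sent.append(name)
            else:
                deferred.append(name)
        return sent, deferred

    # (name, priority, size-in-bytes) -- illustrative values
    packets = [("seismic-event", 0, 40), ("heartbeat", 1, 5), ("bulk-log", 2, 80)]
    sent, deferred = drain_by_priority(packets, budget=60)
    ```

    In a real sensorweb the budget itself would be negotiated from local link state, so the same data can trigger different transmission orders at different nodes.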

  6. Innovative biological approaches for monitoring and improving water quality

    Directory of Open Access Journals (Sweden)

    Sanja Aracic

    2015-08-01

    Full Text Available Water quality is largely influenced by the abundance and diversity of indigenous microbes present within an aquatic environment. Physical, chemical and biological contaminants from anthropogenic activities can accumulate in aquatic systems causing detrimental ecological consequences. Approaches exploiting microbial processes are now being utilized for the detection, and removal or reduction of contaminants. Contaminants can be identified and quantified in situ using microbial whole-cell biosensors, negating the need for water samples to be tested off-site. Similarly, the innate biodegradative processes can be enhanced through manipulation of the composition and/or function of the indigenous microbial communities present within the contaminated environments. Biological contaminants, such as detrimental/pathogenic bacteria, can be specifically targeted and reduced in number using bacteriophages. This mini-review discusses the potential application of whole-cell microbial biosensors for the detection of contaminants, the exploitation of microbial biodegradative processes for environmental restoration and the manipulation of microbial communities using phages.

  7. Innovative biological approaches for monitoring and improving water quality

    Science.gov (United States)

    Aracic, Sanja; Manna, Sam; Petrovski, Steve; Wiltshire, Jennifer L.; Mann, Gülay; Franks, Ashley E.

    2015-01-01

    Water quality is largely influenced by the abundance and diversity of indigenous microbes present within an aquatic environment. Physical, chemical and biological contaminants from anthropogenic activities can accumulate in aquatic systems causing detrimental ecological consequences. Approaches exploiting microbial processes are now being utilized for the detection, and removal or reduction of contaminants. Contaminants can be identified and quantified in situ using microbial whole-cell biosensors, negating the need for water samples to be tested off-site. Similarly, the innate biodegradative processes can be enhanced through manipulation of the composition and/or function of the indigenous microbial communities present within the contaminated environments. Biological contaminants, such as detrimental/pathogenic bacteria, can be specifically targeted and reduced in number using bacteriophages. This mini-review discusses the potential application of whole-cell microbial biosensors for the detection of contaminants, the exploitation of microbial biodegradative processes for environmental restoration and the manipulation of microbial communities using phages. PMID:26322034

  8. Global Crop Monitoring: A Satellite-Based Hierarchical Approach

    Directory of Open Access Journals (Sweden)

    Bingfang Wu

    2015-04-01

    Taking advantage of multiple new remote sensing data sources, especially from Chinese satellites, the CropWatch system has expanded the scope of its international analyses through the development of new indicators and an upgraded operational methodology. The approach adopts a hierarchical system covering four spatial levels of detail: global, regional, national (thirty-one key countries including China) and “sub-countries” (for the nine largest countries). The thirty-one countries encompass more than 80% of both production and exports of maize, rice, soybean and wheat. The methodology resorts to climatic and remote sensing indicators at different scales. The global patterns of crop environmental growing conditions are first analyzed with indicators for rainfall, temperature, photosynthetically active radiation (PAR) as well as potential biomass. At the regional scale, the indicators pay more attention to crops and include the Vegetation Health Index (VHI), Vegetation Condition Index (VCI), Cropped Arable Land Fraction (CALF) as well as Cropping Intensity (CI). Together, they characterize crop situation, farming intensity and stress. CropWatch carries out detailed crop condition analyses at the national scale with a comprehensive array of variables and indicators. The Normalized Difference Vegetation Index (NDVI), cropped areas and crop conditions are integrated to derive food production estimates. For the nine largest countries, CropWatch zooms into the sub-national units to acquire detailed information on crop condition and production by including new indicators (e.g., crop type proportion). Based on trend analysis, CropWatch also issues crop production supply outlooks, covering both long-term variations and short-term dynamic changes in key food exporters and importers. The hierarchical approach adopted by CropWatch is the basis of the analyses of climatic and crop conditions assessments published in the quarterly “CropWatch bulletin” which
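    As a concrete illustration of one of the regional indicators named in this abstract, the Vegetation Condition Index scales the current NDVI between its historical per-pixel extremes: VCI = (NDVI − NDVI_min)/(NDVI_max − NDVI_min) × 100. The sketch below uses synthetic NDVI arrays as stand-ins for satellite time series; it illustrates the formula only, not CropWatch's implementation.

```python
import numpy as np

# VCI sketch: the NDVI "history" and "current" arrays are synthetic stand-ins
# for a per-pixel satellite time series (10 years of composites on a 5x5 grid).
rng = np.random.default_rng(4)
history = rng.uniform(0.1, 0.9, size=(10, 5, 5))   # 10 years, 5x5 pixels
current = rng.uniform(0.1, 0.9, size=(5, 5))       # current composite

ndvi_min = history.min(axis=0)                     # per-pixel historical minimum
ndvi_max = history.max(axis=0)                     # per-pixel historical maximum
vci = (current - ndvi_min) / (ndvi_max - ndvi_min) * 100.0

print(f"mean VCI: {vci.mean():.1f}")
```

Values near 0 indicate vegetation close to its historical worst; values near 100 indicate conditions near the historical best for that pixel.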

  9. Integrated Optimization of Long-Range Underwater Signal Detection, Feature Extraction, and Classification for Nuclear Treaty Monitoring

    NARCIS (Netherlands)

    Tuma, M.; Rorbech, V.; Prior, M.; Igel, C.

    2016-01-01

    We designed and jointly optimized an integrated signal processing chain for detection and classification of long-range passive-acoustic underwater signals recorded by the global geophysical monitoring network of the Comprehensive Nuclear-Test-Ban Treaty Organization. Starting at the level of raw

  10. Optimizing the transient transfection process of HEK-293 suspension cells for protein production by nucleotide ratio monitoring

    DEFF Research Database (Denmark)

    de Los Milagros Bassani Molinas, Maria; Beer, Christiane; Hesse, F

    2014-01-01

    Large scale, transient gene expression (TGE) is highly dependent on the physiological status of a cell line. Therefore, intracellular nucleotide pools and ratios were used for identifying and monitoring the optimal status of a suspension cell line used for TGE. The transfection efficiency upon po...

  11. Land Degradation Monitoring in the Ordos Plateau of China Using an Expert Knowledge and BP-ANN-Based Approach

    Directory of Open Access Journals (Sweden)

    Yaojie Yue

    2016-11-01

    Land degradation monitoring is of vital importance to provide scientific information for promoting sustainable land utilization. This paper presents an expert knowledge and BP-ANN-based approach to detect and monitor land degradation in an effort to overcome the deficiencies of image classification and vegetation index-based approaches. The proposed approach consists of three generic steps: (1) extraction of knowledge on the relationship between land degradation degree and predisposing factors, which are NDVI and albedo, from domain experts; (2) establishment of a land degradation detecting model based on the BP-ANN algorithm; and (3) land degradation dynamic analysis. A comprehensive analysis was conducted on the development of land degradation in the Ordos Plateau of China in 1990, 2000 and 2010. The results indicate that the proposed approach is reliable for monitoring land degradation, with an overall accuracy of 91.2%. From 1990–2010, a reversal of land degradation is observed in the Ordos Plateau. Regions with relatively high land degradation dynamics were mostly located in the northeast of the Ordos Plateau. Additionally, most of the regions have shifted from being hot spots of land degradation to areas of little change. It is suggested that land utilization optimization plays a key role in effective land degradation control. However, it should be highlighted that the goals of such strategies should aim at the main negative factors causing land degradation, and the land use type and its quantity must meet the demand of the population and be reconciled with natural conditions. Results from this case study suggest that the expert knowledge and BP-ANN-based approach is effective in mapping land degradation.
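    The core of the detecting model described above is a back-propagation ANN mapping (NDVI, albedo) to a degradation degree. The following is a minimal sketch of that idea: the training data, labels, network size and learning rate are all invented for demonstration and are not those of the study, which also encodes expert knowledge.

```python
import numpy as np

# Toy BP-ANN: degraded land is assumed here to show low NDVI and high albedo.
# Label 1 = degraded, 0 = not degraded (synthetic expert-style labels).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))          # columns: NDVI, albedo
y = ((X[:, 1] - X[:, 0]) > 0.1).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer trained by plain gradient descent (back-propagation).
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    p = sigmoid(h @ W2 + b2)
    d2 = (p - y) / len(X)                         # output error (cross-entropy)
    d1 = (d2 @ W2.T) * h * (1 - h)                # back-propagated hidden error
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)

acc = np.mean((p > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```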

  12. A Wireless Sensor Network-Based Approach with Decision Support for Monitoring Lake Water Quality.

    Science.gov (United States)

    Huang, Xiaoci; Yi, Jianjun; Chen, Shaoli; Zhu, Xiaomin

    2015-11-19

    Online monitoring and water quality analysis of lakes are urgently needed. A feasible and effective approach is to use a Wireless Sensor Network (WSN). Lake water environments, like other real world environments, present many changing and unpredictable situations. To ensure flexibility in such an environment, the WSN node has to be prepared to deal with varying situations. This paper presents a WSN self-configuration approach for lake water quality monitoring. The approach is based on the integration of a semantic framework, where a reasoner can make decisions on the configuration of WSN services. We present a WSN ontology and the relevant water quality monitoring context information, which considers its suitability in a pervasive computing environment. We also propose a rule-based reasoning engine that is used to conduct decision support through reasoning techniques and context-awareness. To evaluate the approach, we conduct usability experiments and performance benchmarks.
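    The self-configuration idea in this abstract, rules firing on context to reconfigure the sensor node, can be caricatured in a few lines. The context values and rule thresholds below are invented; the actual system uses a WSN ontology and a reasoning engine rather than a hand-written rule table.

```python
# Toy rule-based reconfiguration: lake context facts (all values invented)
# drive rules that choose a sampling interval and the active sensors.
context = {"turbidity_ntu": 35.0, "battery_pct": 18.0, "storm_alert": False}

def configure(ctx):
    cfg = {"interval_s": 600, "sensors": ["pH", "DO", "turbidity"]}
    if ctx["turbidity_ntu"] > 30 or ctx["storm_alert"]:
        cfg["interval_s"] = 120                   # sample faster as water degrades
    if ctx["battery_pct"] < 20:
        cfg["sensors"] = ["turbidity"]            # shed load on low battery
        cfg["interval_s"] = max(cfg["interval_s"], 300)
    return cfg

print(configure(context))
```

With the example context, the low-battery rule overrides the fast-sampling rule, yielding a 300 s interval with only the turbidity sensor active.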

  13. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
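    The iterative "most environmentally dissimilar site" selection described above can be approximated by a greedy farthest-point rule in standardized environmental space. The sketch below substitutes that simple rule for the Maxent modeling step and uses synthetic environmental factors, so it is a simplified stand-in, not the paper's procedure.

```python
import numpy as np

# 500 candidate sites with 4 synthetic environmental factors (temperature,
# precipitation, elevation, vegetation), standardized to comparable scales.
rng = np.random.default_rng(1)
env = rng.normal(size=(500, 4))
env = (env - env.mean(0)) / env.std(0)

selected = [0]                                      # seed with any site
while len(selected) < 8:                            # the study selects 8 sites
    # Distance from every candidate to each already-selected site.
    d = np.linalg.norm(env[:, None, :] - env[selected][None, :, :], axis=2)
    nearest = d.min(axis=1)                         # distance to the chosen set
    selected.append(int(nearest.argmax()))          # pick the most dissimilar

print(selected)
```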

  14. An efficient particle swarm approach for mixed-integer programming in reliability-redundancy optimization applications

    International Nuclear Information System (INIS)

    Santos Coelho, Leandro dos

    2009-01-01

    The reliability-redundancy optimization problems can involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, and are subject to cost, weight, and volume constraints. Many classical mathematical methods have failed in handling nonconvexities and nonsmoothness in reliability-redundancy optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have received much attention from researchers due to their ability to find near-globally optimal solutions. One of these meta-heuristics is particle swarm optimization (PSO). PSO is a population-based heuristic optimization technique inspired by the social behavior of bird flocking and fish schooling. This paper presents an efficient PSO algorithm based on a Gaussian distribution and a chaotic sequence (PSO-GC) to solve reliability-redundancy optimization problems. In this context, two examples of reliability-redundancy design problems are evaluated. Simulation results demonstrate that the proposed PSO-GC is a promising optimization technique. PSO-GC performs well for the two examples of mixed-integer programming in reliability-redundancy applications considered in this paper. The solutions obtained by the PSO-GC are better than the previously best-known solutions available in the recent literature.
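    In the spirit of the PSO-GC variant described above, the following sketch replaces the usual uniform random coefficients with Gaussian draws and drives the inertia weight with a logistic-map chaotic sequence. The constants and update rules are assumptions for demonstration, and the objective is a simple sphere function rather than a reliability-redundancy model.

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):                          # toy objective: global minimum at 0
    return np.sum(x * x, axis=-1)

n, dim = 30, 5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pval = pos.copy(), sphere(pos)
gbest = pbest[pval.argmin()].copy()
z = 0.7                                 # chaotic sequence seed

for _ in range(300):
    z = 4.0 * z * (1.0 - z)             # logistic map: inertia weight in (0, 1)
    r1 = np.abs(rng.normal(0, 1, (n, dim)))   # Gaussian instead of uniform
    r2 = np.abs(rng.normal(0, 1, (n, dim)))
    vel = z * vel + r1 * (pbest - pos) + r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    better = val < pval                 # update personal and global bests
    pbest[better], pval[better] = pos[better], val[better]
    gbest = pbest[pval.argmin()].copy()

print(f"best value: {sphere(gbest):.3e}")
```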

  15. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation.

    Science.gov (United States)

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality.

  16. An efficient particle swarm approach for mixed-integer programming in reliability-redundancy optimization applications

    Energy Technology Data Exchange (ETDEWEB)

    Santos Coelho, Leandro dos [Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Pontifical Catholic University of Parana, PUCPR, Imaculada Conceicao, 1155, 80215-901 Curitiba, Parana (Brazil)], E-mail: leandro.coelho@pucpr.br

    2009-04-15

    The reliability-redundancy optimization problems can involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, and are subject to cost, weight, and volume constraints. Many classical mathematical methods have failed in handling nonconvexities and nonsmoothness in reliability-redundancy optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have received much attention from researchers due to their ability to find near-globally optimal solutions. One of these meta-heuristics is particle swarm optimization (PSO). PSO is a population-based heuristic optimization technique inspired by the social behavior of bird flocking and fish schooling. This paper presents an efficient PSO algorithm based on a Gaussian distribution and a chaotic sequence (PSO-GC) to solve reliability-redundancy optimization problems. In this context, two examples of reliability-redundancy design problems are evaluated. Simulation results demonstrate that the proposed PSO-GC is a promising optimization technique. PSO-GC performs well for the two examples of mixed-integer programming in reliability-redundancy applications considered in this paper. The solutions obtained by the PSO-GC are better than the previously best-known solutions available in the recent literature.

  17. A novel linear programming approach to fluence map optimization for intensity modulated radiation therapy treatment planning

    International Nuclear Information System (INIS)

    Romeijn, H Edwin; Ahuja, Ravindra K; Dempsey, James F; Kumar, Arvind; Li, Jonathan G

    2003-01-01

    We present a novel linear programming (LP) based approach for efficiently solving the intensity modulated radiation therapy (IMRT) fluence-map optimization (FMO) problem to global optimality. Our model overcomes the apparent limitations of a linear-programming approach by approximating any convex objective function by a piecewise linear convex function. This approach allows us to retain the flexibility offered by general convex objective functions, while allowing us to formulate the FMO problem as an LP problem. In addition, a novel type of partial-volume constraint that bounds the tail averages of the differential dose-volume histograms of structures is imposed while retaining linearity as an alternative approach to improve dose homogeneity in the target volumes, and to attempt to spare as many critical structures as possible. The goal of this work is to develop a very rapid global optimization approach that finds high quality dose distributions. Implementation of this model has demonstrated excellent results. We found globally optimal solutions for eight 7-beam head-and-neck cases in less than 3 min of computational time on a single processor personal computer without the use of partial-volume constraints. Adding such constraints increased the running times by a factor of 2-3, but improved the sparing of critical structures. All cases demonstrated excellent target coverage (>95%), target homogeneity (<10% overdosing and <7% underdosing) and organ sparing using at least one of the two models.
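    The modeling trick at the heart of this abstract, approximating a convex objective by a piecewise linear convex function, amounts to replacing the function with the maximum of its tangent lines, which an LP can represent via an epigraph variable t and constraints t ≥ a_k·d + b_k. The sketch below builds such an approximation for an assumed quadratic overdose penalty (the dose bound and breakpoints are illustrative, not from the paper) and checks that the tangents under-approximate the convex function.

```python
import numpy as np

target = 65.0                                    # hypothetical dose bound (Gy)
f = lambda d: np.maximum(0.0, d - target) ** 2   # convex overdose penalty

# Tangent lines (supporting hyperplanes) at a few breakpoints d_k:
breaks = np.array([65.0, 67.0, 69.0, 71.0, 73.0])
slopes = 2.0 * np.maximum(0.0, breaks - target)  # f'(d_k)
intercepts = f(breaks) - slopes * breaks         # b_k = f(d_k) - a_k * d_k

def f_pwl(d):
    # Piecewise-linear convex under-approximation: max over the tangent lines.
    # In an LP this becomes: minimize t  subject to  t >= a_k * d + b_k.
    return np.max(slopes[:, None] * d[None, :] + intercepts[:, None], axis=0)

d = np.linspace(60.0, 75.0, 151)
gap = np.max(f(d) - f_pwl(d))
print(f"max approximation gap over [60, 75] Gy: {gap:.2f}")
```

Adding more breakpoints shrinks the gap at the cost of more LP constraints, which is the accuracy/size trade-off such formulations manage.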

  18. Optimal Implementation of Prescription Drug Monitoring Programs in the Emergency Department

    Directory of Open Access Journals (Sweden)

    Garrett DePalma

    2018-02-01

    The opioid epidemic is the most significant modern-day, public health crisis. Physicians and lawmakers have developed methods and practices to curb opioid use. This article describes one method, prescription drug monitoring programs (PDMP), through the lens of how to optimize use for emergency departments (ED). EDs have rapidly become a central location to combat opioid abuse and drug diversion. PDMPs can provide emergency physicians with comprehensive prescribing information to improve clinical decisions around opioids. However, PDMPs vary tremendously in their accessibility and usability in the ED, which limits their effectiveness at the point of care. Problems are complicated by varying state-to-state requirements for data availability and accessibility. Several potential solutions to improving the utility of PDMPs in EDs include integrating PDMPs with electronic health records, implementing unsolicited reporting and prescription context, improving PDMP accessibility, data analytics, and expanding the scope of PDMPs. These improvements may help improve clinical decision-making for emergency physicians through better data, data presentation, and accessibility.

  19. A simple optimized microwave digestion method for multielement monitoring in mussel samples

    International Nuclear Information System (INIS)

    Saavedra, Y.; Gonzalez, A.; Fernandez, P.; Blanco, J.

    2004-01-01

    With the aim of obtaining a set of common decomposition conditions allowing the determination of several metals in mussel tissue (Hg by cold vapour atomic absorption spectrometry; Cu and Zn by flame atomic absorption spectrometry; and Cd, Pb, Cr, Ni, As and Ag by electrothermal atomic absorption spectrometry), a factorial experiment was carried out using as factors the sample weight, digestion time and acid addition. It was found that the optimal conditions were 0.5 g of freeze-dried and triturated sample with 6 ml of nitric acid, subjected to microwave heating for 20 min at 180 psi. This pre-treatment, using only one step and one oxidative reagent, was suitable for determining the nine metals studied with no subsequent handling of the digest. It was possible to carry out the atomic absorption determinations using calibrations with aqueous standards, with matrix modifiers for cadmium, lead, chromium, arsenic and silver. The accuracy of the procedure was checked using oyster tissue (SRM 1566b) and mussel tissue (CRM 278R) certified reference materials. The method is now used routinely to monitor these metals in wild and cultivated mussels, and has been found to perform well.

  20. International health research monitoring: exploring a scientific and a cooperative approach using participatory action research.

    Science.gov (United States)

    Chantler, Tracey; Cheah, Phaik Yeong; Miiro, George; Hantrakum, Viriya; Nanvubya, Annet; Ayuo, Elizabeth; Kivaya, Esther; Kidola, Jeremiah; Kaleebu, Pontiano; Parker, Michael; Njuguna, Patricia; Ashley, Elizabeth; Guerin, Philippe J; Lang, Trudie

    2014-02-17

    To evaluate and determine the value of monitoring models developed by the Mahidol Oxford Tropical Research Unit and the East African Consortium for Clinical Research, consider how this can be measured and explore monitors' and investigators' experiences of and views about the nature, purpose and practice of monitoring. A case study approach was used within the context of participatory action research because one of the aims was to guide and improve practice. 34 interviews, five focus groups and observations of monitoring practice were conducted. Fieldwork occurred in the places where the monitoring models are coordinated and applied in Thailand, Cambodia, Uganda and Kenya. Participants included those coordinating the monitoring schemes, monitors, senior investigators and research staff. Transcribed textual data from field notes, interviews and focus groups was imported into a qualitative data software program (NVIVO V. 10) and analysed inductively and thematically by a qualitative researcher. The initial coding framework was reviewed internally and two main categories emerged from the subsequent interrogation of the data. The categories that were identified related to the conceptual framing and nature of monitoring, and the practice of monitoring, including relational factors. Particular emphasis was given to the value of a scientific and cooperative style of monitoring as a means of enhancing data quality, trust and transparency. In terms of practice the primary purpose of monitoring was defined as improving the conduct of health research and increasing the capacity of researchers and trial sites. The models studied utilise internal and network wide expertise to improve the ethics and quality of clinical research. They demonstrate how monitoring can be a scientific and constructive exercise rather than a threatening process. The value of cooperative relations needs to be given more emphasis in monitoring activities, which seek to ensure that research protects

  1. A new approach for monitoring ebolavirus in wild great apes.

    Directory of Open Access Journals (Sweden)

    Patricia E Reed

    2014-09-01

    Central Africa is a "hotspot" for emerging infectious diseases (EIDs) of global and local importance, and a current outbreak of ebolavirus is affecting multiple countries simultaneously. Ebolavirus is suspected to have caused recent declines in resident great apes. While ebolavirus vaccines have been proposed as an intervention to protect apes, their effectiveness would be improved if we could diagnostically confirm Ebola virus disease (EVD) as the cause of die-offs, establish ebolavirus geographical distribution, identify immunologically naïve populations, and determine whether apes survive virus exposure. Here we report the first successful noninvasive detection of antibodies against Ebola virus (EBOV) from wild ape feces. Using this method, we have been able to identify gorillas with antibodies to EBOV with an overall prevalence rate reaching 10% on average, demonstrating that EBOV exposure or infection is not uniformly lethal in this species. Furthermore, evidence of antibodies was identified in gorillas previously thought to be unexposed to EBOV (protected from exposure by rivers acting as topological barriers to transmission). Our new approach will contribute to a strategy to protect apes from future EBOV infections by early detection of increased incidence of exposure, by identifying immunologically naïve at-risk populations as potential targets for vaccination, and by providing a means to track vaccine efficacy if such intervention is deemed appropriate. Finally, since human EVD is linked to contact with infected wildlife carcasses, efforts aimed at identifying great ape outbreaks could have a profound impact on public health in local communities, where EBOV causes case-fatality rates of up to 88%.

  2. A deterministic approach for performance assessment and optimization of power distribution units in Iran

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Omrani, H.

    2009-01-01

    This paper presents a deterministic approach for performance assessment and optimization of power distribution units in Iran. The deterministic approach is composed of data envelopment analysis (DEA), principal component analysis (PCA) and correlation techniques. Seventeen electricity distribution units have been considered for the purpose of this study. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. However, this study considers an integrated deterministic DEA-PCA approach, since the DEA model should be verified and validated by a robust multivariate methodology such as PCA. Moreover, the DEA models are verified and validated by PCA, Spearman and Kendall's Tau correlation techniques, a verification and validation feature that previous studies lack. Also, both input- and output-oriented DEA models are used for sensitivity analysis of the input and output variables. Finally, this is the first study to present an integrated deterministic approach for assessment and optimization of power distribution units in Iran.

  3. A Multivariate Quality Loss Function Approach for Optimization of Spinning Processes

    Science.gov (United States)

    Chakraborty, Shankar; Mitra, Ankan

    2018-05-01

    Recent advancements in textile industry have given rise to several spinning techniques, such as ring spinning, rotor spinning etc., which can be used to produce a wide variety of textile apparels so as to fulfil the end requirements of the customers. To achieve the best out of these processes, they should be utilized at their optimal parametric settings. However, in presence of multiple yarn characteristics which are often conflicting in nature, it becomes a challenging task for the spinning industry personnel to identify the best parametric mix which would simultaneously optimize all the responses. Hence, in this paper, the applicability of a new systematic approach in the form of multivariate quality loss function technique is explored for optimizing multiple quality characteristics of yarns while identifying the ideal settings of two spinning processes. It is observed that this approach performs well against the other multi-objective optimization techniques, such as desirability function, distance function and mean squared error methods. With slight modifications in the upper and lower specification limits of the considered quality characteristics, and constraints of the non-linear optimization problem, it can be successfully applied to other processes in textile industry to determine their optimal parametric settings.
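    The multivariate quality loss idea sketched in this abstract can be made concrete with a weighted quadratic loss L(y) = (y − T)ᵀ C (y − T) over the yarn responses, with the parametric setting of minimum loss preferred. All numbers below (targets, weight matrix, candidate responses) are invented for illustration and are not the paper's data.

```python
import numpy as np

targets = np.array([18.0, 4.0, 10.0])        # desired response values (assumed)
C = np.diag([1.0, 2.0, 1.5])                 # relative quality-loss weights

settings = {                                  # candidate parametric mixes
    "A": np.array([17.5, 4.6, 10.8]),
    "B": np.array([18.2, 4.1, 10.3]),
    "C": np.array([16.9, 3.8, 9.2]),
}

def loss(y):
    d = y - targets
    return float(d @ C @ d)                   # multivariate quadratic loss

best = min(settings, key=lambda k: loss(settings[k]))
print(best, round(loss(settings[best]), 3))
```

An off-diagonal C would additionally penalize correlated deviations across responses, which is where the multivariate formulation departs from per-response desirability methods.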

  4. An optimization approach for black-and-white and hinge-removal topology designs

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Yongqing; Zhang, Xianmin [South China University of Technology, Guangzhou (China)

    2014-02-15

    An optimization approach for black-and-white and hinge-removal topology designs is studied. To this end, an optimal topology allowing grey boundaries is found first. When a suitable design has been obtained, this solution is used as a starting point for a follow-up optimization whose goal is to free unfavorable intermediate elements. For this purpose, an updated optimality criterion is proposed in which a threshold factor is introduced to gradually suppress elements with low density. The typical optimality method and the new technique proposed are applied to the design procedure sequentially. Besides, to circumvent the one-point hinge connection problem arising in the process of freeing intermediate elements, a hinge-removal strategy is also proposed. During the optimization, the binary constraints on design variables are relaxed based on the scheme of solid isotropic material with penalization. Meanwhile, a mesh-independency filter is employed to ensure the existence of a solution and remove well-known checkerboards. In this way, a solution that has few intermediate elements and is free of one-point hinge connections is obtained. Finally, different numerical examples including compliance minimization, compliant mechanisms and vibration problems demonstrate the validity of the proposed approach.

  5. Implementing and Innovating Marine Monitoring Approaches for Assessing Marine Environmental Status

    KAUST Repository

    Danovaro, Roberto; Carugati, Laura; Berzano, Marco; Cahill, Abigail E.; Carvalho, Susana; Chenuil, Anne; Corinaldesi, Cinzia; Cristina, Sonia; David, Romain; Dell'Anno, Antonio; Dzhembekova, Nina; Garcés, Esther; Gasol, Joseph M.; Goela, Priscila; Féral, Jean-Pierre; Ferrera, Isabel; Forster, Rodney M.; Kurekin, Andrey A.; Rastelli, Eugenio; Marinova, Veselka; Miller, Peter I.; Moncheva, Snejana; Newton, Alice; Pearman, John K.; Pitois, Sophie G.; Reñé, Albert; Rodríguez-Ezpeleta, Naiara; Saggiomo, Vincenzo; Simis, Stefan G. H.; Stefanova, Kremena; Wilson, Christian; Lo Martire, Marco; Greco, Silvestro; Cochrane, Sabine K. J.; Mangoni, Olga; Borja, Angel

    2016-01-01

    Marine environmental monitoring has tended to focus on site-specific methods of investigation. These traditional methods have low spatial and temporal resolution and are relatively labor intensive per unit area/time that they cover. To implement the Marine Strategy Framework Directive (MSFD), European Member States are required to improve marine monitoring and design monitoring networks. This can be achieved by developing and testing innovative and cost-effective monitoring systems, as well as indicators of environmental status. Here, we present several recently developed methodologies and technologies to improve marine biodiversity indicators and monitoring methods. The innovative tools are discussed concerning the technologies presently utilized as well as the advantages and disadvantages of their use in routine monitoring. In particular, the present analysis focuses on: (i) molecular approaches, including microarray, Real Time quantitative PCR (qPCR), and metagenetic (metabarcoding) tools; (ii) optical (remote) sensing and acoustic methods; and (iii) in situ monitoring instruments. We also discuss their applications in marine monitoring within the MSFD through the analysis of case studies in order to evaluate their potential utilization in future routine marine monitoring. We show that these recently-developed technologies can present clear advantages in accuracy, efficiency and cost.

  6. Implementing and Innovating Marine Monitoring Approaches for Assessing Marine Environmental Status

    KAUST Repository

    Danovaro, Roberto

    2016-11-23

    Marine environmental monitoring has tended to focus on site-specific methods of investigation. These traditional methods have low spatial and temporal resolution and are relatively labor intensive per unit area/time that they cover. To implement the Marine Strategy Framework Directive (MSFD), European Member States are required to improve marine monitoring and design monitoring networks. This can be achieved by developing and testing innovative and cost-effective monitoring systems, as well as indicators of environmental status. Here, we present several recently developed methodologies and technologies to improve marine biodiversity indicators and monitoring methods. The innovative tools are discussed concerning the technologies presently utilized as well as the advantages and disadvantages of their use in routine monitoring. In particular, the present analysis focuses on: (i) molecular approaches, including microarray, Real Time quantitative PCR (qPCR), and metagenetic (metabarcoding) tools; (ii) optical (remote) sensing and acoustic methods; and (iii) in situ monitoring instruments. We also discuss their applications in marine monitoring within the MSFD through the analysis of case studies in order to evaluate their potential utilization in future routine marine monitoring. We show that these recently-developed technologies can present clear advantages in accuracy, efficiency and cost.

  7. A genetic algorithm approach to optimization for the radiological worker allocation problem

    International Nuclear Information System (INIS)

    Yan Chen; Masakuni Narita; Masashi Tsuji; Sangduk Sa

    1996-01-01

    The worker allocation optimization problem in radiological facilities inevitably involves various types of requirements and constraints relevant to radiological protection and labor management. Some of these goals and constraints are not amenable to a rigorous mathematical formulation. Conventional methods for this problem rely heavily on sophisticated algebraic or numerical algorithms, which cause difficulties in the search for optimal solutions in the search space of worker allocation optimization problems. Genetic algorithms (GAB) are stochastic search algorithms introduced by J. Holland in the 1970s based on ideas and techniques from genetic and evolutionary theories. The most striking characteristic of GAs is the large flexibility allowed in the formulation of the optimal problem and the process of the search for the optimal solution. In the formulation, it is not necessary to define the optimal problem in rigorous mathematical terms, as required in the conventional methods. Furthermore, by designing a model of evolution for the optimal search problem, the optimal solution can be sought efficiently with computational simple manipulations without highly complex mathematical algorithms. We reported a GA approach to the worker allocation problem in radiological facilities in the previous study. In this study, two types of hard constraints were employed to reduce the huge search space, where the optimal solution is sought in such a way as to satisfy as many of soft constraints as possible. It was demonstrated that the proposed evolutionary method could provide the optimal solution efficiently compared with conventional methods. However, although the employed hard constraints could localize the search space into a very small region, it brought some complexities in the designed genetic operators and demanded additional computational burdens. 
In this paper, we propose a simplified evolutionary model with less restrictive hard constraints and make comparisons between
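
The evolutionary scheme described in this record can be illustrated with a minimal genetic algorithm. The dose matrix, dose limit, and GA settings below are invented for illustration (they are not the paper's data); the hard constraint that each worker receives exactly one task is preserved structurally by a permutation encoding, while the soft dose limit only penalizes the fitness.

```python
import random

random.seed(1)

# Hypothetical data: dose each of 4 workers would receive on each of 4 tasks.
DOSE = [[2.0, 3.5, 1.0, 4.0],
        [1.5, 2.0, 3.0, 2.5],
        [3.0, 1.0, 2.5, 1.5],
        [2.5, 4.0, 1.5, 1.0]]
DOSE_LIMIT = 2.6  # soft constraint: individual dose should stay below this

def fitness(perm):
    """Lower is better: total dose plus a penalty per soft-constraint violation."""
    doses = [DOSE[w][t] for w, t in enumerate(perm)]
    penalty = sum(10.0 for d in doses if d > DOSE_LIMIT)
    return sum(doses) + penalty

def crossover(a, b):
    """Order crossover that keeps the child a valid permutation (hard constraint)."""
    cut = random.randrange(1, len(a))
    head = a[:cut]
    return head + [t for t in b if t not in head]

def mutate(p):
    """Swap two assignments; the permutation property is preserved."""
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]

pop = [random.sample(range(4), 4) for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness)           # elitism: keep the 10 best allocations
    survivors = pop[:10]
    children = []
    while len(children) < 20:
        child = crossover(*random.sample(survivors, 2))
        if random.random() < 0.3:
            mutate(child)
        children.append(child)
    pop = survivors + children

best = min(pop, key=fitness)
print(best, fitness(best))
```

Because the encoding itself enforces the one-task-per-worker hard constraint, the genetic operators stay simple, which is the point the abstract makes about less restrictive constraint handling.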

  8. Using particle counters for pretreatment optimization, iron transport monitoring, condenser leak detection, and carryover monitoring - a synopsis of experiences

    International Nuclear Information System (INIS)

    Bryant, R.L.

    2008-01-01

    Steam generating systems all require clean water. The effects of particulate material in the steam/water cycle on metal corrosion, erosion, cracking, and deposition are frequently observed. However, the physical/chemical mechanisms are often difficult to correlate with a specific plant event, since the periodic 'grab' samples generally taken from various points in the water/steam process do not allow real-time, continuous on-line particulate monitoring and data collection. This paper introduces the concept of using particulate measuring instruments to monitor the steam generation cycle, and presents case histories of real-world plant situations where on-line particulate measurement using particle counters and particle monitors has defined the source of a problem, quantified its severity, and provided a solution. (orig.)

  9. International health research monitoring: exploring a scientific and a cooperative approach using participatory action research

    OpenAIRE

    Chantler, Tracey; Cheah, Phaik Yeong; Miiro, George; Hantrakum, Viriya; Nanvubya, Annet; Ayuo, Elizabeth; Kivaya, Esther; Kidola, Jeremiah; Kaleebu, Pontiano; Parker, Michael; Njuguna, Patricia; Ashley, Elizabeth; Guerin, Philippe J; Lang, Trudie

    2014-01-01

    Objectives To evaluate and determine the value of monitoring models developed by the Mahidol Oxford Tropical Research Unit and the East African Consortium for Clinical Research, consider how this can be measured and explore monitors’ and investigators’ experiences of and views about the nature, purpose and practice of monitoring. Research design A case study approach was used within the context of participatory action research because one of the aims was to guide and improve practice. 34 inte...

  10. Object-oriented Approach to High-level Network Monitoring and Management

    Science.gov (United States)

    Mukkamala, Ravi

    2000-01-01

    An absolute prerequisite for the management of large computer networks is the ability to measure their performance. Unless we monitor a system, we cannot hope to manage and control its performance. In this paper, we describe a network monitoring system that we are currently designing and implementing, investigating methods to build high-level monitoring systems on top of existing monitoring tools. Due to the heterogeneous nature of the underlying systems at NASA Langley Research Center, we use an object-oriented approach for the design: first, we use UML (Unified Modeling Language) to model users' requirements; second, we identify the existing capabilities of the underlying monitoring system; third, we try to map the former with the latter. Keeping in mind the complexity of the task and the required flexibility for future changes, we use an object-oriented design methodology. The system is built using the APIs offered by the HP OpenView system.

  11. A Reliable, Non-Invasive Approach to Data Center Monitoring and Management

    Directory of Open Access Journals (Sweden)

    Moises Levy

    2017-08-01

    Full Text Available Recent standards, legislation, and best practices point to data center infrastructure management systems to control and monitor data center performance. This work presents an innovative approach to address some of the challenges that currently hinder data center management. It explains how monitoring and management systems should be envisioned and implemented. Key parameters associated with data center infrastructure and information technology equipment can be monitored in real-time across an entire facility using low-cost, low-power wireless sensors. Given the data centers’ mission critical nature, the system must be reliable and deployable through a non-invasive process. The need for the monitoring system is also presented through a feedback control systems perspective, which allows higher levels of automation. The data center monitoring and management system enables data gathering, analysis, and decision-making to improve performance, and to enhance asset utilization.

  12. A robust approach to optimal matched filter design in ultrasonic non-destructive evaluation (NDE)

    Science.gov (United States)

    Li, Minghui; Hayward, Gordon

    2017-02-01

    The matched filter has been demonstrated to be a powerful yet efficient technique for enhancing defect detection and imaging in ultrasonic non-destructive evaluation (NDE) of coarse-grain materials, provided that the filter is properly designed and optimized. In the literature, designs have used the real excitation signals in order to accurately approximate the defect echoes, which makes them time consuming and less straightforward to implement in practice. In this paper, we present a more robust and flexible approach to optimal matched filter design that uses simulated excitation signals; the control parameters are chosen and optimized based on the actual array transducer, the transmitter-receiver system response, and the test sample. As a result, the filter response is optimized for, and depends on, the material characteristics. Experiments on industrial samples are conducted, and the results confirm the considerable benefits of the method.

  13. A heuristic approach to optimization of structural topology including self-weight

    Science.gov (United States)

    Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2018-01-01

    Topology optimization of structures under a design-dependent self-weight load is investigated in this paper. The problem deserves attention because of its significant importance in the engineering practice, especially nowadays as topology optimization is more often applied when designing large engineering structures, for example, bridges or carrying systems of tall buildings. It is worth noting that well-known approaches of topology optimization which have been successfully applied to structures under fixed loads cannot be directly adapted to the case of design-dependent loads, so that topology generation can be a challenge also for numerical algorithms. The paper presents the application of a simple but efficient non-gradient method to topology optimization of elastic structures under self-weight loading. The algorithm is based on the Cellular Automata concept, the application of which can produce effective solutions with low computational cost.

  14. Optimizing Maintenance Planning in the Production Industry Using the Markovian Approach

    Directory of Open Access Journals (Sweden)

    B Kareem

    2012-12-01

    Full Text Available Maintenance is an essential activity in every manufacturing establishment, as manufacturing effectiveness depends on the functionality of production equipment and machinery in terms of productivity and operational life. Maintenance cost minimization can be achieved by adopting an appropriate maintenance planning policy. This paper applies the Markovian approach to the maintenance planning decision, thereby generating an optimal maintenance policy from the identified alternatives over a specified period of time. Markov chains, transition matrices, decision processes, and dynamic programming models were formulated for the decision problem related to maintenance operations of a cable production company. Preventive and corrective maintenance data, based on workloads and costs, were collected from the company and utilized in this study. The results showed variability in the choice of the optimal maintenance policy adopted in the case study. Post-optimality analysis of the process buttressed this claim. The proposed approach is promising for solving the maintenance scheduling decision problems of the company.
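
The Markovian formulation the record describes (transition matrices, decision processes, dynamic programming) can be sketched with a toy maintenance model. The condition states, transition probabilities, costs, and discount factor below are invented for illustration; value iteration stands in for the paper's dynamic programming models.

```python
# States: 0 = good, 1 = degraded, 2 = failed.
# Actions with hypothetical transition matrices P[action][s][s'] and
# per-period costs C[action][s].
P = {
    "wait":       [[0.7, 0.2, 0.1],
                   [0.0, 0.6, 0.4],
                   [0.0, 0.0, 1.0]],
    "preventive": [[0.9, 0.1, 0.0],
                   [0.8, 0.2, 0.0],
                   [0.9, 0.1, 0.0]],  # from the failed state this acts as repair
}
C = {"wait": [0.0, 2.0, 10.0], "preventive": [3.0, 3.0, 12.0]}
GAMMA = 0.9  # discount factor

# Value iteration: V[s] converges to the minimal expected discounted cost.
V = [0.0, 0.0, 0.0]
for _ in range(500):
    V = [min(C[a][s] + GAMMA * sum(p * v for p, v in zip(P[a][s], V))
             for a in P) for s in range(3)]

# Greedy policy with respect to the converged values.
policy = [min(P, key=lambda a, s=s: C[a][s] + GAMMA * sum(
    p * v for p, v in zip(P[a][s], V))) for s in range(3)]
print(policy)
```

With these numbers the optimal policy waits while the equipment is good and maintains preventively once it degrades or fails, mirroring the preventive-versus-corrective trade-off the paper evaluates.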

  15. Heat and mass transfer intensification and shape optimization a multi-scale approach

    CERN Document Server

    2013-01-01

    Is the heat and mass transfer intensification defined as a new paradigm of process engineering, or is it just a common and old idea, renamed and given the current taste? Where might intensification occur? How to achieve intensification? How the shape optimization of thermal and fluidic devices leads to intensified heat and mass transfers? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies  the definition of the intensification by highlighting the potential role of the multi-scale structures, the specific interfacial area, the distribution of driving force, the modes of energy supply and the temporal aspects of processes.   A reflection on the methods of process intensification or heat and mass transfer enhancement in multi-scale structures is provided, including porous media, heat exchangers, fluid distributors, mixers and reactors. A multi-scale approach to achieve intensification and shape optimization is developed and clearly expla...

  16. An Optimization-Based Impedance Approach for Robot Force Regulation with Prescribed Force Limits

    Directory of Open Access Journals (Sweden)

    R. de J. Portillo-Vélez

    2015-01-01

    Full Text Available An optimization-based approach for the regulation of excessive or insufficient forces at the end-effector level is introduced. The objective is to minimize the interaction force error at the robot end effector while constraining undesired interaction forces. To that end, a dynamic optimization problem (DOP) is formulated considering a dynamic robot impedance model. Penalty functions are included in the DOP to handle the constraints on the interaction force. The optimization problem is solved online through the gradient flow approach. Convergence properties are presented, and stability is established when the force limits are considered in the analysis. The effectiveness of our proposal is validated via experimental results for a robotic grasping task.
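
The gradient-flow idea behind the penalized DOP can be sketched in one dimension. The force values, penalty weight, and step size below are invented, and the robot impedance dynamics are omitted; the sketch only shows how the flow settles between the desired force and the prescribed limit.

```python
# Minimal sketch of the gradient-flow approach (illustrative numbers, not the
# paper's robot model): drive a commanded force f toward a desired value while
# a quadratic penalty keeps it near a prescribed limit.
F_DES, F_MAX = 5.0, 4.0   # desired contact force and prescribed force limit
MU = 50.0                 # penalty weight on the limit violation
DT = 1e-3                 # integration step for the flow

def grad(f):
    """Gradient of 0.5*(f - F_DES)**2 + 0.5*MU*max(0, f - F_MAX)**2."""
    g = f - F_DES
    if f > F_MAX:
        g += MU * (f - F_MAX)
    return g

f = 0.0
for _ in range(20000):
    f -= DT * grad(f)      # explicit Euler integration of df/dt = -grad

# The flow settles at the penalized optimum (F_DES + MU*F_MAX) / (1 + MU).
print(round(f, 3))
```

The stationary point lies just above the limit; increasing the penalty weight MU pushes it arbitrarily close to F_MAX, which is the mechanism the abstract describes for keeping interaction forces within prescribed bounds.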

  17. Multi-scale Modeling Approach for Design and Optimization of Oleochemical Processes

    DEFF Research Database (Denmark)

    Jones, Mark Nicholas; Forero-Hernandez, Hector Alexander; Sarup, Bent

    2017-01-01

    The primary goal of this work is to present a systematic methodology and software framework for a multi-level approach ranging from process synthesis and modeling through property prediction, to sensitivity analysis, property parameter tuning and optimization. This framework is applied to the follow...

  18. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noël; Lastovetsky, Alexey

    2014-01-01

    -scale parallelism in mind. Indeed, while in 1990s a system with few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel

  19. Quality by design approach for optimizing the formulation and physical properties of extemporaneously prepared orodispersible films

    NARCIS (Netherlands)

    Visser, J. Caroline; Dohmen, Willem M. C.; Hinrichs, Wouter L. J.; Breitkreutz, Joerg; Frijlink, Henderik W.; Woerdenbag, Herman J.

    2015-01-01

    The quality by design (QbD) approach was applied for optimizing the formulation of extemporaneously prepared orodispersible films (ODFs) using Design-Expert Software. The starting formulation was based on earlier experiments and contained the film forming agents hypromellose and carbomer 974P and

  20. A Robot Trajectory Optimization Approach for Thermal Barrier Coatings Used for Free-Form Components

    Science.gov (United States)

    Cai, Zhenhua; Qi, Beichun; Tao, Chongyuan; Luo, Jie; Chen, Yuepeng; Xie, Changjun

    2017-10-01

    This paper is concerned with a robot trajectory optimization approach for thermal barrier coatings. As requirements for high reproducibility on complex workpieces increase, an optimal thermal spraying trajectory should not only guarantee accurate control of the spray parameters defined by users (e.g., scanning speed, spray distance, scanning step, etc.) to achieve coating thickness homogeneity, but also help to homogenize the heat transfer distribution on the coating surface. A mesh-based trajectory generation approach is introduced in this work to generate path curves on a free-form component. Then, two types of meander trajectories are generated by applying different connection methods. Additionally, this paper presents an approach for introducing heat transfer analysis into the trajectory planning process. Combining heat transfer analysis with trajectory planning overcomes the defects of traditional trajectory planning methods (e.g., local over-heating) and helps form a uniform temperature field by optimizing the time sequence of the path curves. The influence of two different robot trajectories on the heat transfer process is estimated using coupled FEM models, which demonstrates the effectiveness of the presented optimization approach.

  1. Quantitative NMR Approach to Optimize the Formation of Chemical Building Blocks from Abundant Carbohydrates

    DEFF Research Database (Denmark)

    Elliot, Samuel Gilbert; Tolborg, Søren; Sádaba, Irantzu

    2017-01-01

    -containing catalysts such as Sn-Beta. These compounds are potential building blocks for polyesters with additional olefin and alcohol functionalities. We employ an NMR approach to identify, quantify and optimize the formation of these building blocks in the chemocatalytic transformation of abundant carbohydrates by Sn...

  2. An efficient identification approach for stable and unstable nonlinear systems using Colliding Bodies Optimization algorithm.

    Science.gov (United States)

    Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P

    2015-11-01

    This paper presents an efficient approach to identifying different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, Colliding Bodies Optimization (CBO). The performance measures of the CBO-based approach, such as precision and accuracy, are justified by the minimum output mean square error (MSE), which signifies that the amounts of bias and variance in the output domain are also the least. It is also observed that optimizing the output MSE in the presence of outliers results in consistently close estimates of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. The optimum MSE values, computational times and statistical properties of the MSEs are all found to be superior to those of other existing stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
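
The MSE-driven identification loop the record describes can be sketched on a toy Hammerstein model (a static cubic nonlinearity followed by a first-order linear filter). The true parameters and the search settings are invented, and a plain stochastic hill-climbing search stands in for the CBO algorithm itself, purely to illustrate minimizing the output MSE.

```python
import random

random.seed(0)

# Toy Hammerstein system: static nonlinearity u -> a*u + b*u**3 followed by a
# first-order linear filter y[k] = c*y[k-1] + x[k]. Hypothetical true parameters:
TRUE = (1.0, 0.5, 0.3)

def simulate(params, inputs):
    a, b, c = params
    y, out = 0.0, []
    for u in inputs:
        y = c * y + a * u + b * u ** 3
        out.append(y)
    return out

inputs = [random.uniform(-1, 1) for _ in range(200)]
target = simulate(TRUE, inputs)

def mse(params):
    """Output mean square error between candidate model and measured output."""
    model = simulate(params, inputs)
    return sum((m - t) ** 2 for m, t in zip(model, target)) / len(target)

# Plain stochastic search (a stand-in for CBO): perturb the current best,
# keep improvements, and shrink the step size over time.
best = (0.0, 0.0, 0.0)
best_err, step = mse(best), 1.0
for _ in range(4000):
    cand = tuple(p + random.gauss(0, step) for p in best)
    err = mse(cand)
    if err < best_err:
        best, best_err = cand, err
    step *= 0.999

print([round(p, 2) for p in best], round(best_err, 6))
```

Any population-based optimizer, CBO included, plugs into the same loop: only the proposal mechanism for candidate parameter vectors changes, while the output MSE remains the fitness being minimized.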

  3. An artificial intelligence approach to onboard fault monitoring and diagnosis for aircraft applications

    Science.gov (United States)

    Schutte, P. C.; Abbott, K. H.

    1986-01-01

    Real-time onboard fault monitoring and diagnosis for aircraft applications, whether performed by the human pilot or by automation, presents many difficult problems. Quick response to failures may be critical, the pilot often must compensate for the failure while diagnosing it, his information about the state of the aircraft is often incomplete, and the behavior of the aircraft changes as the effect of the failure propagates through the system. A research effort was initiated to identify guidelines for automation of onboard fault monitoring and diagnosis and associated crew interfaces. The effort began by determining the flight crew's information requirements for fault monitoring and diagnosis and the various reasoning strategies they use. Based on this information, a conceptual architecture was developed for the fault monitoring and diagnosis process. This architecture represents an approach and a framework which, once incorporated with the necessary detail and knowledge, can be a fully operational fault monitoring and diagnosis system, as well as providing the basis for comparison of this approach to other fault monitoring and diagnosis concepts. The architecture encompasses all aspects of the aircraft's operation, including navigation, guidance and controls, and subsystem status. The portion of the architecture that encompasses subsystem monitoring and diagnosis was implemented for an aircraft turbofan engine to explore and demonstrate the AI concepts involved. This paper describes the architecture and the implementation for the engine subsystem.

  4. Hybrid approaches to clinical trial monitoring: Practical alternatives to 100% source data verification

    Directory of Open Access Journals (Sweden)

    Sourabh De

    2011-01-01

    Full Text Available For years, a vast majority of the clinical trial industry has followed the tenet of 100% source data verification (SDV). This has been driven partly by an overcautious approach that links data quality to the extent of monitoring and SDV, and partly by a desire to stay on the safe side of regulations. The regulations, however, do not state any upper or lower limits of SDV. What they expect from researchers and sponsors is methodologies that ensure data quality. How the industry achieves this is open to innovation and the application of statistical methods, targeted and remote monitoring, real-time reporting, adaptive monitoring schedules, etc. In short, hybrid approaches to monitoring. Coupled with concepts of optimum monitoring and SDV at site and off-site monitoring techniques, it should be possible to reduce the time required to conduct SDV, leaving more time for other productive activities. Organizations stand to gain directly or indirectly from such savings, whether by diverting the funds back to the R&D pipeline, investing more in technology infrastructure to support large trials, or simply increasing the sample size of trials. Whether it also improves the work-life balance of monitors, who may then travel on a less hectic schedule for the same level of quality and productivity, can be predicted only when there is more evidence from the field.

  5. Hybrid approaches to clinical trial monitoring: Practical alternatives to 100% source data verification.

    Science.gov (United States)

    De, Sourabh

    2011-07-01

    For years, a vast majority of the clinical trial industry has followed the tenet of 100% source data verification (SDV). This has been driven partly by an overcautious approach that links data quality to the extent of monitoring and SDV, and partly by a desire to stay on the safe side of regulations. The regulations, however, do not state any upper or lower limits of SDV. What they expect from researchers and sponsors is methodologies that ensure data quality. How the industry achieves this is open to innovation and the application of statistical methods, targeted and remote monitoring, real-time reporting, adaptive monitoring schedules, etc. In short, hybrid approaches to monitoring. Coupled with concepts of optimum monitoring and SDV at site and off-site monitoring techniques, it should be possible to reduce the time required to conduct SDV, leaving more time for other productive activities. Organizations stand to gain directly or indirectly from such savings, whether by diverting the funds back to the R&D pipeline, investing more in technology infrastructure to support large trials, or simply increasing the sample size of trials. Whether it also improves the work-life balance of monitors, who may then travel on a less hectic schedule for the same level of quality and productivity, can be predicted only when there is more evidence from the field.

  6. NLP model and stochastic multi-start optimization approach for heat exchanger networks

    International Nuclear Information System (INIS)

    Núñez-Serna, Rosa I.; Zamora, Juan M.

    2016-01-01

    Highlights: • An NLP model for the optimal design of heat exchanger networks is proposed. • The NLP model is developed from a stage-wise grid diagram representation. • A two-phase stochastic multi-start optimization methodology is utilized. • Improved network designs are obtained with different heat load distributions. • Structural changes and reductions in the number of heat exchangers are produced. - Abstract: Heat exchanger network synthesis methodologies frequently identify good network structures, which nevertheless, might be accompanied by suboptimal values of design variables. The objective of this work is to develop a nonlinear programming (NLP) model and an optimization approach that aim at identifying the best values for intermediate temperatures, sub-stream flow rate fractions, heat loads and areas for a given heat exchanger network topology. The NLP model that minimizes the total annual cost of the network is constructed based on a stage-wise grid diagram representation. To improve the possibilities of obtaining global optimal designs, a two-phase stochastic multi-start optimization algorithm is utilized for the solution of the developed model. The effectiveness of the proposed optimization approach is illustrated with the optimization of two network designs proposed in the literature for two well-known benchmark problems. Results show that from the addressed base network topologies it is possible to achieve improved network designs, with redistributions in exchanger heat loads that lead to reductions in total annual costs. The results also show that the optimization of a given network design sometimes leads to structural simplifications and reductions in the total number of heat exchangers of the network, thereby exposing alternative viable network topologies initially not anticipated.
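
The two-phase idea the record describes (a stochastic multi-start phase followed by deterministic local refinement) can be sketched on a toy nonconvex objective. The objective, bounds, and phase sizes below are illustrative inventions, not the paper's heat exchanger network cost model; a simple coordinate-wise pattern search stands in for the NLP solver.

```python
import random

random.seed(2)

# Toy nonconvex objective standing in for a network cost: two basins in x,
# with the global optimum at (x, y) = (2, 1).
def cost(x, y):
    return (x ** 2 - 4) ** 2 + (y - 1) ** 2 + 5 * (x < 0)

def local_refine(x, y, step=0.5, iters=200):
    """Deterministic phase: simple coordinate-wise pattern search."""
    best = cost(x, y)
    for _ in range(iters):
        moved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            c = cost(x + dx, y + dy)
            if c < best:
                x, y, best, moved = x + dx, y + dy, c, True
        if not moved:
            step /= 2          # shrink the stencil when stuck
    return x, y, best

# Phase 1: stochastic multi-start sampling over the variable bounds.
starts = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(50)]
starts.sort(key=lambda p: cost(*p))

# Phase 2: refine only the most promising candidates.
results = [local_refine(x, y) for x, y in starts[:5]]
x, y, c = min(results, key=lambda r: r[2])
print(round(x, 3), round(y, 3), round(c, 6))
```

Refining several good starting points instead of one is what improves the odds of escaping the inferior basin, which is the role the stochastic multi-start phase plays in the paper's methodology.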

  7. Monitoring arid-land groundwater abstraction through optimization of a land surface model with remote sensing-based evaporation

    KAUST Repository

    Lopez Valencia, Oliver Miguel

    2018-02-01

    The increase in irrigated agriculture in Saudi Arabia is having a large impact on its limited groundwater resources. While large-scale water storage changes can be estimated using satellite data, monitoring of groundwater abstraction rates is largely non-existent at either farm or regional level, so water management decisions remain ill-informed. Although determining water use from space at high spatiotemporal resolutions remains challenging, a number of approaches have shown promise, particularly in the retrieval of crop water use via evaporation. Apart from satellite-based estimates, land surface models offer a continuous spatio-temporal evolution of the full land-atmosphere water and energy exchanges. In this study, we first examine recent trends in terrestrial water storage depletion within the Arabian Peninsula and explore its relation to increased agricultural activity in the region using satellite data. Next, we evaluate a number of large-scale remote sensing-based evaporation models, giving insight into the challenges of evaporation retrieval in arid environments. Finally, we present a novel method aimed at retrieving the groundwater abstraction rates used in irrigated fields by constraining a land surface model with remote sensing-based evaporation observations. The approach is used to reproduce reported irrigation rates over 41 center-pivot irrigation fields presenting a range of crop dynamics over the course of one year. The results of this application are promising, with mean absolute errors below 3 mm/day, a bias of -1.6 mm/day, and a first rough estimate of total annual abstractions of 65.8 Mm3 (close to the estimated value using reported farm data, 69.42 Mm3). However, further efforts to address the overestimation of bare soil evaporation in the model are required. 
The uneven coverage of satellite data within the study site allowed us to evaluate its impact on the optimization, with a better match between observed and obtained irrigation rates on fields with

  8. Optimization of mass of plastic scintillator film for flow-cell based tritium monitoring: a Monte Carlo study

    International Nuclear Information System (INIS)

    Roy, Arup Singha; Palani Selvam, T.; Raman, Anand; Raja, V.; Chaudhury, Probal

    2014-01-01

    Over the years, various types of tritium-in-air monitors have been designed and developed based on different principles. Ionization chambers, proportional counters and scintillation detector systems are a few among them. A plastic scintillator based, flow-cell type system was developed for online monitoring of tritium in air. The scintillator mass inside the cell volume that maximizes the response of the detector system must be determined to achieve maximum efficiency. The present study aims to optimize the mass of the plastic scintillator film for the flow-cell based tritium monitoring instrument so that maximum efficiency is achieved. The Monte Carlo based EGSnrc code system has been used for this purpose

  9. Monitoring T-Cell Responses in Translational Studies: Optimization of Dye-Based Proliferation Assay for Evaluation of Antigen-Specific Responses

    Directory of Open Access Journals (Sweden)

    Anja Ten Brinke

    2017-12-01

    Full Text Available Adoptive therapy with regulatory T cells or tolerance-inducing antigen (Ag)-presenting cells is an innovative and promising therapeutic approach to controlling undesired and harmful activation of the immune system, as observed in autoimmune diseases and in solid organ and bone marrow transplantation. One of the critical issues in elucidating the mechanisms responsible for the success or failure of these therapies, and in defining the specificity of the therapy, is the evaluation of Ag-specific T-cell responses. Several efforts have been made to develop suitable and reproducible assays. Here, we focus on dye-based proliferation assays. We highlight with practical examples the fundamental issues to take into consideration when implementing an effective and sensitive dye-based proliferation assay to monitor Ag-specific responses in patients. The most critical points were used to design a road map for setting up and analyzing the optimal assay to assess Ag-specific T-cell responses in patients undergoing different treatments. This is the first step toward optimized monitoring of tolerance induction, allowing comparison of outcomes of different clinical studies. The road map can also be applied to other therapeutic interventions, not limited to tolerance induction therapies, in which Ag-specific T-cell responses are relevant, such as vaccination approaches and cancer immunotherapy.

  10. An Augmented Incomplete Factorization Approach for Computing the Schur Complement in Stochastic Optimization

    KAUST Repository

    Petra, Cosmin G.; Schenk, Olaf; Lubin, Miles; Gärtner, Klaus

    2014-01-01

    We present a scalable approach and implementation for solving stochastic optimization problems on high-performance computers. In this work we revisit the sparse linear algebra computations of the parallel solver PIPS with the goal of improving the shared-memory performance and decreasing the time to solution. These computations consist of solving sparse linear systems with multiple sparse right-hand sides and are needed in our Schur-complement decomposition approach to compute the contribution of each scenario to the Schur matrix. Our novel approach uses an incomplete augmented factorization implemented within the PARDISO linear solver and an outer BiCGStab iteration to efficiently absorb pivot perturbations occurring during factorization. This approach is capable of both efficiently using the cores inside a computational node and exploiting sparsity of the right-hand sides. We report on the performance of the approach on high-performance computers when solving stochastic unit commitment problems of unprecedented size (billions of variables and constraints) that arise in the optimization and control of electrical power grids. Our numerical experiments suggest that supercomputers can be efficiently used to solve power grid stochastic optimization problems with thousands of scenarios under the strict "real-time" requirements of power grid operators. To our knowledge, this has not been possible prior to the present work. © 2014 Society for Industrial and Applied Mathematics.
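
The Schur-complement step the abstract describes can be sketched on a tiny dense system: for a block matrix [[A, B], [B^T, C]], each scenario contributes S = C - B^T A^{-1} B, computed by solving A X = B for the multiple right-hand sides (the columns of B) and then forming C - B^T X. The numbers are toy inventions; the real solver works with large sparse blocks, incomplete factorizations, and BiCGStab, none of which are reproduced here.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
B = [[1.0, 0.0], [2.0, 1.0]]   # columns of B are the multiple right-hand sides
C = [[5.0, 2.0], [2.0, 6.0]]

# X = A^{-1} B, one solve per right-hand side (column).
cols = [solve(A, [B[0][j], B[1][j]]) for j in range(2)]

# S = C - B^T X, this scenario's contribution to the Schur matrix.
S = [[C[i][j] - sum(B[k][i] * cols[j][k] for k in range(2))
      for j in range(2)] for i in range(2)]
print(S)
```

In the parallel solver, each scenario performs this computation independently with its own A and B blocks, and the per-scenario contributions are summed into the global Schur matrix.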

  11. Monitoring post-fire vegetation rehabilitation projects: A common approach for non-forested ecosystems

    Science.gov (United States)

    Wirth, Troy A.; Pyke, David A.

    2007-01-01

    Emergency Stabilization and Rehabilitation (ES&R) and Burned Area Emergency Response (BAER) treatments are short-term, high-intensity treatments designed to mitigate the adverse effects of wildfire on public lands. The federal government expends significant resources implementing ES&R and BAER treatments after wildfires; however, recent reviews have found that existing data from monitoring and research are insufficient to evaluate the effects of these activities. The purpose of this report is to: (1) document what monitoring methods are generally used by personnel in the field; (2) describe approaches and methods for post-fire vegetation and soil monitoring documented in agency manuals; (3) determine the common elements of monitoring programs recommended in these manuals; and (4) describe a common monitoring approach to determine the effectiveness of future ES&R and BAER treatments in non-forested regions. Both qualitative and quantitative methods to measure effectiveness of ES&R treatments are used by federal land management agencies. Quantitative methods are used in the field depending on factors such as funding, personnel, and time constraints. There are seven vegetation monitoring manuals produced by the federal government that address monitoring methods for (primarily) vegetation and soil attributes. These methods vary in their objectivity and repeatability. The most repeatable methods are point-intercept, quadrat-based density measurements, gap intercepts, and direct measurement of soil erosion. Additionally, these manuals recommend approaches for designing monitoring programs for the state of ecosystems or the effect of management actions. The elements of a defensible monitoring program applicable to ES&R and BAER projects that most of these manuals have in common are objectives, stratification, control areas, random sampling, data quality, and statistical analysis. 
The effectiveness of treatments can be determined more accurately if data are gathered using

  12. Vibration Monitoring of Gas Turbine Engines: Machine-Learning Approaches and Their Challenges

    Directory of Open Access Journals (Sweden)

    Ioannis Matthaiou

    2017-09-01

    Full Text Available In this study, condition monitoring strategies are examined for gas turbine engines using vibration data. The focus is on data-driven approaches; for this reason, a novelty detection framework is considered for the development of reliable data-driven models that can describe the underlying relationships of the processes taking place during an engine’s operation. From a data analysis perspective, the high dimensionality of the extracted features and the complexity of the data are two problems that need to be dealt with throughout analyses of this type. The latter refers to the fact that the healthy engine state data can be non-stationary. To address this, the implementation of the wavelet transform is examined to obtain a set of features from vibration signals that describe the non-stationary parts. The problem of high feature dimensionality is addressed by “compressing” the features using kernel principal component analysis so that more meaningful, lower-dimensional features can be used to train the pattern recognition algorithms. For feature discrimination, a novelty detection scheme based on the one-class support vector machine (OCSVM) algorithm is chosen for investigation. The main advantage, when compared to other pattern recognition algorithms, is that the learning problem is cast as a quadratic program. The developed condition monitoring strategy can be applied for detecting excessive vibration levels that can lead to engine component failure. Here, we demonstrate its performance on vibration data from an experimental gas turbine engine operating under different conditions. Engine vibration data designated as belonging to the engine’s “normal” condition correspond to fuel and air-to-fuel ratio combinations in which the engine experienced low levels of vibration. Results demonstrate that such novelty detection schemes can achieve a satisfactory validation accuracy through appropriate selection of two parameters of the
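
The novelty-detection idea the record describes (learn a boundary from healthy data only, then flag departures) can be sketched minimally. A simple distance threshold to the training mean stands in for the actual scheme's wavelet features, kernel PCA, and OCSVM; the synthetic "healthy" feature data below are invented for illustration.

```python
import math
import random

random.seed(3)

# Synthetic stand-in for healthy-state vibration features (2-D for clarity).
healthy = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(500)]

# "Train" on healthy data only: record its center.
mean = tuple(sum(c) / len(healthy) for c in zip(*healthy))

def dist(p):
    return math.hypot(p[0] - mean[0], p[1] - mean[1])

# Threshold at the 95th percentile of training distances, so roughly 5% of
# healthy points fall outside the learned boundary (a tunable false-alarm rate,
# analogous in spirit to the OCSVM's nu parameter).
radii = sorted(dist(p) for p in healthy)
threshold = radii[int(0.95 * len(radii))]

def is_novel(p):
    """Flag a feature vector lying outside the healthy-state boundary."""
    return dist(p) > threshold

print(is_novel((0.1, -0.2)), is_novel((6.0, 6.0)))
```

The OCSVM replaces this crude spherical boundary with one learned in a kernel-induced feature space via a quadratic program, which is what lets it wrap tightly around non-spherical healthy-state distributions.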

  13. A Genetic Algorithms-based Approach for Optimized Self-protection in a Pervasive Service Middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Ingstrup, Mads; Hansen, Klaus Marius

    2009-01-01

    With increasingly complex and heterogeneous systems in pervasive service computing, it becomes more and more important to provide self-protected services to end users. In order to achieve self-protection, the corresponding security should be provided in an optimized manner, considering the constraints of heterogeneous devices and networks. In this paper, we present a Genetic Algorithms-based approach for obtaining optimized security configurations at run time, supported by a set of security OWL ontologies and an event-driven framework. This approach has been realized as a prototype for self-protection in the Hydra middleware, and is integrated with a framework for enforcing the computed solution at run time using security obligations. The experiments with the prototype on configuring security strategies for a pervasive service middleware show that this approach has acceptable performance, and could be used...
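A genetic algorithm for this kind of constrained configuration choice can be sketched as follows; the security mechanisms, benefit scores, costs, and device budget are invented placeholders, not values from the Hydra ontologies.

```python
# Sketch: evolve a bitstring selecting security mechanisms so as to maximize
# total security benefit without exceeding a device's resource budget.
import random

random.seed(42)
# (security benefit, resource cost) per optional mechanism -- made-up numbers
MECHANISMS = [(5, 8), (3, 4), (8, 15), (2, 2), (6, 9), (4, 5)]
BUDGET = 20  # resource units available on the device

def fitness(bits):
    benefit = sum(b for (b, _), on in zip(MECHANISMS, bits) if on)
    cost = sum(c for (_, c), on in zip(MECHANISMS, bits) if on)
    return benefit if cost <= BUDGET else -1  # infeasible configs are penalized

def evolve(pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in MECHANISMS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(MECHANISMS))
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < 0.1:             # occasional one-bit mutation
                i = random.randrange(len(MECHANISMS))
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The run-time flavor of the approach comes from re-running `evolve` whenever the event-driven framework reports changed device or network constraints.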

  14. Optimization of the x-ray monitoring angle for creating a correlation model between internal and external respiratory signals

    Energy Technology Data Exchange (ETDEWEB)

    Akimoto, Mami; Nakamura, Mitsuhiro; Mukumoto, Nobutaka; Yamada, Masahiro; Ueki, Nami; Matsuo, Yukinori; Sawada, Akira; Mizowaki, Takashi; Kokubo, Masaki; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan); Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507, Japan and Department of Radiological Technology, Faculty of Medical Science, Kyoto College of Medical Science, Nantan, Kyoto 622-0041 (Japan); Department of Radiation Oncology, Kobe City Medical Center General Hospital, Kobe, Hyogo 650-0047, Japan and Division of Radiation Oncology, Institute of Biomedical Research and Innovation, Kobe, Hyogo 650-0047 (Japan)]

    2012-10-15

    Purpose: To perform dynamic tumor tracking irradiation with the Vero4DRT (MHI-TM2000), a correlation model [four-dimensional (4D) model] between the displacement of infrared markers on the abdominal wall and the three-dimensional position of a tumor, indicated by a minimum of three implanted gold markers, is required. However, the gold markers cannot be detected successfully on fluoroscopic images under the following situations: (1) overlapping of the gold markers; and (2) a low intensity ratio of a gold marker to its surroundings. In the present study, the authors proposed a method to readily determine the optimal x-ray monitoring angle for creating a 4D model utilizing computed tomography (CT) images. Methods: The Vero4DRT, which mounts two orthogonal kV x-ray imaging subsystems, can separately rotate the gantry along an O-shaped guide lane and the O-ring about its vertical axis. The optimal x-ray monitoring angle was determined on CT images by minimizing the root-sum-square of water equivalent path lengths (WEPLs) on the orthogonal lines passing through all of the gold markers while rotating the O-ring and the gantry. The x-ray monitoring angles at which the distances between the gold markers were within 5 mm at the isocenter level were excluded, to prevent false detection of the gold markers in consideration of respiratory motion. First, the relationship between the WEPLs (unit: mm) and the intensity ratios of the gold markers was examined to assess the validity of the proposed method. Second, the proposed method was applied to the 4D-CT images at the end-expiration phase for 11 lung cancer patients who had four to five gold markers. To prove the necessity of the x-ray monitoring angle optimization, the intensity ratios of the least visible markers (minimum intensity ratios) that were estimated from the WEPLs were compared under the following conditions: the optimal x-ray monitoring angle and the angles used for setup verification. Additionally, the intra- and
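The angle search itself can be sketched as a grid scan. In this toy version the CT-derived WEPLs are replaced by chord lengths through a water-equivalent ellipsoid, only the O-ring rotation is scanned, and the marker positions and ellipsoid size are invented; it illustrates the objective and the 5 mm exclusion rule, not the paper's CT pipeline.

```python
# Sketch: pick the imaging angle minimizing the root-sum-square of WEPLs over
# all markers, skipping angles where projected markers come within 5 mm.
import numpy as np

markers = np.array([[10.0, 0.0, 5.0], [-5.0, 8.0, 0.0], [0.0, -6.0, -4.0],
                    [6.0, 5.0, -8.0]])          # mm, hypothetical gold markers
semi_axes = np.array([150.0, 100.0, 120.0])     # mm, water-equivalent "patient"

def chord_wepl(point, direction, axes):
    """Chord length of the line point + t*direction through an ellipsoid."""
    d, p = direction / axes, point / axes
    a, b, c = d @ d, 2 * p @ d, p @ p - 1.0
    disc = b * b - 4 * a * c
    if disc <= 0:
        return 0.0                              # line misses the ellipsoid
    return np.sqrt(disc) / a * np.linalg.norm(direction)

best = None
for ring in range(0, 360, 5):                   # O-ring rotation (deg)
    u = np.deg2rad(ring)
    direction = np.array([np.cos(u), np.sin(u), 0.0])
    # project markers onto the imaging plane orthogonal to the beam direction
    proj = markers - np.outer(markers @ direction, direction)
    dists = [np.linalg.norm(proj[i] - proj[j])
             for i in range(len(proj)) for j in range(i + 1, len(proj))]
    if min(dists) < 5.0:                        # markers would overlap on the image
        continue
    rss = np.sqrt(sum(chord_wepl(m, direction, semi_axes) ** 2 for m in markers))
    if best is None or rss < best[1]:
        best = (ring, rss)

print(best)
```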

  15. LMI–based robust controller design approach in aircraft multidisciplinary design optimization problem

    Directory of Open Access Journals (Sweden)

    Qinghua Zeng

    2015-07-01

    Full Text Available This article proposes a linear matrix inequality–based robust controller design approach that enables the control discipline to be designed synchronously with the other disciplines in aircraft multidisciplinary design optimization, treating variations in design parameters as equivalent perturbations. Because of the complicated mapping between the coefficient arrays of the aircraft motion model and the aircraft design parameters, a robust controller designed directly against variations in these coefficient arrays is so conservative that the multidisciplinary design optimization problem becomes very difficult to solve, and even when a solution exists, its robustness is generally poor. Therefore, this article derives an uncertainty model of the disciplinary design parameters based on response surface approximation, converts the robust controller design problem into the solution of a standard linear matrix inequality, and theoretically gives a less conservative design method for a robust controller based on the variation in the design parameters themselves. Furthermore, the concurrent subspace approach is applied to the multidisciplinary system with this kind of robust controller in the design loop. A multidisciplinary design optimization of a tailless aircraft is given as an example, showing that the control discipline can be optimized synchronously with the other disciplines; in particular, the method greatly reduces the computational cost of the multidisciplinary design optimization and makes its results more robust in terms of flight performance.
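Underlying such LMI conditions is quadratic stability over a parameter polytope: one matrix P must certify every vertex of the uncertainty set. A much-simplified numerical check of that idea (not the paper's synthesis procedure) can be sketched with a Lyapunov equation, using toy second-order dynamics rather than an aircraft model.

```python
# Sketch: solve A^T P + P A = -I for a nominal model, then verify that the
# same P certifies A_i^T P + P A_i < 0 at each vertex of a parameter polytope.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A_nom = np.array([[0.0, 1.0], [-2.0, -3.0]])
# vertices of a polytopic uncertainty in the stiffness-like entry A[1, 0]
vertices = [np.array([[0.0, 1.0], [-2.3, -3.0]]),
            np.array([[0.0, 1.0], [-1.7, -3.0]])]

# solve_continuous_lyapunov solves a X + X a^T = q, hence the transpose
P = solve_continuous_lyapunov(A_nom.T, -np.eye(2))
assert np.all(np.linalg.eigvalsh(P) > 0)        # P must be positive definite

for A in vertices:
    M = A.T @ P + P @ A
    print(np.linalg.eigvalsh(M).max())          # negative => vertex certified
```

A full LMI synthesis would instead treat P (and the controller gain) as decision variables of a semidefinite program; this sketch only checks the certificate.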

  16. Optimal planning approaches with multiple impulses for rendezvous based on hybrid genetic algorithm and control method

    Directory of Open Access Journals (Sweden)

    JingRui Zhang

    2015-03-01

    Full Text Available In this article, we focus on the safe and effective completion of a rendezvous and docking task, considering planning approaches and control for fuel-optimal rendezvous with a target spacecraft running on a near-circular reference orbit. A variety of practical path constraints are considered, including constraints on the field of view, the impulses, and passive safety. A rendezvous approach is calculated using a hybrid genetic algorithm under these constraints. Furthermore, a trajectory-tracking control method is adopted to overcome external disturbances. Based on the Clohessy–Wiltshire equations, we first construct the mathematical model of optimal planning approaches of multiple impulses with path constraints. Second, we introduce the principle of the hybrid genetic algorithm, which combines strong global searching ability with strong local searching ability, and explain its application to the trajectory planning problem. Then, we give three-impulse simulation examples to acquire an optimal rendezvous trajectory under the path constraints presented in this article. The effectiveness and applicability of the tracking control method are verified through numerical simulation, with the optimal trajectory above as the control objective.
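The basic building block such planners evaluate is the impulsive transfer under the Clohessy–Wiltshire dynamics: the state-transition matrix gives the burn that takes the chaser from a relative state to the target in a fixed time. The orbit rate and initial state below are arbitrary example numbers, and path constraints are not modeled in this sketch.

```python
# Sketch: first impulse of a fixed-time Clohessy-Wiltshire rendezvous
# (x radial, y along-track, z cross-track; target at the origin).
import numpy as np

n = 0.00113          # rad/s, mean motion of a ~400 km circular reference orbit
t = 1500.0           # s, transfer duration
s, c = np.sin(n * t), np.cos(n * t)

# position-to-position and velocity-to-position blocks of the CW transition
Prr = np.array([[4 - 3*c, 0, 0],
                [6*(s - n*t), 1, 0],
                [0, 0, c]])
Prv = np.array([[s/n, 2*(1 - c)/n, 0],
                [2*(c - 1)/n, (4*s - 3*n*t)/n, 0],
                [0, 0, s/n]])

r0 = np.array([500.0, -2000.0, 100.0])   # m, chaser position relative to target
v0 = np.array([0.0, 0.0, 0.0])           # m/s, initially station-keeping

# velocity after the first impulse that zeroes the relative position at time t
v_req = np.linalg.solve(Prv, -Prr @ r0)
dv1 = v_req - v0
print("delta-v1 (m/s):", dv1)

# verify: propagating (r0, v_req) for t seconds lands on the target
r_final = Prr @ r0 + Prv @ v_req
print("final offset (m):", np.linalg.norm(r_final))
```

A multi-impulse planner strings several such segments together and lets the genetic algorithm choose the burn times and intermediate waypoints subject to the path constraints.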

  17. Energy Efficiency - Spectral Efficiency Trade-off: A Multiobjective Optimization Approach

    KAUST Repository

    Amin, Osama

    2015-04-23

    In this paper, we consider the resource allocation problem for the energy efficiency (EE) - spectral efficiency (SE) trade-off. Unlike traditional research that uses the EE as an objective function and imposes constraints either on the SE or the achievable rate, we propose a multiobjective optimization approach that can flexibly switch between the EE and SE functions or change the priority level of each function using a trade-off parameter. Our dynamic approach is more tractable than the conventional approaches and more convenient for realistic communication applications and scenarios. We prove that the multiobjective optimization of the EE and SE is equivalent to a simple problem that maximizes the achievable rate/SE and minimizes the total power consumption. We then apply the generalized resource allocation framework for the EE-SE trade-off to optimally allocate the subcarriers’ power for orthogonal frequency division multiplexing (OFDM) with imperfect channel estimation. Finally, we use numerical results to discuss the choice of the trade-off parameter and study the effects of the estimation error, the transmission power budget, and the channel-to-noise ratio on the multiobjective optimization.
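The scalarized form described (maximize rate while penalizing total power with a trade-off parameter) admits a water-filling-style solution for per-subcarrier powers. The sketch below assumes perfect channel knowledge and invented channel gains and trade-off values, so it illustrates only the shape of the trade-off, not the paper's imperfect-CSI formulation.

```python
# Sketch: maximize sum log2(1 + p_i * g_i) - lam * sum p_i over powers p_i >= 0.
# Stationarity gives a water-filling level set by the trade-off parameter lam:
#   g_i / ((1 + p_i * g_i) ln 2) = lam  =>  p_i = 1/(lam ln 2) - 1/g_i.
import numpy as np

g = np.array([2.0, 1.0, 0.5, 0.1])       # channel-to-noise ratios per subcarrier

def allocate(lam):
    p = np.maximum(0.0, 1.0 / (lam * np.log(2)) - 1.0 / g)
    rate = np.sum(np.log2(1.0 + p * g))
    return p, rate, p.sum()

for lam in (0.2, 1.0):                   # small lam favors SE, large lam favors EE
    p, rate, total = allocate(lam)
    print(f"lam={lam}: rate={rate:.2f} bit/s/Hz, power={total:.2f}")
```

Sweeping `lam` traces out the EE-SE Pareto frontier: raising it lowers both the rate and the spent power, shifting priority from spectral toward energy efficiency.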

  18. Energy Efficiency - Spectral Efficiency Trade-off: A Multiobjective Optimization Approach

    KAUST Repository

    Amin, Osama; Bedeer, Ebrahim; Ahmed, Mohamed; Dobre, Octavia

    2015-01-01

    In this paper, we consider the resource allocation problem for the energy efficiency (EE) - spectral efficiency (SE) trade-off. Unlike traditional research that uses the EE as an objective function and imposes constraints either on the SE or the achievable rate, we propose a multiobjective optimization approach that can flexibly switch between the EE and SE functions or change the priority level of each function using a trade-off parameter. Our dynamic approach is more tractable than the conventional approaches and more convenient for realistic communication applications and scenarios. We prove that the multiobjective optimization of the EE and SE is equivalent to a simple problem that maximizes the achievable rate/SE and minimizes the total power consumption. We then apply the generalized resource allocation framework for the EE-SE trade-off to optimally allocate the subcarriers’ power for orthogonal frequency division multiplexing (OFDM) with imperfect channel estimation. Finally, we use numerical results to discuss the choice of the trade-off parameter and study the effects of the estimation error, the transmission power budget, and the channel-to-noise ratio on the multiobjective optimization.

  19. Numerical optimization approach for resonant electromagnetic vibration transducer designed for random vibration

    International Nuclear Information System (INIS)

    Spreemann, Dirk; Hoffmann, Daniel; Folkmer, Bernd; Manoli, Yiannos

    2008-01-01

    This paper presents a design and optimization strategy for resonant electromagnetic vibration energy harvesting devices. An analytic expression for the magnetic field of cylindrical permanent magnets is used to build up an electromagnetic subsystem model. This subsystem is used to find the optimal resting position of the oscillating mass and to optimize the geometrical parameters (shape and size) of the magnet and coil. The objective function is the maximum voltage output of the transducer. An additional mechanical subsystem model, based on well-known equations describing the dynamics of spring–mass–damper systems, is established to simulate both nonlinear spring characteristics and the effect of internal limit stops. The mechanical subsystem enables the identification of optimal spring characteristics for realistic operating conditions such as stochastic vibrations. With the overall transducer model, a combination of both subsystems connected to a simple electrical circuit, a virtual operation of the optimized vibration transducer excited by a measured random acceleration profile can be performed. It is shown that the optimization approach results in an appreciable increase in converter performance.
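The mechanical subsystem can be sketched as a base-excited spring-mass-damper with limit stops modeled as a stiff one-sided penalty spring. All parameter values below are illustrative, not a fitted device, and the base acceleration is synthetic white noise rather than a measured profile.

```python
# Sketch: seismic mass relative displacement z under random base acceleration,
# integrated with semi-implicit Euler; hard stops enter as a stiff extra spring.
import numpy as np

rng = np.random.default_rng(1)
m, k, d = 2e-3, 80.0, 0.02           # kg, N/m, N*s/m  (f0 ~ 32 Hz)
z_max = 2e-3                         # m, free travel before the limit stops
k_stop = 1e4                         # N/m, penalty spring beyond the stops
dt, steps = 1e-4, 50_000
accel = rng.normal(0.0, 5.0, steps)  # m/s^2, synthetic random base excitation

z, v = 0.0, 0.0
history = np.empty(steps)
for i in range(steps):
    # restoring force from the stops only while the travel limit is exceeded
    f_stop = -k_stop * (abs(z) - z_max) * np.sign(z) if abs(z) > z_max else 0.0
    a = -accel[i] - (d / m) * v - (k / m) * z + f_stop / m
    v += a * dt                      # semi-implicit Euler: update v, then z
    z += v * dt
    history[i] = z

print("peak |z| (mm):", 1e3 * np.abs(history).max())
```

In the full transducer model the relative velocity history would drive the electromagnetic subsystem (induced voltage proportional to velocity through the coupling coefficient) to produce the voltage objective.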

  20. Optimal Route Searching with Multiple Dynamical Constraints—A Geometric Algebra Approach

    Directory of Open Access Journals (Sweden)

    Dongshuang Li

    2018-05-01

    Full Text Available The process of searching for a dynamically constrained optimal path has received increasing attention in traffic planning, evacuation, and personalized or collaborative traffic services. As most existing multiple constrained optimal path (MCOP) methods cannot search for a path given various types of constraints that dynamically change during the search, few approaches for the dynamic multiple constrained optimal path (DMCOP) problem with type II dynamics are available for practical use. In this study, we develop a method to solve the DMCOP problem with type II dynamics based on the unification of various types of constraints under a geometric algebra (GA) framework. In our method, the network topology and three different types of constraints are represented using algebraic base coding. With a parameterized optimization of the MCOP algorithm based on a greedy search strategy under the generation-refinement paradigm, the algorithm is found to accurately support the discovery of optimal paths as constraints on numerical values, nodes, and route structure types are dynamically added to the network. The algorithm was tested with simulated cases of optimal tourism route searches in China’s road networks with various combinations of constraints. The case study indicates that our algorithm can not only solve the DMCOP problem with different types of constraints but also use the constraints to speed up route filtering.
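A constrained route search in the spirit of the MCOP problem can be sketched with a label-setting variant of Dijkstra's algorithm: minimize distance subject to a numeric budget (travel time) and a node constraint (a closed node). The toy network and constraint values are invented for the example, and the GA-based constraint encoding of the paper is not reproduced here.

```python
# Sketch: Dijkstra over (node, time-used) labels; numeric and node constraints
# prune labels during the search rather than filtering finished paths.
import heapq

# edges: node -> [(neighbor, distance, time)]
graph = {
    "A": [("B", 2, 1), ("C", 1, 4)],
    "B": [("C", 1, 1), ("D", 5, 1)],
    "C": [("D", 2, 2)],
    "D": [],
}

def constrained_shortest(src, dst, time_budget, closed=frozenset()):
    best = {}                              # (node, time_used) -> best distance
    pq = [(0, 0, src, [src])]              # (distance, time_used, node, path)
    while pq:
        dist, used, node, path = heapq.heappop(pq)
        if node == dst:
            return dist, path              # first pop of dst is distance-optimal
        for nxt, d, t in graph[node]:
            if nxt in closed or used + t > time_budget:
                continue                   # violates a node or numeric constraint
            state = (nxt, used + t)
            if best.get(state, float("inf")) <= dist + d:
                continue
            best[state] = dist + d
            heapq.heappush(pq, (dist + d, used + t, nxt, path + [nxt]))
    return None                            # no feasible route

print(constrained_shortest("A", "D", time_budget=5))
print(constrained_shortest("A", "D", time_budget=6, closed={"B"}))
```

Note how the constraints reshape the answer: with a time budget of 5 the direct-but-slow route through C alone is infeasible, while closing node B forces it once the budget allows.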