WorldWideScience

Sample records for determine optimal placement

  1. Determination of anatomic landmarks for optimal placement in captive-bolt euthanasia of goats.

    Science.gov (United States)

    Plummer, Paul J; Shearer, Jan K; Kleinhenz, Katie E; Shearer, Leslie C

    2018-03-01

    OBJECTIVE To determine the optimal anatomic site and directional aim of a penetrating captive bolt (PCB) for euthanasia of goats. SAMPLE 8 skulls from horned and polled goat cadavers and 10 anesthetized horned and polled goats scheduled to be euthanized at the end of a teaching laboratory. PROCEDURES Sagittal sections of cadaver skulls from 8 horned and polled goats were used to determine the ideal anatomic site and aiming of a PCB to maximize damage to the midbrain region of the brainstem for euthanasia. Anatomic sites for ideal placement and directional aiming were confirmed by use of 10 anesthetized horned and polled goats. RESULTS Clinical observation and postmortem examination of the sagittal sections of skulls from the 10 anesthetized goats that were euthanized confirmed that perpendicular placement and firing of a PCB at the intersection of 2 lines, each drawn from the lateral canthus of 1 eye to the middle of the base of the opposite ear, resulted in consistent disruption of the midbrain and thalamus in all goats. Immediate cessation of breathing, followed by a loss of heartbeat in all 10 of the anesthetized goats, confirmed that use of this site consistently resulted in effective euthanasia. CONCLUSIONS AND CLINICAL RELEVANCE Damage to the brainstem and key adjacent structures may be accomplished by firing a PCB perpendicular to the skull over the anatomic site identified at the intersection of 2 lines, each drawn from the lateral canthus of 1 eye to the middle of the base of the opposite ear.

  2. Optimal placement of capacitors

    Directory of Open Access Journals (Sweden)

    N. Gnanasekaran

    2016-06-01

    Optimal sizing and location of shunt capacitors in the distribution system play a significant role in minimizing the energy loss and the cost of reactive power compensation. This paper presents a new efficient technique to find the optimal size and location of shunt capacitors with the objective of minimizing the cost due to energy loss and reactive power compensation of the distribution system. A new Shark Smell Optimization (SSO) algorithm is proposed to solve the optimal capacitor placement problem while satisfying the operating constraints. The SSO algorithm is a recently developed metaheuristic optimization algorithm conceptualized from the shark's hunting ability. It uses a momentum-incorporated gradient search and a rotational-movement-based local search for optimization. To demonstrate the applicability of the proposed method, it is tested on IEEE 34-bus and 118-bus radial distribution systems. The simulation results are compared with previous methods reported in the literature and found to be encouraging.
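    The cost objective described in this record typically takes a form like the following (a generic sketch with assumed notation, not the paper's exact formulation), trading the cost of energy lost against the fixed and per-kvar cost of the installed capacitors:

```latex
% Generic capacitor-placement cost objective (assumed notation):
\min \; C \;=\; K_e \, T \, P_{\mathrm{loss}}
\;+\; \sum_{j \in \mathcal{C}} \bigl( K_{f} + K_{q}\, Q_{c,j} \bigr)
```

    Here $K_e$ is the energy price, $T$ the planning horizon, $P_{\mathrm{loss}}$ the total real power loss after compensation, $\mathcal{C}$ the set of candidate buses, and $Q_{c,j}$ the capacitor size installed at bus $j$.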

  3. Derivative load voltage and particle swarm optimization to determine optimum sizing and placement of shunt capacitor in improving line losses

    Directory of Open Access Journals (Sweden)

    Mohamed Milad Baiek

    2016-12-01

    The purpose of this research is to study the optimal size and placement of a shunt capacitor in order to minimize line losses. The derivative of the load bus voltage was calculated to identify the sensitive load buses that are candidates for shunt capacitor placement. Particle swarm optimization (PSO) was applied to the IEEE 14-bus power system to find the optimum size of the shunt capacitor for reducing line losses, with an objective function used to determine the proper placement of the capacitor while satisfying the constraints. The simulation was run in Matlab under two scenarios, namely the base case and a 100% load increase. The derivative of the load bus voltage was used to determine the most sensitive load bus, and PSO was carried out to determine the optimum capacitor size at that bus. The results show that the most sensitive bus was bus 14 for both the base case and the 100% load increase. The optimum size was 8.17 Mvar for the base case and 23.98 Mvar for the 100% load increase. Line losses were reduced by approximately 0.98% for the base case and by about 3.16% with the load increased by 100%. The proposed method also gave better results than the harmony search algorithm (HSA): HSA achieved a loss reduction ratio of about 0.44% for the base case and 2.67% when the load was increased by 100%, while PSO achieved about 1.12% and 4.02%, respectively. The results support the previous study, and it is concluded that PSO can solve such engineering problems and determine shunt capacitor sizing on the power system simply and accurately compared with other evolutionary optimization methods.
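    As a rough illustration of the kind of single-variable PSO search described above (a minimal sketch with an assumed, simplified loss model and made-up constants, not the authors' IEEE 14-bus implementation):

```python
import random

# Simplified stand-in loss model: loss grows with the square of the
# uncompensated reactive flow through one branch (not the IEEE 14-bus model).
def line_loss(q_c, q_load=30.0, r_pu=0.02, v_pu=1.0):
    return r_pu * (q_load - q_c) ** 2 / v_pu**2

def pso_capacitor_size(n_particles=20, iters=100, q_max=40.0,
                       w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO search for one shunt-capacitor size (Mvar)."""
    pos = [random.uniform(0.0, q_max) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                      # best position seen by each particle
    gbest = min(pos, key=line_loss)     # best position seen by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], 0.0), q_max)
            if line_loss(pos[i]) < line_loss(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=line_loss)
    return gbest

if __name__ == "__main__":
    print("Optimal capacitor size (Mvar):", round(pso_capacitor_size(), 2))
```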

  4. Comparison of metaheuristic techniques to determine optimal placement of biomass power plants

    International Nuclear Information System (INIS)

    Reche-Lopez, P.; Ruiz-Reyes, N.; Garcia Galan, S.; Jurado, F.

    2009-01-01

    This paper deals with the application and comparison of several metaheuristic techniques to optimize the placement and supply area of biomass-fueled power plants. Both trajectory-based and population-based methods are applied to this goal. In particular, two well-known trajectory methods, Simulated Annealing (SA) and Tabu Search (TS), and two commonly used population-based methods, Genetic Algorithms (GA) and Particle Swarm Optimization (PSO), are considered. In addition, a new binary PSO algorithm is proposed, which incorporates an inertia weight factor as in the classical continuous approach. The fitness function for the metaheuristics is the profitability index, defined as the ratio between the net present value and the initial investment. In this work, forest residues are considered as the biomass source, and the problem constraints are that the generation system must be located inside the supply area and that its maximum electric power is 5 MW. The comparative results obtained by all considered metaheuristics are discussed. A random walk has also been assessed for the problem at hand.

  5. Comparison of metaheuristic techniques to determine optimal placement of biomass power plants

    Energy Technology Data Exchange (ETDEWEB)

    Reche-Lopez, P.; Ruiz-Reyes, N.; Garcia Galan, S. [Telecommunication Engineering Department, University of Jaen Polytechnic School, C/ Alfonso X el Sabio 28, 23700 Linares, Jaen (Spain); Jurado, F. [Electrical Engineering Department, University of Jaen Polytechnic School, C/ Alfonso X el Sabio 28, 23700 Linares, Jaen (Spain)

    2009-08-15

    This paper deals with the application and comparison of several metaheuristic techniques to optimize the placement and supply area of biomass-fueled power plants. Both trajectory-based and population-based methods are applied to this goal. In particular, two well-known trajectory methods, Simulated Annealing (SA) and Tabu Search (TS), and two commonly used population-based methods, Genetic Algorithms (GA) and Particle Swarm Optimization (PSO), are considered. In addition, a new binary PSO algorithm is proposed, which incorporates an inertia weight factor as in the classical continuous approach. The fitness function for the metaheuristics is the profitability index, defined as the ratio between the net present value and the initial investment. In this work, forest residues are considered as the biomass source, and the problem constraints are that the generation system must be located inside the supply area and that its maximum electric power is 5 MW. The comparative results obtained by all considered metaheuristics are discussed. A random walk has also been assessed for the problem at hand.

  6. Right ventricle/interventricular septum electrophysiological anatomy (determination of optimal right ventricular lead placement)

    Directory of Open Access Journals (Sweden)

    М. В. Диденко

    2015-10-01

    Notwithstanding a theoretically justified lead placement into the interventricular septum (IVS), data from clinical trials demonstrate somewhat controversial results. One of these controversies is the absence of consolidated criteria for positioning the electrode to deliver pacing from the IVS area. The study describes anatomic features of the RV and IVS with respect to the cardiac conduction system, normal ventricular excitation and electrode implantation techniques for continuous pacing. A comparative study of 73 cadaver heart specimens was carried out using electro-anatomic 3D mapping of the heart, X-ray examination, computed tomography, and morphological and morphometric investigation. It was found that the middle part of the IVS, in the septomarginal trabecula zone, could be considered the best site for continuous pacing. The criteria for implanting the RV lead in this zone were determined.

  7. Optimization of portal placement for endoscopic calcaneoplasty

    NARCIS (Netherlands)

    van Sterkenburg, Maayke N.; Groot, Minke; Sierevelt, Inger N.; Spennacchio, Pietro A.; Kerkhoffs, Gino M. M. J.; van Dijk, C. Niek

    2011-01-01

    The purpose of our study was to determine an anatomic landmark to help locate portals in endoscopic calcaneoplasty. The device for optimal portal placement (DOPP) was developed to measure the distance from the distal fibula tip to the calcaneus (DFC) in 28 volunteers to determine the location of the

  8. Optimal Product Placement.

    Science.gov (United States)

    Hsu, Chia-Ling; Matta, Rafael; Popov, Sergey V; Sogo, Takeharu

    2017-01-01

    We model a market, such as an online software market, in which an intermediary connects sellers and buyers by displaying sellers' products. With two vertically differentiated products, an intermediary can place either: (1) one product, not necessarily the better one, on the first page, and the other hidden on the second page; or (2) both products on the first page. We show that it can be optimal for the intermediary to obfuscate a product (possibly the better one), since this weakens price competition and allows the sellers to extract a greater surplus from buyers; however, it is not socially optimal. The choice of which product to obfuscate depends on the distribution of search costs.

  9. Optimal Placement of Cerebral Oximeter Monitors to Avoid the Frontal Sinus as Determined by Computed Tomography.

    Science.gov (United States)

    Gregory, Alexander J; Hatem, Muhammed A; Yee, Kevin; Grocott, Hilary P

    2016-01-01

    To determine the optimal location to place cerebral oximeter optodes to avoid the frontal sinus, using the orbit of the skull as a landmark. Retrospective observational study. Academic hospital. Fifty adult patients with previously acquired computed tomography angiography scans of the head. The distance between the superior orbit of the skull and the most superior edge of the frontal sinus was measured using imaging software. The mean (SD) frontal sinus height was 16.4 (7.2) mm. There was a nonsignificant trend toward larger frontal sinus height in men compared with women (p = 0.12). Age, height, and body surface area did not correlate with frontal sinus height. Head circumference was positively correlated (r = 0.32; p = 0.03) with frontal sinus height, with a low level of predictability based on linear regression (R² = 0.10; p = 0.02). Placing cerebral oximeter optodes >3 cm from the superior rim of the orbit will avoid the frontal sinus in >98% of patients. Predicting the frontal sinus height based on common patient variables is difficult. Additional studies are required to evaluate the recommended height in pediatric populations and patients of various ethnic backgrounds. The clinical relevance of avoiding the frontal sinus also needs to be further elucidated.
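    A quick back-of-the-envelope check of the reported rule, assuming (as an approximation not made in the study itself) that frontal sinus height is roughly normally distributed with the reported mean of 16.4 mm and SD of 7.2 mm:

```python
from statistics import NormalDist

# Fraction of patients whose frontal sinus height stays below 30 mm,
# assuming an approximately normal distribution (mean 16.4 mm, SD 7.2 mm).
sinus_height = NormalDist(mu=16.4, sigma=7.2)
print(f"P(height < 30 mm) ≈ {sinus_height.cdf(30.0):.3f}")  # ≈ 0.97
```

    The study's >98% figure comes from the measured data rather than this normal approximation, so the two values are close but not identical.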

  10. Sensor Placement Optimization using Chama

    Energy Technology Data Exchange (ETDEWEB)

    Klise, Katherine A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Geotechnology and Engineering Dept.; Nicholson, Bethany L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Discrete Math and Optimization Dept.; Laird, Carl Damon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Discrete Math and Optimization Dept.

    2017-10-01

    Continuous or regularly scheduled monitoring has the potential to quickly identify changes in the environment. However, even with low-cost sensors, only a limited number of sensors can be deployed. The physical placement of these sensors, along with the sensor technology and operating conditions, can have a large impact on the performance of a monitoring strategy. Chama is an open source Python package which includes mixed-integer, stochastic programming formulations to determine sensor locations and technology that maximize monitoring effectiveness. The methods in Chama are general and can be applied to a wide range of applications. Chama is currently being used to design sensor networks to monitor airborne pollutants and to monitor water quality in water distribution systems. The following documentation includes installation instructions and examples, a description of software features, and the software license. The software is intended to be used by regulatory agencies, industry, and the research community. It is assumed that the reader is familiar with the Python programming language. References are included for additional background on software components. Online documentation, hosted at http://chama.readthedocs.io/, will be updated as new features are added. The online version includes API documentation.
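    The mixed-integer flavor of problem that such a tool solves can be illustrated with a small stand-alone model (a sketch using PuLP with hypothetical scenario data; it does not use Chama's actual API): choose a limited number of sensor locations to minimize the total detection impact over a set of scenarios.

```python
import pulp

# Hypothetical data: impact[s][l] = damage incurred if scenario s is first
# detected by a sensor at location l (lower is better).
scenarios = ["s1", "s2", "s3"]
locations = ["l1", "l2", "l3", "l4"]
impact = {"s1": {"l1": 5, "l2": 9, "l3": 2, "l4": 7},
          "s2": {"l1": 8, "l2": 3, "l3": 6, "l4": 4},
          "s3": {"l1": 6, "l2": 7, "l3": 9, "l4": 1}}
budget = 2  # number of sensors that can be deployed

prob = pulp.LpProblem("sensor_placement", pulp.LpMinimize)
place = pulp.LpVariable.dicts("place", locations, cat="Binary")
assign = pulp.LpVariable.dicts(
    "assign", [(s, l) for s in scenarios for l in locations], cat="Binary")

# Objective: total impact of the sensor credited with detecting each scenario.
prob += pulp.lpSum(impact[s][l] * assign[(s, l)]
                   for s in scenarios for l in locations)
# Each scenario is credited to exactly one placed sensor location.
for s in scenarios:
    prob += pulp.lpSum(assign[(s, l)] for l in locations) == 1
    for l in locations:
        prob += assign[(s, l)] <= place[l]
# Respect the sensor budget.
prob += pulp.lpSum(place[l] for l in locations) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("Chosen locations:", [l for l in locations if place[l].value() == 1])
```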

  11. RJMCMC based Text Placement to Optimize Label Placement and Quantity

    Science.gov (United States)

    Touya, Guillaume; Chassin, Thibaud

    2018-05-01

    Label placement is a tedious task in map design, and its automation has long been a goal for researchers in cartography, but also in computational geometry. Methods that search for an optimal or nearly optimal solution satisfying a set of constraints, such as avoiding label overlaps, have been proposed in the literature. Most of these methods focus on finding the optimal position for a given set of labels, but rarely allow the removal of labels as part of the optimization. This paper proposes to apply an optimization technique called Reversible-Jump Markov Chain Monte Carlo, which makes it easy to model the removal or addition of labels during the optimization iterations. The method, quite preliminary for now, is tested on a real dataset, and the first results are encouraging.

  12. Optimal DG placement in deregulated electricity market

    International Nuclear Information System (INIS)

    Gautam, Durga; Mithulananthan, Nadarajah

    2007-01-01

    This paper presents two new methodologies for optimal placement of distributed generation (DG) in an optimal power flow (OPF) based wholesale electricity market. DG is assumed to participate in the real-time wholesale electricity market. The problem of optimal placement, including size, is formulated for two different objectives, namely social welfare maximization and profit maximization. The candidate locations for DG placement are identified on the basis of locational marginal price (LMP). Obtained as the Lagrangian multiplier associated with the active power flow equation for each node, the LMP gives the short-run marginal cost (SRMC) of electricity. Consumer payment, evaluated as the product of LMP and load at each load bus, is proposed as another ranking to identify candidate nodes for DG placement. The proposed rankings bridge the engineering aspects of system operation and the economic aspects of market operation and act as good indicators for the placement of DG, especially in a market environment. In order to represent the variety of DGs available in the market, several cost characteristics are assumed. For each DG cost characteristic, an optimal placement and size is identified for each of the objectives. The proposed methodology is tested on a modified IEEE 14-bus test system.

  13. Brocade: Optimal flow placement in SDN networks

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Today's networks pose several challenges to network providers. These challenges span a variety of areas, ranging from determining efficient utilization of network bandwidth to finding out which user applications consume the majority of network resources, as well as how to protect a given network from volumetric and botnet attacks. Optimal placement of flows deals with identifying network issues and addressing them in real time. The overall solution helps in building new services where the network is more secure and more efficient. The resulting benefits are increased network efficiency due to better capacity and resource planning, better security with real-time threat mitigation, and improved user experience as a result of increased service velocity.

  14. Optimal PMU Placement with Uncertainty Using Pareto Method

    Directory of Open Access Journals (Sweden)

    A. Ketabi

    2012-01-01

    This paper proposes a method for optimal placement of Phasor Measurement Units (PMUs) in state estimation considering uncertainty. State estimation is first turned into an optimization problem in which the objective function is the number of unobservable buses, determined based on Singular Value Decomposition (SVD). For the normal condition, the Differential Evolution (DE) algorithm is used to find the optimal placement of PMUs. When uncertainty is considered, a multiobjective optimization problem is formulated, and a DE algorithm based on the Pareto optimality method is proposed to solve it. The suggested strategy is applied to the IEEE 30-bus test system in several case studies to evaluate the optimal PMU placement.

  15. Optimal PMU Placement By Improved Particle Swarm Optimization

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Liu, Leo; Chen, Zhe

    2013-01-01

    This paper presents an improved binary particle swarm optimization (IBPSO) technique for optimal phasor measurement unit (PMU) placement in a power network for complete system observability. Various effective improvements have been proposed to enhance the efficiency and convergence rate of the conventional particle swarm optimization method. The proposed IBPSO method ensures optimal PMU placement with and without consideration of zero-injection measurements. The proposed method has been applied to standard test systems such as a 17-bus system and the IEEE 24-bus, IEEE 30-bus, New England 39-bus and IEEE 57-bus systems ...

  16. SPOT - A Sensor Placement Optimization Tool for ...

    Science.gov (United States)

    This paper presents SPOT, a Sensor Placement Optimization Tool. SPOT provides a toolkit that facilitates research in sensor placement optimization and enables the practical application of sensor placement solvers to real-world CWS design applications. This paper provides an overview of SPOT's key features, and then illustrates how this tool can be flexibly applied to solve a variety of different types of sensor placement problems.

  17. An Aggregated Optimization Model for Multi-Head SMD Placements

    NARCIS (Netherlands)

    Ashayeri, J.; Ma, N.; Sotirov, R.

    2010-01-01

    In this article we propose an aggregate optimization approach by formulating the multi-head SMD placement optimization problem as a mixed integer program (MIP) with variables based on batches of components. This MIP is tractable and effective in balancing workload among placement heads, ...

  18. An aggregated optimization model for multi-head SMD placements

    NARCIS (Netherlands)

    Ashayeri, J.; Ma, N.; Sotirov, R.

    2011-01-01

    In this article we propose an aggregate optimization approach by formulating the multi-head SMD placement optimization problem as a mixed integer program (MIP) with variables based on batches of components. This MIP is tractable and effective in balancing workload among placement heads, ...

  19. Binary cuckoo search based optimal PMU placement scheme for ...

    African Journals Online (AJOL)

    without including the zero-injection effect, an Optimal PMU Placement strategy considering ... in Indian power grid - A case study, Frontiers in Energy, Vol. ... optimization approach, Proceedings: International Conference on Intelligent Systems ...

  20. Two-Phase Algorithm for Optimal Camera Placement

    Directory of Open Access Journals (Sweden)

    Jun-Woo Ahn

    2016-01-01

    As markers for visual sensor networks have become larger, interest in the optimal camera placement problem has continued to increase. The most featured solution for the optimal camera placement problem is based on binary integer programming (BIP). Due to the NP-hard characteristic of the optimal camera placement problem, however, it is difficult to find a solution for a complex, real-world problem using BIP. Many approximation algorithms have been developed to solve this problem. In this paper, a two-phase algorithm is proposed as an approximation algorithm based on BIP that can solve the optimal camera placement problem for a placement space larger than in current studies. This study solves the problem in three-dimensional space for a real-world structure.

  1. Bond graph to digraph conversion: A sensor placement optimization ...

    Indian Academy of Sciences (India)

    In this paper, we consider the optimal sensor placement problem for ... is due to the fact that the construction is generally done from the state equations ... The Bond Graph (BG) tool defined in Paynter (1961) ... Sensor placement and structural problem formulation ... Thus the obtained four matrices are as follows: ...

  2. Optimal Sensor Placement for Latticed Shell Structure Based on an Improved Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Xun Zhang

    2014-01-01

    Optimal sensor placement is a key issue in the structural health monitoring of large-scale structures. However, some aspects of existing approaches require improvement, such as the empirical and unreliable selection of mode and sensor numbers and time-consuming computation. A novel improved particle swarm optimization (IPSO) algorithm is proposed to address these problems. The approach first employs the cumulative effective modal mass participation ratio to select the mode number. Three strategies are then adopted to improve the PSO algorithm. Finally, the IPSO algorithm is utilized to determine the optimal number of sensors and their configurations. A case study of a latticed shell model is implemented to verify the feasibility of the proposed algorithm and of four different PSO algorithms. The effective independence method is also used as a comparison. The comparison results show that the optimal placement schemes obtained by the PSO algorithms are valid, and the proposed IPSO algorithm shows better convergence speed and precision.

  3. Optimal experimental design for placement of boreholes

    Science.gov (United States)

    Padalkina, Kateryna; Bücker, H. Martin; Seidler, Ralf; Rath, Volker; Marquart, Gabriele; Niederau, Jan; Herty, Michael

    2014-05-01

    Drilling for deep resources is an expensive endeavor, and finding the optimal drilling location for boreholes is one of the challenging questions. We contribute to this discussion by using a simulation-based assessment of possible future borehole locations. We study the problem of finding a new borehole location in a given geothermal reservoir in terms of a numerical optimization problem. In a geothermal reservoir, the temporal and spatial distribution of temperature and hydraulic pressure may be simulated using the coupled differential equations for heat transport and for mass and momentum conservation for Darcy flow. Within this model, the permeability and thermal conductivity depend on the geological layers present in the subsurface model of the reservoir. In general, those values involve some uncertainty, making it difficult to predict the actual heat source in the ground. Within optimal experimental design, the question is at which location and to which depth to drill the borehole in order to estimate conductivity and permeability with minimal uncertainty. We introduce a measure for computing the uncertainty based on simulations of the coupled differential equations. The measure is based on the Fisher information matrix of temperature data obtained through the simulations, where we assume that the temperature data are available along the full borehole. A minimization of the measure representing the uncertainty in the unknown permeability and conductivity parameters is performed to determine the optimal borehole location. We present the theoretical framework as well as numerical results for several 2D subsurface models including up to six geological layers. The effect of unknown layers on the introduced measure is also studied. Finally, to obtain a more realistic estimate of optimal borehole locations, we couple the optimization to a cost model for deep drilling problems.
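    The uncertainty measure described here is, in essence, built from the sensitivity of the simulated borehole temperatures to the unknown parameters; a generic form of such a criterion (notation assumed, not taken from the paper) is:

```latex
% Sensitivities of simulated temperatures T_i along a candidate borehole at
% location x with respect to the unknown parameters p = (permeability, conductivity):
J_{ij}(x) = \frac{\partial T_i(x)}{\partial p_j}, \qquad
F(x) = J(x)^{\top} \Sigma^{-1} J(x)
% A D-optimal design picks the borehole location that maximizes det F(x),
% i.e. minimizes the volume of the parameter confidence ellipsoid:
x^{\ast} = \arg\max_{x} \; \det F(x)
```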

  4. Optimal placement of distributed generation in distribution networks ...

    African Journals Online (AJOL)

    This paper proposes the application of the Particle Swarm Optimization (PSO) technique to find the optimal size and location for the placement of DG in radial distribution networks for active power compensation, by reduction in real power losses and enhancement of the voltage profile. In the first segment, the optimal ...

  5. Computer modeling for optimal placement of gloveboxes

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.; Olivas, J.D. [Los Alamos National Lab., NM (United States); Finch, P.R. [New Mexico State Univ., Las Cruces, NM (United States)

    1997-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.

  6. Computer modeling for optimal placement of gloveboxes

    International Nuclear Information System (INIS)

    Hench, K.W.; Olivas, J.D.; Finch, P.R.

    1997-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.

  7. Determining the brand awareness of product placement in video games

    OpenAIRE

    Král, Marek

    2015-01-01

    This bachelor thesis focuses on determining the brand awareness of product placement in video games. The theoretical part includes information about marketing, product placement and video games. The practical part consists of an evaluation of market research on product placements in video games. The conclusion suggests the most important factors influencing the level of brand awareness.

  8. Structural damage detection-oriented multi-type sensor placement with multi-objective optimization

    Science.gov (United States)

    Lin, Jian-Fu; Xu, You-Lin; Law, Siu-Seong

    2018-05-01

    A structural damage detection-oriented multi-type sensor placement method with multi-objective optimization is developed in this study. The multi-type response covariance sensitivity-based damage detection method is first introduced. Two objective functions for optimal sensor placement are then introduced in terms of the response covariance sensitivity and the response independence. The multi-objective optimization problem is formed by using the two objective functions, and the non-dominated sorting genetic algorithm II (NSGA-II) is adopted to find the solution for the optimal multi-type sensor placement to achieve the best structural damage detection. The proposed method is finally applied to a nine-bay three-dimensional frame structure. Numerical results show that the optimal multi-type sensor placement determined by the proposed method can avoid redundant sensors and provide satisfactory results for structural damage detection. The restriction on the number of each type of sensor in the optimization can reduce the searching space and make the proposed method more effective. Moreover, how to select the most suitable sensor placement from the Pareto solutions via the utility function and the knee point method is demonstrated in the case study.

  9. Optimal Placement Algorithms for Virtual Machines

    OpenAIRE

    Bellur, Umesh; Rao, Chetan S; SD, Madhu Kumar

    2010-01-01

    Cloud computing provides a computing platform for the users to meet their demands in an efficient, cost-effective way. Virtualization technologies are used in the clouds to aid the efficient usage of hardware. Virtual machines (VMs) are utilized to satisfy the user needs and are placed on physical machines (PMs) of the cloud for effective usage of hardware resources and electricity in the cloud. Optimizing the number of PMs used helps in cutting down the power consumption by a substantial amo...

  10. Optimal Trajectories Generation in Robotic Fiber Placement Systems

    Science.gov (United States)

    Gao, Jiuchun; Pashkevich, Anatol; Caro, Stéphane

    2017-06-01

    The paper proposes a methodology for optimal trajectory generation in robotic fiber placement systems. A strategy to tune the parameters of the optimization algorithm at hand is also introduced. The presented technique transforms the original continuous problem into a discrete one in which the time-optimal motions are generated by using dynamic programming. The developed strategy for tuning the optimization algorithm allows the computing time to be substantially reduced and yields trajectories satisfying industrial constraints. The feasibility and advantages of the proposed methodology are confirmed by an application example.

  11. Optimal Sparse Upstream Sensor Placement for Hydrokinetic Turbines

    Science.gov (United States)

    Cavagnaro, Robert; Strom, Benjamin; Ross, Hannah; Hill, Craig; Polagye, Brian

    2016-11-01

    Accurate measurement of the flow field incident upon a hydrokinetic turbine is critical for performance evaluation during testing and setting boundary conditions in simulation. Additionally, turbine controllers may leverage real-time flow measurements. Particle image velocimetry (PIV) is capable of rendering a flow field over a wide spatial domain in a controlled, laboratory environment. However, PIV's lack of suitability for natural marine environments, high cost, and intensive post-processing diminish its potential for control applications. Conversely, sensors such as acoustic Doppler velocimeters (ADVs) are designed for field deployment and real-time measurement, but over a small spatial domain. Sparsity-promoting regression analysis such as LASSO is utilized to improve the efficacy of point measurements for real-time applications by determining optimal spatial placement for a small number of ADVs using a training set of PIV velocity fields and turbine data. The study is conducted in a flume (0.8 m² cross-sectional area, 1 m/s flow) with laboratory-scale axial and cross-flow turbines. Predicted turbine performance utilizing the optimal sparse sensor network and associated regression model is compared to actual performance with corresponding PIV measurements.
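    A minimal sketch of the sparsity-promoting regression step (with synthetic stand-in data; names and dimensions are illustrative, not from the study): candidate ADV locations correspond to columns of the PIV training matrix, and the LASSO coefficients that survive identify the few locations worth instrumenting.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic training set: each row is one PIV snapshot sampled at 50
# candidate point-sensor locations; y is the turbine power in that snapshot.
n_snapshots, n_candidates = 200, 50
X = rng.normal(size=(n_snapshots, n_candidates))
true_weights = np.zeros(n_candidates)
true_weights[[3, 17, 42]] = [1.5, -0.8, 2.0]   # only 3 locations matter here
y = X @ true_weights + 0.05 * rng.normal(size=n_snapshots)

# The L1 penalty drives most coefficients to exactly zero.
model = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("Candidate locations selected for ADV placement:", selected)
```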

  12. Optimization of well placement in geothermal reservoirs using artificial intelligence

    Science.gov (United States)

    Akın, Serhat; Kok, Mustafa V.; Uraz, Irtek

    2010-06-01

    This research proposes a framework for determining the optimum location of an injection well using an inference method, artificial neural networks, and a search algorithm to create a search space and locate the global maximum. The production history of a complex carbonate geothermal reservoir (Kizildere Geothermal field, Turkey) is used to evaluate the proposed framework. Neural networks are used as a tool to replicate the behavior of commercial simulators by capturing the response of the field given a limited number of parameters such as temperature, pressure, injection location, and injection flow rate. A study of different network designs indicates that a combination of a neural network and an optimization algorithm (explicit search with variable stepping) to capture local maxima can be used to locate a region or a location for optimum well placement. Results also indicate shortcomings and possible pitfalls associated with the approach. With the flexibility of the proposed workflow, it is possible to incorporate various parameters including injection flow rate, temperature, and location. For the field of study, the optimum injection well location is found to be in the southeastern part of the field. Specific locations resulting from the workflow indicated a consistent search space, having higher values in that particular region. When studied with fixed flow rates (2500 and 4911 m³/day), a search run through the whole field located two positions in the very same region, resulting in consistent predictions. A further study carried out by incorporating the effect of different flow rates indicates that the algorithm can be run in a particular region of interest and that different flow rates may yield different locations. This analysis resulted in a new location in the same region and an optimum injection rate of 4000 m³/day. It is observed that the use of a neural network as a proxy to the numerical simulator is viable for narrowing down or locating the area of interest for
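    A minimal sketch of the proxy-model idea (synthetic stand-in data and an assumed response surface; the real study trains on Kizildere production history and a commercial simulator): fit a neural network to simulator outputs, then search its predictions over candidate injection locations.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic "simulator" runs: inputs are (x, y, injection rate); the output is
# a field-performance score with a single interior optimum (stand-in response).
X = rng.uniform([0, 0, 1000], [10, 10, 6000], size=(500, 3))
y = (-((X[:, 0] - 7) ** 2 + (X[:, 1] - 3) ** 2)
     - 1e-6 * (X[:, 2] - 4000) ** 2 + rng.normal(0, 0.1, 500))

# Neural-network proxy trained to mimic the simulator response.
proxy = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X, y)

# Exhaustive search of the proxy over candidate locations at a fixed rate.
xs, ys = np.meshgrid(np.linspace(0, 10, 41), np.linspace(0, 10, 41))
candidates = np.column_stack([xs.ravel(), ys.ravel(),
                              np.full(xs.size, 4000.0)])
best = candidates[np.argmax(proxy.predict(candidates))]
print("Predicted best injection location (x, y):", best[:2])
```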

  13. Optimal Placement of Phasor Measurement Units with New Considerations

    DEFF Research Database (Denmark)

    Su, Chi; Chen, Zhe

    2010-01-01

    Conventional phasor measurement unit (PMU) placement methods normally use the number of PMU installations as the objective function to be minimized. However, the cost of one PMU installation is not always the same in different locations; it depends on a number of factors. One of these factors is taken into account in the PMU placement method proposed in this paper: the number of branches adjacent to the buses at which PMUs are located. The concept of full topological observability is adopted and a version of the binary particle swarm optimization (PSO) algorithm is utilized. Results from ...

  14. Optimal PMU placement using topology transformation method in power systems

    Directory of Open Access Journals (Sweden)

    Nadia H.A. Rahman

    2016-09-01

    Optimal phasor measurement unit (PMU) placement involves minimizing the number of PMUs needed while ensuring that the entire power system remains completely observable. A power system is considered observable when the voltages of all buses in the power system are known. This paper proposes selection rules for a topology transformation method that involves merging a zero-injection bus with one of its neighbors. The result of the merging process is influenced by which bus is selected to merge with the zero-injection bus. The proposed method determines the best candidate bus to merge with the zero-injection bus according to three rules created in order to determine the minimum number of PMUs required for full observability of the power system. In addition, this paper also considers the case of power flow measurements. The problem is formulated as integer linear programming (ILP). The proposed method is simulated in MATLAB for different IEEE bus systems, and its operation is demonstrated using the IEEE 14-bus system. The results obtained in this paper prove the effectiveness of the proposed method, since the number of PMUs obtained is comparable with that of other available techniques.

  15. Optimal PMU placement using topology transformation method in power systems.

    Science.gov (United States)

    Rahman, Nadia H A; Zobaa, Ahmed F

    2016-09-01

    Optimal phasor measurement unit (PMU) placement involves minimizing the number of PMUs needed while ensuring that the entire power system remains completely observable. A power system is considered observable when the voltages of all buses in the power system are known. This paper proposes selection rules for a topology transformation method that involves merging a zero-injection bus with one of its neighbors. The result of the merging process is influenced by which bus is selected to merge with the zero-injection bus. The proposed method determines the best candidate bus to merge with the zero-injection bus according to three rules created in order to determine the minimum number of PMUs required for full observability of the power system. In addition, this paper also considers the case of power flow measurements. The problem is formulated as integer linear programming (ILP). The proposed method is simulated in MATLAB for different IEEE bus systems, and its operation is demonstrated using the IEEE 14-bus system. The results obtained in this paper prove the effectiveness of the proposed method, since the number of PMUs obtained is comparable with that of other available techniques.
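    The observability model underlying such ILP formulations is simple: a PMU at a bus observes that bus and all adjacent buses, so every bus must be covered by at least one PMU. A minimal stand-alone sketch on a toy 5-bus network (hypothetical topology, not the IEEE 14-bus data used in the paper), using PuLP:

```python
import pulp

# Toy 5-bus network given as an adjacency list (bus: neighbouring buses).
adj = {1: [2, 5], 2: [1, 3, 5], 3: [2, 4], 4: [3, 5], 5: [1, 2, 4]}
buses = sorted(adj)

prob = pulp.LpProblem("pmu_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("pmu", buses, cat="Binary")

# Minimize the number of PMUs installed.
prob += pulp.lpSum(x[b] for b in buses)
# Every bus must be observed by a PMU at itself or at a neighbour.
for b in buses:
    prob += x[b] + pulp.lpSum(x[n] for n in adj[b]) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("PMU buses:", [b for b in buses if x[b].value() == 1])
```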

  16. Optimization of the Document Placement in the RFID Cabinet

    Directory of Open Access Journals (Sweden)

    Kiedrowicz Maciej

    2016-01-01

    The study is devoted to the optimization of document placement in a single RFID cabinet. It has been assumed that the optimization problem means the reduction of the time needed to archive the information on all documents with RFID tags. Since the explicit form of the criterion function is unknown, regression analysis has been used to approximate it; the method uses data from a computer simulation of the process of archiving information about documents. To solve the optimization problem, a modified gradient projection method has been used.

  17. Simulation based optimization on automated fibre placement process

    Science.gov (United States)

    Lei, Shi

    2018-02-01

    In this paper, a software-simulation-based method (using Autodesk TruPlan & TruFiber) is proposed to optimize the automated fibre placement (AFP) process. Different types of manufacturability analysis are introduced to predict potential defects. Advanced fibre path generation algorithms are compared with respect to geometrically different parts. Major manufacturing data have been taken into consideration prior to tool path generation to achieve a high manufacturing success rate.

  18. A Framework for Optimizing the Placement of Tidal Turbines

    Science.gov (United States)

    Nelson, K. S.; Roberts, J.; Jones, C.; James, S. C.

    2013-12-01

    Power generation with marine hydrokinetic (MHK) current energy converters (CECs), often in the form of underwater turbines, is receiving growing global interest. Because of reasonable investment, maintenance, reliability, and environmental friendliness, this technology can contribute to national (and global) energy markets and is worthy of research investment. Furthermore, in remote areas, small-scale MHK energy from river, tidal, or ocean currents can provide a local power supply. However, little is known about the potential environmental effects of CEC operation in coastal embayments, estuaries, or rivers, or of the cumulative impacts of these devices on aquatic ecosystems over years or decades of operation. There is an urgent need for practical, accessible tools and peer-reviewed publications to help industry and regulators evaluate environmental impacts and mitigation measures, while establishing best siting and design practices. Sandia National Laboratories (SNL) and Sea Engineering, Inc. (SEI) have investigated the potential environmental impacts and performance of individual tidal energy converters (TECs) in Cobscook Bay, ME; TECs are a subset of CECs that are specifically deployed in tidal channels. Cobscook Bay is the first deployment location of Ocean Renewable Power Company's (ORPC) TidGen™ unit. One unit is currently in place, with four more to follow. Together, SNL and SEI built a coarse-grid, regional-scale model that included Cobscook Bay and all other landward embayments using the modeling platform SNL-EFDC. Within SNL-EFDC, tidal turbines are represented using a unique set of momentum extraction, turbulence generation, and turbulence dissipation equations at TEC locations. The global model was then coupled to a local-scale model centered on the proposed TEC deployment locations. An optimization framework was developed that used the refined model to determine optimal device placement locations that maximize array performance. Within the

  19. Efficient Sensor Placement Optimization Using Gradient Descent and Probabilistic Coverage

    Directory of Open Access Journals (Sweden)

    Vahab Akbarzadeh

    2014-08-01

    We propose an adaptation of the gradient descent method to optimize the position and orientation of sensors for the sensor placement problem. The novelty of the proposed method lies in the combination of gradient descent optimization with a realistic model which considers both the topography of the environment and a set of sensors with directional probabilistic sensing. The performance of this approach is compared with that of two other black-box optimization methods in terms of area coverage and processing time. Results show that our proposed method produces competitive results on smaller maps and superior results on larger maps, while requiring much less computation than the other optimization methods to which it has been compared.
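    A minimal numerical sketch of gradient-based placement with a smooth probabilistic coverage model (a simplified isotropic 2D sensing model with assumed parameters; the actual method also optimizes sensor orientation and uses terrain topography). Maximizing coverage by gradient ascent is equivalent to the gradient descent on its negative described in the record.

```python
import numpy as np

# Grid of points to cover and a smooth detection-probability model:
# p(point, sensor) = exp(-||point - sensor||^2 / (2 * range^2)).
grid = np.stack(np.meshgrid(np.linspace(0, 10, 30),
                            np.linspace(0, 10, 30)), axis=-1).reshape(-1, 2)
SENSING_RANGE = 2.0

def coverage(sensors):
    """Average probability that each grid point is seen by >= 1 sensor."""
    d2 = ((grid[:, None, :] - sensors[None, :, :]) ** 2).sum(-1)
    p_detect = np.exp(-d2 / (2 * SENSING_RANGE ** 2))
    return np.mean(1.0 - np.prod(1.0 - p_detect, axis=1))

def optimize(n_sensors=4, steps=200, lr=0.5, eps=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    sensors = rng.uniform(0, 10, size=(n_sensors, 2))
    for _ in range(steps):
        grad = np.zeros_like(sensors)
        for i in range(n_sensors):          # finite-difference gradient
            for j in range(2):
                bumped = sensors.copy()
                bumped[i, j] += eps
                grad[i, j] = (coverage(bumped) - coverage(sensors)) / eps
        sensors = np.clip(sensors + lr * grad, 0, 10)  # ascend the coverage
    return sensors

print(np.round(optimize(), 2))
```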

  20. An Optimization Model for Product Placement on Product Listing Pages

    Directory of Open Access Journals (Sweden)

    Yan-Kwang Chen

    2014-01-01

    The design of product listing pages is a key component of Website design because it has significant influence on the sales volume of a Website. This study focuses on product placement in designing product listing pages. Product placement concerns how vendors of online stores place their products on the product listing pages to maximize profit. This problem is very similar to the offline shelf management problem. Since product information on a Web page is typically communicated through text and images, visual stimuli such as color, shape, size, and spatial arrangement often have an effect on the visual attention of online shoppers and, in turn, influence their eventual purchase decisions. In view of the above, this study synthesizes the visual attention literature and the theory of shelf-space allocation to develop a mathematical programming model, solved with genetic algorithms, for finding optimal solutions to the focused issue. The validity of the model is illustrated with example problems.

  1. Influence of model errors in optimal sensor placement

    Science.gov (United States)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

    The paper investigates the role of model errors and parametric uncertainties in optimal or near-optimal sensor placements for structural health monitoring (SHM) and modal testing. The near-optimal set of measurement locations is obtained by the Information Entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of prediction error as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are first assumed; then a proposal depending on both distance and modal vectors is presented. With reference to a simple case study, the effect of model uncertainties on the results is described, and the reliability and robustness of the proposed correlation function in the case of model errors are tested on 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is considered through the use of independent assessment criteria. In conclusion, the results obtained by applying the proposed procedure to a real five-span steel footbridge are described. The proposed method also allows higher modes to be better estimated when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in the sensor positions when uncertainties occur.

  2. Optimal placement of FACTS devices using optimization techniques: A review

    Science.gov (United States)

    Gaur, Dipesh; Mathew, Lini

    2018-03-01

    Modern power systems face overloading problems, especially in transmission networks operating at their maximum limits. Today's power networks tend to become unstable and prone to collapse under disturbances. Flexible AC Transmission Systems (FACTS) provide solutions to problems such as line overloading, voltage stability, losses and power flow, and can play an important role in improving the static and dynamic performance of a power system. However, FACTS devices require high initial investment, so their location, type and rating should be optimized for maximum benefit. In this paper, different optimization methods such as Particle Swarm Optimization (PSO) and Genetic Algorithms (GA) are discussed and compared for determining the optimal location, type and rating of devices. FACTS devices such as the Thyristor Controlled Series Compensator (TCSC), Static Var Compensator (SVC) and Static Synchronous Compensator (STATCOM) are considered. The effects of these FACTS controllers on IEEE bus network parameters such as generation cost, active power loss and voltage stability have been analyzed and compared among the devices.

  3. Optimal sensor placement for modal testing on wind turbines

    Science.gov (United States)

    Schulze, Andreas; Zierath, János; Rosenow, Sven-Erik; Bockhahn, Reik; Rachholz, Roman; Woernle, Christoph

    2016-09-01

    The mechanical design of wind turbines requires a profound understanding of the dynamic behaviour. Even though highly detailed simulation models are already in use to support wind turbine design, modal testing on a real prototype is irreplaceable to identify site-specific conditions such as the stiffness of the tower foundation. Correct identification of the mode shapes of a complex mechanical structure much depends on the placement of the sensors. For operational modal analysis of a 3 MW wind turbine with a 120 m rotor on a 100 m tower developed by W2E Wind to Energy, algorithms for optimal placement of acceleration sensors are applied. The mode shapes used for the optimisation are calculated by means of a detailed flexible multibody model of the wind turbine. Among the three algorithms in this study, the genetic algorithm with weighted off-diagonal criterion yields the sensor configuration with the highest quality. The ongoing measurements on the prototype will be the basis for the development of optimised wind turbine designs.

  4. Optimal placement and sizing of multiple distributed generating units in distribution

    Directory of Open Access Journals (Sweden)

    D. Rama Prabha

    2016-06-01

    Distributed generation (DG) is becoming more important due to the increase in demand for electrical energy. DG plays a vital role in reducing real power losses and operating cost and in enhancing voltage stability, which form the objective function in this problem. This paper proposes a multi-objective technique for optimally determining the location and sizing of multiple distributed generation (DG) units in the distribution network with different load models. The loss sensitivity factor (LSF) determines the optimal placement of DGs. Invasive weed optimization (IWO) is a population-based meta-heuristic algorithm based on the behavior of weeds; this algorithm is used to find the optimal sizing of the DGs. The proposed method has been tested for different load models on IEEE 33-bus and 69-bus radial distribution systems and compared with other nature-inspired optimization methods. The simulated results illustrate the good applicability and performance of the proposed method.
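    The loss sensitivity factor used for ranking candidate buses is commonly written as follows (a standard textbook form with assumed notation; the paper may use a variant), where branch k with resistance R_k feeds bus q carrying load P_q + jQ_q at voltage V_q:

```latex
P_{\mathrm{loss},k} = \frac{P_q^{2} + Q_q^{2}}{V_q^{2}}\, R_k,
\qquad
\mathrm{LSF}(q) = \frac{\partial P_{\mathrm{loss},k}}{\partial Q_q}
               = \frac{2\, Q_q\, R_k}{V_q^{2}}
```

    Buses with the largest LSF (and typically the lowest voltages) are short-listed as candidate locations for DG or capacitor placement.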

  5. Optimal Node Placement in Underwater Wireless Sensor Networks

    KAUST Repository

    Felamban, M.

    2013-03-25

    Wireless Sensor Networks (WSN) are expected to play a vital role in the exploration and monitoring of underwater areas which are not easily reachable by humans. However, underwater communication via acoustic waves is subject to several performance limitations that are very different from those of terrestrial networks. In this paper, we investigate node placement for building an initial underwater WSN infrastructure. We formulate this problem as a nonlinear mathematical program with the objective of minimizing the total transmission loss under a given number of sensor nodes and targeted coverage volume. The obtained solution is the location of each node, represented via a truncated octahedron to fill out the 3D space. Experiments are conducted to verify the proposed formulation, which is solved using the Matlab optimization tool. Simulation is also conducted using the ns-3 simulator, and the simulation results are consistent with those obtained from the mathematical model, with less than 10% error.
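    Acoustic transmission loss of the kind being minimized is typically modeled as spreading loss plus frequency-dependent absorption; a sketch using the commonly cited Thorp absorption formula (an assumption about the loss model, since the record does not spell it out):

```python
import math

def thorp_absorption_db_per_km(f_khz):
    """Thorp's empirical absorption coefficient (dB/km), frequency in kHz."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2)
            + 44 * f2 / (4100 + f2)
            + 2.75e-4 * f2
            + 0.003)

def transmission_loss_db(distance_m, f_khz, spreading_k=1.5):
    """TL = k*10*log10(d) + alpha(f)*d/1000, with d in metres."""
    spreading = spreading_k * 10 * math.log10(distance_m)
    absorption = thorp_absorption_db_per_km(f_khz) * distance_m / 1000.0
    return spreading + absorption

print(round(transmission_loss_db(1000.0, f_khz=20.0), 1), "dB at 1 km, 20 kHz")
```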

  6. Optimized baffle and aperture placement in neutral beamlines

    International Nuclear Information System (INIS)

    Stone, R.; Duffy, T.; Vetrovec, J.

    1983-01-01

    Most neutral beamlines contain an iron-core ion-bending magnet that requires shielding between the end of the neutralizer and this magnet. This shielding allows the gas pressure to drop prior to the beam entering the magnet and therefore reduces beam losses in this drift region. We have found that the beam losses can be reduced even further by eliminating the iron-core magnet and the magnetic shielding altogether. The required bending field can be supplied by current coils without the iron poles. In addition, placement of the baffles and apertures can affect the cold gas entering the plasma region and the losses in the neutral beam due to re-ionization. In our study we varied the placement of the baffles, which determine the amount of pumping in each chamber, and the apertures, which determine the beam loss. Our results indicate that a baffle/aperture configuration can be set for either minimum cold gas into the plasma region or minimum beam losses, but not both

  7. Looking for optimal number and placement of FACTS devices to manage the transmission congestion

    International Nuclear Information System (INIS)

    Rahimzadeh, Sajad; Tavakoli Bina, Mohammad

    2011-01-01

    Some applications of FACTS devices show that they are proper and effective tools to control the technical parameters of power systems. However, determining the optimal number, location, size and type of these devices is a difficult problem. Moreover, applying a suitable objective function for the optimal placement of FACTS devices plays a very important role in the economic improvement of a power market. In this paper, the optimal placement of parallel and series FACTS devices is studied. The STATCOM is selected as the parallel FACTS device and the SSSC as the series one. The optimization problem is formulated with regard to a restructured environment, and a new objective function is defined so that its minimization can alleviate congestion and provide fairer conditions for power market participants. Moreover, an index based on the objective function value is presented to determine the optimal number of each FACTS device within a specifically designed algorithm. The power injection models for the STATCOM and SSSC are obtained by applying neural models based on the averaging technique; this model takes the converter power losses into account and produces the required PQ-phasor suitable for power system steady-state analysis. The proposed method is applied to modified IEEE 14-bus, 30-bus and 118-bus test systems and the results are analyzed.

  8. Optimal Capacitor Placement in Wind Farms by Considering Harmonics Using Discrete Lightning Search Algorithm

    Directory of Open Access Journals (Sweden)

    Reza Sirjani

    2017-09-01

    Currently, many wind farms exist throughout the world and, in some cases, supply a significant portion of energy to networks. However, numerous uncertainties remain with respect to the amount of energy generated by wind turbines and other sophisticated operational aspects, such as voltage and reactive power management, which require further development and consideration. To address the problem of poor reactive power compensation in wind farms, optimal capacitor placement has been proposed in existing wind farms as a simple and relatively inexpensive method. However, the use of induction generators, transformers, and additional capacitors represents a potential source of harmonics in the system and therefore must be taken into account at wind farms. The optimal location and size of capacitors at the buses of an 80-MW wind farm were determined according to modelled wind speed, system equivalent circuits, and harmonics in order to minimize energy losses, optimize reactive power and reduce management costs. The discrete version of the lightning search algorithm (DLSA) is a powerful and flexible nature-inspired optimization technique that was developed and implemented herein for optimal capacitor placement in wind farms. The obtained results are compared with those of the genetic algorithm (GA) and the discrete harmony search algorithm (DHSA).

  9. Optimal placement of biomass fuelled gas turbines for reduced losses

    International Nuclear Information System (INIS)

    Jurado, Francisco; Cano, Antonio

    2006-01-01

    This paper presents a method for the optimal location and sizing of biomass-fuelled gas turbine power plants. Both the profitability of using biomass and the power loss are considered in the cost function. The first step is to assess the plant size that maximizes the profitability of the project. The second step is to determine the optimal location of the gas turbines in the electric system so as to minimize the power loss of the system.

  10. Optimal Node Placement in Underwater Acoustic Sensor Network

    KAUST Repository

    Felemban, Muhamad

    2011-10-01

    Almost 70% of planet Earth is covered by water, and a large percentage of the underwater environment is unexplored. In the past two decades, there has been an increase in interest in exploring and monitoring underwater life among scientists and in industry. Underwater operations are extremely difficult due to the lack of cheap and efficient means. Recently, Wireless Sensor Networks have been introduced in underwater environment applications. However, underwater communication via acoustic waves is subject to several performance limitations, which makes the relevant research issues very different from those on land. In this thesis, we investigate node placement for building an initial Underwater Wireless Sensor Network infrastructure. Firstly, we formulate the problem as a nonlinear mathematical program with the objective of minimizing the total transmission loss for a given number of sensor nodes and targeted volume. We conducted experiments to verify the proposed formulation, which is solved using the Matlab optimization tool. We represent each node with a truncated octahedron to fill the 3D space; the truncated octahedra are tiled in the 3D space with a node at the center of each, and the locations of the nodes are given in 3D coordinates. The results are supported using the ns-3 simulator and are consistent with those obtained from the mathematical model, with less than 10% error.
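    As a rough illustration of the kind of objective minimized above, the sketch below evaluates the acoustic transmission loss between node pairs using Thorp's empirical absorption formula together with a practical spreading term; the spreading exponent, frequency and node coordinates are illustrative assumptions, not values taken from the thesis.

        import math

        def thorp_absorption_db_per_km(f_khz):
            """Thorp's empirical absorption coefficient (dB/km), frequency in kHz."""
            f2 = f_khz ** 2
            return (0.11 * f2 / (1.0 + f2)
                    + 44.0 * f2 / (4100.0 + f2)
                    + 2.75e-4 * f2
                    + 0.003)

        def transmission_loss_db(distance_m, f_khz, spreading_k=1.5):
            """Transmission loss TL = k*10*log10(d) + alpha(f)*d/1000 (dB)."""
            d = max(distance_m, 1.0)            # avoid log(0) for coincident nodes
            spreading = spreading_k * 10.0 * math.log10(d)
            absorption = thorp_absorption_db_per_km(f_khz) * d / 1000.0
            return spreading + absorption

        # Example: total loss over the communicating node pairs of a candidate layout
        nodes = [(0.0, 0.0, 0.0), (500.0, 0.0, -200.0), (0.0, 500.0, -400.0)]
        pairs = [(0, 1), (1, 2)]
        total = sum(transmission_loss_db(math.dist(nodes[i], nodes[j]), f_khz=20.0)
                    for i, j in pairs)
        print(f"total transmission loss: {total:.1f} dB")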

  11. Optimization of Placement Driven by the Cost of Wire Crossing

    National Research Council Canada - National Science Library

    Kapur, Nevin

    1997-01-01

    .... We implemented a prototype placement algorithm TOCO that minimizes the cost of wire crossing, and a universal unit-grid based placement evaluator place_eval. We have designed a number of statistical experiments to demonstrate the feasibility and the promise of the proposed approach.

  12. Determining Optimal Decision Version

    Directory of Open Access Journals (Sweden)

    Olga Ioana Amariei

    2014-06-01

    In this paper we start from the calculation of the product cost, applying the hour-machine cost (THM) method to each of the three cutting machines, namely the plasma cutting machine, the combined (plasma and water jet) cutting machine and the water jet cutting machine. Following the cost calculation, and taking into account the manufacturing precision of each machine as well as the quality of the processed surface, the optimal decision version for manufacturing the product needs to be determined. To determine it, we first calculate the optimal version under each criterion and then overall, using multi-attribute decision methods.

  13. A New Optimization Framework To Solve The Optimal Feeder Reconfiguration And Capacitor Placement Problems

    Directory of Open Access Journals (Sweden)

    Mohammad-Reza Askari

    2015-07-01

    This paper introduces a new stochastic optimization framework based on the bat algorithm (BA) to solve the optimal distribution feeder reconfiguration (DFR) problem as well as shunt capacitor placement and sizing in distribution systems. The objective functions investigated are minimization of the active power losses and minimization of the total network costs. In order to account for the uncertainties of the active and reactive loads, the point estimate method (PEM) with the 2m scheme is employed as the stochastic tool. The feasibility and good performance of the proposed method are examined on the IEEE 69-bus test system.

  14. Optimal placement of multiple types of communicating sensors with availability and coverage redundancy constraints

    Science.gov (United States)

    Vecherin, Sergey N.; Wilson, D. Keith; Pettit, Chris L.

    2010-04-01

    Determination of an optimal configuration (numbers, types, and locations) of a sensor network is an important practical problem. In most applications, complex signal propagation effects and inhomogeneous coverage preferences lead to an optimal solution that is highly irregular and nonintuitive. The general optimization problem can be strictly formulated as a binary linear programming problem. Due to the combinatorial nature of this problem, however, its strict solution requires significant computational resources (NP-complete class of complexity) and is unobtainable for large spatial grids of candidate sensor locations. For this reason, a greedy algorithm for approximate solution was recently introduced [S. N. Vecherin, D. K. Wilson, and C. L. Pettit, "Optimal sensor placement with terrain-based constraints and signal propagation effects," Unattended Ground, Sea, and Air Sensor Technologies and Applications XI, SPIE Proc. Vol. 7333, paper 73330S (2009)]. Here further extensions to the developed algorithm are presented to include such practical needs and constraints as sensor availability, coverage by multiple sensors, and wireless communication of the sensor information. Both communication and detection are considered in a probabilistic framework. Communication signal and signature propagation effects are taken into account when calculating probabilities of communication and detection. Comparison of approximate and strict solutions on reduced-size problems suggests that the approximate algorithm yields quick and good solutions, which thus justifies using that algorithm for full-size problems. Examples of three-dimensional outdoor sensor placement are provided using a terrain-based software analysis tool.
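    The greedy idea referenced above can be sketched as follows: given precomputed detection sets for each candidate location, repeatedly add the candidate that covers the most grid cells still lacking the required redundancy. This is a generic sketch of a greedy coverage heuristic under assumed data structures (the coverage sets and the redundancy parameter are illustrative), not the authors' implementation.

        def greedy_placement(coverage, n_cells, redundancy=1, max_sensors=None):
            """Greedy selection: coverage[c] is the set of grid cells candidate c detects.
            Repeatedly pick the candidate that supplies the most still-missing coverage
            until every cell is covered by `redundancy` chosen sensors (or no candidate helps)."""
            remaining = {cell: redundancy for cell in range(n_cells)}
            chosen = []
            candidates = set(coverage)
            while remaining and candidates and (max_sensors is None or len(chosen) < max_sensors):
                best = max(candidates, key=lambda c: sum(1 for cell in coverage[c] if cell in remaining))
                gain = sum(1 for cell in coverage[best] if cell in remaining)
                if gain == 0:
                    break                      # no candidate can improve coverage further
                chosen.append(best)
                candidates.remove(best)
                for cell in coverage[best]:
                    if cell in remaining:
                        remaining[cell] -= 1
                        if remaining[cell] == 0:
                            del remaining[cell]
            return chosen, remaining           # remaining is non-empty if coverage is infeasible

        # toy example: 4 candidate locations, 5 grid cells, each cell must be seen twice
        cov = {0: {0, 1, 2}, 1: {2, 3}, 2: {1, 3, 4}, 3: {0, 4}}
        print(greedy_placement(cov, n_cells=5, redundancy=2))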

  15. Optimizing placement and equalization of multiple low frequency loudspeakers in rooms

    DEFF Research Database (Denmark)

    Celestinos, Adrian; Nielsen, Sofus Birkedal

    2005-01-01

    To optimize the placement of multiple low-frequency loudspeakers in rooms, a simulation tool has been created based on finite-difference time-domain (FDTD) approximations. Simulations have shown that increasing the number of loudspeakers and modifying their placement yields a significant improvement: a more even sound pressure level distribution along a listening area is obtained. The placement of the loudspeakers has been optimized, and an equalization strategy can additionally be implemented for optimization purposes. This solution can be combined with multi-channel sound systems.

  16. A mixed integer linear programming approach for optimal DER portfolio, sizing, and placement in multi-energy microgrids

    International Nuclear Information System (INIS)

    Mashayekh, Salman; Stadler, Michael; Cardoso, Gonçalo; Heleno, Miguel

    2017-01-01

    Highlights: • This paper presents a MILP model for optimal design of multi-energy microgrids. • Our microgrid design includes optimal technology portfolio, placement, and operation. • Our model includes microgrid electrical power flow and heat transfer equations. • The case study shows advantages of our model over aggregate single-node approaches. • The case study shows the accuracy of the integrated linearized power flow model. - Abstract: Optimal microgrid design is a challenging problem, especially for multi-energy microgrids with electricity, heating, and cooling loads as well as sources, and multiple energy carriers. To address this problem, this paper presents an optimization model formulated as a mixed-integer linear program, which determines the optimal technology portfolio, the optimal technology placement, and the associated optimal dispatch, in a microgrid with multiple energy types. The developed model uses a multi-node modeling approach (as opposed to an aggregate single-node approach) that includes electrical power flow and heat flow equations, and hence, offers the ability to perform optimal siting considering physical and operational constraints of electrical and heating/cooling networks. The new model is founded on the existing optimization model DER-CAM, a state-of-the-art decision support tool for microgrid planning and design. The results of a case study that compares single-node vs. multi-node optimal design for an example microgrid show the importance of multi-node modeling. It has been shown that single-node approaches are not only incapable of optimal DER placement, but may also result in sub-optimal DER portfolio, as well as underestimation of investment costs.

  17. Improved Formulation for the Optimization of Wind Turbine Placement in a Wind Farm

    Directory of Open Access Journals (Sweden)

    Zong Woo Geem

    2013-01-01

    As an alternative to fossil fuels, wind can be considered because it is a renewable and greenhouse-gas-free natural resource. When wind power is generated by wind turbines in a wind farm, the optimal placement of the turbines is critical because different layouts produce different efficiencies. The objective of the wind turbine placement problem is to maximize the generated power while minimizing the cost of installing the turbines. This study proposes an efficient optimization formulation for the optimal layout of wind turbine placements under resource limits (e.g., number of turbines or budget) by introducing corresponding constraints. The proposed formulation gives users more convenience in considering resource and budget bounds. After performing the optimization, results were compared using two different methods (the branch-and-bound method and a genetic algorithm) and two different objective functions.

  18. MVMO-based approach for optimal placement and tuning of supplementary damping controller

    NARCIS (Netherlands)

    Rueda Torres, J.L.; Gonzalez-Longatt, F.

    2015-01-01

    This paper introduces an approach based on the Swarm Variant of the Mean-Variance Mapping Optimization (MVMO-S) to solve the multi-scenario formulation of the optimal placement and coordinated tuning of power system supplementary damping controllers (POCDCs). The effectiveness of the approach is

  19. MVMO-based approach for optimal placement and tuning of ...

    African Journals Online (AJOL)

    bus (New England) test system. Numerical results include performance comparisons with other metaheuristic optimization techniques, namely, comprehensive learning particle swarm optimization (CLPSO), genetic algorithm with multi-parent ...

  20. Multi-projector auto-calibration and placement optimization for non-planar surfaces

    Science.gov (United States)

    Li, Dong; Xie, Jinghui; Zhao, Lu; Zhou, Lijing; Weng, Dongdong

    2015-10-01

    Non-planar projection has been widely applied in virtual reality and digital entertainment and exhibitions because of its flexible layout and immersive display effects. Compared with planar projection, a non-planar projection is more difficult to achieve because projector calibration and image distortion correction are difficult processes. This paper uses a cylindrical screen as an example to present a new method for automatically calibrating a multi-projector system in a non-planar environment without using 3D reconstruction. This method corrects the geometric calibration error caused by the screen's manufactured imperfections, such as an undulating surface or a slant in the vertical plane. In addition, based on actual projection demand, this paper presents the overall performance evaluation criteria for the multi-projector system. According to these criteria, we determined the optimal placement for the projectors. This method also extends to surfaces that can be parameterized, such as spheres, ellipsoids, and paraboloids, and demonstrates a broad applicability.

  1. Optimal capacitor placement and sizing using combined fuzzy ...

    African Journals Online (AJOL)

    Then the sizing of the capacitors is modeled as an optimization problem and the objective function (loss minimization) is solved using Hybrid Particle Swarm Optimization (HPSO) technique. A case study with an IEEE 34 bus distribution feeder is presented to illustrate the applicability of the algorithm. A comparison is made ...

  2. Joint Optimization of Receiver Placement and Illuminator Selection for a Multiband Passive Radar Network.

    Science.gov (United States)

    Xie, Rui; Wan, Xianrong; Hong, Sheng; Yi, Jianxin

    2017-06-14

    The performance of a passive radar network can be greatly improved by an optimal network structure. Generally, radar network structure optimization consists of two aspects, namely the placement of receivers in suitable places and the selection of appropriate illuminators. The present study investigates the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model boils down to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm. The key of the bisection algorithm lies in solving the partition set covering problem (PSCP), which can be solved by a hybrid algorithm developed by coupling convex optimization with a greedy dropping algorithm. In the end, the performance of the proposed algorithm is validated via numerical simulations.
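    A hedged sketch of the bisection wrapper described above: the required-RCS threshold is bisected, and at each trial threshold a feasibility oracle (in the paper, a solver for the partition set covering problem) decides whether a receiver placement exists. The oracle below is a toy stand-in; only the monotone bisection logic is illustrated, not the paper's algorithm.

        def bisect_min_rcs(feasible, lo, hi, tol=0.1):
            """Find (approximately) the smallest required-RCS threshold for which a
            feasible receiver placement exists. `feasible(rcs)` must return True/False,
            e.g. by solving the partition set covering problem for that threshold."""
            assert feasible(hi), "problem must be feasible at the upper bound"
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if feasible(mid):
                    hi = mid        # a placement exists -> try a tighter requirement
                else:
                    lo = mid        # infeasible -> relax the requirement
            return hi

        # toy feasibility oracle: pretend a placement exists once the threshold exceeds 3.7
        print(bisect_min_rcs(lambda rcs: rcs >= 3.7, lo=0.0, hi=10.0))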

  3. Sensor placement optimization for structural modal identification of flexible structures using genetic algorithm

    International Nuclear Information System (INIS)

    Jung, B. K.; Cho, J. R.; Jeong, W. B.

    2015-01-01

    The position of vibration sensors influences the modal identification quality of flexible structures for a given number of sensors, and the quality of modal identification is usually estimated in terms of the correlation between the natural modes using the modal assurance criterion (MAC). Sensor placement optimization is characterized by the fact that the design variables are not continuous but discrete, implying that conventional sensitivity-driven optimization methods are not applicable. In this context, this paper presents the application of a genetic algorithm to sensor placement optimization for improving the modal identification quality of flexible structures. A discrete-type optimization problem using the genetic algorithm is formulated by defining the sensor positions and the MAC as the design variables and the objective function, respectively. The proposed GA-based evolutionary optimization method is validated through a numerical experiment with a rectangular plate, and its effectiveness is verified by comparison with cases using different modal correlation measures.
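    For reference, the MAC used as the objective above can be computed directly from the mode-shape matrix restricted to a candidate sensor set; a GA can then score each candidate set by, for example, its largest off-diagonal MAC value. The sketch below assumes a real-valued mode-shape matrix and is an illustrative fitness, not necessarily the paper's exact one.

        import numpy as np

        def mac_matrix(phi):
            """Modal Assurance Criterion between all pairs of mode shapes.
            phi: (n_dofs, n_modes) mode-shape matrix restricted to the candidate sensor DOFs.
            MAC_ij = |phi_i^T phi_j|^2 / ((phi_i^T phi_i)(phi_j^T phi_j))."""
            gram = phi.T @ phi
            norms = np.diag(gram)
            return np.abs(gram) ** 2 / np.outer(norms, norms)

        def placement_score(phi_full, sensor_dofs):
            """Fitness usable by a GA: largest off-diagonal MAC value (smaller is better,
            i.e. the measured modes are more distinguishable)."""
            mac = mac_matrix(phi_full[sensor_dofs, :])
            off = mac - np.diag(np.diag(mac))
            return off.max()

        rng = np.random.default_rng(0)
        phi = rng.standard_normal((100, 6))          # 100 candidate DOFs, 6 target modes
        print(placement_score(phi, sensor_dofs=[3, 17, 42, 58, 76, 91]))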

  4. Minimal invasive epicardial lead implantation: optimizing cardiac resynchronization with a new mapping device for epicardial lead placement.

    Science.gov (United States)

    Maessen, J G; Phelps, B; Dekker, A L A J; Dijkman, B

    2004-05-01

    To optimize resynchronization in biventricular pacing with epicardial leads, mapping to determine the best pacing site is a prerequisite. A port-access surgical mapping technique was developed that allowed multiple pace site selection and reproducible lead evaluation and implantation. Pressure-volume loop analysis was used for real-time guidance in targeting epicardial lead placement. Even the smallest changes in lead position revealed significantly different functional results. Optimizing the pacing site with this technique allowed functional improvement of up to 40% versus random pace site selection.

  5. Simulation-Based Optimization of Camera Placement in the Context of Industrial Pose Estimation

    DEFF Research Database (Denmark)

    Jørgensen, Troels Bo; Iversen, Thorbjørn Mosekjær; Lindvig, Anders Prier

    2018-01-01

    In this paper, we optimize the placement of a camera in simulation in order to achieve a high success rate for a pose estimation problem. This is achieved by simulating 2D images from a stereo camera in a virtual scene. The stereo images are then used to generate 3D point clouds based on two diff...

  6. On the use of PGD for optimal control applied to automated fibre placement

    Science.gov (United States)

    Bur, N.; Joyot, P.

    2017-10-01

    Automated Fibre Placement (AFP) is an incipient manufacturing process for composite structures. Despite its conceptual simplicity, it involves many complexities related to the necessity of melting the thermoplastic at the tape-substrate interface, ensuring consolidation (which requires the diffusion of molecules) and controlling the build-up of residual stresses responsible for the residual deformations of the formed parts. The optimisation of the process and the determination of the process window cannot be achieved in a traditional way, since that would require a plethora of trials/errors or numerical simulations, because many parameters are involved in the characterisation of the material and the process. Using reduced-order modelling, such as the so-called Proper Generalised Decomposition (PGD) method, allows the construction of a multi-parametric solution taking many parameters into account. This leads to virtual charts that can be explored on-line in real time in order to perform process optimisation or on-line simulation-based control. Thus, for a given set of parameters, determining the power leading to an optimal temperature becomes easy. However, instead of controlling the power from the known temperature field by particularizing an abacus, we propose here an approach based on optimal control: we solve by PGD a dual problem derived from the heat equation and optimality criteria. To circumvent numerical issues due to the ill-conditioned system, we propose an algorithm based on Uzawa's method. In this way we are able to solve the dual problem, setting the desired state as an extra coordinate in the PGD framework. In a single computation, we obtain both the temperature field and the heat flux required to reach a parametric optimal temperature on a given zone.

  7. MVMO-based approach for optimal placement and tuning of ...

    African Journals Online (AJOL)

    DR OKE

    differential evolution DE algorithm with adaptive crossover operator, .... x are assigned by using a sequential scheme which accounts for mean and ... the representative scenarios from probabilistic model based Monte Carlo ... Comparison of average convergence of MVMO-S with other metaheuristic optimization methods.

  8. Multi-objective PSO based optimal placement of solar power DG in radial distribution system

    Directory of Open Access Journals (Sweden)

    Mahesh Kumar

    2017-06-01

    The ever-increasing electricity demand, fossil fuel depletion and environmental issues call for the integration of renewable energy into the distribution system. The optimal planning of renewable distributed generation (DG) is essential for ensuring maximum benefits. Hence, this paper proposes the optimal placement of probabilistic solar power DG in the distribution system. Two objective functions, power loss reduction and voltage stability index improvement, are optimized, with the power balance and voltage limits kept as constraints of the problem. The non-dominated-sorting, Pareto-front-based multi-objective particle swarm optimization (MOPSO) technique is applied to the standard IEEE 33-bus radial distribution test system.

  9. Evaluation of Effective Factors on Travel Time in Optimization of Bus Stops Placement Using Genetic Algorithm

    Science.gov (United States)

    Bargegol, Iraj; Ghorbanzadeh, Mahyar; Ghasedi, Meisam; Rastbod, Mohammad

    2017-10-01

    In congested cities, locating and properly designing bus stops according to the unequal distribution of passengers is a crucial issue both economically and functionally, since it plays an important role in passengers' use of the bus system. The location of bus stops is a complicated subject: by reducing the distances between stops, walking time decreases, but the total travel time may increase. In this paper, a specified corridor in the city of Rasht in northern Iran is studied. Firstly, a new formula is presented to calculate the travel time, by which the number of stops and, consequently, the travel time can be optimized. A corridor with a specified number of stops and distances between them is addressed, the related travel time formulas are derived, and its travel time is calculated. Then the corridor is modelled using a meta-heuristic method so that the placement and the optimal spacing of its bus stops are determined. It was found that alighting and boarding time, along with bus capacity, are the factors that most affect travel time; consequently, it is better to concentrate on these factors to improve the efficiency of the bus system.
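    The paper's travel-time formula is not reproduced in this record, so the sketch below only illustrates the trade-off it exploits: with more stops the average walk to a stop shrinks while dwell time grows, so total door-to-door time has an interior minimum. All parameter values and the functional form are hypothetical.

        def corridor_travel_time(n_stops, length_m, bus_speed_ms=8.0,
                                 walk_speed_ms=1.2, dwell_s=20.0):
            """Illustrative door-to-door time (s) for a passenger on a corridor with
            evenly spaced stops: average walk to the nearest stop + in-vehicle time
            + dwell time at intermediate stops. Not the formula from the paper."""
            spacing = length_m / n_stops
            walk = (spacing / 4.0) / walk_speed_ms        # average access walk ~ spacing/4
            in_vehicle = length_m / bus_speed_ms
            dwell = dwell_s * (n_stops - 1)
            return walk + in_vehicle + dwell

        times = {n: corridor_travel_time(n, length_m=5000) for n in range(2, 21)}
        best = min(times, key=times.get)
        print(best, round(times[best], 1))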

  10. Optimal Placement of Energy Storage and Wind Power under Uncertainty

    Directory of Open Access Journals (Sweden)

    Pilar Meneses de Quevedo

    2016-07-01

    Due to the rapid growth in the amount of wind energy connected to distribution grids, these grids are exposed to tighter network constraints, which poses additional challenges to system operation. Based on regulation, the system operator has the right to curtail wind energy in order to avoid any violation of system constraints. Energy storage systems (ESS) are considered a viable solution to this problem. The aim of this paper is to provide the best locations for both ESS and wind power by optimizing distribution system costs, taking into account network constraints and the uncertainty associated with the nature of wind, load and price. To do so, we use a mixed integer linear programming (MILP) approach consisting of loss reduction, voltage improvement and minimization of generation costs. An alternating current (AC) linear optimal power flow (OPF), which employs binary variables to define the location of the generation, is implemented. The proposed stochastic MILP approach has been applied to the IEEE 69-bus distribution network, and the results show the performance of the model under different installed capacities of ESS and wind power.

  11. Optimal Colostomy Placement in Spinal Cord Injury Patients.

    Science.gov (United States)

    Xu, Jiashou; Dharmarajan, Sekhar; Johnson, Frank E

    2016-03-01

    Barring unusual circumstances, sigmoid colostomy is the optimal technique for management of defecation in spinal cord injury (SCI) patients. We sought to provide evidence that a sigmoid colostomy is not difficult to perform in SCI patients and has better long-term results. The St. Louis Department of Veterans Affairs has a Commission on Accreditation of Rehabilitation Facilities (CARF)-approved SCI Unit. We reviewed the operative notes on all SCI patients who received a colostomy for fecal management by three ASCRS-certified colorectal surgeons at the St. Louis Department of Veterans Affairs from January 1, 2007 to November 26, 2012. There were 27 operations for which the recorded indication for surgery suggested that the primary disorder was SCI. Fourteen had traumatic SCI of the thoracic and/or lumbar spine and were evaluable. Of these 14 patients, 12 had laparoscopic sigmoid colostomy and two had open sigmoid colostomy. We encountered one evaluable patient with a remarkably large amount of retroperitoneal bony debris who successfully underwent laparoscopic sigmoid colostomy. In conclusion, sigmoid colostomy is the consensus optimal procedure for fecal management in SCI patients. Laparoscopic procedures are preferred. Care providers should specify sigmoid colostomy when contacting a surgeon.

  12. Proposal for optimal placement platform of bikes using queueing networks.

    Science.gov (United States)

    Mizuno, Shinya; Iwamoto, Shogo; Seki, Mutsumi; Yamaki, Naokazu

    2016-01-01

    In recent social experiments, rental motorbikes and rental bicycles have been arranged at nodes, and environments where users can ride these bikes have been improved. When people borrow bikes, they return them to nearby nodes. Some experiments have been conducted using the models of Hamachari of Yokohama, the Niigata Rental Cycle, and Bicing. However, from these experiments, the effectiveness of distributing bikes was unclear, and many models were discontinued midway. Thus, we need to consider whether these models are effectively designed to represent the distribution system. Therefore, we construct a model to arrange the nodes for distributing bikes using a queueing network. To adopt realistic values for our model, we use the Google Maps application program interface. Thus, we can easily obtain values of distance and transit time between nodes in various places in the world. Moreover, we apply the distribution of a population to a gravity model and we compute the effective transition probability for this queueing network. If the arrangement of the nodes and number of bikes at each node is known, we can precisely design the system. We illustrate our system using convenience stores as nodes and optimize the node configuration. As a result, we can optimize simultaneously the number of nodes, node places, and number of bikes for each node, and we can construct a base for a rental cycle business to use our system.
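    A minimal sketch of the gravity-model step described above: node-to-node attraction is taken proportional to the product of the populations divided by a power of the distance, and each row is normalized into transition probabilities for the queueing network. In practice the distances would come from the Google Maps API; the values and the exponent here are illustrative assumptions.

        import numpy as np

        def gravity_transition_matrix(populations, distances, beta=2.0):
            """Row-stochastic transition matrix for a bike-sharing queueing network.
            Attraction between nodes i and j is pop_i*pop_j / d_ij**beta (gravity model);
            each row is normalized so the probabilities of leaving node i sum to 1."""
            pop = np.asarray(populations, dtype=float)
            d = np.asarray(distances, dtype=float)
            attract = np.outer(pop, pop) / np.power(d, beta, where=d > 0, out=np.ones_like(d))
            np.fill_diagonal(attract, 0.0)               # no self-transitions
            return attract / attract.sum(axis=1, keepdims=True)

        pops = [1200, 800, 1500]
        dist = [[0, 400, 900],
                [400, 0, 650],
                [900, 650, 0]]
        print(gravity_transition_matrix(pops, dist).round(3))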

  13. Optimal placement and sizing of wind / solar based DG sources in distribution system

    Science.gov (United States)

    Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng

    2017-06-01

    Proper placement and sizing of Distributed Generation (DG) in a distribution system can yield the maximum potential benefits. This paper proposes a quantum particle swarm optimization (QPSO) based placement and sizing approach for wind turbine generation units (WTGU) and photovoltaic (PV) arrays, aimed at real power loss reduction and voltage stability improvement of the distribution system. Performance models of the wind and solar generation systems are described and classified into PQ, PQ(V) and PI type models for the power flow. Considering that WTGU- and PV-based DGs in a distribution system are geographically restricted, the optimal area and the DG capacity limits of each bus in the setting area need to be defined before optimization; hence, an area optimization method is proposed. The method has been tested on the IEEE 33-bus radial distribution system to demonstrate its performance and effectiveness.

  14. Optimized Virtual Machine Placement with Traffic-Aware Balancing in Data Center Networks

    Directory of Open Access Journals (Sweden)

    Tao Chen

    2016-01-01

    Virtualization has been an efficient method to fully utilize computing resources such as servers. The way virtual machines (VMs) are placed among a large pool of servers greatly affects the performance of data center networks (DCNs). As network resources have become a main bottleneck for the performance of DCNs, we concentrate on VM placement with traffic-aware balancing to evenly utilize the links in DCNs. In this paper, we first propose a Virtual Machine Placement Problem with Traffic-Aware Balancing (VMPPTB), prove it to be NP-hard, and design a Longest Processing Time Based Placement (LPTBP) algorithm to solve it. To take advantage of communication locality, we then propose the Locality-Aware Virtual Machine Placement Problem with Traffic-Aware Balancing (LVMPPTB), a multiobjective optimization problem that simultaneously minimizes the maximum number of VM partitions of requests and the maximum bandwidth occupancy on the uplinks of Top of Rack (ToR) switches. We also prove it to be NP-hard and design a heuristic Least-Load First Based Placement (LLBP) algorithm to solve it. Through extensive simulations, the proposed heuristic algorithm is shown to significantly balance the bandwidth occupancy on ToR switch uplinks, while keeping the number of VM partitions of each request small.
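    The Longest-Processing-Time idea behind the LPT-based placement can be sketched as follows: sort VMs by traffic demand in decreasing order and assign each to the currently least-loaded PM. This is a generic LPT sketch under assumed inputs, not the paper's exact LPTBP algorithm, which also accounts for the DCN topology.

        import heapq

        def lpt_place(vm_traffic, n_pms):
            """Longest-Processing-Time-style placement: assign VMs in decreasing order of
            traffic demand, each to the currently least-loaded PM, to balance load."""
            heap = [(0.0, pm) for pm in range(n_pms)]        # (current load, pm id)
            heapq.heapify(heap)
            assignment = {}
            for vm, demand in sorted(vm_traffic.items(), key=lambda kv: kv[1], reverse=True):
                load, pm = heapq.heappop(heap)
                assignment[vm] = pm
                heapq.heappush(heap, (load + demand, pm))
            return assignment, max(load for load, _ in heap)

        traffic = {"vm1": 30, "vm2": 25, "vm3": 20, "vm4": 10, "vm5": 10, "vm6": 5}
        placement, max_load = lpt_place(traffic, n_pms=3)
        print(placement, max_load)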

  15. Optimal Placement and Sizing of Fault Current Limiters in Distributed Generation Systems Using a Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    N. Bayati

    2017-02-01

    Distributed Generation (DG) connection in a power system tends to increase the short-circuit level in the entire system, which, in turn, could eliminate the protection coordination between the existing relays. Fault Current Limiters (FCLs) are often used to reduce the short-circuit level of the network to a desirable level, provided that they are duly placed and appropriately sized. In this paper, a method is proposed for the optimal placement of FCLs and the optimal determination of their impedance values, by which the relay operation time and the number and size of the FCLs are minimized while maintaining relay coordination before and after DG connection. The proposed method removes low-impact FCLs and uses a hybrid Genetic Algorithm (GA) optimization scheme to determine the optimal placement of the FCLs and the values of their impedances. The suitability of the proposed method is demonstrated by examining the results of relay coordination in a typical DG network before and after DG connection.

  16. Epicardial left ventricular lead placement for cardiac resynchronization therapy: optimal pace site selection with pressure-volume loops.

    Science.gov (United States)

    Dekker, A L A J; Phelps, B; Dijkman, B; van der Nagel, T; van der Veen, F H; Geskes, G G; Maessen, J G

    2004-06-01

    Patients in heart failure with left bundle branch block benefit from cardiac resynchronization therapy. Usually the left ventricular pacing lead is placed by coronary sinus catheterization; however, this procedure is not always successful, and patients may be referred for surgical epicardial lead placement. The objective of this study was to develop a method to guide epicardial lead placement in cardiac resynchronization therapy. Eleven patients in heart failure who were eligible for cardiac resynchronization therapy were referred for surgery because of failed coronary sinus left ventricular lead implantation. Minithoracotomy or thoracoscopy was performed, and a temporary epicardial electrode was used for biventricular pacing at various sites on the left ventricle. Pressure-volume loops with the conductance catheter were used to select the best site for each individual patient. Relative to the baseline situation, biventricular pacing with an optimal left ventricular lead position significantly increased stroke volume (+39%, P =.01), maximal left ventricular pressure derivative (+20%, P =.02), ejection fraction (+30%, P =.007), and stroke work (+66%, P =.006) and reduced end-systolic volume (-6%, P =.04). In contrast, biventricular pacing at a suboptimal site did not significantly change left ventricular function and even worsened it in some cases. To optimize cardiac resynchronization therapy with epicardial leads, mapping to determine the best pace site is a prerequisite. Pressure-volume loops offer real-time guidance for targeting epicardial lead placement during minimal invasive surgery.

  17. Genetic evolutionary taboo search for optimal marker placement in infrared patient setup

    International Nuclear Information System (INIS)

    Riboldi, M; Baroni, G; Spadea, M F; Tagaste, B; Garibaldi, C; Cambria, R; Orecchia, R; Pedotti, A

    2007-01-01

    In infrared patient setup adequate selection of the external fiducial configuration is required for compensating inner target displacements (target registration error, TRE). Genetic algorithms (GA) and taboo search (TS) were applied in a newly designed approach to optimal marker placement: the genetic evolutionary taboo search (GETS) algorithm. In the GETS paradigm, multiple solutions are simultaneously tested in a stochastic evolutionary scheme, where taboo-based decision making and adaptive memory guide the optimization process. The GETS algorithm was tested on a group of ten prostate patients, to be compared to standard optimization and to randomly selected configurations. The changes in the optimal marker configuration, when TRE is minimized for OARs, were specifically examined. Optimal GETS configurations ensured a 26.5% mean decrease in the TRE value, versus 19.4% for conventional quasi-Newton optimization. Common features in GETS marker configurations were highlighted in the dataset of ten patients, even when multiple runs of the stochastic algorithm were performed. Including OARs in TRE minimization did not considerably affect the spatial distribution of GETS marker configurations. In conclusion, the GETS algorithm proved to be highly effective in solving the optimal marker placement problem. Further work is needed to embed site-specific deformation models in the optimization process

  18. Spatial Model for Determining the Optimum Placement of Logistics Centers in a Predefined Economic Area

    Directory of Open Access Journals (Sweden)

    Ramona Iulia Țarțavulea (Dieaconescu

    2016-08-01

    The process of globalization has stimulated demand for faster and more efficient logistics services, which involves the use of modern techniques, tools, technologies and models in supply chain management. The aim of this research paper is to present a model that can be used to achieve an optimized supply chain with minimum transportation costs. The use of spatial modeling to determine the optimal locations for logistics centers in a predefined economic area is proposed in this paper. The principal methods used to design the model are mathematical optimization and linear programming. The output of the model is the precise placement of one to ten logistics centers that minimizes the operational costs of delivery from the chosen locations to the consumer points. The results of the research indicate that, by using the proposed model, an efficient supply chain consistent with transport optimization can be designed, streamlining the delivery process and thus reducing operational costs.
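    For intuition, the placement task described above behaves like a p-median problem: choose p center locations so that the total cost of serving every demand point from its cheapest chosen center is minimal. The brute-force sketch below is only workable for small instances and stands in for the paper's linear-programming formulation; the cost data are made up.

        from itertools import combinations

        def p_median(candidates, demand_points, cost, p):
            """Pick p center locations (out of the candidate sites) minimizing the total
            cost of serving every demand point from its cheapest chosen center.
            cost[(i, j)] is the delivery cost from candidate i to demand point j.
            Exhaustive search: fine for a handful of candidates, LP/MILP otherwise."""
            best_set, best_cost = None, float("inf")
            for subset in combinations(candidates, p):
                total = sum(min(cost[(i, j)] for i in subset) for j in demand_points)
                if total < best_cost:
                    best_set, best_cost = subset, total
            return best_set, best_cost

        candidates = ["A", "B", "C", "D"]
        demands = [1, 2, 3]
        cost = {(i, j): abs(ord(i) - 65 - j) + 1 for i in candidates for j in demands}
        print(p_median(candidates, demands, cost, p=2))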

  19. Optimal sensor placement for leak location in water distribution networks using genetic algorithms.

    Science.gov (United States)

    Casillas, Myrna V; Puig, Vicenç; Garza-Castañón, Luis E; Rosich, Albert

    2013-11-04

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.

  2. Optimal placement of switching equipment in reconfigurable distribution systems

    Directory of Open Access Journals (Sweden)

    Mijailović Vladica

    2011-01-01

    This paper presents a comparative analysis of some measures that can improve the reliability of a medium-voltage (MV) distribution feeder. Specifically, the impact of certain types of switching equipment installed on the feeder and the possibilities of backup supply from adjacent feeders are analyzed. For each analyzed case, equations for the calculation of the System Average Interruption Duration Index (SAIDI) and the energy not delivered to the customers are given. The effects of the measures are calculated for one real MV feeder, both for radial supply and for cases with possible backup supply to the customers. The installation locations of the switching equipment for the given supply concept are determined according to the criterion of minimum SAIDI and the criterion of minimum energy not delivered to the customers.

  3. A risk-based multi-objective model for optimal placement of sensors in water distribution system

    Science.gov (United States)

    Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein

    2018-02-01

    In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for the optimal placement of sensors in a water distribution system (WDS). The model minimizes the risk caused by simultaneous multi-point contamination injection in the WDS using the CVaR approach. The CVaR treats the uncertainties of contamination injection in the form of a probability distribution function and captures low-probability extreme events; in this approach, the extreme losses occur at the tail of the loss distribution function. A four-objective optimization model based on the NSGA-II algorithm is developed to minimize the losses of contamination injection (through the CVaR of the affected population and of the detection time) and also to minimize the two other main criteria of optimal sensor placement, namely the probability of undetected events and the cost. Finally, to determine the best solution, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a Multi-Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among the objective functions. A sensitivity analysis is also carried out to investigate the importance of each criterion for the PROMETHEE results under three relative weighting scenarios. The effectiveness of the proposed methodology is examined by applying it to the Lamerd WDS in the southwestern part of Iran. PROMETHEE suggests 6 sensors with a suitable distribution that covers approximately all regions of the WDS. The optimal values of the CVaR of the affected population, the CVaR of the detection time and the probability of undetected events for the best solution are 17,055 persons, 31 min and 0.045%, respectively. The obtained results for the Lamerd WDS show the applicability of the CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme values.
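    As a reminder of how the risk measure above is evaluated, the sketch below estimates CVaR from Monte Carlo samples of a loss (for example, the affected population over sampled contamination scenarios) as the mean of the tail beyond the alpha-quantile. The sample data are synthetic and the estimator is the simple empirical one, not the exact formulation used in the paper.

        import numpy as np

        def cvar(losses, alpha=0.95):
            """Conditional Value at Risk of a sample of losses: the average of the worst
            (1 - alpha) fraction of outcomes (the tail beyond the alpha-quantile VaR)."""
            x = np.sort(np.asarray(losses, dtype=float))
            var = np.quantile(x, alpha)
            tail = x[x >= var]
            return tail.mean()

        rng = np.random.default_rng(1)
        affected = rng.lognormal(mean=8.0, sigma=1.0, size=10_000)   # synthetic scenario losses
        print(f"VaR95 ~ {np.quantile(affected, 0.95):,.0f}, CVaR95 ~ {cvar(affected, 0.95):,.0f}")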

  4. Optimal Meter Placement for Distribution Network State Estimation: A Circuit Representation Based MILP Approach

    DEFF Research Database (Denmark)

    Chen, Xiaoshuang; Lin, Jin; Wan, Can

    2016-01-01

    State estimation (SE) in distribution networks is not as accurate as that in transmission networks. Traditionally, distribution networks (DNs) lack direct measurements due to limited investment and the difficulties of maintenance. Therefore, it is critical to improve the accuracy of SE in distribution networks by placing additional physical meters. For state-of-the-art SE models, it is difficult to clearly quantify the influence of measurements on SE errors, so the problem of optimal meter placement for reducing SE errors is mostly solved by heuristic or suboptimal algorithms. Against this background, this paper proposes a circuit representation model to represent SE errors. Based on the matrix formulation of the circuit representation model, the problem of optimal meter placement can be transformed into a mixed integer linear programming (MILP) problem via the disjunctive model...

  5. Optimal Sensor placement for acoustic range-based underwater robotic positioning

    Digital Repository Service at National Institute of Oceanography (India)

    Glotzbach, T.; Moreno-Salinas, D.; Aranda, J.; Pascoal, A.M.

    by affording the reviewer an overview of relevant principles, methods, and results available in the literature in the area, as well as of the practical motivation for this challenging topic of research. After a brief literature survey, a method... position estimator. Naturally, the optimal placement solution is a function of the actual measurement setup, the measurement model, and the actual position of the target. At first inspection this problem may seem to have little practical relevance...

  6. Optimal sensor placement for leakage detection and isolation in water distribution networks

    OpenAIRE

    Rosich Oliva, Albert; Sarrate Estruch, Ramon; Nejjari Akhi-Elarab, Fatiha

    2012-01-01

    In this paper, the problem of leakage detection and isolation in water distribution networks is addressed applying an optimal sensor placement methodology. The chosen technique is based on structural models and thus it is suitable to handle non-linear and large scale systems. A drawback of this technique arises when costs are assigned uniformly. A main contribution of this paper is the proposal of an iterative methodology that focuses on identifying essential sensors which ultimately leads to...

  7. Optimal power flow with optimal placement TCSC device on 500 kV Java-Bali electrical power system using genetic Algorithm-Taguchi method

    Science.gov (United States)

    Apribowo, Chico Hermanu Brillianto; Ibrahim, Muhammad Hamka; Wicaksono, F. X. Rian

    2018-02-01

    The growing load burden and the complexity of the power system have increased the need for optimization of power system operation. Optimal power flow (OPF) with optimal placement and rating of a thyristor controlled series capacitor (TCSC) is an effective solution used to determine the economic cost of operating the plant and to regulate the power flow in the power system. The purpose of this study is to minimize the total generation cost by selecting the location and optimal rating of the TCSC using a genetic algorithm combined with design-of-experiments techniques (GA-DOE). In simulations on the 500 kV Java-Bali system with five TCSC compensators, the proposed method reduces the generation cost by 0.89% compared to OPF without TCSC.

  8. Temperature Simulation of Greenhouse with CFD Methods and Optimal Sensor Placement

    Directory of Open Access Journals (Sweden)

    Yanzheng Liu

    2014-03-01

    The accuracy of information monitoring is significant for increasing the effectiveness of greenhouse environment control. In this paper, taking the simulation of the temperature field in a greenhouse as an example, a Computational Fluid Dynamics (CFD) simulation model of the greenhouse microclimate based on the principle of thermal environment formation was established, and the temperature distribution under mechanical ventilation was simulated. The results showed that the CFD model and its solution could describe the evolution of the temperature environment within the greenhouse; the most suitable turbulence model was the standard k-ε model. Under mechanical ventilation, the average deviation between the simulated and measured values was 0.6, which is 4.5 percent of the measured value. The temperature field showed an obvious layered structure, and the temperature in the greenhouse model decreased gradually from the periphery to the center. Based on these results, the number of sensors and the optimal sensor placement were determined with the CFD simulation method.

  9. Application of HGSO to security based optimal placement and parameter setting of UPFC

    International Nuclear Information System (INIS)

    Tarafdar Hagh, Mehrdad; Alipour, Manijeh; Teimourzadeh, Saeed

    2014-01-01

    Highlights: • A new method for solving the security-based UPFC placement and parameter setting problem is proposed. • The proposed method is a global method for all mixed-integer problems. • The proposed method has the ability to search in parallel in binary and continuous spaces. • By using the proposed method, most of the problems due to line contingencies are solved. • Comparison studies are carried out to assess the performance of the proposed method. - Abstract: This paper presents a novel method to solve the security-based optimal placement and parameter setting of the unified power flow controller (UPFC) based on the hybrid group search optimization (HGSO) technique. Firstly, HGSO is introduced in order to solve mixed-integer type problems. Afterwards, the proposed method is applied to the security-based optimal placement and parameter setting of the UPFC. The focus of the paper is to enhance power system security by eliminating or minimizing overloaded lines and bus voltage limit violations under single line contingencies. Simulation studies are carried out on the IEEE 6-bus, IEEE 14-bus and IEEE 30-bus systems in order to verify the accuracy and robustness of the proposed method. The results indicate that, by using the proposed method, the power system remains secure under single line contingencies.

  10. Optimal placement of excitations and sensors for verification of large dynamical systems

    Science.gov (United States)

    Salama, M.; Rose, T.; Garba, J.

    1987-01-01

    The computationally difficult problem of the optimal placement of excitations and sensors to maximize the observed measurements is studied within the framework of combinatorial optimization, and is solved numerically using a variation of the simulated annealing heuristic algorithm. Results of numerical experiments including a square plate and a 960 degrees-of-freedom Control of Flexible Structure (COFS) truss structure, are presented. Though the algorithm produces suboptimal solutions, its generality and simplicity allow the treatment of complex dynamical systems which would otherwise be difficult to handle.
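    A minimal sketch of a simulated annealing scheme of the kind described above, applied to a binary placement vector: a random swap of one selected location for an unselected one is proposed, and worse configurations are accepted with a Boltzmann probability that decays as the temperature cools. The toy objective and cooling schedule are assumptions, not the paper's.

        import math
        import random

        def anneal_placement(n_candidates, n_sensors, score, iters=5000, t0=1.0, cooling=0.999):
            """Simulated annealing over sensor subsets: start from a random subset, propose
            swapping one chosen location for an unchosen one, accept downhill moves with
            probability exp(delta / T). `score(subset)` is the quantity to maximize."""
            current = set(random.sample(range(n_candidates), n_sensors))
            cur_val = score(current)
            best, best_val, t = set(current), cur_val, t0
            for _ in range(iters):
                out_loc = random.choice(tuple(current))
                in_loc = random.choice([c for c in range(n_candidates) if c not in current])
                candidate = (current - {out_loc}) | {in_loc}
                delta = score(candidate) - cur_val
                if delta >= 0 or random.random() < math.exp(delta / t):
                    current, cur_val = candidate, cur_val + delta
                    if cur_val > best_val:
                        best, best_val = set(current), cur_val
                t *= cooling
            return best, best_val

        # toy objective: prefer widely spread sensor indices (stand-in for observability)
        spread = lambda s: sum(abs(a - b) for a in s for b in s)
        print(anneal_placement(60, 6, spread, iters=2000))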

  11. Optimal Placement of A Heat Pump in An Integrated Power and Heat Energy System

    DEFF Research Database (Denmark)

    Klyapovskiy, Sergey; You, Shi; Bindner, Henrik W.

    2017-01-01

    With the present trend towards Smart Grids and Smart Energy Systems, it is important to look for opportunities for integrated development between different energy sectors, such as electricity, heating, gas and transportation. This paper investigates the problem of the optimal placement of a heat pump, a component that links the electric and heating utilities together. The system used to demonstrate the integrated planning approach has two neighboring 10 kV feeders and several distribution substations with loads that require central heating from the heat pump. The optimal location is found...

  12. Fast Optimal Replica Placement with Exhaustive Search Using Dynamically Reconfigurable Processor

    Directory of Open Access Journals (Sweden)

    Hidetoshi Takeshita

    2011-01-01

    This paper proposes a new replica placement algorithm that expands the exhaustive search limit within reasonable calculation time. It combines a new type of parallel data-flow processor with an architecture tuned for fast calculation. The replica placement problem is to find a replica-server set satisfying service constraints in a content delivery network (CDN); it is derived from the set cover problem, which is known to be NP-hard. It is impractical to use exhaustive search to obtain optimal replica placement in large-scale networks, because the calculation time increases with the number of combinations. To reduce the calculation time, heuristic algorithms have been proposed, but no heuristic algorithm is assured of finding the optimal solution. The proposed algorithm suits parallel processing and pipeline execution and is implemented on DAPDNA-2, a dynamically reconfigurable processor. Experiments show that the proposed algorithm expands the exhaustive search limit by a factor of 18.8 compared to the conventional algorithm running on a von Neumann-type processor.

  13. Optimizing virtual machine placement for energy and SLA in clouds using utility functions

    Directory of Open Access Journals (Sweden)

    Abdelkhalik Mosa

    2016-10-01

    Cloud computing provides on-demand access to a shared pool of computing resources, which enables organizations to outsource their IT infrastructure. Cloud providers are building data centers to handle the continuous increase in cloud users' demands. Consequently, these cloud data centers consume, and have the potential to waste, substantial amounts of energy. This energy consumption increases the operational cost and the CO2 emissions. The goal of this paper is to develop an optimized energy- and SLA-aware virtual machine (VM) placement strategy that dynamically assigns VMs to physical machines (PMs) in cloud data centers. This placement strategy co-optimizes energy consumption and service level agreement (SLA) violations. The proposed solution adopts utility functions to formulate the VM placement problem. A genetic algorithm searches the possible VM-to-PM assignments with a view to finding an assignment that maximizes utility. Simulation results using CloudSim show that the proposed utility-based approach reduced the average energy consumption by approximately 6% and the overall SLA violations by more than 38%, using fewer VM migrations and PM shutdowns, compared to a well-known heuristics-based approach.

  14. Optimal placement of water-lubricated rubber bearings for vibration reduction of flexible multistage rotor systems

    Science.gov (United States)

    Liu, Shibing; Yang, Bingen

    2017-10-01

    Flexible multistage rotor systems with water-lubricated rubber bearings (WLRBs) have a variety of engineering applications. Filling a technical gap in the literature, this effort proposes a method of optimal bearing placement that minimizes the vibration amplitude of a WLRB-supported flexible rotor system with a minimum number of bearings. In the development, a new model of WLRBs and a distributed transfer function formulation are used to define a mixed continuous-and-discrete optimization problem. To deal with the case of an uncertain number of WLRBs in rotor design, a virtual bearing method is devised. Solution of the optimization problem by a real-coded genetic algorithm yields the locations and lengths of the water-lubricated rubber bearings by which the prescribed operational requirements for the rotor system are satisfied. The proposed method is applicable either to the preliminary design of a new rotor system with an unknown number of bearings or to the redesign of an existing rotor system with a given number of bearings. Numerical examples show that the proposed optimal bearing placement is efficient, accurate and versatile in different design cases.

  15. Optimal Placement and Sizing of PV-STATCOM in Power Systems Using Empirical Data and Adaptive Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Reza Sirjani

    2018-03-01

    Solar energy is a source of free, clean energy which avoids the destructive effects on the environment that have long been caused by power generation. Solar energy technology rivals fossil fuels, and its development has increased recently. Photovoltaic (PV) solar farms can only produce active power during the day, while at night they are completely idle; at the same time, active power should be supported by reactive power. Reactive power compensation in power systems improves power quality and stability. The night-time use of a PV solar farm inverter as a static synchronous compensator (PV-STATCOM) has recently been proposed, which can improve system performance and increase the utility of a PV solar farm. In this paper, a method for optimal PV-STATCOM placement and sizing is proposed using empirical data. Considering the objectives of power loss and cost minimization as well as voltage improvement, the two sub-problems of placement and sizing are solved by a power loss index and adaptive particle swarm optimization (APSO), respectively. Test results show that APSO not only performs better in finding optimal solutions but also converges faster compared with bee colony optimization (BCO) and the lightning search algorithm (LSA). The installation of a PV solar farm, a STATCOM, and a PV-STATCOM in a system are each evaluated in terms of efficiency and cost.

  16. Optimized Placement of Wind Turbines in Large-Scale Offshore Wind Farm using Particle Swarm Optimization Algorithm

    DEFF Research Database (Denmark)

    Hou, Peng; Hu, Weihao; Soltani, Mohsen

    2015-01-01

    With the increasing size of wind farms, the impact of the wake effect on wind farm energy yields becomes more and more evident. The arrangement of the wind turbines' (WT) locations influences the capital investment and contributes to the wake losses, which reduce energy production. As a consequence, the optimized placement of the wind turbines may be carried out by considering the wake effect as well as the component costs within the wind farm. In this paper, a mathematical model which includes the variation of both wind direction and wake deficit is proposed. The problem is formulated by using the Levelized Production Cost (LPC) as the objective function. The optimization procedure is performed by a Particle Swarm Optimization (PSO) algorithm with the purpose of maximizing the energy yield while minimizing the total investment. The simulation results indicate that the proposed method is effective...
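    Layout studies of this kind typically evaluate wake losses with an engineering wake model; the sketch below implements the widely used Jensen (Park) single-wake deficit as an illustration. The thrust coefficient, wake decay constant and spacing are assumed values, and the paper's full model (wind-direction variation, multiple wakes, LPC) is not reproduced here.

        import math

        def jensen_wake_speed(v0, x, rotor_radius, ct=0.8, k=0.05):
            """Wind speed a distance x directly downstream of a turbine under the Jensen
            (Park) model: v = v0 * (1 - (1 - sqrt(1 - Ct)) / (1 + k*x/r0)**2)."""
            if x <= 0:
                return v0                      # no wake upstream of the rotor
            deficit0 = 1.0 - math.sqrt(1.0 - ct)   # initial velocity deficit factor
            return v0 * (1.0 - deficit0 / (1.0 + k * x / rotor_radius) ** 2)

        # deficit felt by a turbine placed 5 rotor diameters downstream
        r0 = 40.0
        print(round(jensen_wake_speed(v0=10.0, x=10 * r0, rotor_radius=r0), 2), "m/s")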

  17. Optimal placement of active braces by using PSO algorithm in near- and far-field earthquakes

    Science.gov (United States)

    Mastali, M.; Kheyroddin, A.; Samali, B.; Vahdani, R.

    2016-03-01

    One of the most important issues in tall buildings is the lateral resistance of the load-bearing systems against applied loads such as earthquake, wind and blast. Dual systems comprising core wall systems (single or multi-cell core) and moment-resisting frames are used as resistance systems in tall buildings. In addition to the adequate stiffness provided by the dual system, most tall buildings may have to rely on various control systems to reduce the level of unwanted motions stemming from severe dynamic loads. One of the main challenges in effectively controlling the motion of a structure is the limitation in optimally distributing the required control along the structure height. In this paper, concrete shear walls are used as a secondary resistance system at three different heights, together with actuators installed in the braces. The optimal actuator positions are found using an optimized PSO algorithm as well as being assigned arbitrarily. The control performance of buildings whose actuators are placed using the PSO algorithm is assessed and compared with that of arbitrarily placed controllers using both near- and far-field ground motions of the Kobe and Chi-Chi earthquakes.

  18. A triaxial accelerometer monkey algorithm for optimal sensor placement in structural health monitoring

    Science.gov (United States)

    Jia, Jingqing; Feng, Shuo; Liu, Wei

    2015-06-01

    Optimal sensor placement (OSP) technique is a vital part of the field of structural health monitoring (SHM). Triaxial accelerometers have been widely used in the SHM of large-scale structures in recent years. Triaxial accelerometers must be placed in such a way that all of the important dynamic information is obtained. At the same time, the sensor configuration must be optimal, so that the test resources are conserved. The recommended practice is to select proper degrees of freedom (DOF) based upon several criteria and to place the triaxial accelerometers at the nodes corresponding to these DOFs. This results in non-optimal placement of many accelerometers. A ‘triaxial accelerometer monkey algorithm’ (TAMA) is presented in this paper to solve OSP problems of triaxial accelerometers. The EFI3 measurement theory is modified and incorporated into the objective function to make it more adaptable to the OSP of triaxial accelerometers. A method of calculating the threshold value based on probability theory is proposed to improve the healthy rate of monkeys in the troop generation process. Meanwhile, the processes of harmony ladder climb and scanning watch jump are proposed and described in detail. Finally, the Xinghai No. 1 Bridge in Dalian is used to demonstrate the effectiveness of TAMA. The final results obtained by TAMA are compared with those of the original monkey algorithm and the EFI3 measurement, and show that TAMA can improve computational efficiency and obtain a better sensor configuration.
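
    TAMA itself is not reproduced here. The sketch below only illustrates the classical effective-independence (EFI) backward-deletion idea that the EFI3 objective builds upon: candidate sensor DOFs whose rows contribute least to the Fisher information of the mode-shape matrix are removed one at a time. The random mode-shape matrix is a placeholder, not a bridge model.

```python
import numpy as np

def efi_selection(phi, n_sensors):
    """Backward effective-independence (EFI) deletion.

    phi: (n_dof, n_modes) mode-shape matrix; rows are candidate sensor DOFs.
    Returns the indices of the retained DOFs.
    """
    candidates = list(range(phi.shape[0]))
    while len(candidates) > n_sensors:
        a = phi[candidates, :]
        # Effective-independence distribution: diagonal of A (A^T A)^-1 A^T.
        ed = np.einsum("ij,jk,ik->i", a, np.linalg.inv(a.T @ a), a)
        # Removing the smallest-ED row sacrifices the least Fisher information.
        candidates.pop(int(np.argmin(ed)))
    return candidates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phi = rng.standard_normal((60, 6))       # placeholder: 60 candidate DOFs, 6 modes
    print("retained DOFs:", efi_selection(phi, 12))
```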

  19. Control of chaos in permanent magnet synchronous motor by using optimal Lyapunov exponents placement

    Energy Technology Data Exchange (ETDEWEB)

    Ataei, Mohammad, E-mail: ataei@eng.ui.ac.i [Department of Electrical Engineering, Faculty of Engineering, University of Isfahan, Hezar-Jerib St., Postal Code 8174673441, Isfahan (Iran, Islamic Republic of); Kiyoumarsi, Arash, E-mail: kiyoumarsi@eng.ui.ac.i [Department of Electrical Engineering, Faculty of Engineering, University of Isfahan, Hezar-Jerib St., Postal Code 8174673441, Isfahan (Iran, Islamic Republic of); Ghorbani, Behzad, E-mail: behzad.ghorbani63@gmail.co [Department of Control Engineering, Najafabad Azad University, Najafabad, Isfahan (Iran, Islamic Republic of)

    2010-09-13

    Permanent Magnet Synchronous Motor (PMSM) experiences chaotic behavior for a certain range of its parameters. In this case, since the performance of the PMSM degrades, the chaos should be eliminated. In this Letter, control of the undesirable chaos in the PMSM using Lyapunov exponents (LEs) placement is proposed, and the approach is further improved by choosing optimal locations of the LEs in the sense of a predefined cost function. Moreover, in order to provide a physical realization of the method, a nonlinear parameter estimator for the system is suggested. Finally, to show the effectiveness of the proposed methodology, simulation results for applying this control strategy are provided.

  20. [Method for optimal sensor placement in water distribution systems with nodal demand uncertainties].

    Science.gov (United States)

    Liu, Shu-Ming; Wu, Xue; Ouyang, Le-Yan

    2013-08-01

    The notion of identification fitness was proposed for optimizing sensor placement in water distribution systems. Nondominated Sorting Genetic Algorithm II was used to find the Pareto front between the minimum overlap of possible detection times of two events and the best probability of detection, taking nodal demand uncertainties into account. This methodology was applied to an example network. The solutions show that the probability of detection and the number of possible locations are not remarkably affected by nodal demand uncertainties, but the source identification accuracy declines when nodal demand uncertainties are present.
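
    NSGA-II's full machinery (crowding distance, tournament selection, elitism) is beyond the scope of this record, but its core step, extracting the non-dominated front of candidate sensor layouts scored on the two objectives named above, can be sketched as follows. The candidate layouts and their scores are fabricated placeholders.

```python
def dominates(a, b):
    # a, b are (detection-time overlap, -detection probability); both are minimized.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated subset of a list of (layout, objectives) pairs."""
    front = []
    for layout, obj in candidates:
        if not any(dominates(other, obj) for _, other in candidates if other != obj):
            front.append((layout, obj))
    return front

if __name__ == "__main__":
    # Placeholder candidates: (layout id, (overlap of detection times, -P(detection))).
    candidates = [("A", (4.0, -0.80)), ("B", (2.5, -0.75)),
                  ("C", (3.0, -0.90)), ("D", (5.0, -0.60))]
    for layout, obj in pareto_front(candidates):
        print(layout, obj)
```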

  1. Strain sensors optimal placement for vibration-based structural health monitoring. The effect of damage on the initially optimal configuration

    Science.gov (United States)

    Loutas, T. H.; Bourikas, A.

    2017-12-01

    We revisit the optimal sensor placement problem for engineering structures with an emphasis on in-plane dynamic strain measurements, aiming at modal identification as well as vibration-based damage detection for structural health monitoring purposes. The approach utilized is based on maximizing a norm of the Fisher Information Matrix (FIM) built with numerically obtained mode shapes of the structure while, at the same time, prohibiting the sensorization of neighboring degrees of freedom as well as of those carrying similar information, in order to obtain satisfactory coverage. A new convergence criterion of the FIM norm is proposed in order to deal with the issue of choosing an appropriate sensor redundancy threshold, a concept recently introduced but not further investigated concerning its choice. The sensor configurations obtained via a forward sequential placement algorithm are sub-optimal in terms of FIM norm values, but the selected sensors are not allowed to be placed at neighboring degrees of freedom, thus providing better coverage of the structure and a subsequently better identification of the experimental mode shapes. The issue of how service-induced damage affects the initially nominated optimal sensor configuration is also investigated and reported. The numerical model of a composite sandwich panel serves as a representative aerospace structure upon which our investigations are based.
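
    A minimal sketch of a forward sequential placement of the kind described, greedily adding the strain DOF that most increases a Fisher Information Matrix norm (here its determinant) while excluding already-sensorized neighbours, is given below. The mode-shape matrix and the neighbour definition are placeholder assumptions, not the authors' sandwich-panel model.

```python
import numpy as np

def forward_fim_placement(phi, n_sensors, neighbors):
    """Greedy forward selection maximizing det(FIM) with neighbour exclusion.

    phi: (n_dof, n_modes) numerically obtained mode shapes (placeholder here).
    neighbors: dict mapping each DOF to the set of DOFs it excludes once selected.
    """
    selected, banned = [], set()
    for _ in range(n_sensors):
        best_dof, best_det = None, -np.inf
        for dof in range(phi.shape[0]):
            if dof in selected or dof in banned:
                continue
            a = phi[selected + [dof], :]
            # Small ridge keeps the determinant well-defined before n_modes sensors exist.
            det = np.linalg.det(a.T @ a + 1e-9 * np.eye(phi.shape[1]))
            if det > best_det:
                best_dof, best_det = dof, det
        if best_dof is None:
            break
        selected.append(best_dof)
        banned |= neighbors.get(best_dof, set())
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    phi = rng.standard_normal((40, 5))                 # placeholder panel modes
    neigh = {d: {d - 1, d + 1} for d in range(40)}     # simple 1D neighbour assumption
    print("selected strain DOFs:", forward_fim_placement(phi, 8, neigh))
```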

  2. Neural network for optimal capacitor placement and its impact on power quality in electric distribution systems

    International Nuclear Information System (INIS)

    Mohamed, A.A.E.S.

    2013-01-01

    Capacitors are widely installed in distribution systems for reactive power compensation to achieve power and energy loss reduction, voltage regulation and system capacity release. The extent of these benefits depends greatly on how the capacitors are placed on the system. The problem of how to place capacitors on the system such that these benefits are achieved and maximized against the cost associated with the capacitor placement is termed the general capacitor placement problem. The capacitor placement problem has been formulated as the maximization of the savings resulting from the reduction in both peak power and energy losses, considering the capacitor installation cost and maintaining the bus voltages within acceptable limits. After an appropriate analysis, the optimization problem was formulated in a quadratic form. For solving the capacitor placement problem, a new combinatorial heuristic combined with a quadratic programming technique has been presented and implemented in the MATLAB software. The proposed strategy was applied to two different radial distribution feeders. The results have been compared with previous works; the comparison showed the validity and the effectiveness of this strategy. Secondly, two artificial intelligence techniques for predicting the capacitor switching state in radial distribution feeders have been investigated: one is based on a Radial Basis Neural Network (RBNN) and the other on an Adaptive Neuro-Fuzzy Inference System (ANFIS). The ANFIS technique gives better results with a minimum total error compared to the RBNN. The learning duration of ANFIS was much shorter than that of the neural network, implying that ANFIS reaches the target faster. Thirdly, an artificial intelligence (RBNN) approach for estimation of the transient overvoltage during capacitor switching has been studied. The artificial intelligence approach estimated the transient overvoltages with a minimum error in a short computational time. Finally, a capacitor switching

  3. Determining Student Competency in Field Placements: An Emerging Theoretical Model

    Directory of Open Access Journals (Sweden)

    Twyla L. Salm

    2016-06-01

    Full Text Available This paper describes a qualitative case study that explores how twenty-three field advisors, representing three human service professions including education, nursing, and social work, experience the process of assessment with students who are struggling to meet minimum competencies in field placements. Five themes emerged from the analysis of qualitative interviews. The field advisors' primary concern was the level of professional competency achieved by practicum students. Related to competency were themes concerned with the field advisor's role in being accountable and protecting the reputation of his/her profession as well as the reputation of the professional program affiliated with the practicum student's professional education. The final theme – teacher-student relationship – emerged from the data, both as a stand-alone theme and as a global or umbrella theme. As an umbrella theme, the teacher-student relationship permeated each of the other themes as the participants interpreted their experiences of the assessment process through the mentor relationships. A theoretical model was derived from these findings and a description of the model is presented.

  4. On the Design of Smart Parking Networks in the Smart Cities: An Optimal Sensor Placement Model

    Science.gov (United States)

    Bagula, Antoine; Castelli, Lorenzo; Zennaro, Marco

    2015-01-01

    Smart parking is a typical IoT application that can benefit from advances in sensor, actuator and RFID technologies to provide many services to its users and parking owners of a smart city. This paper considers a smart parking infrastructure where sensors are laid down on the parking spots to detect car presence and RFID readers are embedded into parking gates to identify cars and help in the billing of the smart parking. Both types of devices are endowed with wired and wireless communication capabilities for reporting to a gateway where the situation recognition is performed. The sensor devices are tasked to play one of the three roles: (1) slave sensor nodes located on the parking spot to detect car presence/absence; (2) master nodes located at one of the edges of a parking lot to detect presence and collect the sensor readings from the slave nodes; and (3) repeater sensor nodes, also called “anchor” nodes, located strategically at specific locations in the parking lot to increase the coverage and connectivity of the wireless sensor network. While slave and master nodes are placed based on geographic constraints, the optimal placement of the relay/anchor sensor nodes in smart parking is an important parameter upon which the cost and efficiency of the parking system depends. We formulate the optimal placement of sensors in smart parking as an integer linear programming multi-objective problem optimizing the sensor network engineering efficiency in terms of coverage and lifetime maximization, as well as its economic gain in terms of the number of sensors deployed for a specific coverage and lifetime. We propose an exact solution to the node placement problem using single-step and two-step solutions implemented in the Mosel language based on the Xpress-MP suite of libraries. Experimental results reveal the relative efficiency of the single-step compared to the two-step model on different performance parameters. These results are consolidated by simulation results.
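
    The authors' Mosel/Xpress formulation is not reproduced in this record, so the sketch below casts a deliberately simplified version of the anchor-node placement, choosing the fewest relay positions that cover every parking-spot sensor, as an integer linear program using PuLP as an assumed substitute solver. The candidate positions and coverage sets are fabricated, and the lifetime objective of the paper is omitted.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Fabricated example: which candidate anchor positions cover which parking-spot sensors.
coverage = {
    "anchor1": {"s1", "s2", "s3"},
    "anchor2": {"s3", "s4"},
    "anchor3": {"s4", "s5", "s6"},
    "anchor4": {"s1", "s6"},
}
sensors = set().union(*coverage.values())

prob = LpProblem("relay_placement", LpMinimize)
use = {a: LpVariable(f"use_{a}", cat=LpBinary) for a in coverage}

# Objective: minimize the number of deployed anchor/relay nodes (economic-gain proxy).
prob += lpSum(use.values())

# Constraint: every slave sensor must be covered by at least one chosen anchor.
for s in sensors:
    prob += lpSum(use[a] for a, covered in coverage.items() if s in covered) >= 1

prob.solve()
print("deployed anchors:", [a for a, v in use.items() if v.value() == 1])
```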

  7. Optimal Sequential Diagnostic Strategy Generation Considering Test Placement Cost for Multimode Systems

    Directory of Open Access Journals (Sweden)

    Shigang Zhang

    2015-10-01

    Full Text Available Sequential fault diagnosis is an approach that realizes fault isolation by executing the optimal test step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the process of diagnosis system construction, which has been studied extensively in the literature. However, previous algorithms either are designed for single mode systems or do not consider test placement cost. They are not suitable to solve the sequential diagnostic strategy generation problem considering test placement cost for multimode systems. Therefore, this problem is studied in this paper. A formulation is presented. Two algorithms are proposed, one of which is realized by system transformation and the other is newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms. A real-world system is also presented. All the results show that both of them have the ability to solve the diagnostic strategy generation problem, and they have different characteristics.
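
    The two algorithms named in the abstract are not described in this record; the fragment below only illustrates the flavour of sequential diagnosis with placement cost: repeatedly choose the test whose expected information gain per unit cost (execution cost plus a one-off placement cost the first time a test point is used) is highest, until a single fault candidate remains. The fault/test matrix, priors and costs are fabricated.

```python
import math

# Fabricated D-matrix: tests (rows) vs. which fault states they flag (1 = test fails).
D = {"t1": {"f1": 1, "f2": 1, "f3": 0, "f4": 0},
     "t2": {"f1": 1, "f2": 0, "f3": 1, "f4": 0},
     "t3": {"f1": 0, "f2": 1, "f3": 1, "f4": 1}}
prior = {"f1": 0.4, "f2": 0.3, "f3": 0.2, "f4": 0.1}
exec_cost = {"t1": 1.0, "t2": 1.0, "t3": 2.0}
place_cost = {"t1": 3.0, "t2": 0.5, "t3": 0.5}   # one-off cost of installing the test point

def entropy(p):
    tot = sum(p.values())
    return -sum((v / tot) * math.log2(v / tot) for v in p.values() if v > 0)

def expected_gain(test, candidates):
    h0 = entropy(candidates)
    gain = 0.0
    for outcome in (0, 1):
        sub = {f: p for f, p in candidates.items() if D[test][f] == outcome}
        if sub:
            gain += (sum(sub.values()) / sum(candidates.values())) * (h0 - entropy(sub))
    return gain

def greedy_strategy(candidates):
    placed, sequence = set(), []
    while len(candidates) > 1:
        def score(t):
            cost = exec_cost[t] + (0 if t in placed else place_cost[t])
            return expected_gain(t, candidates) / cost
        t = max((t for t in D if t not in sequence), key=score)
        sequence.append(t)
        placed.add(t)
        # For the sketch, assume the most likely outcome is observed at each step.
        outcome = max((0, 1), key=lambda o: sum(p for f, p in candidates.items() if D[t][f] == o))
        candidates = {f: p for f, p in candidates.items() if D[t][f] == outcome}
    return sequence, next(iter(candidates))

if __name__ == "__main__":
    seq, fault = greedy_strategy(dict(prior))
    print("test sequence:", seq, "-> isolated fault:", fault)
```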

  9. Finite element analysis and genetic algorithm optimization design for the actuator placement on a large adaptive structure

    Science.gov (United States)

    Sheng, Lizeng

    The dissertation focuses on one of the major research needs in the area of adaptive/intelligent/smart structures, the development and application of finite element analysis and genetic algorithms for optimal design of large-scale adaptive structures. We first review some basic concepts in finite element method and genetic algorithms, along with the research on smart structures. Then we propose a solution methodology for solving a critical problem in the design of a next generation of large-scale adaptive structures---optimal placements of a large number of actuators to control thermal deformations. After briefly reviewing the three most frequently used general approaches to derive a finite element formulation, the dissertation presents techniques associated with general shell finite element analysis using flat triangular laminated composite elements. The element used here has three nodes and eighteen degrees of freedom and is obtained by combining a triangular membrane element and a triangular plate bending element. The element includes the coupling effect between membrane deformation and bending deformation. The membrane element is derived from the linear strain triangular element using Cook's transformation. The discrete Kirchhoff triangular (DKT) element is used as the plate bending element. For completeness, a complete derivation of the DKT is presented. Geometrically nonlinear finite element formulation is derived for the analysis of adaptive structures under the combined thermal and electrical loads. Next, we solve the optimization problems of placing a large number of piezoelectric actuators to control thermal distortions in a large mirror in the presence of four different thermal loads. We then extend this to a multi-objective optimization problem of determining only one set of piezoelectric actuator locations that can be used to control the deformation in the same mirror under the action of any one of the four thermal loads. A series of genetic algorithms

  10. Optimal placement and sizing of fixed and switched capacitor banks under non sinusoidal operating conditions

    International Nuclear Information System (INIS)

    Ladjevardi, M.; Masoum, M.A.S.; Fuchs, E.F.

    2004-01-01

    An iterative nonlinear algorithm is generated for optimal sizing and placement of fixed and switched capacitor banks on radial distribution lines in the presence of linear and nonlinear loads. The HARMFLOW algorithm and the maximum sensitivities selection method are used to solve the constrained optimization problem with discrete variables. To limit the burden of calculations and improve convergence, the problem is decomposed into two subproblems. Objective functions include minimum system losses and capacitor cost, while IEEE 519 power quality limits are used as constraints. Results are presented and analyzed for the 18-bus IEEE distorted system. The advantage of the proposed algorithm compared to previous work is the consideration of harmonic couplings and the reactions of actual nonlinear loads of the distribution system.

  11. Optimizing Virtual Network Functions Placement in Virtual Data Center Infrastructure Using Machine Learning

    Science.gov (United States)

    Bolodurina, I. P.; Parfenov, D. I.

    2018-01-01

    We have elaborated a neural network model of virtual network flow identification based on the statistical properties of flows circulating in the network of the data center and on characteristics that describe the content of packets transmitted through network objects. This enabled us to establish the optimal set of attributes for identifying virtual network functions. We have established an algorithm for optimizing the placement of virtual network functions using the data obtained in our research. Our approach uses a hybrid virtualization method combining virtual machines and containers, which makes it possible to reduce the infrastructure load and the response time in the network of the virtual data center. The algorithmic solution is based on neural networks, which allows it to scale to any number of network function copies.

  12. Wiring economy and volume exclusion determine neuronal placement in the Drosophila brain.

    Science.gov (United States)

    Rivera-Alba, Marta; Vitaladevuni, Shiv N; Mishchenko, Yuriy; Mischenko, Yuriy; Lu, Zhiyuan; Takemura, Shin-Ya; Scheffer, Lou; Meinertzhagen, Ian A; Chklovskii, Dmitri B; de Polavieja, Gonzalo G

    2011-12-06

    Wiring economy has successfully explained the individual placement of neurons in simple nervous systems like that of Caenorhabditis elegans [1-3] and the locations of coarser structures like cortical areas in complex vertebrate brains [4]. However, it remains unclear whether wiring economy can explain the placement of individual neurons in brains larger than that of C. elegans. Indeed, given the greater number of neuronal interconnections in larger brains, simply minimizing the length of connections results in unrealistic configurations, with multiple neurons occupying the same position in space. Avoiding such configurations, or volume exclusion, repels neurons from each other, thus counteracting wiring economy. Here we test whether wiring economy together with volume exclusion can explain the placement of neurons in a module of the Drosophila melanogaster brain known as lamina cartridge [5-13]. We used newly developed techniques for semiautomated reconstruction from serial electron microscopy (EM) [14] to obtain the shapes of neurons, the location of synapses, and the resultant synaptic connectivity. We show that wiring length minimization and volume exclusion together can explain the structure of the lamina microcircuit. Therefore, even in brains larger than that of C. elegans, at least for some circuits, optimization can play an important role in individual neuron placement. Copyright © 2011 Elsevier Ltd. All rights reserved.
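
    The connectomic reconstruction itself cannot be sketched here, but the placement principle the authors test, minimizing total wiring length subject to a repulsive volume-exclusion term, can be illustrated with a toy gradient-descent layout of point-like neurons. The connectivity matrix, weights and two-dimensional setting are illustrative assumptions.

```python
import numpy as np

def place_neurons(adjacency, n_iter=2000, lr=0.01, k_wire=1.0, k_excl=0.5, r_min=0.3):
    """Toy layout: minimize wiring length + volume-exclusion penalty by gradient descent."""
    rng = np.random.default_rng(0)
    n = adjacency.shape[0]
    pos = rng.uniform(-1, 1, size=(n, 2))           # 2D positions, illustrative only
    for _ in range(n_iter):
        diff = pos[:, None, :] - pos[None, :, :]    # pairwise displacement vectors
        dist = np.linalg.norm(diff, axis=-1) + 1e-9
        # Wiring economy: connected pairs attract in proportion to their separation.
        grad = k_wire * (adjacency[:, :, None] * diff / dist[:, :, None]).sum(axis=1)
        # Volume exclusion: any pair closer than r_min repels.
        overlap = np.maximum(r_min - dist, 0.0)
        grad -= k_excl * (overlap[:, :, None] * diff / dist[:, :, None]).sum(axis=1)
        pos -= lr * grad
    return pos

if __name__ == "__main__":
    a = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)       # toy circuit connectivity
    print(place_neurons(a).round(2))
```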

  13. Development of Decision-Making Automated System for Optimal Placement of Physical Access Control System’s Elements

    Science.gov (United States)

    Danilova, Olga; Semenova, Zinaida

    2018-04-01

    The objective of this study is a detailed analysis of the development of physical protection systems for information resources. Optimization theory and the mathematical apparatus of decision-making are used to correctly formulate the problem and to create a selection-procedure algorithm for the optimal configuration of the security system, considering the location of the secured object’s access points and zones. The result of this study is a software implementation scheme of a decision-making system for optimal placement of the physical access control system’s elements.

  14. Sensor Placement via Optimal Experiment Design in EMI Sensing of Metallic Objects

    Directory of Open Access Journals (Sweden)

    Lin-Ping Song

    2016-01-01

    Full Text Available This work, under the optimal experimental design framework, investigates the sensor placement problem that aims to guide electromagnetic induction (EMI) sensing of multiple objects. We use the linearized model covariance matrix as a measure of estimation error to present a sequential experimental design (SED) technique. The technique recursively minimizes data misfit to update model parameters and maximizes an information gain function for a future survey relative to previous surveys. The fundamental process of the SED seeks to increase weighted sensitivities to targets when placing sensors. The synthetic and field experiments demonstrate that SED can be used to guide the sensing process for an effective interrogation. It can also serve as a theoretical basis for improving empirical survey operations. We further study the sensitivity of the SED to the number of objects within the sensing range. The tests suggest that an appropriately overrepresented model of the expected anomalies might be a feasible choice.

  15. Optimal Placement and Sizing of Renewable Distributed Generations and Capacitor Banks into Radial Distribution Systems

    Directory of Open Access Journals (Sweden)

    Mahesh Kumar

    2017-06-01

    Full Text Available In recent years, renewable types of distributed generation in the distribution system have been much appreciated due to their enormous technical and environmental advantages. This paper proposes a methodology for optimal placement and sizing of renewable distributed generations (i.e., wind, solar and biomass) and capacitor banks in a radial distribution system. The intermittency of wind speed and solar irradiance is handled with multi-state modeling using suitable probability distribution functions. The three objective functions, i.e., power loss reduction, voltage stability improvement, and voltage deviation minimization, are optimized using an advanced Pareto-front non-dominated sorting multi-objective particle swarm optimization method. First, a set of non-dominated Pareto-front solutions is obtained from the algorithm. Later, a fuzzy decision technique is applied to extract the trade-off solution set. The effectiveness of the proposed methodology is tested on the standard IEEE 33 test system. The overall results reveal that the combination of renewable distributed generations and capacitor banks is dominant in power loss reduction, voltage stability and voltage profile improvement.

  16. Rise and Shock: Optimal Defibrillator Placement in a High-rise Building.

    Science.gov (United States)

    Chan, Timothy C Y

    2017-01-01

    Out-of-hospital cardiac arrests (OHCA) in high-rise buildings experience lower survival and longer delays until paramedic arrival. Use of publicly accessible automated external defibrillators (AED) can improve survival, but "vertical" placement has not been studied. We aim to determine whether elevator-based or lobby-based AED placement results in a shorter vertical distance travelled ("response distance") to OHCAs in a high-rise building. We developed a model of a single-elevator, n-floor high-rise building. We calculated and compared the average distance from the AED to the floor of arrest for the two AED locations. We modeled OHCA occurrences using floor-specific Poisson processes, with rate λ1 for the risk of OHCA on the ground floor and rate λ for the risk on any above-ground floor. The elevator was modeled with an override function enabling direct travel to the target floor. The elevator location upon override was modeled as a discrete uniform random variable. Calculations used the laws of probability. Elevator-based AED placement had a shorter average response distance if the number of floors (n) in the building exceeded three quarters of the ratio of ground-floor OHCA risk to above-ground floor risk (λ1/λ) plus one half, i.e., n ≥ 3λ1/(4λ) + 0.5. Otherwise, a lobby-based AED had a shorter average response distance. If the OHCA risk on each floor was equal, an elevator-based AED had a shorter average response distance. Elevator-based AEDs travel less vertical distance to OHCAs in tall buildings or those with uniform vertical risk, while lobby-based AEDs travel less vertical distance in buildings with substantial lobby, underground, and nearby street-level traffic and OHCA risk.
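
    As a worked example of the reported threshold (applying the abstract's inequality n ≥ 3λ1/(4λ) + 0.5 directly): if the ground-floor OHCA rate λ1 is eight times the per-floor above-ground rate λ, an elevator-based AED has the shorter average response distance once the building has at least 7 floors.

```python
def elevator_beats_lobby(n_floors, lambda_ground, lambda_floor):
    """Threshold rule quoted in the abstract: n >= 3*lambda1/(4*lambda) + 0.5."""
    return n_floors >= 3 * lambda_ground / (4 * lambda_floor) + 0.5

# Worked example: ground-floor risk 8x the per-floor above-ground risk -> threshold 6.5 floors.
for n in range(5, 9):
    print(n, elevator_beats_lobby(n, lambda_ground=8.0, lambda_floor=1.0))
```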

  17. Photovoltaic and Wind Turbine Integration Applying Cuckoo Search for Probabilistic Reliable Optimal Placement

    OpenAIRE

    R. A. Swief; T. S. Abdel-Salam; Noha H. El-Amary

    2018-01-01

    This paper presents an efficient Cuckoo Search Optimization technique to improve the reliability of electrical power systems. Reliability indices such as Energy Not Supplied, the System Average Interruption Frequency Index, and the System Average Interruption Duration Index are the main indices indicating reliability. The Cuckoo Search Optimization (CSO) technique is applied to optimally place the protection devices, install the distributed generators, and to determine the size of ...

  18. Integrated method to optimize well connection and platform placement on a multi-reservoir scenario

    Energy Technology Data Exchange (ETDEWEB)

    Sousa, Sergio Henrique Guerra de; Madeira, Marcelo Gomes; Franca, Martha Salles [Halliburton, Rio de Janeiro, RJ (Brazil); Mota, Rosane Oliveira; Silva, Edilon Ribeiro da; King, Vanessa Pereira Spear [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2012-07-01

    This paper describes a workflow created to optimize the platform placement and well-platform connections on a multi reservoir scenario using an integrated reservoir simulator paired with an optimization engine. The proposed methodology describes how a new platform, being incorporated into a pre-existing asset, can be better used to develop newly-discovered fields, while helping increase the production of existing fields by sharing their production load. The sharing of production facilities is highly important in Brazilian offshore assets because of their high price (a few billion dollars per facility) and the fact that total production is usually limited to the installed capacity of liquid processing, which is an important constraint on high water-cut well production rates typical to this region. The case study asset used to present the workflow consists of two deep water oil fields, each one developed by its own production platform, and a newly-discovered field with strong aquifer support that will be entirely developed with a new production platform. Because this new field should not include injector wells owing to the strong aquifer presence, the idea is to consider reconnecting existing wells from the two pre-existing fields to better use the production resources. In this scenario, the platform location is an important optimization issue, as a balance between supporting the production of the planned wells on the new field and the production of re-routed wells from the existing fields must be reached to achieve improved overall asset production. If the new platform is too far away from any interconnected production well, pressure-drop issues along the pipeline might actually decrease production from the existing fields rather than augment it. The main contribution of this work is giving the reader insights on how to model and optimize these complex decisions to generate high-quality scenarios. (author)

  19. Field-Based Optimal Placement of Antennas for Body-Worn Wireless Sensors

    Directory of Open Access Journals (Sweden)

    Łukasz Januszkiewicz

    2016-05-01

    Full Text Available We investigate a case of automated energy-budget-aware optimization of the physical position of nodes (sensors in a Wireless Body Area Network (WBAN. This problem has not been presented in the literature yet, as opposed to antenna and routing optimization, which are relatively well-addressed. In our research, which was inspired by a safety-critical application for firefighters, the sensor network consists of three nodes located on the human body. The nodes communicate over a radio link operating in the 2.4 GHz or 5.8 GHz ISM frequency band. Two sensors have a fixed location: one on the head (earlobe pulse oximetry and one on the arm (with accelerometers, temperature and humidity sensors, and a GPS receiver, while the position of the third sensor can be adjusted within a predefined region on the wearer’s chest. The path loss between each node pair strongly depends on the location of the nodes and is difficult to predict without performing a full-wave electromagnetic simulation. Our optimization scheme employs evolutionary computing. The novelty of our approach lies not only in the formulation of the problem but also in linking a fully automated optimization procedure with an electromagnetic simulator and a simplified human body model. This combination turns out to be a computationally effective solution, which, depending on the initial placement, has a potential to improve performance of our example sensor network setup by up to about 20 dB with respect to the path loss between selected nodes.

  20. Optimal placement of capacitors in a radial network using conic and mixed integer linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box: 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)

    2008-06-15

    This paper considers the problem of optimally placing fixed and switched type capacitors in a radial distribution network. The aim of this problem is to minimize the costs associated with capacitor banks, peak power, and energy losses whilst satisfying a pre-specified set of physical and technical constraints. The proposed solution is obtained using a two-phase approach. In phase-I, the problem is formulated as a conic program in which all nodes are candidates for placement of capacitor banks whose sizes are considered as continuous variables. A global solution of the phase-I problem is obtained using an interior-point based conic programming solver. Phase-II seeks a practical optimal solution by considering capacitor sizes as discrete variables. The problem in this phase is formulated as a mixed integer linear program based on minimizing the L1-norm of deviations from the phase-I state variable values. The solution to the phase-II problem is obtained using a mixed integer linear programming solver. The proposed method is validated via extensive comparisons with previously published results. (author)
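
    Without reproducing the conic power-flow model, the phase-II idea, snapping the continuous phase-I capacitor sizes to discrete bank sizes by minimizing the L1-norm of the deviation, can be sketched for individual nodes as below; the paper's actual phase II does this jointly for all nodes inside a mixed integer linear program that retains the network constraints. The bank size and phase-I values are illustrative.

```python
def snap_to_banks(q_continuous, bank_kvar=150, max_banks=10):
    """Phase-II idea in miniature: choose an integer number of standard capacitor
    banks whose total kvar is closest (in the L1 sense) to the phase-I continuous size."""
    best = min(range(max_banks + 1), key=lambda n: abs(n * bank_kvar - q_continuous))
    return best, best * bank_kvar

# Illustrative phase-I (conic relaxation) results for three candidate nodes, in kvar.
for node, q in {"bus7": 410.0, "bus12": 95.0, "bus29": 620.0}.items():
    n_banks, q_discrete = snap_to_banks(q)
    print(node, "->", n_banks, "banks =", q_discrete, "kvar")
```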

  1. Solve: a non linear least-squares code and its application to the optimal placement of torsatron vertical field coils

    International Nuclear Information System (INIS)

    Aspinall, J.

    1982-01-01

    A computational method was developed which alleviates the need for lengthy parametric scans as part of a design process. The method makes use of a least-squares algorithm to find the optimal value of a parameter vector, where optimality is defined in terms of a utility function prescribed by the user. The placement of the vertical field coils of a torsatron is such a nonlinear problem.

  2. Multiobjective optimal placement of switches and protective devices in electric power distribution systems using ant colony optimization

    Energy Technology Data Exchange (ETDEWEB)

    Tippachon, Wiwat; Rerkpreedapong, Dulpichet [Department of Electrical Engineering, Kasetsart University, 50 Phaholyothin Rd., Ladyao, Jatujak, Bangkok 10900 (Thailand)

    2009-07-15

    This paper presents a multiobjective optimization methodology to optimally place switches and protective devices in electric power distribution networks. Identifying their type and location is a combinatorial optimization problem described by a nonlinear and nondifferentiable function. The multiobjective ant colony optimization (MACO) has been applied to this problem to minimize the total cost while simultaneously minimizing two distribution network reliability indices: the system average interruption frequency index (SAIFI) and the system average interruption duration index (SAIDI). Actual distribution feeders are used in the tests, and the test results show that the algorithm can determine the set of optimal nondominated solutions. It allows the utility to obtain the optimal type and location of devices to achieve the best system reliability at the lowest cost. (author)

  3. Optimal needle placement for the accurate magnetic material quantification based on uncertainty analysis in the inverse approach

    International Nuclear Information System (INIS)

    Abdallh, A; Crevecoeur, G; Dupré, L

    2010-01-01

    The measured voltage signals picked up by the needle probe method can be interpreted by a numerical method so as to identify the magnetic material properties of the magnetic circuit of an electromagnetic device. However, when solving this electromagnetic inverse problem, uncertainties in the numerical method give rise to recovery errors, since the calculated needle signals in the forward problem are sensitive to these uncertainties. This paper proposes a stochastic Cramér–Rao bound method for determining the optimal sensor placement in the experimental setup. The numerical method is computationally time-efficient, requiring only the geometrical parameters to be provided. We apply the method to the non-destructive magnetic material characterization of an EI inductor, where we ascertain the optimal experiment design. This design corresponds to the highest possible resolution that can be obtained when solving the inverse problem. Moreover, the presented results are validated by comparison with the exact material characteristics. The results show that the proposed methodology is independent of the values of the material parameters, so that it can be applied before solving the inverse problem, i.e. as an a priori estimation stage.

  4. Optimal training for emergency needle thoracostomy placement by prehospital personnel: didactic teaching versus a cadaver-based training program.

    Science.gov (United States)

    Grabo, Daniel; Inaba, Kenji; Hammer, Peter; Karamanos, Efstathios; Skiada, Dimitra; Martin, Matthew; Sullivan, Maura; Demetriades, Demetrios

    2014-09-01

    Tension pneumothorax can rapidly progress to cardiac arrest and death if not promptly recognized and appropriately treated. We sought to evaluate the effectiveness of traditional didactic slide-based lectures (SBLs) as compared with fresh tissue cadaver-based training (CBT) for placement of needle thoracostomy (NT). Forty randomly selected US Navy corpsmen were recruited to participate from incoming classes of the Navy Trauma Training Center at the LAC + USC Medical Center and were then randomized to one of two NT teaching methods. The following outcomes were compared between the two study arms: (1) time required to perform the procedure, (2) correct placement of the needle, and (3) magnitude of deviation from the correct position. During the study period, a total of 40 corpsmen were enrolled, 20 randomized to the SBL arm and 20 to the CBT arm. When outcomes were analyzed, the time required for NT placement did not differ between the two arms. Examination of the location of needle placement revealed marked differences between the two study groups. Only a minority of the SBL group (35%) placed the NT correctly in the second intercostal space. In comparison, the majority of corpsmen assigned to the CBT group demonstrated accurate placement in the second intercostal space (75%). In a CBT module, US Navy corpsmen were better trained to place NT accurately than their traditional didactic SBL counterparts. Further studies are indicated to identify the optimal components of effective simulation training for NT and other emergent interventions.

  5. Optimizing VM allocation and data placement for data-intensive applications in cloud using ACO metaheuristic algorithm

    Directory of Open Access Journals (Sweden)

    T.P. Shabeera

    2017-04-01

    Full Text Available Nowadays data-intensive applications for processing big data are being hosted in the cloud. Since the cloud environment provides virtualized resources for computation, and data-intensive applications require communication between the computing nodes, the placement of Virtual Machines (VMs) and the location of data affect the overall computation time. The majority of the research work reported in the current literature considers the selection of physical nodes for placing data and VMs as independent problems. This paper proposes an approach which considers VM placement and data placement hand in hand. The primary objective is to reduce cross-network traffic and bandwidth usage by placing the required number of VMs and the data in Physical Machines (PMs) which are physically closer. The VM and data placement problem (referred to as the MinDistVMDataPlacement problem) is defined in this paper and has been proved to be NP-hard. This paper presents and evaluates a metaheuristic algorithm based on Ant Colony Optimization (ACO), which selects a set of adjacent PMs for placing data and VMs. Data is distributed among the physical storage devices of the selected PMs. According to the processing capacity of each PM, a set of VMs is placed on these PMs to process the data stored in them. We use simulation to evaluate our algorithm. The results show that the proposed algorithm selects PMs in close proximity, and the jobs executed in the VMs allocated by the proposed scheme outperform those of other allocation schemes.
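
    The MinDistVMDataPlacement algorithm itself is not reproduced in this record. The fragment below sketches the general ACO pattern it builds on: ants assemble a set of physical machines with probabilities shaped by pheromone and by closeness (inverse pairwise distance), and pheromone is reinforced on the best set found. The distance matrix and parameters are fabricated, and capacity constraints are omitted.

```python
import random

# Fabricated pairwise "network distance" between 6 physical machines (symmetric).
DIST = [[0, 1, 2, 4, 4, 5],
        [1, 0, 1, 3, 4, 5],
        [2, 1, 0, 2, 3, 4],
        [4, 3, 2, 0, 1, 2],
        [4, 4, 3, 1, 0, 1],
        [5, 5, 4, 2, 1, 0]]

def set_cost(pms):
    # Total pairwise distance of the chosen PMs: lower means less cross-network traffic.
    return sum(DIST[a][b] for i, a in enumerate(pms) for b in pms[i + 1:])

def aco_select(n_pms=3, n_ants=20, iters=50, rho=0.1, q=1.0):
    n = len(DIST)
    tau = [[1.0] * n for _ in range(n)]              # pheromone on PM pairs
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            chosen = [random.randrange(n)]
            while len(chosen) < n_pms:
                cand = [j for j in range(n) if j not in chosen]
                # Desirability: pheromone * heuristic (closeness to already chosen PMs).
                w = [sum(tau[i][j] / (1 + DIST[i][j]) for i in chosen) for j in cand]
                chosen.append(random.choices(cand, weights=w, k=1)[0])
            c = set_cost(chosen)
            if c < best_cost:
                best, best_cost = sorted(chosen), c
        # Evaporate, then reinforce pheromone along the best set found so far.
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for i, a in enumerate(best):
            for b in best[i + 1:]:
                tau[a][b] += q / (1 + best_cost)
                tau[b][a] += q / (1 + best_cost)
    return best, best_cost

if __name__ == "__main__":
    print("selected PMs and total pairwise distance:", aco_select())
```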

  6. Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction

    Science.gov (United States)

    Mons, Vincent; Wang, Qi; Zaki, Tamer

    2017-11-01

    Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and is challenging due to several aspects. Firstly, the numerical estimation of the scalar dispersion in a turbulent flow requires significant computational resources. Secondly, in actual practice, only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180 . This approach combines the components of variational data assimilation and ensemble Kalman filtering, and inherits the robustness from the former and the ease of implementation from the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the condition of the inverse problem, which enhances the performances of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).

  7. Optimal placement and decentralized robust vibration control for spacecraft smart solar panel structures

    International Nuclear Information System (INIS)

    Jiang, Jian-ping; Li, Dong-xu

    2010-01-01

    The decentralized robust vibration control with collocated piezoelectric actuator and strain sensor pairs is considered in this paper for spacecraft solar panel structures. Each actuator is driven individually by the output of the corresponding sensor so that only local feedback control is implemented, with each actuator, sensor and controller operating independently. Firstly, an optimal placement method for the location of the collocated piezoelectric actuator and strain gauge sensor pairs is developed based on the degree of observability and controllability indices for solar panel structures. Secondly, a decentralized robust H∞ controller is designed to suppress the vibration induced by external disturbance. Finally, a numerical comparison between centralized and decentralized control systems is performed in order to investigate their effectiveness in suppressing vibration of the smart solar panel. The simulation results show that the vibration can be significantly suppressed with permitted actuator voltages by the controllers. The decentralized control system achieves almost the same disturbance attenuation level as the centralized control system, with slightly higher control voltages. More importantly, the decentralized controller, composed of four third-order systems, is more practical to implement than a high-order centralized controller.

  8. Pattern placement errors: application of in-situ interferometer-determined Zernike coefficients in determining printed image deviations

    Science.gov (United States)

    Roberts, William R.; Gould, Christopher J.; Smith, Adlai H.; Rebitz, Ken

    2000-08-01

    Several ideas have recently been presented which attempt to measure and predict lens aberrations for new low-k1 imaging systems. Abbreviated sets of Zernike coefficients have been produced and used to predict Across Chip Linewidth Variation. Empirical wavefront aberration data can now be used in commercially available lithography simulators to predict pattern distortion and placement errors. The measurement and determination of Zernike coefficients has been a significant effort of many groups; however, the use of these data has generally been limited to matching lenses or picking best-fit lens pairs. We will use wavefront aberration data collected using the Litel InspecStep in-situ interferometer as input data for Prolith/3D to model and predict pattern placement errors and intrafield overlay variation. Experimental data will be collected and compared to the simulated predictions.

  9. Enabling High-performance Interactive Geoscience Data Analysis Through Data Placement and Movement Optimization

    Science.gov (United States)

    Zhu, F.; Yu, H.; Rilee, M. L.; Kuo, K. S.; Yu, L.; Pan, Y.; Jiang, H.

    2017-12-01

    Since the establishment of data archive centers and the standardization of file formats, scientists have been required to search metadata catalogs for the data they need and download the data files to their local machines to carry out data analysis. This approach has facilitated data discovery and access for decades, but it inevitably leads to data transfer from data archive centers to scientists' computers through low-bandwidth Internet connections. Data transfer becomes a major performance bottleneck in such an approach. Combined with generally constrained local compute/storage resources, this limits the extent of scientists' studies and deprives them of timely outcomes. Thus, this conventional approach is not scalable with respect to both the volume and variety of geoscience data. A much more viable solution is to couple analysis and storage systems to minimize data transfer. In our study, we compare loosely coupled approaches (exemplified by Spark and Hadoop) and tightly coupled approaches (exemplified by parallel distributed database management systems, e.g., SciDB). In particular, we investigate the optimization of data placement and movement to effectively tackle the variety challenge, and broaden the adoption of parallelization to address the volume challenge. Our goal is to enable high-performance interactive analysis for a large portion of geoscience data analysis exercises. We show that tightly coupled approaches can concentrate data traffic between local storage systems and compute units, thereby optimizing bandwidth utilization to achieve better throughput. Based on our observations, we develop a geoscience data analysis system that tightly couples analysis engines with storage, which has direct access to a detailed map of the data partition locations. Through an innovative data partitioning and distribution scheme, our system has demonstrated scalable and interactive performance in real-world geoscience data analysis applications.

  10. Optimal Capacitor Bank Capacity and Placement in Distribution Systems with High Distributed Solar Power Penetration

    Energy Technology Data Exchange (ETDEWEB)

    Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mather, Barry A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cho, Gyu-Jung [Sungkyunkwan University, Korea; Oh, Yun-Sik [Sungkyunkwan University, Korea; Kim, Min-Sung [Sungkyunkwan University, Korea; Kim, Ji-Soo [Sungkyunkwan University, Korea; Kim, Chul-Hwan [Sungkyunkwan University, Korea

    2018-02-01

    Capacitor banks have generally been installed and utilized to support distribution voltage during periods of higher load or on longer, higher-impedance feeders. Installations of distributed energy resources in distribution systems are rapidly increasing, and many of these generation resources have variable and uncertain power output. These generators can significantly change the voltage profile across a feeder; therefore, when a new capacitor bank is needed, analysis of the optimal capacity and location of the capacitor bank is required. In this paper, we model a particular distribution system including its essential equipment. An optimization method is adopted to determine the best capacity and location sets of the newly installed capacitor banks in the presence of distributed solar power generation. Finally, we analyze the optimal capacitor bank configuration through the optimization and simulation results.

  11. Optimizing placements of ground-based snow sensors for areal snow cover estimation using a machine-learning algorithm and melt-season snow-LiDAR data

    Science.gov (United States)

    Oroza, C.; Zheng, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.

    2016-12-01

    We present a structured, analytical approach to optimize ground-sensor placements based on time-series remotely sensed (LiDAR) data and machine-learning algorithms. We focused on catchments within the Merced and Tuolumne river basins, covered by the JPL Airborne Snow Observatory LiDAR program. First, we used a Gaussian mixture model to identify representative sensor locations in the space of independent variables for each catchment. Multiple independent variables that govern the distribution of snow depth were used, including elevation, slope, and aspect. Second, we used a Gaussian process to estimate the areal distribution of snow depth from the initial set of measurements. This is a covariance-based model that also estimates the areal distribution of model uncertainty based on the independent variable weights and autocorrelation. The uncertainty raster was used to strategically add sensors to minimize model uncertainty. We assessed the temporal accuracy of the method using LiDAR-derived snow-depth rasters collected in water-year 2014. In each area, optimal sensor placements were determined using the first available snow raster for the year. The accuracy in the remaining LiDAR surveys was compared to 100 configurations of sensors selected at random. We found the accuracy of the model from the proposed placements to be higher and more consistent in each remaining survey than the average random configuration. We found that a relatively small number of sensors can be used to accurately reproduce the spatial patterns of snow depth across the basins, when placed using spatial snow data. Our approach also simplifies sensor placement. At present, field surveys are required to identify representative locations for such networks, a process that is labor intensive and provides limited guarantees on the networks' representation of catchment independent variables.
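
    A condensed sketch of the two-stage idea, a Gaussian mixture in the space of physiographic variables proposing initial sensor sites and a Gaussian process fitted at those sites flagging where predictive uncertainty is highest for the next sensor, is given below using scikit-learn as an assumed toolkit. The synthetic terrain grid, kernel and length scales are placeholders, not the Merced/Tuolumne data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Placeholder terrain: columns are (elevation, slope, aspect) for 500 grid cells.
X = np.column_stack([rng.uniform(1500, 3500, 500),
                     rng.uniform(0, 40, 500),
                     rng.uniform(0, 360, 500)])
snow_depth = 0.002 * X[:, 0] - 0.02 * X[:, 1] + rng.normal(0, 0.2, 500)  # synthetic truth

# Stage 1: GMM in feature space; the cell nearest each component mean hosts a sensor.
gmm = GaussianMixture(n_components=8, random_state=0).fit(X)
sensor_idx = [int(np.argmin(np.linalg.norm(X - m, axis=1))) for m in gmm.means_]

# Stage 2: fit a GP to the sensed cells, then add a sensor where predictive std is largest.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[500.0, 10.0, 90.0]), normalize_y=True)
gp.fit(X[sensor_idx], snow_depth[sensor_idx])
_, std = gp.predict(X, return_std=True)
next_sensor = int(np.argmax(std))

print("initial sensor cells:", sorted(set(sensor_idx)))
print("next cell to instrument (max GP uncertainty):", next_sensor)
```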

  12. Method for Vibration Response Simulation and Sensor Placement Optimization of a Machine Tool Spindle System with a Bearing Defect

    Science.gov (United States)

    Cao, Hongrui; Niu, Linkai; He, Zhengjia

    2012-01-01

    Bearing defects are one of the most important mechanical sources of vibration and noise generation in machine tool spindles. In this study, an integrated finite element (FE) model is proposed to predict the vibration responses of a spindle bearing system with localized bearing defects, and the sensor placement for better detection of bearing faults is then optimized. A nonlinear bearing model is developed based on Jones' bearing theory, while the drawbar, shaft and housing are modeled as Timoshenko beams. The bearing model is then integrated into the FE model of the drawbar/shaft/housing by assembling the equations of motion. The Newmark time integration method is used to solve the vibration responses numerically. The FE model of the spindle-bearing system was verified by conducting dynamic tests. Then, the localized bearing defects were modeled and the vibration responses generated by an outer ring defect were simulated as an illustration. The optimization scheme for the sensor placement was carried out on the test spindle. The results proved that the optimal sensor placement depends on the vibration modes under different boundary conditions and the transfer path between the excitation and the response. PMID:23012514

  13. Optimization of the tape placement process parameters for carbon–PPS composites

    NARCIS (Netherlands)

    Grouve, Wouter Johannes Bernardus; Warnet, Laurent; Rietman, B.; Visser, Roy; Akkerman, Remko

    2013-01-01

    The interrelation between process parameters, material properties and interlaminar bond strength is investigated for the laser assisted tape placement process. Unidirectionally carbon reinforced poly(phenylene sulfide) (PPS) tapes were welded onto carbon woven fabric reinforced PPS laminates. The

  14. Optimal Placement of Actors in WSANs Based on Imposed Delay Constraints

    Directory of Open Access Journals (Sweden)

    Chunxi Yang

    2014-01-01

    Wireless Sensor and Actor Networks (WSANs) refer to a group of sensors and actors linked by a wireless medium to probe the environment and perform specific actions. Such actions should always be taken before a deadline when an event of interest is detected. In order to provide such services, the whole monitored area is divided into several virtual areas, and nodes in the same area form a cluster. Clustering of WSANs is often pursued so that each actor acts as a cluster-head. The number of actors is related to the size and deployment of the WSAN clusters. In this paper, we present a method to determine the exact number of actors that enables them to receive data and take actions within an imposed time delay. The k-MinTE and k-MaxTE clustering algorithms are proposed to form clusters of minimum and maximum size, respectively. In these clustering algorithms, actors are deployed in such a way that sensors can route data to actors within k hops. Clusters are then arranged in a regular hexagonal pattern. Finally, we evaluate the placement of actors; results show that our approach is effective.
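
    As a rough illustration of the k-hop requirement described above, the sketch below builds a communication graph over randomly placed sensors and greedily selects actor (cluster-head) sites so that every sensor reaches an actor within k hops. The node count, radio range, and greedy rule are assumptions for demonstration; the paper's k-MinTE/k-MaxTE algorithms and hexagonal cluster arrangement are not reproduced.

    ```python
    # Sketch: greedy actor placement so every sensor is within k hops of some actor.
    # Parameters (node count, radio range, k) are illustrative assumptions.
    import numpy as np
    from collections import deque

    rng = np.random.default_rng(1)
    n, radio_range, k = 200, 12.0, 2
    pos = rng.uniform(0, 100, size=(n, 2))

    # Build the adjacency list of the communication graph.
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    adj = [np.flatnonzero((dist[i] <= radio_range) & (dist[i] > 0)) for i in range(n)]

    def within_k_hops(src, k):
        """Return the set of nodes reachable from src in at most k hops (BFS)."""
        seen, frontier = {src}, deque([(src, 0)])
        while frontier:
            u, d = frontier.popleft()
            if d == k:
                continue
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    frontier.append((v, d + 1))
        return seen

    uncovered = set(range(n))
    actors = []
    while uncovered:
        # Greedy rule: pick the node whose k-hop ball covers the most uncovered sensors.
        best = max(range(n), key=lambda u: len(within_k_hops(u, k) & uncovered))
        actors.append(best)
        uncovered -= within_k_hops(best, k)

    print(f"{len(actors)} actors cover all {n} sensors within {k} hops")
    ```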

  15. Optimal Design for Placements of Tsunami Observing Systems to Accurately Characterize the Inducing Earthquake

    Science.gov (United States)

    Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji

    2017-12-01

    Recently, numerous tsunami observation networks have been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on the extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with considerably fewer observations than the existing tsunami observation networks.
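
    The first stage above picks points at the extrema of empirical orthogonal function (EOF) spatial modes. A compact way to obtain EOFs from an ensemble of simulated fields is the SVD of the scenario-by-location anomaly matrix; the sketch below uses synthetic data and takes the largest-magnitude loadings of the leading modes as candidate points. The ensemble, grid size, and mode/extrema counts are assumptions.

    ```python
    # Sketch: EOF (PCA) spatial modes of an ensemble of simulated fields,
    # and candidate observation points at the extrema of the leading modes.
    # Synthetic ensemble; not the Nankai Trough scenarios.
    import numpy as np

    rng = np.random.default_rng(2)
    n_scenarios, n_locations = 11, 500
    # Rows: hypothetical tsunami scenarios; columns: offshore grid points.
    data = rng.normal(size=(n_scenarios, n_locations)) @ rng.normal(size=(n_locations, n_locations)) * 1e-2

    # EOFs are the right singular vectors of the anomaly matrix.
    anomalies = data - data.mean(axis=0)
    _, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    explained = s**2 / np.sum(s**2)

    n_modes, n_extrema = 2, 4
    candidates = []
    for mode in vt[:n_modes]:
        # Take the largest-magnitude loadings (positive or negative extrema) of each mode.
        candidates.extend(np.argsort(np.abs(mode))[-n_extrema:].tolist())
    candidates = sorted(set(candidates))
    print("variance explained by the first 2 modes:", round(float(explained[:2].sum()), 3))
    print("candidate observation points:", candidates)
    ```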

  16. Implementation of strength pareto evolutionary algorithm II in the multiobjective burnable poison placement optimization of KWU pressurized water reactor

    International Nuclear Information System (INIS)

    Gharari, Rahman; Poursalehi, Navid; Abbasi, Mohmmadreza; Aghale, Mahdi

    2016-01-01

    In this research, for the first time, a new optimization method, the strength Pareto evolutionary algorithm II (SPEA-II), is developed for the burnable poison placement (BPP) optimization of a nuclear reactor core. In the BPP problem, an optimized placement map of fuel assemblies with burnable poison is sought for a given core loading pattern according to defined objectives. In this work, SPEA-II coupled with a nodal expansion code is used to solve the BPP problem of a Kraftwerk Union AG (KWU) pressurized water reactor. Our optimization goal for the BPP is to achieve a greater multiplication factor (K-eff) for possibly longer operation cycles, along with more flattening of the fuel assembly relative power distribution, subject to a safety constraint on the radial power peaking factor. To appraise the proposed methodology, the basic approach, SPEA, is also implemented in order to compare the obtained results. In general, the results reveal the acceptable performance and strength of SPEA, particularly its newer version SPEA-II, in achieving a semi-optimized loading pattern for the BPP optimization of the KWU pressurized water reactor.
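
    For reference, SPEA-II assigns each candidate a strength (how many solutions it dominates), a raw fitness equal to the summed strengths of its dominators, and a density term based on the k-th nearest neighbour in objective space. A generic sketch of that fitness assignment for a two-objective minimization is given below; it is illustrative only and is not coupled to any nodal expansion code or loading-pattern encoding.

    ```python
    # Sketch: SPEA-II fitness assignment (strength + raw fitness + density)
    # for a population evaluated on two minimization objectives.
    import numpy as np

    def dominates(a, b):
        """a dominates b if it is no worse in all objectives and better in at least one."""
        return np.all(a <= b) and np.any(a < b)

    def spea2_fitness(objs):
        n = len(objs)
        # Strength S(i): number of individuals that i dominates.
        S = np.array([sum(dominates(objs[i], objs[j]) for j in range(n)) for i in range(n)])
        # Raw fitness R(i): sum of strengths of all individuals dominating i.
        R = np.array([sum(S[j] for j in range(n) if dominates(objs[j], objs[i])) for i in range(n)])
        # Density D(i) from the distance to the k-th nearest neighbour in objective space.
        k = int(np.sqrt(n))
        dist = np.linalg.norm(objs[:, None, :] - objs[None, :, :], axis=2)
        sigma_k = np.sort(dist, axis=1)[:, k]
        D = 1.0 / (sigma_k + 2.0)
        return R + D   # lower is better; R == 0 marks non-dominated individuals

    # Example: random points on two objectives (e.g. -K_eff and power peaking factor).
    objs = np.random.default_rng(3).random((20, 2))
    fit = spea2_fitness(objs)
    print("non-dominated individuals:", np.flatnonzero(fit < 1.0))
    ```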

  17. Implementation of strength pareto evolutionary algorithm II in the multiobjective burnable poison placement optimization of KWU pressurized water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Gharari, Rahman [Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of); Poursalehi, Navid; Abbasi, Mohmmadreza; Aghale, Mahdi [Nuclear Engineering Dept, Shahid Beheshti University, Tehran (Iran, Islamic Republic of)

    2016-10-15

    In this research, for the first time, a new optimization method, the strength Pareto evolutionary algorithm II (SPEA-II), is developed for the burnable poison placement (BPP) optimization of a nuclear reactor core. In the BPP problem, an optimized placement map of fuel assemblies with burnable poison is sought for a given core loading pattern according to defined objectives. In this work, SPEA-II coupled with a nodal expansion code is used to solve the BPP problem of a Kraftwerk Union AG (KWU) pressurized water reactor. Our optimization goal for the BPP is to achieve a greater multiplication factor (K-eff) for possibly longer operation cycles, along with more flattening of the fuel assembly relative power distribution, subject to a safety constraint on the radial power peaking factor. To appraise the proposed methodology, the basic approach, SPEA, is also implemented in order to compare the obtained results. In general, the results reveal the acceptable performance and strength of SPEA, particularly its newer version SPEA-II, in achieving a semi-optimized loading pattern for the BPP optimization of the KWU pressurized water reactor.

  18. Comparing the Selection and Placement of Best Management Practices in Improving Water Quality Using a Multiobjective Optimization and Targeting Method

    Directory of Open Access Journals (Sweden)

    Li-Chi Chiang

    2014-03-01

    Suites of Best Management Practices (BMPs) are usually selected to be economically and environmentally efficient in reducing nonpoint source (NPS) pollutants from agricultural areas in a watershed. The objective of this research was to compare the selection and placement of BMPs in a pasture-dominated watershed using multiobjective optimization and targeting methods. Two objective functions were used in the optimization process, minimizing pollutant losses and the BMP placement area. The optimization tool was an integration of a multi-objective genetic algorithm (GA) and a watershed model (the Soil and Water Assessment Tool, SWAT). For the targeting method, an optimum BMP option was implemented in critical areas of the watershed that contribute the greatest pollutant losses. A total of 171 BMP combinations, consisting of grazing management, vegetated filter strips (VFS), and poultry litter applications, were considered. The results showed that the optimization is less effective when VFS are not considered, and that it requires much longer computation times than the targeting method to search for optimum BMPs. Although the targeting method is effective in selecting and placing an optimum BMP, larger areas are needed for BMP implementation to achieve the same pollutant reductions as the optimization method.

  19. Multi-Objective Distribution Network Operation Based on Distributed Generation Optimal Placement Using New Antlion Optimizer Considering Reliability

    Directory of Open Access Journals (Sweden)

    KHANBABAZADEH Javad

    2016-10-01

    Distribution network designers and operators try to deliver electrical energy with high reliability and quality to their subscribers. Because losses in distribution systems are high, using distributed generation (DG) can improve reliability, reduce losses and improve the voltage profile of the distribution network. Choosing the location of these resources, and determining the amount of power they generate so as to maximize their benefits, is therefore an important issue that is currently discussed from different points of view. In this paper, a new multi-objective optimal placement and sizing of distributed generation resources is performed to maximize its benefits on the 33-bus distribution test network, considering reliability and using a new Antlion Optimizer (ALO). The DG benefits considered are system loss reduction, system reliability improvement, revenue from the sale of electricity, and voltage profile improvement. For each of these benefits, the ALO algorithm is used to optimize the location and sizing of the distributed generation resources. In order to verify the proposed approach, the obtained results have been analyzed and compared with those of the particle swarm optimization (PSO) algorithm. The results show that the ALO outperforms PSO in solving this optimization problem.

  20. Artificial Intelligence based technique for BTS placement

    Science.gov (United States)

    Alenoghena, C. O.; Emagbetere, J. O.; Aibinu, A. M.

    2013-12-01

    The increase in base transceiver stations (BTS) in most urban areas can be traced to the drive by network providers to meet demand for coverage and capacity. In traditional network planning, the final decision on BTS placement is taken by a team of radio planners; this decision is not foolproof against regulatory requirements. In this paper, an intelligence-based algorithm for optimal BTS site placement is proposed. The proposed technique objectively takes neighbourhood and regulatory considerations into account while determining cell sites. Its application leads to a quantitatively unbiased decision-making process in BTS placement. Experimental data for a 2 km by 3 km territory were simulated to test the new algorithm; the results show 100% performance of the neighbour-constrained algorithm in BTS placement optimization. Results on the application of a GA with a neighbourhood constraint indicate that the choice of location can be unbiased and that optimization of facility placement for network design can be carried out.

  1. Artificial Intelligence based technique for BTS placement

    International Nuclear Information System (INIS)

    Alenoghena, C O; Emagbetere, J O; Aibinu, A M (Department of Telecommunications Engineering, Federal University of Technology, Minna (Nigeria))

    2013-01-01

    The increase in base transceiver stations (BTS) in most urban areas can be traced to the drive by network providers to meet demand for coverage and capacity. In traditional network planning, the final decision on BTS placement is taken by a team of radio planners; this decision is not foolproof against regulatory requirements. In this paper, an intelligence-based algorithm for optimal BTS site placement is proposed. The proposed technique objectively takes neighbourhood and regulatory considerations into account while determining cell sites. Its application leads to a quantitatively unbiased decision-making process in BTS placement. Experimental data for a 2 km by 3 km territory were simulated to test the new algorithm; the results show 100% performance of the neighbour-constrained algorithm in BTS placement optimization. Results on the application of a GA with a neighbourhood constraint indicate that the choice of location can be unbiased and that optimization of facility placement for network design can be carried out.

  2. Optimal placement of trailing-edge flaps for helicopter vibration reduction using response surface methods

    Science.gov (United States)

    Viswamurthy, S. R.; Ganguli, Ranjan

    2007-03-01

    This study aims to determine optimal locations of dual trailing-edge flaps to achieve minimum hub vibration levels in a helicopter, while incurring low penalty in terms of required trailing-edge flap control power. An aeroelastic analysis based on finite elements in space and time is used in conjunction with an optimal control algorithm to determine the flap time history for vibration minimization. The reduced hub vibration levels and required flap control power (due to flap motion) are the two objectives considered in this study and the flap locations along the blade are the design variables. It is found that second order polynomial response surfaces based on the central composite design of the theory of design of experiments describe both objectives adequately. Numerical studies for a four-bladed hingeless rotor show that both objectives are more sensitive to outboard flap location compared to the inboard flap location by an order of magnitude. Optimization results show a disjoint Pareto surface between the two objectives. Two interesting design points are obtained. The first design gives 77 percent vibration reduction from baseline conditions (no flap motion) with a 7 percent increase in flap power compared to the initial design. The second design yields 70 percent reduction in hub vibration with a 27 percent reduction in flap power from the initial design.

  3. Photovoltaic and Wind Turbine Integration Applying Cuckoo Search for Probabilistic Reliable Optimal Placement

    Directory of Open Access Journals (Sweden)

    R. A. Swief

    2018-01-01

    This paper presents an efficient Cuckoo Search Optimization technique to improve the reliability of electrical power systems. Reliability objective indices such as Energy Not Supplied, the System Average Interruption Frequency Index, and the System Average Interruption Duration Index are the main indices indicating reliability. The Cuckoo Search Optimization (CSO) technique is applied to optimally place protection devices, install distributed generators, and determine the size of distributed generators in radial feeders for reliability improvement. Distributed generators affect reliability, system power losses and the voltage profile. The volatile behaviour of both photovoltaic cells and wind turbine farms affects the selection and allocation of protection devices and distributed generators. To improve reliability, reconfiguration takes place before installing both the protection devices and the distributed generators. Assessment of consumer power system reliability is a vital part of distribution system operation and development. The distribution system reliability calculation relies on probabilistic reliability indices, which can predict the interruption profile of a distribution system based on the volatile behaviour of the added generators and the load. The validity of the proposed algorithm has been tested on the standard IEEE 69-bus system.
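
    For readers unfamiliar with the optimizer itself, a bare-bones cuckoo search loop (Levy-flight steps biased toward the best nest, plus random nest abandonment) is sketched below on a toy continuous objective. The reliability indices, device-placement encoding, and IEEE 69-bus model from the paper are not modelled here.

    ```python
    # Sketch: minimal cuckoo search with Levy flights on a toy continuous objective.
    # Population size, step scaling and the sphere objective are illustrative choices.
    import numpy as np
    from math import gamma, pi, sin

    def levy_step(shape, rng, beta=1.5):
        """Mantegna's algorithm for Levy-distributed step lengths."""
        sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                 (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0, sigma, shape)
        v = rng.normal(0, 1, shape)
        return u / np.abs(v) ** (1 / beta)

    def cuckoo_search(f, dim, bounds, n_nests=15, pa=0.25, iters=300, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        nests = rng.uniform(lo, hi, (n_nests, dim))
        fitness = np.array([f(x) for x in nests])
        for _ in range(iters):
            best = nests[np.argmin(fitness)]
            # Levy-flight move biased toward the best nest; greedy replacement.
            new = np.clip(nests + 0.01 * levy_step((n_nests, dim), rng) * (nests - best), lo, hi)
            new_fit = np.array([f(x) for x in new])
            improved = new_fit < fitness
            nests[improved], fitness[improved] = new[improved], new_fit[improved]
            # Each nest is abandoned with probability pa and rebuilt at a random position.
            abandon = rng.random(n_nests) < pa
            if abandon.any():
                nests[abandon] = rng.uniform(lo, hi, (int(abandon.sum()), dim))
                fitness[abandon] = np.array([f(x) for x in nests[abandon]])
        i = int(np.argmin(fitness))
        return nests[i], float(fitness[i])

    # Toy usage: minimize a sphere function standing in for the reliability cost.
    best_x, best_f = cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=5, bounds=(-10.0, 10.0))
    print(best_x.round(3), round(best_f, 6))
    ```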

  4. Optimizing Placement of Weather Stations: Exploring Objective Functions of Meaningful Combinations of Multiple Weather Variables

    Science.gov (United States)

    Snyder, A.; Dietterich, T.; Selker, J. S.

    2017-12-01

    Many regions of the world lack ground-based weather data due to inadequate or unreliable weather station networks. For example, most countries in Sub-Saharan Africa have unreliable, sparse networks of weather stations. The absence of these data can have consequences on weather forecasting, prediction of severe weather events, agricultural planning, and climate change monitoring. The Trans-African Hydro-Meteorological Observatory (TAHMO.org) project seeks to address these problems by deploying and operating a large network of weather stations throughout Sub-Saharan Africa. To design the TAHMO network, we must determine where to place weather stations within each country. We should consider how we can create accurate spatio-temporal maps of weather data and how to balance the desired accuracy of each weather variable of interest (precipitation, temperature, relative humidity, etc.). We can express this problem as a joint optimization of multiple weather variables, given a fixed number of weather stations. We use reanalysis data as the best representation of the "true" weather patterns that occur in the region of interest. For each possible combination of sites, we interpolate the reanalysis data between selected locations and calculate the mean average error between the reanalysis ("true") data and the interpolated data. In order to formulate our multi-variate optimization problem, we explore different methods of weighting each weather variable in our objective function. These methods include systematic variation of weights to determine which weather variables have the strongest influence on the network design, as well as combinations targeted for specific purposes. For example, we can use computed evapotranspiration as a metric that combines many weather variables in a way that is meaningful for agricultural and hydrological applications. We compare the errors of the weather station networks produced by each optimization problem formulation. We also compare these
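
    One simple way to operationalize the objective described above is greedy forward selection: given a reanalysis "truth" field and a candidate grid, repeatedly add the station whose inclusion most reduces a weighted interpolation error. The synthetic two-variable fields, nearest-neighbour interpolation, and per-variable weights in the sketch below are assumptions used only to illustrate the formulation.

    ```python
    # Sketch: greedy placement of weather stations to minimize a weighted
    # interpolation error against a "true" (reanalysis-like) field.
    # Synthetic fields and nearest-neighbour interpolation are assumptions.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(4)
    grid = np.array([(x, y) for x in range(20) for y in range(20)], dtype=float)
    # Hypothetical "true" fields for two variables, e.g. precipitation and temperature.
    truth = {
        "precip": np.sin(grid[:, 0] / 4.0) + 0.1 * rng.normal(size=len(grid)),
        "temp": 0.05 * grid[:, 1] + 0.1 * rng.normal(size=len(grid)),
    }
    weights = {"precip": 0.7, "temp": 0.3}   # assumed relative importance

    def weighted_mae(stations):
        """Interpolate each variable from the stations and return the weighted MAE."""
        tree = cKDTree(grid[stations])
        _, nearest = tree.query(grid)        # nearest-station interpolation
        return sum(w * np.mean(np.abs(truth[v] - truth[v][np.array(stations)][nearest]))
                   for v, w in weights.items())

    n_stations, selected = 12, []
    candidates = list(range(len(grid)))
    while len(selected) < n_stations:
        best = min(candidates, key=lambda c: weighted_mae(selected + [c]))
        selected.append(best)
        candidates.remove(best)
        print(f"{len(selected):2d} stations -> weighted MAE {weighted_mae(selected):.3f}")
    ```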

  5. Altered Passive Eruption Complicating Optimal Orthodontic Bracket Placement: A Case Report and Review of Literature.

    Science.gov (United States)

    Pulgaonkar, Rohan; Chitra, Prasad

    2015-11-01

    An unusual case of altered passive eruption with gingival hyperpigmentation and a Class I malocclusion in a 12-year-old girl having no previous history of medication is presented. The patient reported with spacing in the upper arch, moderate crowding in the lower arch, anterior crossbite and excessive gingival tissue on the labial surfaces of teeth in both the arches. The inadequate crown lengths made placement of the orthodontic brackets difficult. Preadjusted orthodontic brackets have a very precise placement protocol which can affect tooth movement in all 3 planes of space if violated. The periodontal condition was diagnosed as altered passive eruption Type IA. Interdisciplinary treatment protocols including periodontal surgical and orthodontic procedures were used. The periodontal surgical procedures were carried out prior to orthodontic therapy and the results obtained were satisfactory. It is suggested that orthodontists should be aware of conditions like altered passive eruption and modalities of management. In most instances, orthodontic therapy is not hindered.

  6. Optimal base station placement for wireless sensor networks with successive interference cancellation.

    Science.gov (United States)

    Shi, Lei; Zhang, Jianjun; Shi, Yi; Ding, Xu; Wei, Zhenchun

    2015-01-14

    We consider the base station placement problem for wireless sensor networks with successive interference cancellation (SIC) to improve throughput. We build a mathematical model for SIC. Although this model cannot be solved directly, it enables us to identify a necessary condition for SIC on distances from sensor nodes to the base station. Based on this relationship, we propose to divide the feasible region of the base station into small pieces and choose a point within each piece for base station placement. The point with the largest throughput is identified as the solution. The complexity of this algorithm is polynomial. Simulation results show that this algorithm can achieve about 25% improvement compared with the case that the base station is placed at the center of the network coverage area when using SIC.

  7. An Analysis of the Optimal Placement of Beacon in Bluetooth-INS Indoor Localization

    OpenAIRE

    Zhao, Xinyu; Ruan, Ling; Zhang, Ling; Long, Yi; Cheng, Fei

    2018-01-01

    The placement of Bluetooth beacons has an immediate impact on the accuracy and stability of indoor positioning. Affected by shadowing from buildings and people, Bluetooth shows uncertain spatial transmission characteristics. Therefore, the scientific deployment of the beacon nodes is closely related to the indoor space environment. In studies of positioning technology using Bluetooth, some scholars have discussed the deployment of Bluetooth beacons in different scenarios. In the principle of avoid...

  8. An Optimal Design for Placements of Tsunami Observing Systems Around the Nankai Trough, Japan

    Science.gov (United States)

    Mulia, I. E.; Gusman, A. R.; Satake, K.

    2017-12-01

    Presently, there are numerous tsunami observing systems deployed in several major tsunamigenic regions throughout the world. However, documentation on how and where to optimally place such measurement devices is limited. This study presents a methodological approach to select the best and fewest observation points for the purpose of tsunami source characterization, particularly in the form of fault slip distributions. We apply the method to design a new tsunami observation network around the Nankai Trough, Japan. In brief, our method can be divided into two stages: initialization and optimization. The initialization stage aims to identify favorable locations of observation points, as well as to determine the initial number of observations. These points are generated based on the extrema of empirical orthogonal function (EOF) spatial modes derived from 11 hypothetical tsunami events in the region. In order to further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search (MADS) to remove redundant measurements from the points generated in the first stage. A combinatorial search by the MADS improves the accuracy and reduces the number of observations simultaneously. The EOF analysis of the hypothetical tsunamis, using the first 2 leading modes with 4 extrema in each mode, results in 30 observation points spread along the trench. This is obtained after replacing clustered points within a radius of 30 km with a single representative. Furthermore, the MADS optimization can improve the accuracy of the EOF-generated points by approximately 10-20% with fewer observations (23 points). Finally, we compare our result with the existing observation points (68 stations) in the region. The results show that the optimized design, with fewer observations, produces better source characterizations, with approximately 20-60% accuracy improvement in all 11 hypothetical cases. It should be noted, however, that our

  9. Theoretical modeling for optimizing horizontal production well placement in thermal recovery environments to maximize recovery

    Energy Technology Data Exchange (ETDEWEB)

    Bourgeois, D.J. [Schlumberger Canada Ltd., Calgary, AB (Canada)

    2008-07-01

    Heavy oil has a high viscosity and a low API gravity rating. Since it is difficult to get a fluid of this nature to flow, enhanced oil recovery techniques are required to extract the oil from the reservoir. Thermal recovery strategies such as steam assisted gravity drainage (SAGD) and cyclic steam injection stimulation (CSS) can be used. These techniques involve injecting steam into a formation which heats up the fluid in place decreasing its viscosity and allowing it to flow into the producing well bore. In order to maximize hydrocarbon recovery from this type of geological environment, the placement of the horizontal production well bore relative to the base of the reservoir is important. In conventional oil and gas plays, well placement methods involving directional deep resistivity logging while drilling (DDR-LWD) measurements to map formation contacts while drilling have enabled wells to be placed relative to formation boundaries. This paper discussed a study that presented some theoretical resistivity inversion and forward modeling results generated from a three-dimensional geocellular model to confirm that this evolving DDR-LWD technology may be applicable to western Canada's Athabasca heavy oil drilling environments. The paper discussed the effect of well bore position, thermal recovery, and pro-active well placement. Resistivity modeling work flow was also presented. It was concluded that being able to drill a horizontal production well relative to the base of the formation could help minimize abandoned oil ultimately leading to better recovery. 4 refs., 8 figs.

  10. Intercorrelation of the WISC-R and the Renzulli-Hartman Scale for Determination of Gifted Placement.

    Science.gov (United States)

    Lowrance, Dan; Anderson, Howard N.

    In order to compare the Wechsler Intelligence Scale for Children--Revised (WISC-R) and the Renzulli-Hartman Scale for Determination of Gifted Placement, 192 potentially gifted elementary students were rated on both tests. A correlation matrix indicated that one of the four subscales of the Renzulli-Hartman Scale, the Learning Characteristics…

  11. Determining an optimal supply chain strategy

    Directory of Open Access Journals (Sweden)

    Intaher M. Ambe

    2012-11-01

    In today’s business environment, many companies want to become efficient and flexible, but have struggled, in part, because they have not been able to formulate optimal supply chain strategies. Often this is a result of insufficient knowledge about the costs involved in maintaining supply chains and the impact of the supply chain on their operations. Hence, these companies find it difficult to manufacture at a competitive cost and respond quickly and reliably to market demand. Mismatched strategies are the root cause of the problems that plague supply chains, and supply-chain strategies based on a one-size-fits-all strategy often fail. The purpose of this article is to suggest instruments to determine an optimal supply chain strategy. This article, which is conceptual in nature, provides a review of current supply chain strategies and suggests a framework for determining an optimal strategy.

  12. Optimal sensor placement for control of a supersonic mixed-compression inlet with variable geometry

    Science.gov (United States)

    Moore, Kenneth Thomas

    A method of using fluid dynamics models for the generation of models that are useable for control design and analysis is investigated. The problem considered is the control of the normal shock location in the VDC inlet, which is a mixed-compression, supersonic, variable-geometry inlet of a jet engine. A quasi-one-dimensional set of fluid equations incorporating bleed and moving walls is developed. An object-oriented environment is developed for simulation of flow systems under closed-loop control. A public interface between the controller and fluid classes is defined. A linear model representing the dynamics of the VDC inlet is developed from the finite difference equations, and its eigenstructure is analyzed. The order of this model is reduced using the square root balanced model reduction method to produce a reduced-order linear model that is suitable for control design and analysis tasks. A modification to this method that improves the accuracy of the reduced-order linear model for the purpose of sensor placement is presented and analyzed. The reduced-order linear model is used to develop a sensor placement method that quantifies as a function of the sensor location the ability of a sensor to provide information on the variable of interest for control. This method is used to develop a sensor placement metric for the VDC inlet. The reduced-order linear model is also used to design a closed loop control system to control the shock position in the VDC inlet. The object-oriented simulation code is used to simulate the nonlinear fluid equations under closed-loop control.
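
    The square-root balanced model reduction step mentioned above can be reproduced for any stable LTI model with standard linear-algebra tools: solve Lyapunov equations for the controllability and observability Gramians, factor them, and keep the states with the largest Hankel singular values. The sketch below does this for a random stable test system standing in for the linearized inlet model.

    ```python
    # Sketch: square-root balanced truncation of a stable LTI system (A, B, C).
    # The random test system below stands in for the linearized inlet model.
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, svd

    def psd_sqrt(W):
        """Symmetric factor L with W = L @ L.T (robust to tiny negative eigenvalues)."""
        vals, vecs = np.linalg.eigh(W)
        return vecs * np.sqrt(np.clip(vals, 0.0, None))

    def balanced_truncation(A, B, C, r):
        # Gramians: A Wc + Wc A' + B B' = 0  and  A' Wo + Wo A + C' C = 0.
        Wc = solve_continuous_lyapunov(A, -B @ B.T)
        Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
        Lc, Lo = psd_sqrt(Wc), psd_sqrt(Wo)
        U, hsv, Vt = svd(Lo.T @ Lc)                  # Hankel singular values
        S = np.diag(hsv[:r] ** -0.5)
        T = Lc @ Vt[:r].T @ S                        # maps balanced -> original states
        Ti = S @ U[:, :r].T @ Lo.T                   # maps original -> balanced states
        return Ti @ A @ T, Ti @ B, C @ T, hsv

    # Random stable test system (eigenvalues shifted into the left half-plane).
    rng = np.random.default_rng(5)
    n = 10
    A = rng.normal(size=(n, n))
    A -= (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(n)
    B = rng.normal(size=(n, 2))
    C = rng.normal(size=(2, n))
    Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=4)
    print("Hankel singular values:", hsv.round(4))
    print("reduced-order A:", Ar.shape)
    ```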

  13. Effect of walking speed and placement position interactions in determining the accuracy of various newer pedometers

    Directory of Open Access Journals (Sweden)

    Wonil Park

    2014-06-01

    Older types of pedometers had varied levels of accuracy, which ranged from 0% to 45%. In addition, to obtain accurate results, it was also necessary to position them in a certain way. By contrast, newer models can be placed anywhere on the body; however, their accuracy is unknown when they are placed at different body sites. We determined the accuracy of various newer pedometers under controlled laboratory and free walking conditions. A total of 40 participants, who varied widely in age and body mass index, were recruited for the study. The numbers of steps recorded using five different pedometers placed at the waist, on the chest, in a pocket, and on an armband were compared against those counted with a hand tally counter. With the exception of one, all the pedometers were accurate at moderate walking speeds, irrespective of their placement on the body. However, the accuracy tended to decrease at slower and faster walking speeds, especially when the pedometers were worn in a pocket or kept in a purse (p < 0.05). In conclusion, most pedometers examined were accurate when they were placed at the waist, on the chest, or on an armband, irrespective of the walking speed or terrain. However, some pedometers had reduced accuracy when they were kept in a pocket or placed in a purse, especially at slower and faster walking speeds.

  14. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Science.gov (United States)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, the Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CPs were met with the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher than, but approximately equal to, that of the NIMS solution. The results highlight the utility of the approach for decision making in large-scale watershed simulation-optimization formulations.

  15. Optimization of Lightweight Axles for an Innovative Carving Skateboard Based on Carbon Fiber Placement

    Directory of Open Access Journals (Sweden)

    Marc Fleischmann

    2018-02-01

    In 2003, the BMW Group developed a longboard called the “StreetCarver”. The idea behind this product was to bring the perfect carving feeling of surf- and snowboarding to the streets by increasing the maneuverability of classical skateboard trucks. The outcome was a chassis based on complex kinematics. The negative side effect was the StreetCarver’s exceptionally high weight of almost 8 kg. The main reason for this heaviness was the choice of traditional metallic engineering materials. In this research, modern fiber-reinforced composites were used to lower the chassis mass by up to 50%, to reach the weight of a common longboard. To accomplish that goal, carbon fibers were placed along pre-simulated load paths of the structural components in a so-called Tailored-Fiber-Placement process. This technology allows angle-independent single-roving placement and not only reduces weight but also helps to save valuable fiber material by avoiding cutting waste.

  16. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Directory of Open Access Journals (Sweden)

    Vasanthan Maruthapillai

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.

  17. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Science.gov (United States)

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
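
    The feature pipeline described above (distances of eight virtual markers to the face centre, their change over the sequence, and mean/variance/RMS statistics fed to a simple classifier) can be prototyped compactly. The snippet below uses synthetic marker tracks as stand-ins for webcam data and scikit-learn's K-nearest-neighbour classifier; the 96.94% figure from the study naturally does not apply to this toy data.

    ```python
    # Sketch: statistical features (mean, variance, RMS) of marker-distance signals,
    # classified with K-nearest neighbours. Marker tracks are synthetic stand-ins.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    EMOTIONS = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]
    rng = np.random.default_rng(6)
    n_markers = 8
    templates = rng.normal(0, 1, (6, n_markers, 2))   # one fixed marker layout per emotion

    def make_sequence(label, n_frames=60):
        """Fake (n_frames, n_markers, 2) marker track with small frame-to-frame jitter."""
        return templates[label] + rng.normal(0, 0.05, (n_frames, n_markers, 2))

    def features(seq):
        centre = seq.mean(axis=1, keepdims=True)                    # face centre per frame
        dist = np.linalg.norm(seq - centre, axis=2)                 # marker-to-centre distance
        delta = dist - dist[0]                                      # change from initial position
        feats = []
        for signal in (dist, delta):
            feats += [signal.mean(axis=0), signal.var(axis=0),
                      np.sqrt((signal ** 2).mean(axis=0))]          # mean, variance, RMS
        return np.concatenate(feats)

    X = np.array([features(make_sequence(lbl)) for lbl in range(6) for _ in range(40)])
    y = np.array([lbl for lbl in range(6) for _ in range(40)])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
    clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    print("toy accuracy:", round(clf.score(X_te, y_te), 3), "classes:", EMOTIONS)
    ```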

  18. Accurate pre-surgical determination for self-drilling miniscrew implant placement using surgical guides and cone-beam computed tomography.

    Science.gov (United States)

    Miyazawa, Ken; Kawaguchi, Misuzu; Tabuchi, Masako; Goto, Shigemi

    2010-12-01

    Miniscrew implants have proven to be effective in providing absolute orthodontic anchorage. However, as self-drilling miniscrew implants have become more popular, a problem has emerged, i.e. root contact, which can lead to perforation and other root injuries. To avoid possible root damage, a surgical guide was fabricated and cone-beam computed tomography (CBCT) was used to incorporate guide tubes drilled in accordance with the planned direction of the implants. Eighteen patients (5 males and 13 females; mean age 23.8 years; minimum 10.7, maximum 45.5) were included in the study. Forty-four self-drilling miniscrew implants (diameter 1.6, and length 8 mm) were placed in interradicular bone using a surgical guide procedure, the majority in the maxillary molar area. To determine the success rates, statistical analysis was undertaken using Fisher's exact probability test. CBCT images of post-surgical self-drilling miniscrew implant placement showed no root contact (0/44). However, based on CBCT evaluation, it was necessary to change the location or angle of 52.3 per cent (23/44) of the guide tubes prior to surgery in order to obtain optimal placement. If orthodontic force could be applied to the screw until completion of orthodontic treatment, screw anchorage was recorded as successful. The total success rate of all miniscrews was 90.9 per cent (40/44). Orthodontic self-drilling miniscrew implants must be inserted carefully, particularly in the case of blind placement, since even guide tubes made on casts frequently require repositioning to avoid the roots of the teeth. The use of surgical guides, fabricated using CBCT images, appears to be a promising technique for placement of orthodontic self-drilling miniscrew implants adjacent to the dental roots and maxillary sinuses.

  19. Swarm intelligence algorithms for integrated optimization of piezoelectric actuator and sensor placement and feedback gains

    International Nuclear Information System (INIS)

    Dutta, Rajdeep; Ganguli, Ranjan; Mani, V

    2011-01-01

    Swarm intelligence algorithms are applied for optimal control of flexible smart structures bonded with piezoelectric actuators and sensors. The optimal locations of actuators/sensors and feedback gain are obtained by maximizing the energy dissipated by the feedback control system. We provide a mathematical proof that this system is uncontrollable if the actuators and sensors are placed at the nodal points of the mode shapes. The optimal locations of actuators/sensors and feedback gain represent a constrained non-linear optimization problem. This problem is converted to an unconstrained optimization problem by using penalty functions. Two swarm intelligence algorithms, namely, Artificial bee colony (ABC) and glowworm swarm optimization (GSO) algorithms, are considered to obtain the optimal solution. In earlier published research, a cantilever beam with one and two collocated actuator(s)/sensor(s) was considered and the numerical results were obtained by using genetic algorithm and gradient based optimization methods. We consider the same problem and present the results obtained by using the swarm intelligence algorithms ABC and GSO. An extension of this cantilever beam problem with five collocated actuators/sensors is considered and the numerical results obtained by using the ABC and GSO algorithms are presented. The effect of increasing the number of design variables (locations of actuators and sensors and gain) on the optimization process is investigated. It is shown that the ABC and GSO algorithms are robust and are good choices for the optimization of smart structures

  20. Reliability of pressure waveform analysis to determine correct epidural needle placement in labouring women.

    Science.gov (United States)

    Al-Aamri, I; Derzi, S H; Moore, A; Elgueta, M F; Moustafa, M; Schricker, T; Tran, D Q

    2017-07-01

    Pressure waveform analysis provides a reliable confirmatory adjunct to the loss-of-resistance technique to identify the epidural space during thoracic epidural anaesthesia, but its role remains controversial in lumbar epidural analgesia during labour. We performed an observational study in 100 labouring women to determine the sensitivity and specificity of waveform analysis for confirming the correct location of the epidural needle. After obtaining loss-of-resistance, the anaesthetist injected 5 ml saline through the epidural needle (accounting for the volume already used in the loss-of-resistance). Sterile extension tubing, connected to a pressure transducer, was attached to the needle. An investigator determined the presence or absence of a pulsatile waveform, synchronised with the heart rate, on a monitor screen that was not in the view of the anaesthetist or the parturient. A bolus of 4 ml lidocaine 2% with adrenaline 5 μg.ml⁻¹ was administered, and the epidural block was assessed after 15 min. Three women displayed no sensory block at 15 min. The results showed: epidural block present with waveform present in 93 women; block absent with waveform absent in 2; block present with waveform absent in 4; and block absent with waveform present in 1. Compared with the use of a local anaesthetic bolus to ascertain the epidural space, the sensitivity, specificity, positive and negative predictive values of waveform analysis were 95.9%, 66.7%, 98.9% and 33.3%, respectively. Epidural waveform analysis provides a simple adjunct to loss-of-resistance for confirming needle placement during performance of obstetric epidurals; however, further studies are required before its routine implementation in clinical practice. © 2017 The Association of Anaesthetists of Great Britain and Ireland.
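
    Using the 2 x 2 counts reported above (93 true positives, 2 true negatives, 4 false negatives and 1 false positive, with the local anaesthetic bolus as the reference test), the quoted diagnostic indices can be reproduced directly:

    ```python
    # Reproducing the diagnostic indices from the reported 2 x 2 table
    # (reference standard: epidural block after the local anaesthetic bolus).
    tp, tn, fn, fp = 93, 2, 4, 1   # waveform present/absent vs block present/absent

    sensitivity = tp / (tp + fn)   # 93 / 97 = 0.959
    specificity = tn / (tn + fp)   # 2 / 3   = 0.667
    ppv = tp / (tp + fp)           # 93 / 94 = 0.989
    npv = tn / (tn + fn)           # 2 / 6   = 0.333
    print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
          f"PPV {ppv:.1%}, NPV {npv:.1%}")
    ```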

  1. A Cross-Entropy-Based Admission Control Optimization Approach for Heterogeneous Virtual Machine Placement in Public Clouds

    Directory of Open Access Journals (Sweden)

    Li Pan

    2016-03-01

    Virtualization technologies make it possible for cloud providers to consolidate multiple IaaS provisions into a single server in the form of virtual machines (VMs). Additionally, in order to fulfill the divergent service requirements of multiple users, a cloud provider needs to offer several types of VM instances, which are associated with varying configurations and performance, as well as different prices. In such a heterogeneous virtual machine placement process, one significant problem faced by a cloud provider is how to optimally accept and place multiple VM service requests into its cloud data centers to achieve revenue maximization. To address this issue, in this paper, we first formulate the revenue maximization problem during VM admission control as a multiple-dimensional knapsack problem, which is known to be NP-hard to solve. Then, we propose a cross-entropy-based optimization approach to address this revenue maximization problem, by obtaining a near-optimal eligible set, from the waiting VM service requests in the system, for the provider to accept into its data centers. Finally, through extensive experiments and measurements in a simulated environment with VM instance classes derived from real-world cloud systems, we show that our proposed cross-entropy-based admission control optimization algorithm is efficient and effective in maximizing cloud providers’ revenue in a public cloud computing environment.
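
    The admission-control formulation above is a multi-dimensional 0/1 knapsack: choose which VM requests to accept, subject to resource capacities, so as to maximize revenue. A minimal cross-entropy solver for that abstraction is sketched below; the instance sizes, capacities and prices are random placeholders rather than data from any real cloud.

    ```python
    # Sketch: cross-entropy method for a multi-dimensional 0/1 knapsack
    # (accept/reject VM requests under CPU and RAM capacities to maximize revenue).
    # Instance data are random placeholders.
    import numpy as np

    rng = np.random.default_rng(7)
    n_req = 40
    demand = rng.integers(1, 8, size=(n_req, 2)).astype(float)    # (CPU, RAM) per request
    revenue = demand @ np.array([1.0, 0.5]) + rng.random(n_req)   # price per request
    capacity = np.array([100.0, 120.0])                           # data-center capacity

    def value(x):
        """Total revenue of the accepted set x, or -inf if any capacity is exceeded."""
        return float(x @ revenue) if np.all(x @ demand <= capacity) else -np.inf

    p = np.full(n_req, 0.5)                 # Bernoulli acceptance probabilities
    n_samples, n_elite, smooth = 200, 20, 0.7
    best_x, best_val = np.zeros(n_req), -np.inf
    for _ in range(60):
        samples = (rng.random((n_samples, n_req)) < p).astype(float)
        scores = np.array([value(s) for s in samples])
        if scores.max() > best_val:                      # keep the best feasible sample seen
            best_val, best_x = scores.max(), samples[int(np.argmax(scores))]
        elite = samples[np.argsort(scores)[-n_elite:]]   # top-scoring samples
        p = smooth * elite.mean(axis=0) + (1 - smooth) * p
    print("near-optimal revenue:", round(best_val, 2), "accepted requests:", int(best_x.sum()))
    ```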

  2. Optimal sensor placement for large structures using the nearest neighbour index and a hybrid swarm intelligence algorithm

    International Nuclear Information System (INIS)

    Lian, Jijian; He, Longjun; Ma, Bin; Peng, Wenxiang; Li, Huokun

    2013-01-01

    Research on optimal sensor placement (OSP) has become very important due to the need to obtain effective testing results with limited testing resources in health monitoring. In this study, a new methodology is proposed to select the best sensor locations for large structures. First, a novel fitness function derived from the nearest neighbour index is proposed to overcome the drawbacks of the effective independence method for OSP for large structures. This method maximizes the contribution of each sensor to modal observability and simultaneously avoids the redundancy of information between the selected degrees of freedom. A hybrid algorithm combining the improved discrete particle swarm optimization (DPSO) with the clonal selection algorithm is then implemented to optimize the proposed fitness function effectively. Finally, the proposed method is applied to an arch dam for performance verification. The results show that the proposed hybrid swarm intelligence algorithm outperforms a genetic algorithm with decimal two-dimension array encoding and DPSO in the capability of global optimization. The new fitness function is advantageous in terms of sensor distribution and ensuring a well-conditioned information matrix and orthogonality of modes, indicating that this method may be used to provide guidance for OSP in various large structures. (paper)
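
    The nearest neighbour index underlying the fitness function above compares the observed mean nearest-neighbour spacing of the selected sensor locations with the spacing expected for a random pattern over the same area. A minimal computation of that index for a 2-D candidate layout is sketched below; the random layout and the bounding-box area estimate are assumptions.

    ```python
    # Sketch: nearest neighbour index (NNI) of a candidate 2-D sensor layout.
    # NNI > 1 indicates a more dispersed (less clustered) arrangement than random.
    import numpy as np
    from scipy.spatial import cKDTree

    def nearest_neighbour_index(points):
        tree = cKDTree(points)
        # k=2 because the closest point to each sensor is itself (distance 0).
        d, _ = tree.query(points, k=2)
        observed = d[:, 1].mean()
        area = np.ptp(points[:, 0]) * np.ptp(points[:, 1])   # bounding-box area estimate
        expected = 0.5 * np.sqrt(area / len(points))         # expected spacing of a random pattern
        return observed / expected

    layout = np.random.default_rng(8).uniform(0, 100, (30, 2))
    print("NNI of random layout:", round(float(nearest_neighbour_index(layout)), 3))
    ```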

  3. Mechanical Elongation of the Small Intestine: Evaluation of Techniques for Optimal Screw Placement in a Rodent Model

    Directory of Open Access Journals (Sweden)

    P. A. Hausbrandt

    2013-01-01

    Introduction. The aim of this study was to evaluate techniques and establish an optimal method for mechanical elongation of the small intestine (MESI) using screws in a rodent model, in order to develop a potential therapy for short bowel syndrome (SBS). Material and Methods. Adult female Sprague Dawley rats (n = 24) with body weights from 250 to 300 g (mean 283 g) were evaluated in 5 different groups; the common denominator of the technique was the fixation of a blind loop of the intestine on the abdominal wall, with the placement of a screw in the lumen secured to the abdominal wall. Results. In all groups with accessible screws, the rodents removed the implants despite the use of washers or suits to prevent removal. Subcutaneous placement of the screw combined with antibiotic treatment and dietary modifications was finally successful. In two animals autologous transplantation of the lengthened intestinal segment was successful. Discussion. While the rodent model may provide useful basic information on mechanical intestinal lengthening, further investigations should be performed in larger animals to make use of the translational nature of MESI in human SBS treatment.

  4. Optimal Sizing and Placement of Power-to-Gas Systems in Future Active Distribution Networks

    DEFF Research Database (Denmark)

    Diaz de Cerio Mendaza, Iker; Bhattarai, Bishnu Prasad; Kouzelis, Konstantinos

    2015-01-01

    Power-to-Gas is recently attracting a lot of interest as a new alternative for the regulation of renewable-based power systems. In cases where the re-powering of old wind turbines threatens the normal operation of the local distribution network, this becomes especially relevant. However, the design...... -investment cost- and the technical losses in the system under study. The results obtained from the assessed test system show how such non-linear methods could help distribution system operators obtain a fast and precise perception of the best way to integrate the Power-to-Gas facilities...... of medium voltage distribution networks does not normally follow a common pattern, with singular and very particular layouts found in each case. This fact makes the placement and dimensioning of such flexible loads a complicated task for the distribution system operator in the future. This paper describes......

  5. PI controller design of a wind turbine: evaluation of the pole-placement method and tuning using constrained optimization

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Tibaldi, Carlo; Hansen, Morten Hartvig

    2016-01-01

    to automatically tune the wind turbine controller subject to robustness constraints. The properties of the system such as the maximum sensitivity and complementary sensitivity functions (Ms and Mt), along with some of the responses of the system, are used to investigate the controller performance and formulate...... the optimization problem. The cost function is the integral absolute error (IAE) of the rotational speed from a disturbance modeled as a step in wind speed. Linearized model of the DTU 10-MW reference wind turbine is obtained using HAWCStab2. Thereafter, the model is reduced with model order reduction. The trade......PI/PID controllers are the most common wind turbine controllers. Normally a first tuning is obtained using methods such as pole-placement or Ziegler-Nichols and then extensive aeroelastic simulations are used to obtain the best tuning in terms of regulation of the outputs and reduction of the loads...

  6. Voltage stability index based optimal placement of static VAR compensator and sizing using Cuckoo search algorithm

    Science.gov (United States)

    Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee

    2017-07-01

    This paper presents a new metaheuristic algorithm, the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of the real power generation cost. The CSA is found to be the most efficient algorithm for solving single-objective optimal power flow problems. The CSA performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family. It is capable of controlling the voltage magnitudes of buses by injecting reactive power into the system. In this paper, the SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost. The SVC is used to improve the voltage profile of the system. The CSA gives better results than the genetic algorithm (GA), both without and with the SVC.

  7. Determining the Most Appropriate Physical Education Placement for Students with Disabilities

    Science.gov (United States)

    Columna, Luis; Davis, Timothy; Lieberman, Lauren; Lytle, Rebecca

    2010-01-01

    Adapted physical education (APE) is designed to meet the unique needs of children with disabilities within the least restrictive environment. Placement in the right environment can help the child succeed, but the wrong environment can create a very negative experience. This article presents a systematic approach to making decisions when…

  8. Particle swarm optimization algorithm for simultaneous optimal placement and sizing of shunt active power conditioner (APC) and shunt capacitor inharmonic distorted distribution system

    Institute of Scientific and Technical Information of China (English)

    Mohammadi Mohammad

    2017-01-01

    Due to the development of distribution systems and the increase in electricity demand, the use of capacitor banks is increasing. At the same time, nonlinear loads generate and inject considerable harmonic currents into the power system. Under this condition, if capacitor banks are not properly selected and placed in the power system, they can amplify and propagate these harmonics and deteriorate power quality to unacceptable levels. Given the disadvantages of passive filters, such as the risk of resonance, the use of this type of harmonic compensator is nowadays restricted. On the other hand, the active power conditioner (APC) is a parallel multi-function compensating device recently used in distribution systems to mitigate voltage sags and harmonic distortion, correct the power factor, and improve the overall power quality. Therefore, the use of an APC in a harmonically distorted system can change the optimal location and size of shunt capacitor banks. This paper presents an optimization algorithm for the improvement of power quality using simultaneous optimal placement and sizing of APCs and shunt capacitor banks in radial distribution networks in the presence of voltage and current harmonics. The algorithm is based on particle swarm optimization (PSO). The objective function includes the cost of power losses, the cost of energy losses, and the costs of the capacitor banks and APCs.
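
    Since the optimization engine here is a standard particle swarm, a bare-bones PSO loop is sketched below on a toy continuous objective. The inertia and acceleration coefficients are common textbook defaults; the actual objective of the paper (costs of power losses, energy losses, capacitor banks and APCs evaluated through a harmonic power flow) is not modelled.

    ```python
    # Sketch: canonical particle swarm optimization on a toy continuous objective.
    # Coefficients are textbook defaults; the harmonic-distortion cost function is not modelled.
    import numpy as np

    def pso(f, dim, bounds, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
        gbest = pbest[np.argmin(pbest_val)]
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            val = np.array([f(p) for p in x])
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)]
        return gbest, float(pbest_val.min())

    # Example: minimize a shifted sphere function standing in for the cost objective.
    best_x, best_f = pso(lambda z: float(np.sum((z - 3.0) ** 2)), dim=4, bounds=(-10.0, 10.0))
    print(best_x.round(3), round(best_f, 6))
    ```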

  9. A small perturbation based optimization approach for the frequency placement of high aspect ratio wings

    Science.gov (United States)

    Goltsch, Mandy

    Design denotes the transformation of an identified need to its physical embodiment in a traditionally iterative approach of trial and error. Conceptual design plays a prominent role but an almost infinite number of possible solutions at the outset of design necessitates fast evaluations. The corresponding practice of empirical equations and low fidelity analyses becomes obsolete in the light of novel concepts. Ever increasing system complexity and resource scarcity mandate new approaches to adequately capture system characteristics. Contemporary concerns in atmospheric science and homeland security created an operational need for unconventional configurations. Unmanned long endurance flight at high altitudes offers a unique showcase for the exploration of new design spaces and the incidental deficit of conceptual modeling and simulation capabilities. Structural and aerodynamic performance requirements necessitate light weight materials and high aspect ratio wings resulting in distinct structural and aeroelastic response characteristics that stand in close correlation with natural vibration modes. The present research effort evolves around the development of an efficient and accurate optimization algorithm for high aspect ratio wings subject to natural frequency constraints. Foundational corner stones are beam dimensional reduction and modal perturbation redesign. Local and global analyses inherent to the former suggest corresponding levels of local and global optimization. The present approach departs from this suggestion. It introduces local level surrogate models to capacitate a methodology that consists of multi level analyses feeding into a single level optimization. The innovative heart of the new algorithm originates in small perturbation theory. A sequence of small perturbation solutions allows the optimizer to make incremental movements within the design space. It enables a directed search that is free of costly gradients. System matrices are decomposed

  10. Accounting for connectivity and spatial correlation in the optimal placement of wildlife habitat

    Science.gov (United States)

    John Hof; Curtis H. Flather

    1996-01-01

    This paper investigates optimization approaches to simultaneously modelling habitat fragmentation and spatial correlation between patch populations. The problem is formulated with habitat connectivity affecting population means and variances, and with spatial correlations accounted for in the covariance calculations. Population with a pre-specified confidence level is then...

  11. The determination of optimal climate policy

    International Nuclear Information System (INIS)

    Aaheim, Asbjoern

    2010-01-01

    Analyses of the costs and benefits of climate policy, such as the Stern Review, evaluate alternative strategies to reduce greenhouse gas emissions by requiring that the cost of emission cuts in each and every year be covered by the associated value of avoided damage, discounted by an exogenously chosen rate. An alternative is to optimize abatement programmes towards a stationary state, where the concentrations of greenhouse gases are stabilized and shadow prices, including the rate of discount, are determined endogenously. This paper examines the properties of optimized stabilization. It turns out that the implications for the evaluation of climate policy are substantial when compared with evaluations of the present value of costs and benefits based on exogenously chosen shadow prices. Comparisons of discounted costs and benefits tend to exaggerate the importance of the choice of discount rate, while ignoring the importance of future abatement costs, which turn out to be essential for the optimal abatement path. Numerical examples suggest that early action may be more beneficial than indicated by comparisons of costs and benefits discounted by a rate chosen on the basis of current observations. (author)

  12. Geo-Spotting: Mining Online Location-based Services for Optimal Retail Store Placement

    OpenAIRE

    Karamshuk, Dmytro; Noulas, Anastasios; Scellato, Salvatore; Nicosia, Vincenzo; Mascolo, Cecilia

    2013-01-01

    The problem of identifying the optimal location for a new retail store has been the focus of past research, especially in the field of land economy, due to its importance in the success of a business. Traditional approaches to the problem have factored in demographics, revenue and aggregated human flow statistics from nearby or remote areas. However, the acquisition of relevant data is usually expensive. With the growth of location-based social networks, fine grained data describing user mobi...

  13. Optimizing virtual machine placement for energy and SLA in clouds using utility functions

    OpenAIRE

    Abdelkhalik Mosa; Norman W. Paton

    2016-01-01

    Cloud computing provides on-demand access to a shared pool of computing resources, which enables organizations to outsource their IT infrastructure. Cloud providers are building data centers to handle the continuous increase in cloud users’ demands. Consequently, these cloud data centers consume, and have the potential to waste, substantial amounts of energy. This energy consumption increases the operational cost and the CO2 emissions. The goal of this paper is to develop an optimized energy ...

  14. Web thickness determines the therapeutic effect of endoscopic keel placement on anterior glottic web.

    Science.gov (United States)

    Chen, Jian; Shi, Fang; Chen, Min; Yang, Yue; Cheng, Lei; Wu, Haitao

    2017-10-01

    This work is a retrospective analysis to investigate the critical risk factor for the therapeutic effect of endoscopic keel placement on anterior glottic web. Altogether, 36 patients with anterior glottic web undergoing endoscopic lysis and silicone keel placement were enrolled. Their voice qualities were evaluated using the voice handicap index-10 (VHI-10) questionnaire and improved significantly 3 months after surgery (21.53 ± 3.89 vs 9.81 ± 6.68). However, some patients suffered web recurrence during the at least 1-year follow-up. Therefore, patients were classified according to the Cohen classification or web thickness, and the recurrence rates were compared. The recurrence rates for Cohen types 1-4 were 28.6, 16.7, 33.3, and 40%, respectively; the difference was not statistically significant (P = 0.461). When classified by web thickness, only 2 of 27 (7.41%) thin-type cases relapsed, whereas 8 of 9 (88.9%) cases in the thick group reformed webs, indicating that recurrence depends on web thickness rather than the Cohen grade. Endoscopic lysis and keel placement is only effective for cases with thin glottic webs. Patients with thick webs should be treated by other means.

  15. Medical school clinical placements - the optimal method for assessing the clinical educational environment from a graduate entry perspective.

    Science.gov (United States)

    Hyde, Sarah; Hannigan, Ailish; Dornan, Tim; McGrath, Deirdre

    2018-01-05

    Educational environment is a strong determinant of student satisfaction and achievement. The learning environments of medical students on clinical placements are busy workplaces, composed of many variables. There is no universally accepted method of evaluating the clinical learning environment, nor is there consensus on what concepts or aspects should be measured. The aims of this study were to compare the Dundee ready educational environment measure (DREEM - the current de facto standard) and the more recently developed Manchester clinical placement index (MCPI) for the assessment of the clinical learning environment in a graduate entry medical student cohort by correlating the scores of each and analysing free text comments. This study also explored student perception of how the clinical educational environment is assessed. An online, anonymous survey comprising both the DREEM and MCPI instruments was delivered to students on clinical placement in a graduate entry medical school. Additional questions explored students' perceptions of instruments for giving feedback. Numeric variables (DREEM score, MCPI score, ratings) were tested for normality and summarised. Pearson's correlation coefficient was used to measure the strength of the association between total DREEM score and total MCPI scores. Thematic analysis was used to analyse the free text comments. The overall response rate to the questionnaire was 67% (n = 180), with a completed response rate for the MCPI of 60% (n = 161) and for the DREEM of 58% (n = 154). There was a strong, positive correlation between total DREEM and MCPI scores (r = 0.71, p < 0.001). On a scale of 0 to 7, the mean rating for how worthwhile students found completing the DREEM was 3.27 (SD 1.41) and for the MCPI was 3.49 (SD 1.57). 'Finding balance' and 'learning at work' were among the themes to emerge from analysis of free text comments. The present study confirms that DREEM and MCPI total scores are strongly correlated

  16. A Clustering Based Approach for Observability and Controllability Analysis for Optimal Placement of PMU

    Science.gov (United States)

    Murthy, Ch; MIEEE; Mohanta, D. K.; SMIEE; Meher, Mahendra

    2017-08-01

    Continuous monitoring and control of the power system is essential for its healthy operation. This can be achieved by making the system observable as well as controllable. Many efforts have been made by several researchers to make the system observable by placing Phasor Measurement Units (PMUs) at optimal locations, but so far the idea of controllability with PMUs has not been considered. This paper shows how to check whether the system is controllable and, if it is not, how to make it controllable using a clustering approach. The IEEE 14-bus system is considered to illustrate the concept of controllability.
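    The record above does not include the clustering procedure itself; as a loose illustration of PMU-based observability, the sketch below places PMUs greedily on the standard IEEE 14-bus topology so that every bus is reached by at least one PMU. The greedy set-cover heuristic is a stand-in for the authors' method, not a reproduction of it.

```python
# Greedy PMU placement for topological observability on a small test network.
# A PMU at bus b measures the voltage phasor at b and the current phasors of all
# incident branches, so b and all its neighbours become observable.
BRANCHES = [(1, 2), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (4, 5), (4, 7), (4, 9), (5, 6),
            (6, 11), (6, 12), (6, 13), (7, 8), (7, 9), (9, 10), (9, 14), (10, 11), (12, 13), (13, 14)]

def greedy_pmu_placement(branches):
    buses = sorted({b for branch in branches for b in branch})
    nbrs = {b: {b} for b in buses}
    for i, j in branches:
        nbrs[i].add(j)
        nbrs[j].add(i)
    unobserved, placed = set(buses), []
    while unobserved:
        best = max(buses, key=lambda b: len(nbrs[b] & unobserved))   # covers most new buses
        placed.append(best)
        unobserved -= nbrs[best]
    return placed

print(greedy_pmu_placement(BRANCHES))   # a handful of buses covering all 14
```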

  17. Determination of the Prosumer's Optimal Bids

    Science.gov (United States)

    Ferruzzi, Gabriella; Rossi, Federico; Russo, Angela

    2015-12-01

    This paper considers a microgrid connected to a medium-voltage (MV) distribution network. It is assumed that the microgrid, which is managed by a prosumer, operates in a competitive environment and participates in the day-ahead market. Then, as the first step of the short-term management problem, the prosumer must determine the bids to be submitted to the market. The offer strategy is based on the application of an optimization model, which is solved for different hourly price profiles of energy exchanged with the main grid. The proposed procedure is applied to a microgrid, and four of its different configurations are analyzed. The configurations consider the presence of thermoelectric units that only produce electricity, a boiler and/or cogeneration power plants for the thermal loads, and an electric storage system. The numerical results confirmed the numerous theoretical considerations that have been made.

  18. Optimal Placement Method of RFID Readers in Industrial Rail Transport for Uneven Rail Traflc Volume Management

    Science.gov (United States)

    Rakhmangulov, Aleksandr; Muravev, Dmitri; Mishkurov, Pavel

    2016-11-01

    Timely data on the location and movement of railcars is significant given the constantly growing requirements for the provision of timely and safe transportation. A technical solution for improving the efficiency of data collection on rail rolling stock is the implementation of an identification system. Nowadays, there are several such systems, distinguished by their working principle. In the authors' opinion, the most promising for rail transportation is RFID technology, which proposes equipping the railway tracks with stationary data-reading points (RFID readers) that read the onboard sensors on the railcars. However, regardless of the specific type and manufacturer of these systems, their implementation involves significant financing costs for large industrial rail transport systems, which own extensive networks of special railway tracks with a large number of stations and loading areas. To reduce the investment costs of creating an identification system for rolling stock on the special railway tracks of industrial enterprises, a method has been developed based on the idea of priority installation of RFID readers on railway hauls where rail traffic volumes are uneven in structure and power and whose parameters are difficult or impossible to predict on the basis of existing data in an information system. To select the optimal locations of RFID readers, a mathematical model of the staged installation of such readers has been developed, depending on the non-uniformity of the rail traffic volumes passing through specific railway hauls. As a result of this approach, installation of numerous RFID readers at all station tracks and loading areas of industrial railway stations might not be necessary, which reduces the total cost of rolling stock identification and supports the implementation of the method for optimal management of the transportation process.

  19. Design of a correlated validated CFD and genetic algorithm model for optimized sensors placement for indoor air quality monitoring

    Science.gov (United States)

    Mousavi, Monireh Sadat; Ashrafi, Khosro; Motlagh, Majid Shafie Pour; Niksokhan, Mohhamad Hosein; Vosoughifar, HamidReza

    2018-02-01

    In this study, a coupled method is presented that combines simulation of the flow pattern using computational fluid dynamics (CFD) with an optimization technique based on genetic algorithms to determine the optimal location and number of sensors in an enclosed residential complex parking garage in Tehran. The main objective of this research is cost reduction and maximum coverage with regard to the distribution of concentrations in different scenarios. Considering all possible scenarios in the CFD simulation of pollutant distribution was challenging due to the extent of the parking garage and the number of cars present, so a subset of scenarios was selected at random and the maximum concentrations of those scenarios were used for the optimization. The CFD simulation outputs are inserted as input into the optimization model using the genetic algorithm. The obtained results give the optimal number and locations of sensors.

  20. Optimization strategy for actuator and sensor placement in active structural acoustic control

    NARCIS (Netherlands)

    Oude nijhuis, M.H.H.; de Boer, Andries

    2003-01-01

    In active structural acoustic control the goal is to reduce the sound radiation of a structure by means of changing the vibrational behaviour of that structure. The performance of such an active control system is to a large extent determined by the locations of the actuators and sensors. In this

  1. Short-Term Distribution System State Forecast Based on Optimal Synchrophasor Sensor Placement and Extreme Learning Machine

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang; Zhang, Yingchen

    2016-11-14

    This paper proposes an approach for distribution system state forecasting, which aims to provide an accurate and high-speed state forecast with an optimal synchrophasor sensor placement (OSSP) based state estimator and an extreme learning machine (ELM) based forecaster. Specifically, considering the sensor installation cost and measurement error, an OSSP algorithm is proposed to reduce the number of synchrophasor sensors while keeping the whole distribution system numerically and topologically observable. Then, the weighted least square (WLS) based system state estimator is used to produce the training data for the proposed forecaster. Traditionally, the artificial neural network (ANN) and support vector regression (SVR) are widely used in forecasting due to their nonlinear modeling capabilities. However, the ANN involves a heavy computational load and the best parameters for SVR are difficult to obtain. In this paper, the ELM, which overcomes these drawbacks, is used to forecast the future system states from the historical system states. Testing results show that the proposed approach is effective and accurate.
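    For readers unfamiliar with extreme learning machines, the minimal sketch below shows the core idea the forecaster relies on: a fixed random hidden layer with output weights solved in closed form by least squares. The data, layer size, and function names are illustrative only, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))        # random, fixed input weights
    b = rng.normal(size=n_hidden)                      # random biases
    H = np.tanh(X @ W + b)                             # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)       # output weights by least squares
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# toy usage: predict the next "state" from two previous state features
X = rng.normal(size=(200, 2))
Y = 0.6 * X[:, :1] + 0.3 * X[:, 1:] + 0.05 * rng.normal(size=(200, 1))
model = elm_fit(X, Y)
print(elm_predict(model, X[:3]))
```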

  2. Optimal placement of combined heat and power scheme (cogeneration): application to an ethylbenzene plant

    International Nuclear Information System (INIS)

    Zainuddin Abd Manan; Lim Fang Yee

    2001-01-01

    A combined heat and power (CHP) scheme, also known as cogeneration, is widely accepted as a highly efficient energy saving measure, particularly in medium to large scale chemical process plants. To date, CHP application is well established in the developed countries. The advantage of a CHP scheme for a chemical plant is two-fold: (i) it drastically cuts the electricity bill through on-site power generation, and (ii) it saves fuel costs through recovery of high-quality waste heat from power generation for process heating. In order to be effective, a CHP scheme must be placed at the right temperature level in the context of the overall process; failure to do so might render a CHP venture worthless. This paper discusses the procedure for effective implementation of a CHP scheme, using an ethylbenzene process as a case study. A key visualization tool known as the grand composite curve is used to provide an overall picture of the process heat source and heat sink profiles. The grand composite curve, which is generated from the first principles of Pinch Analysis, enables the CHP scheme to be optimally placed within the overall process scenario. (Author)
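    The grand composite curve mentioned above is normally produced with the textbook problem-table algorithm of pinch analysis. The sketch below implements that standard algorithm on an invented four-stream example; the stream data and the minimum temperature difference are assumptions, not values from the ethylbenzene case study.

```python
# Problem-table sketch of the grand composite curve used for CHP placement.
DT_MIN = 10.0
# (supply T, target T, CP [kW/K]); hot streams cool down, cold streams heat up
STREAMS = [(250.0, 40.0, 0.15), (200.0, 80.0, 0.25),   # hot streams
           (20.0, 180.0, 0.20), (140.0, 230.0, 0.30)]  # cold streams

def grand_composite(streams, dt_min):
    shifted = []
    for ts, tt, cp in streams:
        shift = -dt_min / 2 if ts > tt else dt_min / 2          # hot shifted down, cold up
        shifted.append((ts + shift, tt + shift, cp))
    bounds = sorted({t for ts, tt, _ in shifted for t in (ts, tt)}, reverse=True)
    cascade, heat = [0.0], 0.0
    for hi, lo in zip(bounds, bounds[1:]):
        net_cp = sum(cp if ts > tt else -cp
                     for ts, tt, cp in shifted
                     if min(ts, tt) <= lo and max(ts, tt) >= hi)
        heat += net_cp * (hi - lo)          # surplus (+) or deficit (-) in the interval
        cascade.append(heat)
    qh_min = max(0.0, -min(cascade))        # minimum hot utility closes the cascade
    return list(zip(bounds, (q + qh_min for q in cascade)))

for temp, q in grand_composite(STREAMS, DT_MIN):
    print(f"T* = {temp:6.1f} C   net heat flow = {q:7.1f} kW")
```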

  3. Optimal placement of range-only beacons for mobile robot localisation

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-11-01

    ... of the Euclidean distance between the position estimate (x, y, z) and the largest orthogonal standard deviations in 3D space; this is an improvement on the GDOP metric as it considers both off-diagonal covariance terms and individual beacon noise. The beacon ... The heuristic used in path planning is the Euclidean distance between an accessed node and a goal, which results in the shortest route to the goal being determined. The cost of traversing to a node is typically the distance between nodes; however, by replacing this cost with the uncertainty metric...

  4. Validation study of two-microphone acoustic reflectometry for determination of breathing tube placement in 200 adult patients.

    Science.gov (United States)

    Raphael, David T; Benbassat, Maxim; Arnaudov, Dimiter; Bohorquez, Alex; Nasseri, Bita

    2002-12-01

    Acoustic reflectometry allows the construction of a one-dimensional image of a cavity, such as the airway or the esophagus. The reflectometric area-distance profile consists of a constant cross-sectional area segment (length of endotracheal tube), followed either by a rapid increase in the area beyond the carina (tracheal intubation) or by an immediate decrease in the area (esophageal intubation). Two hundred adult patients were induced and intubated, without restrictions on anesthetic agents or airway adjunct devices. A two-microphone acoustic reflectometer was used to determine whether the breathing tube was placed in the trachea or esophagus. A blinded reflectometer operator, seated a distance away from the patient, interpreted the acoustic area-distance profile alone to decide where the tube was placed. Capnography was used as the gold standard. Of 200 tracheal intubations confirmed by capnography, the reflectometer operator correctly identified 198 (99% correct tracheal intubation identification rate). In two patients there were false-negative results: patients with a tracheal intubation were interpreted as having an esophageal intubation. A total of 14 esophageal intubations resulted, all correctly identified by reflectometry, for a 100% esophageal intubation identification rate. Acoustic reflectometry is a rapid, noninvasive method by which to determine whether breathing tube placement is correct (tracheal) or incorrect (esophageal). Reflectometry determination of tube placement may be useful in airway emergencies, particularly in cases where visualization of the glottic area is not possible and capnography may fail, as in patients with cardiac arrest.

  5. Evaluation of four steering wheels to determine driver hand placement in a static environment.

    Science.gov (United States)

    Mossey, Mary E; Xi, Yubin; McConomy, Shayne K; Brooks, Johnell O; Rosopa, Patrick J; Venhovens, Paul J

    2014-07-01

    While much research on occupant packaging exists, both proprietary and in the literature, detailed research regarding user preferences for subjective ratings of steering wheel designs is sparse in the published literature. This study aimed to explore driver interactions with production steering wheels in four vehicles by using anthropometric data, driver hand placement, and driver grip design preferences for Generation-Y and Baby Boomers. In this study, participants selected their preferred grip diameter, responded to a series of questions about the steering wheel grip as they sat in four vehicles, and rank-ordered their preferred grip design. Thirty-two male participants (16 Baby Boomers between ages 47 and 65 and 16 Generation-Y between ages 18 and 29) participated in the study. Drivers demonstrated different gripping behavior between vehicles and between groups. Recommendations for future work in steering wheel grip design and naturalistic driver hand positioning are discussed. Copyright © 2014. Published by Elsevier Ltd.

  6. Detector placement optimization for cargo containers using deterministic adjoint transport examination for SNM detection

    International Nuclear Information System (INIS)

    McLaughlin, Trevor D.; Sjoden, Glenn E.; Manalo, Kevin L.

    2011-01-01

    With growing concerns over port security and the potential for illicit trafficking of SNM through portable cargo shipping containers, efforts are ongoing to reduce the threat via container monitoring. This paper focuses on answering an important question of how many detectors are necessary for adequate coverage of a cargo container considering the detection of neutrons and gamma rays. Deterministic adjoint transport calculations are performed with compressed helium-3 polyethylene-moderated neutron detectors and sodium-activated cesium-iodide gamma-ray scintillation detectors on partial and full container models. Results indicate that the detector capability is dependent on source strength and potential shielding. Using a surrogate weapons-grade plutonium leakage source, it was determined that for a 20-foot ISO container, five neutron detectors and three gamma detectors are necessary for adequate coverage. While a large CsI(Na) gamma detector has the potential to monitor the entire height of the container for SNM, the He-3 neutron detector is limited to roughly 1.25 m in depth. Detector blind spots are unavoidable inside the container volume unless additional measures are taken for adequate coverage. (author)

  7. Keyword: Placement

    Science.gov (United States)

    Cassuto, Leonard

    2012-01-01

    The practical goal of graduate education is placement of graduates. But what does "placement" mean? Academics use the word without thinking much about it. "Placement" is a great keyword for the graduate-school enterprise. For one thing, its meaning certainly gives a purpose to graduate education. Furthermore, the word is a portal into the way of…

  8. Optimal damper placement research

    Science.gov (United States)

    Smirnov, Vladimir; Kuzhin, Bulat

    2017-10-01

    Nowadays, increased noise and vibration pollution on the territories of technoparks and research laboratories negatively influences the production of high-precision measuring instruments. The problem is also relevant for transport hubs, which experience the influence of machines, vehicles, trains and planes. Energy efficiency is one of the major functions in modern road transport development. The problems of environmental pollution, scarcity of energy resources and energy efficiency require the research, production and implementation of energy-efficient materials that would be the foundation of an environmentally sustainable transport infrastructure. Improving the efficiency of energy use is a leading option to gain better energy security, improve industry profitability and competitiveness, and reduce the overall energy sector impacts on climate change. This paper has the following indirect goals: to research the impact of vibration on constructions such as bus and train stations and terminals, which are most exposed to oscillation; to extend the service life of the buildings by decreasing this negative influence; and to reduce expenses on maintenance and repair works. Seismic protection should also not be forgotten, which is topical nowadays when safety stands first; analysis of devastating earthquakes over the last few years proves the reasonableness of applying such systems. The article is dedicated to studying the dependence between damper location and natural frequency. As a model for analysis, a concrete construction with a variable profile was simulated, and the program complex Patran was used for analyzing the model.

  9. Optimal Path Determination for Flying Vehicle to Search an Object

    Science.gov (United States)

    Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.

    2018-01-01

    In this paper, a method to determine the optimal path for a flying vehicle to search for an object is proposed. The background of the paper is controlling an air vehicle to search for an object, and optimal path determination is one of the most popular problems in optimization. This paper describes a control design model for a flying vehicle searching for an object and focuses on the optimal path used in the search. An optimal control model is used to make the vehicle move along an optimal path; if the vehicle moves along the optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design; in this paper the cost functional makes the air vehicle reach the object as soon as possible. The axis reference of the flying vehicle uses the N-E-D (North-East-Down) coordinate system. The results of this paper are theorems, proved analytically, stating that the cost functional makes the control optimal and makes the vehicle move along an optimal path. The paper also shows that the cost functional used is convex; the convexity of the cost functional guarantees the existence of an optimal control. Some simulations are presented to show an optimal path for a flying vehicle searching for an object. The optimization method used to find the optimal control and the optimal vehicle path in this paper is the Pontryagin Minimum Principle.

  10. Virtual sensors for active noise control in acoustic-structural coupled enclosures using structural sensing: part II--Optimization of structural sensor placement.

    Science.gov (United States)

    Halim, Dunant; Cheng, Li; Su, Zhongqing

    2011-04-01

    The work proposed an optimization approach for structural sensor placement to improve the performance of vibro-acoustic virtual sensor for active noise control applications. The vibro-acoustic virtual sensor was designed to estimate the interior sound pressure of an acoustic-structural coupled enclosure using structural sensors. A spectral-spatial performance metric was proposed, which was used to quantify the averaged structural sensor output energy of a vibro-acoustic system excited by a spatially varying point source. It was shown that (i) the overall virtual sensing error energy was contributed additively by the modal virtual sensing error and the measurement noise energy; (ii) each of the modal virtual sensing error system was contributed by both the modal observability levels for the structural sensing and the target acoustic virtual sensing; and further (iii) the strength of each modal observability level was influenced by the modal coupling and resonance frequencies of the associated uncoupled structural/cavity modes. An optimal design of structural sensor placement was proposed to achieve sufficiently high modal observability levels for certain important panel- and cavity-controlled modes. Numerical analysis on a panel-cavity system demonstrated the importance of structural sensor placement on virtual sensing and active noise control performance, particularly for cavity-controlled modes.

  11. Optimal placement, sizing, and daily charge/discharge of battery energy storage in low voltage distribution network with high photovoltaic penetration

    DEFF Research Database (Denmark)

    Jannesar, Mohammad Rasol; Sedighi, Alireza; Savaghebi, Mehdi

    2018-01-01

    Proper installation of rooftop photovoltaic generation in distribution networks can improve the voltage profile, reduce energy losses, and enhance reliability. On the other hand, problems regarding harmonic distortion, voltage magnitude, reverse power flow, and energy losses can arise when photovoltaic penetration is increased in the low voltage distribution network. A local battery energy storage system can mitigate these disadvantages and, as a result, improve the system operation. For this purpose, the battery energy storage system is charged when the production of photovoltaics exceeds consumers' demands and discharged when consumers' demands increase. Since the price of a battery energy storage system is high, economic, environmental, and technical objectives should be considered together for its placement and sizing. In this paper, the optimal placement, sizing, and daily (24 h) charge/discharge of battery energy storage are determined for a low voltage distribution network with high photovoltaic penetration.

  12. A Method for Determining Optimal Residential Energy Efficiency Packages

    Energy Technology Data Exchange (ETDEWEB)

    Polly, B. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gestwick, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bianchi, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Anderson, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Horowitz, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Judkoff, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2011-04-01

    This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location.

  13. Use of sulfur hexafluoride airflow studies to determine the appropriate number and placement of air monitors in an alpha inhalation exposure laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Newton, G.J.; Hoover, M.D.

    1995-12-01

    Determination of the appropriate number and placement of air monitors in the workplace is quite subjective and is generally one of the more difficult tasks in radiation protection. General guidance for determining the number and placement of air sampling and monitoring instruments has been provided by technical reports such as that of Mishima. These documents and other published guidelines suggest that some insight into sampler placement can be obtained by conducting airflow studies involving the dilution and clearance of the relatively inert tracer gas sulfur hexafluoride (SF{sub 6}). This paper describes the use of SF{sub 6} in sampler placement studies and the results of a study done within the ITRI alpha inhalation exposure laboratories. The objectives of the study were to document an appropriate method for conducting SF{sub 6} dispersion studies and to confirm the appropriate number and placement of air monitors and air samplers within a typical ITRI inhalation exposure laboratory. The results of this study have become part of the technical bases for air sampling and monitoring in the test room.

  14. A Robust Optimization Based Energy-Aware Virtual Network Function Placement Proposal for Small Cell 5G Networks with Mobile Edge Computing Capabilities

    Directory of Open Access Journals (Sweden)

    Bego Blanco

    2017-01-01

    In the context of cloud-enabled 5G radio access networks with network function virtualization capabilities, we focus on the virtual network function placement problem for a multitenant cluster of small cells that provide mobile edge computing services. Under an emerging distributed network architecture and hardware infrastructure, we employ cloud-enabled small cells that integrate microservers for virtualization execution, equipped with additional hardware appliances. We develop an energy-aware placement solution using a robust optimization approach based on service demand uncertainty in order to minimize the power consumption in the system, constrained by network service latency requirements and infrastructure terms. Then, we discuss the results of the proposed placement mechanism in 5G scenarios that combine several service flavours and robust protection values. Once the impact of the service flavour and robust protection on the global power consumption of the system is analyzed, numerical results indicate that our proposal succeeds in efficiently placing the virtual network functions that compose the network services in the available hardware infrastructure while fulfilling service constraints.

  16. Delayed ischemic cecal perforation despite optimal decompression after placement of a self-expanding metal stent: report of a case

    DEFF Research Database (Denmark)

    Knop, Filip Krag; Pilsgaard, Bo; Meisner, Søren

    2004-01-01

    Endoscopic deployment of self-expanding metal stents offers an alternative to surgical intervention in rectocolonic obstructions. Reported clinical failures in the literature are all related to the site of stent placement. We report a case of serious intra-abdominal disease after technically and clinically successful stent deployment: a potentially dangerous situation of which the surgeon should be aware. A previously healthy 72-year-old female was referred to our department with symptoms of an obstructing colorectal tumor. Successful stent placement resulted in resolution of the obstructive condition. Three days after stent deployment, x-ray examinations revealed a small-bowel obstruction and emergency surgery was performed. Intraoperative findings demonstrated a segment of ileum fixated to the tumor in the small pelvis, resulting in the obstructive condition. Furthermore, a cecal perforation, probably caused by ischemic conditions that developed before stent decompression of the colon, was revealed during the operation. The patient died in the postoperative course. We discuss the observation of patients treated with self-expanding metal stents based on the selection strategy used to allocate patients...

  17. A framework for determining optimal petroleum leasing

    International Nuclear Information System (INIS)

    Robinson, D.R.

    1991-01-01

    The techniques of auction theory and option theory are combined to allow valuation under both geologic and oil price uncertainty. The primary motivation for developing this framework is to understand the prevalence of leasing in transferring ownership of oil properties. Under a standard oil lease, the landowner sells an oil company the right to explore and develop a tract of land for a fixed period of time. If oil is found, a fraction of the revenues is reserved for the landowner. Compared to the outright sale of the minerals, leasing has the disadvantages of: (1) lowering total oil field value through alteration of investment incentives; (2) providing the seller with a riskier cash flow; and (3) increasing legal and administrative costs. It is demonstrated here that in lease sales as compared to full mineral interest sales, the relative disadvantages are offset by more effective value transfer to the seller. For the base-case parameters, the optimal lease in a bonus auction gives the seller 28% more value than the sale of the full mineral interest. There is a loss in the leasing process from distortion of development timing incentives

  18. Optimizing UV Index determination from broadband irradiances

    Science.gov (United States)

    Tereszchuk, Keith A.; Rochon, Yves J.; McLinden, Chris A.; Vaillancourt, Paul A.

    2018-03-01

    A study was undertaken to improve upon the prognosticative capability of Environment and Climate Change Canada's (ECCC) UV Index forecast model. An aspect of that work, and the topic of this communication, was to investigate the use of the four UV broadband surface irradiance fields generated by ECCC's Global Environmental Multiscale (GEM) numerical prediction model to determine the UV Index. The basis of the investigation involves the creation of a suite of routines which employ high-spectral-resolution radiative transfer code developed to calculate UV Index fields from GEM forecasts. These routines employ a modified version of the Cloud-J v7.4 radiative transfer model, which integrates GEM output to produce high-spectral-resolution surface irradiance fields. The output generated using the high-resolution radiative transfer code served to verify and calibrate GEM broadband surface irradiances under clear-sky conditions and their use in providing the UV Index. A subsequent comparison of irradiances and UV Index under cloudy conditions was also performed. Linear correlation agreement of surface irradiances from the two models for each of the two higher UV bands covering 310.70-330.0 and 330.03-400.00 nm is typically greater than 95 % for clear-sky conditions with associated root-mean-square relative errors of 6.4 and 4.0 %. However, underestimations of clear-sky GEM irradiances were found on the order of ˜ 30-50 % for the 294.12-310.70 nm band and by a factor of ˜ 30 for the 280.11-294.12 nm band. This underestimation can be significant for UV Index determination but would not impact weather forecasting. Corresponding empirical adjustments were applied to the broadband irradiances now giving a correlation coefficient of unity. From these, a least-squares fitting was derived for the calculation of the UV Index. The resultant differences in UV indices from the high-spectral-resolution irradiances and the resultant GEM broadband irradiances are typically within 0

  19. A conformal mapping based fractional order approach for sub-optimal tuning of PID controllers with guaranteed dominant pole placement

    Science.gov (United States)

    Saha, Suman; Das, Saptarshi; Das, Shantanu; Gupta, Amitava

    2012-09-01

    A novel conformal mapping based fractional order (FO) methodology is developed in this paper for tuning existing classical (Integer Order) Proportional Integral Derivative (PID) controllers, especially for sluggish and oscillatory second order systems. The conventional pole placement tuning via the Linear Quadratic Regulator (LQR) method is extended for open loop oscillatory systems as well. The locations of the open loop zeros of a fractional order PID (FOPID or PIλDμ) controller have been approximated in this paper vis-à-vis an LQR-tuned conventional integer order PID controller, to achieve an equivalent integer order PID control system. This approach eases the analog/digital realization of an FOPID controller with its integer order counterpart, while the advantages of the fractional order controller are preserved. It is shown here that a decrease in the integro-differential operators of the FOPID/PIλDμ controller pushes the open loop zeros of the equivalent PID controller towards greater damping regions, which gives a trajectory of the controller zeros and dominant closed loop poles. This trajectory is termed the "M-curve". This phenomenon is used to design a two-stage tuning algorithm which reduces the existing PID controller's effort in a significant manner compared to that with a single-stage LQR-based pole placement method at a desired closed loop damping and frequency.

  20. Optimization of Phasor Measurement Unit (PMU Placement in Supervisory Control and Data Acquisition (SCADA-Based Power System for Better State-Estimation Performance

    Directory of Open Access Journals (Sweden)

    Mohammad Shoaib Shahriar

    2018-03-01

    Present-day power systems are mostly equipped with conventional meters and intended for the installation of highly accurate phasor measurement units (PMUs) to ensure better protection, monitoring and control of the network. The PMU is a deliberate choice due to its unique capacity to provide accurate phasor readings of bus voltages and currents. However, due to the high expense and the requirement for communication facilities, the installation of a limited number of PMUs in a network is common practice. This paper presents an optimal approach to selecting the locations of PMUs to be installed with the objective of ensuring maximum accuracy of the state estimation (SE). The optimization technique ensures that the critical locations of the system will be covered by PMU meters, which lowers the negative impact of bad data on state-estimation performance. One of the well-known intelligent optimization techniques, the genetic algorithm (GA), is used to search for the optimal set of PMUs. The proposed technique is compared with a heuristic approach of PMU placement. The weighted least square (WLS) method, with a modified Jacobian to deal with the phasor quantities, is used to compute the estimation accuracy. The IEEE 30-bus and 118-bus systems are used to demonstrate the suggested technique.
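    The accuracy score behind such a placement search is typically a weighted least-squares estimate over the measurements the chosen PMUs provide. The sketch below shows a generic linear WLS estimator and uses the trace of the estimation covariance as a placement-quality proxy; the measurement matrix, noise levels, and state vector are made up for illustration and do not come from the paper.

```python
import numpy as np

# Weighted least squares for a linear(ised) measurement model z = H x + e.
def wls_estimate(H, z, sigma):
    W = np.diag(1.0 / sigma**2)                    # weight by inverse measurement variance
    G = H.T @ W @ H                                # gain matrix
    x_hat = np.linalg.solve(G, H.T @ W @ z)        # state estimate
    cov = np.linalg.inv(G)                         # estimation covariance (accuracy proxy)
    return x_hat, cov

rng = np.random.default_rng(1)
x_true = np.array([1.0, 0.2])                      # e.g. a voltage magnitude and angle state
H = rng.normal(size=(6, 2))                        # measurement sensitivities (placeholder)
sigma = np.full(6, 0.01)                           # PMU-class measurement noise
z = H @ x_true + rng.normal(scale=sigma)
x_hat, cov = wls_estimate(H, z, sigma)
print(x_hat, np.trace(cov))                        # smaller trace => better measurement set
```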

  1. Method for Determining Optimal Residential Energy Efficiency Retrofit Packages

    Energy Technology Data Exchange (ETDEWEB)

    Polly, B.; Gestwick, M.; Bianchi, M.; Anderson, R.; Horowitz, S.; Christensen, C.; Judkoff, R.

    2011-04-01

    Businesses, government agencies, consumers, policy makers, and utilities currently have limited access to occupant-, building-, and location-specific recommendations for optimal energy retrofit packages, as defined by estimated costs and energy savings. This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location. Energy savings and incremental costs are calculated relative to a minimum upgrade reference scenario, which accounts for efficiency upgrades that would occur in the absence of a retrofit because of equipment wear-out and replacement with current minimum standards.
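    As a toy illustration of the package-search idea only (not the report's implementation), the sketch below enumerates combinations of retrofit measures, nets the value of energy savings against annualized measure costs, and keeps the cheapest package at each savings level. All measure names, savings, costs, and the energy price are invented.

```python
from itertools import product

MEASURES = {                      # option: (annual energy saved [kWh], annualized cost [$/yr])
    "attic insulation": [(0, 0.0), (1200, 45.0), (1800, 80.0)],
    "air sealing":      [(0, 0.0), (600, 25.0)],
    "heat pump":        [(0, 0.0), (3500, 310.0)],
}

def optimal_packages(measures, energy_price=0.13):
    best = {}
    for combo in product(*measures.values()):
        saved = sum(s for s, _ in combo)
        cost = sum(c for _, c in combo) - energy_price * saved   # net equivalent annual cost
        if saved not in best or cost < best[saved][0]:
            best[saved] = (cost, dict(zip(measures, combo)))
    return sorted(best.items())

for saved, (cost, pkg) in optimal_packages(MEASURES):
    print(f"{saved:5d} kWh/yr saved, net cost {cost:7.2f} $/yr: {pkg}")
```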

  2. 42 CFR 136.414 - How does the IHS determine eligibility for placement or retention of individuals in positions...

    Science.gov (United States)

    2010-10-01

    ... prostitution; crimes against persons; or offenses committed against children. (f) After an opportunity has been... placement or retention of individuals in positions involving regular contact with Indian children? 136.414... SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES INDIAN HEALTH Indian Child Protection and Family Violence...

  3. STUDENT PLACEMENT

    African Journals Online (AJOL)

    User

    students express lack of interest in the field in which they are placed, it ... be highly motivated to learn than students placed in a department ... the following research questions: Did the criteria used by Mekelle University for placement of students into different departments affect the academic performance of ...

  4. Placement Optimization of Wind Farm Based on Niche Genetic Algorithm%基于小生境遗传算法的风电场布局优化

    Institute of Scientific and Technical Information of China (English)

    田琳琳; 赵宁; 钟伟; 胡偶

    2011-01-01

    The placement of wind turbines in a wind farm is optimized based on a niche genetic algorithm. Two simplified oncoming flow models, unidirectional uniform wind and non-uniform wind with variable wind directions, are considered. In order to predict a more realistic power output of the wind farm, the modified Jensen wake model is employed to capture the wake interactions among the wind turbines. The niche genetic algorithm is used to minimize the cost of energy (COE). In addition to the optimal configurations, the results include the number of turbines, total power output, objective function value, and output power efficiency for each configuration. Compared with earlier studies, the present work provides improved results and is suitable for optimizing wind turbine placement in wind farms.
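    The modified Jensen model referenced above builds on the classical single-wake Jensen (top-hat) formula. The sketch below shows that baseline formula together with the commonly used sum-of-squares wake superposition; the rotor radius, thrust coefficient, and decay constant are illustrative values, not those of the paper.

```python
import numpy as np

def jensen_deficit(x, ct=0.8, r0=40.0, k=0.075):
    """Fractional velocity deficit a distance x downstream of a turbine (Jensen model)."""
    if x <= 0:
        return 0.0
    return (1.0 - np.sqrt(1.0 - ct)) / (1.0 + k * x / r0) ** 2

def waked_speed(u0, upstream_distances):
    """Combine upstream wakes with the usual sum-of-squares superposition."""
    deficit = np.sqrt(sum(jensen_deficit(x) ** 2 for x in upstream_distances))
    return u0 * (1.0 - deficit)

u = waked_speed(10.0, [400.0, 900.0])      # turbine shadowed by two upstream machines
print(f"waked wind speed: {u:.2f} m/s, power ratio: {(u / 10.0) ** 3:.2f}")
```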

  5. Determining the optimal spacing of deepening of vertical mine

    Energy Technology Data Exchange (ETDEWEB)

    Durov, Ye.M.

    1983-01-01

    A technique is described for determining the optimal spacing of shaft deepening for the examined parameters of operational and deepening operations. The results presented may be used in designing new shafts, in preparing levels, and in the reconstruction of existing shafts with inclined and steep strata bedding.

  6. Optimal placement of horizontal - and vertical - axis wind turbines in a wind farm for maximum power generation using a genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xiaomin; Agarwal, Ramesh [Department of Mechanical Engineering & Materials Science, Washington University in St. Louis, Jolley Hall, Campus Box 1185, One Brookings Drive, St. Louis, Missouri, 63130 (United States)

    2012-07-01

    In this paper, we consider the wind farm layout optimization problem using a genetic algorithm. Both Horizontal-Axis Wind Turbines (HAWT) and Vertical-Axis Wind Turbines (VAWT) are considered. The goal of the optimization problem is to optimally position the turbines within the wind farm such that the wake effects are minimized and the power production is maximized. Reasonably accurate modeling of the turbine wake is critical in the determination of the optimal layout of the turbines and the power generated. For HAWT, two wake models are considered; both are found to give similar answers. For VAWT, a very simple wake model is employed.

  7. A Monte-Carlo-Based Method for the Optimal Placement and Operation Scheduling of Sewer Mining Units in Urban Wastewater Networks

    Directory of Open Access Journals (Sweden)

    Eleftheria Psarrou

    2018-02-01

    Pressures on water resources, which have increased significantly nowadays mainly due to rapid urbanization, population growth and climate change impacts, necessitate the development of innovative wastewater treatment and reuse technologies. In this context, a mid-scale decentralized technology for wastewater reuse is sewer mining. It is based on extracting wastewater from a wastewater system, treating it on-site and producing recycled water applicable for non-potable uses. Despite the technology's considerable benefits, several challenges hinder its implementation. Sewer mining disturbs biochemical processes inside sewers and affects hydrogen sulfide build-up, resulting in odor, corrosion and health-related problems. In this study, a tool for optimal sewer mining unit placement aiming to minimize hydrogen sulfide production is presented. The Monte-Carlo method coupled with the Environmental Protection Agency's Storm Water Management Model (SWMM) is used to conduct multiple simulations of the network. The network's response when sewage is extracted from it is also examined. Additionally, the study deals with optimal pumping scheduling. The overall methodology is applied to a sewer network in Greece, providing useful results. It can therefore assist in selecting appropriate locations for sewer mining implementation, with the focus on eliminating hydrogen sulfide-associated problems while simultaneously ensuring that higher water needs are satisfied.
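    The paper couples its Monte-Carlo sampling with SWMM simulations of the sewer network; the generic sketch below only illustrates the outer sampling-and-ranking loop, with a crude surrogate hydrogen-sulfide indicator standing in for the hydraulic model. The node names, base flows, and the indicator itself are placeholders.

```python
import random

random.seed(0)
CANDIDATES = ["node_A", "node_B", "node_C"]
BASE_FLOW = {"node_A": 12.0, "node_B": 7.0, "node_C": 18.0}   # L/s, assumed values

def h2s_risk(flow_ls, extraction_ls):
    residual = max(flow_ls - extraction_ls, 0.1)
    return extraction_ls / residual          # crude proxy: low residual flow => higher risk

def rank_candidates(extraction_ls=5.0, n_scenarios=1000):
    scores = {c: 0.0 for c in CANDIDATES}
    for _ in range(n_scenarios):
        factor = random.uniform(0.6, 1.4)    # daily demand/flow variability
        for c in CANDIDATES:
            scores[c] += h2s_risk(BASE_FLOW[c] * factor, extraction_ls) / n_scenarios
    return sorted(scores.items(), key=lambda kv: kv[1])

print(rank_candidates())                     # candidates ordered by mean surrogate risk
```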

  8. Product Placement in Cartoons

    Directory of Open Access Journals (Sweden)

    Irena Oroz Štancl

    2014-06-01

    Product placement is a marketing approach for integrating products or services into selected media content. Studies have shown that the impact of advertising on children and youth is large, and that it can affect their preferences and attitudes. The aim of this article is to determine the existing level of product placement in cartoons that are broadcast on Croatian television stations. Content analysis of cartoons over a period of one month gave the following results: product placement was found in 30% of cartoons; most placements (89%) were visual, although auditory product placement and plot connection were also found. Most ads were related to toys, and it is significant that 65% of cartoons are accompanied by a large number of products available on the Croatian market. This is the result of two sales strategies: brand licensing (selling popular cartoon characters to toy, food or clothing companies) and cartoon production based on an existing line of toys with the sole aim of making their sales more effective.

  9. Problems in determining the optimal use of road safety measures

    DEFF Research Database (Denmark)

    Elvik, Rune

    2014-01-01

    This paper discusses some problems in determining the optimal use of road safety measures. The first of these problems is how best to define the baseline option, i.e. what will happen if no new safety measures are introduced. The second problem concerns choice of a method for selection of targets for intervention that ensures maximum safety benefits. The third problem is how to develop policy options to minimise the risk of indivisibilities and irreversible choices. The fourth problem is how to account for interaction effects between road safety measures when determining their optimal use. The fifth problem is how to obtain the best mix of short-term and long-term measures in a safety programme. The sixth problem is how fixed parameters for analysis, including the monetary valuation of road safety, influence the results of analyses. It is concluded that it is at present not possible to determine

  10. On the application of artificial bee colony (ABC algorithm for optimization of well placements in fractured reservoirs; efficiency comparison with the particle swarm optimization (PSO methodology

    Directory of Open Access Journals (Sweden)

    Behzad Nozohour-leilabady

    2016-03-01

    The application of a recent optimization technique, the artificial bee colony (ABC), was investigated in the context of finding optimal well locations. The ABC performance was compared with the corresponding results from the particle swarm optimization (PSO) algorithm under essentially similar conditions. Treatment of out-of-boundary solution vectors was accomplished via the periodic boundary condition (PBC), which presumably accelerates convergence towards the global optimum. Stochastic searches were initiated from several random starting points to minimize starting-point dependency in the established results. The optimizations were aimed at maximizing the Net Present Value (NPV) objective function over the considered oilfield production durations. To deal with the issue of reservoir heterogeneity, random permeability was applied via normal/uniform distribution functions. In addition, the issue of an increased number of optimization parameters was addressed by considering scenarios with multiple injector and producer wells, and cases with deviated wells in a real reservoir model. The typical results show ABC to outperform PSO (in the cases studied) after relatively short optimization cycles, indicating the great promise of the ABC methodology for well-optimization purposes.
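    For readers unfamiliar with the ABC metaheuristic, the compact sketch below shows its three phases (employed, onlooker, and scout bees) on a toy two-dimensional objective standing in for the NPV well-placement objective. Colony size, abandonment limit, and bounds are generic textbook choices, not the settings of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_minimize(f, bounds, n_sources=15, limit=20, iters=200):
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_sources, dim))       # food sources (candidate solutions)
    F = np.apply_along_axis(f, 1, X)
    trials = np.zeros(n_sources, dtype=int)

    def neighbour(i):
        k = rng.choice([j for j in range(n_sources) if j != i])
        d = rng.integers(dim)
        v = X[i].copy()
        v[d] += rng.uniform(-1, 1) * (X[i, d] - X[k, d])
        return np.clip(v, lo, hi)

    for _ in range(iters):
        # employed bees visit every source; onlookers revisit sources weighted by quality
        probs = F.max() - F + 1e-12
        probs /= probs.sum()
        order = list(range(n_sources)) + list(rng.choice(n_sources, size=n_sources, p=probs))
        for i in order:
            v = neighbour(i)
            fv = f(v)
            if fv < F[i]:
                X[i], F[i], trials[i] = v, fv, 0          # greedy selection
            else:
                trials[i] += 1
        # scout bees replace exhausted sources with random ones
        for i in np.where(trials > limit)[0]:
            X[i] = rng.uniform(lo, hi)
            F[i], trials[i] = f(X[i]), 0
    best = F.argmin()
    return X[best], F[best]

x, fx = abc_minimize(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2, [(-10, 10), (-10, 10)])
print(x, fx)
```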

  11. A hybrid of ant colony optimization and artificial bee colony algorithm for probabilistic optimal placement and sizing of distributed energy resources

    International Nuclear Information System (INIS)

    Kefayat, M.; Lashkar Ara, A.; Nabavi Niaki, S.A.

    2015-01-01

    Highlights: • A probabilistic optimization framework incorporating uncertainty is proposed. • A hybrid optimization approach combining ACO and ABC algorithms is proposed. • The problem deals with technical, environmental and economic aspects. • A fuzzy interactive approach is incorporated to solve the multi-objective problem. • Several strategies are implemented to compare with literature methods. - Abstract: In this paper, a hybrid configuration of ant colony optimization (ACO) with the artificial bee colony (ABC) algorithm, called the hybrid ACO–ABC algorithm, is presented for optimal location and sizing of distributed energy resources (DERs) (i.e., gas turbine, fuel cell, and wind energy) on distribution systems. The proposed algorithm is a combined strategy based on discrete (location optimization) and continuous (size optimization) structures to achieve the advantages of the global and local search abilities of the ABC and ACO algorithms, respectively. Also, in the proposed algorithm, a multi-objective ABC is used to produce a set of non-dominated solutions which are stored in the external archive. The objectives consist of minimizing power losses, total emissions produced by the substation and resources, and total electrical energy cost, and improving the voltage stability. In order to investigate the impact of the uncertainty in the output of the wind energy and load demands, a probabilistic load flow is necessary. In this study, an efficient point estimate method (PEM) is employed to solve the optimization problem in a stochastic environment. The proposed algorithm is tested on the IEEE 33- and 69-bus distribution systems. The results demonstrate the potential and effectiveness of the proposed algorithm in comparison with those of other evolutionary optimization methods

  12. Energy group structure determination using particle swarm optimization

    International Nuclear Information System (INIS)

    Yi, Ce; Sjoden, Glenn

    2013-01-01

    Highlights: ► Particle swarm optimization is applied to determine broad group structure. ► A graph representation of the broad group structure problem is introduced. ► The approach is tested on a fuel-pin model. - Abstract: Multi-group theory is widely applied for the energy domain discretization when solving the Linear Boltzmann Equation. To reduce the computational cost, fine group cross section libraries are often down-sampled into broad group cross section libraries. Cross section data collapsing generally involves two steps: firstly, the broad group structure has to be determined; secondly, a weighting scheme is used to evaluate the broad cross section library based on the fine group cross section data and the broad group structure. A common scheme is to average the fine group cross sections weighted by the fine group flux. Cross section collapsing techniques have been intensively researched. However, most studies use a pre-determined group structure, often based on experience, to divide the neutron energy spectrum into thermal, epi-thermal, fast, etc., energy ranges. In this paper, a swarm intelligence algorithm, particle swarm optimization (PSO), is applied to optimize the broad group structure. A graph representation of the broad group structure determination problem is introduced, and the swarm intelligence algorithm is used to solve the graph model. The effectiveness of the approach is demonstrated using a fuel-pin model

  13. Preoperative optimization of multi-organ failure following acute myocardial infarction and ischemic mitral regurgitation by placement of a transthoracic intra-aortic balloon pump.

    Science.gov (United States)

    Umakanthan, Ramanan; Dubose, Robert; Byrne, John G; Ahmad, Rashid M

    2010-10-01

    The management of acute myocardial infarction with resultant acute ischemic mitral regurgitation and acute multi-organ failure can prove to be a very challenging scenario. The presence of concomitant vascular disease can only serve to further compromise the complexity of the situation. We demonstrate a new indication for the transthoracic intra-aortic balloon pump as a preoperative means of unloading the heart and improving clinical outcome in such high-risk patients with severe vascular disease. We present the case of a 75-year-old man with a history of severe vascular disease who was transferred emergently to Vanderbilt University Medical Center with an acute inferolateral wall myocardial infarction resulting in severe acute ischemic mitral regurgitation and acute multi-organ failure. He presented with shock liver (serum glutamic-oxaloacetic transaminase [SGOT] of 958), renal failure (creatinine of 3.0), and respiratory failure with a pH of 7.18. Emergent cardiac catheterization revealed 100% occlusion of the left circumflex artery as well as severe ileofemoral disease. The advanced nature of his ileofemoral disease was such that the arterial access catheter occluded the right femoral artery. The duration of time that the catheter was in the artery led to transient limb ischemia with an elevation of his creatine phosphokinase (CPK) to 10,809. Balloon angioplasty followed by stent placement was successfully performed, which restored flow to the coronary vessel. Given the grave nature of the patient's condition, we were very concerned that immediate operative intervention for his condition would entail prohibitively high risk. In fact, the Society of Thoracic Surgeons predicted risk adjusted mortality was calculated to be 56%. In order to minimize patient mortality and morbidity, it was critical to help restore perfusion and organ recovery. Therefore, we decided that the chances for this patient's survival would improve if his condition could be optimized by

  14. Optimal Sizing and Placement of Battery Energy Storage in Distribution System Based on Solar Size for Voltage Regulation

    Energy Technology Data Exchange (ETDEWEB)

    Nazaripouya, Hamidreza [Univ. of California, Los Angeles, CA (United States); Wang, Yubo [Univ. of California, Los Angeles, CA (United States); Chu, Peter [Univ. of California, Los Angeles, CA (United States); Pota, Hemanshu R. [Univ. of California, Los Angeles, CA (United States); Gadh, Rajit [Univ. of California, Los Angeles, CA (United States)

    2016-07-26

    This paper proposes a new strategy to achieve voltage regulation in distributed power systems in the presence of solar energy sources and battery storage systems. The goal is to find the minimum size of battery storage and its corresponding location in the network based on the size and place of the integrated solar generation. The proposed method formulates the problem by employing the network impedance matrix to obtain an analytical solution instead of using a recursive algorithm such as power flow. The required modifications for modeling the slack and PV buses (generator buses) are utilized to increase the accuracy of the approach. Using reactive power control alone for voltage regulation is not always an optimal solution, as the R/X ratio is large in distribution systems. In this paper, the minimum size and the best place of battery storage are determined by optimizing the amount of both active and reactive power exchanged by the battery storage and its grid-tie inverter (GTI) based on the network topology and R/X ratios in the distribution system. Simulation results for the IEEE 14-bus system verify the effectiveness of the proposed approach.
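    The key shortcut in the abstract is that, with the bus impedance matrix, a voltage change can be approximated as ΔV ≈ Zbus·ΔI without rerunning a power flow. The sketch below uses that linearization to screen candidate battery bus/size pairs on an invented three-bus example; the impedance matrix, PV injections, and nominal voltages are all assumptions, not data from the paper.

```python
import numpy as np

ZBUS = np.array([[0.05, 0.03, 0.02],
                 [0.03, 0.08, 0.04],
                 [0.02, 0.04, 0.10]])          # p.u. bus impedance matrix (assumed)
V0 = np.array([1.00, 1.04, 1.06])              # midday voltages with PV surplus (assumed)

def screen_batteries(sizes=np.arange(0.1, 0.6, 0.1)):
    best = None
    for bus in range(len(V0)):
        for s in sizes:
            dI = np.zeros(len(V0))
            dI[bus] = -s                                  # charging battery absorbs injection
            v = V0 + ZBUS @ dI                            # linearized voltage change
            dev = np.abs(v - 1.0).max()                   # worst deviation from 1.0 p.u.
            if best is None or (dev, s) < best[:2]:
                best = (dev, s, bus)
    return best

dev, size, bus = screen_batteries()
print(f"charge {size:.1f} p.u. at bus {bus}: worst deviation {dev:.3f} p.u.")
```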

  15. Particle swarm optimization for determining shortest distance to voltage collapse

    Energy Technology Data Exchange (ETDEWEB)

    Arya, L.D.; Choube, S.C. [Electrical Engineering Department, S.G.S.I.T.S. Indore, MP 452 003 (India); Shrivastava, M. [Electrical Engineering Department, Government Engineering College Ujjain, MP 456 010 (India); Kothari, D.P. [Centre for Energy Studies, Indian Institute of Technology, Delhi (India)

    2007-12-15

    This paper describes an algorithm for computing the shortest distance to voltage collapse, i.e. the determination of the closest saddle node bifurcation point (CSNBP), using the PSO technique. A direction along the CSNBP gives conservative results from the voltage security viewpoint. This information is useful to the operator in steering the system away from this point by taking corrective actions. The distance to the closest bifurcation is a minimum of the loadability, given a slack bus or participation factors for increasing generation as the load increases. CSNBP determination has been formulated as an optimization problem to be solved with the PSO technique. PSO is a relatively new evolutionary algorithm (EA), population based and inspired by the social behavior of animals such as fish schooling and bird flocking. It can handle optimization problems of virtually any complexity, since its mechanization is simple, with few parameters to be tuned. The developed algorithm has been implemented on two standard test systems. (author)
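
    Because the abstract reports no implementation detail, the following is a generic PSO minimizer sketch rather than the authors' code; the quadratic objective is a placeholder that, in the CSNBP setting, would be replaced by the distance in load-parameter space to the bifurcation surface returned by a power-system routine.

```python
# Generic particle swarm optimization (PSO) minimizer sketch.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Placeholder "distance" function; swap in the real CSNBP distance here.
    return np.sum((x - np.array([0.3, -0.7]))**2)

n_particles, dim, iters = 20, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration constants

pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best point:", gbest, "objective:", objective(gbest))
```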

  16. METHODS FOR DETERMINATION AND OPTIMIZATION OF LOGISTICS COSTS

    Directory of Open Access Journals (Sweden)

    Mihaela STET

    2016-12-01

    Full Text Available The paper deals with the problem of logistics costs, highlighting methods for estimating and determining the specific costs of different transport modes in freight distribution. Besides transport costs, it also highlights the other costs in the supply chain, as well as the costing methods used in logistics activities. In this context, some means of optimizing transport costs in the logistics chain are also presented.

  17. METHODS FOR DETERMINATION AND OPTIMIZATION OF LOGISTICS COSTS

    OpenAIRE

    Mihaela STET

    2016-01-01

    The paper deals with the problem of logistics costs, highlighting methods for estimating and determining the specific costs of different transport modes in freight distribution. Besides transport costs, it also highlights the other costs in the supply chain, as well as the costing methods used in logistics activities. In this context, some means of optimizing transport costs in the logistics chain are also presented.

  18. A PROCEDURE FOR DETERMINING OPTIMAL FACILITY LOCATION AND SUB-OPTIMAL POSITIONS

    Directory of Open Access Journals (Sweden)

    P.K. Dan

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: This research presents a methodology for determining the optimal location of a new facility, having physical flow interaction of various degrees with other existing facilities in the presence of barriers impeding the shortest flow-path as well as the sub-optimal iso-cost positions. It also determines sub-optimal iso-cost positions with additional cost or penalty for not being able to site it at the computed optimal point. The proposed methodology considers all types of quadrilateral barrier or forbidden region configurations to generalize and by-pass such impenetrable obstacles, and adopts a scheme of searching through the vertices of the quadrilaterals to determine the alternative shortest flow-path. This procedure of obstacle avoidance is novel. Software has been developed to facilitate computations for the search algorithm to determine the optimal and iso-cost co-ordinates. The test results are presented.

    AFRIKAANSE OPSOMMING (translated): The research deals with a procedure for determining the optimal siting position for a facility with flow from other existing facilities in the presence of a variety of constraints. The procedure yields as its result sub-optimal iso-cost siting locations, together with the costs that arise from deviating from the unconstrained optimal solution. The procedure makes use of an inventive search method, applied to quadrilateral geometric representations, for determining the shortest routes that bypass obstructions. The procedure is supported by software. Test results are presented.
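
    For the barrier-free core of this location problem, the classical Weiszfeld iteration gives the minisum optimum; the sketch below uses invented coordinates and weights and deliberately omits the paper's quadrilateral-barrier bypassing, which is its actual contribution.

```python
# Weiszfeld iteration for the weighted Euclidean minisum facility location
# problem (barrier-free core only; obstacle bypassing is not reproduced).
import numpy as np

pts = np.array([[0.0, 0.0], [8.0, 1.0], [4.0, 7.0], [9.0, 6.0]])  # existing facilities
w   = np.array([3.0, 1.0, 2.0, 2.0])                              # flow weights

x = np.average(pts, axis=0, weights=w)     # start at the weighted centroid
for _ in range(200):
    d = np.linalg.norm(pts - x, axis=1)
    if np.any(d < 1e-9):                   # landed exactly on an existing facility
        break
    x_new = np.average(pts, axis=0, weights=w / d)
    if np.linalg.norm(x_new - x) < 1e-8:
        x = x_new
        break
    x = x_new

cost = np.sum(w * np.linalg.norm(pts - x, axis=1))
print("optimal location ~", x, "total weighted distance ~", cost)
```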

  19. User Manual and Supporting Information for Library of Codes for Centroidal Voronoi Point Placement and Associated Zeroth, First, and Second Moment Determination; TOPICAL

    International Nuclear Information System (INIS)

    BURKARDT, JOHN; GUNZBURGER, MAX; PETERSON, JANET; BRANNON, REBECCA M.

    2002-01-01

    The theory, numerical algorithm, and user documentation are provided for a new "Centroidal Voronoi Tessellation (CVT)" method of filling a region of space (2D or 3D) with particles at any desired particle density. "Clumping" is entirely avoided and the boundary is optimally resolved. This particle placement capability is needed for any so-called "mesh-free" method in which physical fields are discretized via arbitrary-connectivity discrete points. CVT exploits efficient statistical methods to avoid expensive generation of Voronoi diagrams. Nevertheless, if a CVT particle's Voronoi cell were to be explicitly computed, then it would have a centroid that coincides with the particle itself and a minimized rotational moment. The CVT code provides each particle's volume and centroid, and also the rotational moment matrix needed to approximate a particle by an ellipsoid (instead of a simple sphere). DIATOM region specification is supported
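
    The idea of building a CVT without explicit Voronoi diagrams can be illustrated by a sampling-based Lloyd-type iteration; this is a generic sketch of the concept, not the documented library's algorithm or interface.

```python
# Sampling-based CVT sketch in the unit square: generators move to the mean of
# the random samples that fall nearest to them (no explicit Voronoi diagram).
import numpy as np

rng = np.random.default_rng(1)
n_generators, n_samples, iters = 50, 20000, 30
gen = rng.random((n_generators, 2))

for _ in range(iters):
    samples = rng.random((n_samples, 2))
    # index of the nearest generator for every sample
    d2 = ((samples[:, None, :] - gen[None, :, :])**2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    for k in range(n_generators):
        mask = nearest == k
        if mask.any():
            gen[k] = samples[mask].mean(axis=0)   # centroid estimate of cell k

print("first few CVT generators:\n", gen[:5])
```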

  20. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed that automation promises greater efficiency, lower workloads, and fewer operator errors through enhanced operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to the side effects of automation, which are referred to as Out-of-the-Loop (OOTL) effects, and this is a critical issue that must be resolved. Thus, in order to determine the optimal level of automation that assures the best human operator performance, a quantitative method for optimizing the automation is proposed in this paper. In order to propose the optimization method for determining appropriate automation levels that enable the best human performance, the automation rate and the ostracism rate, which are estimation methods that quantitatively analyze the positive and negative effects of automation, respectively, are integrated. The integration was conducted in order to derive the shortest working time by considering the concept of situation awareness recovery (SAR), which states that the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed through redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the

  1. An Indirect Simulation-Optimization Model for Determining Optimal TMDL Allocation under Uncertainty

    Directory of Open Access Journals (Sweden)

    Feng Zhou

    2015-11-01

    Full Text Available An indirect simulation-optimization model framework with enhanced computational efficiency and risk-based decision-making capability was developed to determine optimal total maximum daily load (TMDL) allocation under uncertainty. To convert the traditional direct simulation-optimization model into our indirect equivalent model framework, we proposed a two-step strategy: (1) application of interval regression equations derived by a Bayesian recursive regression tree (BRRT v2) algorithm, which approximates the original hydrodynamic and water-quality simulation models and accurately quantifies the inherent nonlinear relationship between nutrient load reductions and the credible interval of algal biomass with a given confidence interval; and (2) incorporation of the calibrated interval regression equations into an uncertain optimization framework, which is further converted to our indirect equivalent framework by the enhanced-interval linear programming (EILP) method and provides approximate-optimal solutions at various risk levels. The proposed strategy was applied to the Swift Creek Reservoir’s nutrient TMDL allocation (Chesterfield County, VA) to identify the minimum nutrient load allocations required from eight sub-watersheds to ensure compliance with user-specified chlorophyll criteria. Our results indicated that the BRRT-EILP model could identify critical sub-watersheds faster than the traditional one and requires lower reduction of nutrient loadings compared to traditional stochastic simulation and trial-and-error (TAE) approaches. This suggests that our proposed framework performs better in optimal TMDL development compared to the traditional simulation-optimization models and provides extreme and non-extreme tradeoff analysis under uncertainty for risk-based decision making.
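
    As a deterministic toy counterpart of the optimization layer (without the interval regression surrogate or the EILP risk machinery), the sketch below allocates load reductions across hypothetical sub-watersheds with scipy's linear programming solver; all coefficients are invented.

```python
# Deterministic toy version of a TMDL allocation LP (sketch). Coefficients are
# hypothetical linearized response slopes of algal biomass to load reductions.
import numpy as np
from scipy.optimize import linprog

n = 8                                   # sub-watersheds
cost = np.ones(n)                       # minimize total load reduction
response = np.array([0.9, 0.4, 0.7, 0.2, 0.5, 0.3, 0.8, 0.6])  # response per unit reduced
required_drop = 12.0                    # decrease needed to meet the chlorophyll criterion
max_reduction = np.full(n, 10.0)        # each sub-watershed can cut at most 10 units

# linprog uses "<=" constraints, so flip the sign of the response requirement.
res = linprog(c=cost,
              A_ub=[-response], b_ub=[-required_drop],
              bounds=list(zip(np.zeros(n), max_reduction)),
              method="highs")

print("optimal reductions per sub-watershed:", np.round(res.x, 2))
print("total reduction:", round(res.fun, 2))
```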

  2. Perturbing engine performance measurements to determine optimal engine control settings

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-12-30

    Methods and systems for optimizing a performance of a vehicle engine are provided. The method includes determining an initial value for a first engine control parameter based on one or more detected operating conditions of the vehicle engine, determining a value of an engine performance variable, and artificially perturbing the determined value of the engine performance variable. The initial value for the first engine control parameter is then adjusted based on the perturbed engine performance variable causing the engine performance variable to approach a target engine performance variable. Operation of the vehicle engine is controlled based on the adjusted initial value for the first engine control parameter. These acts are repeated until the engine performance variable approaches the target engine performance variable.
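
    The perturb-and-adjust loop described in this patent abstract resembles extremum-seeking control. The sketch below illustrates that general idea with a made-up scalar performance map; it is not based on the patented method or any real engine interface.

```python
# Generic perturb-and-adjust (extremum-seeking-like) loop with a hypothetical
# performance map. Not the patented controller.
import math

def measured_performance(u):
    # Invented stand-in for an engine performance variable vs. a control
    # parameter u (e.g., spark timing); peak at u = 12.0.
    return -0.5 * (u - 12.0)**2 + 100.0

u = 8.0                    # initial control parameter from operating conditions
step, delta = 0.2, 0.05    # adaptation gain and artificial perturbation size
target = 99.5              # target performance level

for k in range(500):
    y = measured_performance(u)
    if y >= target:
        break
    # Artificially perturb the measurement to estimate the local gradient.
    grad = (measured_performance(u + delta) - measured_performance(u - delta)) / (2 * delta)
    u += step * grad       # adjust the parameter toward higher performance

print(f"iterations={k}, control parameter={u:.2f}, performance={measured_performance(u):.2f}")
```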

  3. Private placements

    International Nuclear Information System (INIS)

    Bugeaud, G. J. R.

    1998-01-01

    The principles underlying private placements in Alberta, and the nature of the processes employed by the Alberta Securities Commission in handling such transactions were discussed. The Alberta Securities Commission's mode of operation was demonstrated by the inclusion of various documents issued by the Commission concerning (1) special warrant transactions prior to listing, (2) a decision by the Executive Director refusing to issue a receipt for the final prospectus for a distribution of securities of a company and the reasons for the refusal, (3) the Commission's decision to interfere with the Executive Director's decision not to issue a receipt for the final prospectus, with full citation of the Commission's reasons for its decision, (4) and a series of proposed rules and companion policy statements regarding trades and distributions outside and in Alberta. Text of a sample 'short form prospectus' was also included

  4. Spectroscopic determination of optimal hydration time of zircon surface

    Energy Technology Data Exchange (ETDEWEB)

    Ordonez R, E. [ININ, Departamento de Quimica, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Garcia R, G. [Instituto Tecnologico de Toluca, Division de Estudios del Posgrado, Av. Tecnologico s/n, Ex-Rancho La Virgen, 52140 Metepec, Estado de Mexico (Mexico); Garcia G, N., E-mail: eduardo.ordonez@inin.gob.m [Universidad Autonoma del Estado de Mexico, Facultad de Quimica, Av. Colon y Av. Tollocan, 50180 Toluca, Estado de Mexico (Mexico)

    2010-07-01

    When a mineral surface is immersed in an aqueous solution, it develops an electric charge produced by the amphoteric dissociation of hydroxyl groups created by the hydration of the solid surface. This is one influential surface property. The complete hydration process takes a time which is specific for each mineral species. Knowledge of the aqueous solution contact time needed for complete surface hydration is mandatory for further surface phenomena studies. This study deals with the optimal hydration time of the raw zircon (ZrSiO{sub 4}) surface, comparing classical potentiometric titrations with a fluorescence spectroscopy technique. The latter is easy and reliable, as it demands only one sample batch to determine the optimal time needed to ensure total hydration of the zircon surface. The analytical results of neutron activation analysis showed the presence of trace quantities of Dy{sup 3+}, Eu{sup 3+} and Er{sup 3+} in the bulk of the zircon. The Dy{sup 3+} is structured in the zircon crystalline lattice and undergoes the same chemical reactions as zircon. Furthermore, the Dy{sup 3+} has a good fluorescent response whose intensity is enhanced by hydration molecules. The results show that, according to the potentiometric analysis, the hydration process for each batch (at least 8 sample batches) takes around 2 h, while the spectrometric method takes only 5 minutes and a single batch. Both methods showed that the zircon surface has a 16 h optimal hydration time. (Author)

  5. Spectroscopic determination of optimal hydration time of zircon surface

    International Nuclear Information System (INIS)

    Ordonez R, E.; Garcia R, G.; Garcia G, N.

    2010-01-01

    When a mineral surface is immersed in an aqueous solution, it develops an electric charge produced by the amphoteric dissociation of hydroxyl groups created by the hydration of the solid surface. This is one influential surface property. The complete hydration process takes a time which is specific for each mineral species. Knowledge of the aqueous solution contact time needed for complete surface hydration is mandatory for further surface phenomena studies. This study deals with the optimal hydration time of the raw zircon (ZrSiO 4 ) surface, comparing classical potentiometric titrations with a fluorescence spectroscopy technique. The latter is easy and reliable, as it demands only one sample batch to determine the optimal time needed to ensure total hydration of the zircon surface. The analytical results of neutron activation analysis showed the presence of trace quantities of Dy 3+ , Eu 3+ and Er 3+ in the bulk of the zircon. The Dy 3+ is structured in the zircon crystalline lattice and undergoes the same chemical reactions as zircon. Furthermore, the Dy 3+ has a good fluorescent response whose intensity is enhanced by hydration molecules. The results show that, according to the potentiometric analysis, the hydration process for each batch (at least 8 sample batches) takes around 2 h, while the spectrometric method takes only 5 minutes and a single batch. Both methods showed that the zircon surface has a 16 h optimal hydration time. (Author)

  6. Optimization in multi-implant placement for immediate loading in edentulous arches using a modified surgical template and prototyping: a case report.

    Science.gov (United States)

    Jayme, Sérgio J; Muglia, Valdir A; de Oliveira, Rafael R; Novaes, Arthur B Júnior

    2008-01-01

    Immediate loading of dental implants shortens the treatment time and makes it possible to give the patient an esthetic appearance throughout the treatment period. Placement of dental implants requires precise planning that accounts for anatomic limitations and restorative goals. Diagnosis can be made with the assistance of computerized tomographic scanning, but transfer of planning to the surgical field is limited. Recently, novel CAD/CAM techniques such as stereolithographic rapid prototyping have been developed to build surgical guides in an attempt to improve precision of implant placement. The aim of this case report was to show a modified surgical template used throughout implant placement as an alternative to a conventional surgical guide.

  7. Optimal task mapping in safety-critical real-time parallel systems; Placement optimal de taches pour les systemes paralleles temps-reel critiques

    Energy Technology Data Exchange (ETDEWEB)

    Aussagues, Ch

    1998-12-11

    This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance command and control systems that can be found in the nuclear domain or, more generally, in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution lies mainly in the following three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new operator for the synchronized product of state machine task graphs; and the validation of the approach by its implementation and evaluation. The work addresses in particular the main problem of optimal task mapping on a parallel architecture, such that the temporal constraints are globally guaranteed, i.e. the timeliness property holds. The results also incorporate optimality criteria for the sizing and correct dimensioning of a parallel system, for instance in the number of processing elements. These criteria are connected with operational constraints of the application domain. Our approach is based on the off-line analysis of the feasibility of the deadline-driven dynamic scheduling used to schedule tasks inside one processor. This leads us to define the synchronized product, from which a system of linear constraints is automatically generated, allowing a maximum load of a group of tasks to be calculated and their timeliness constraints to be verified. The communications, the verification of their timeliness, and their incorporation into the mapping problem are the second main contribution of this thesis. Finally, the global solving technique dealing with both task and communication aspects has been implemented and evaluated in the framework of the OASIS project at the LETI research center at CEA/Saclay. (author) 96 refs.
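
    As a much-reduced illustration of the kind of off-line feasibility check that underlies deadline-driven scheduling analysis, the sketch below applies the textbook EDF utilization test (sum of C_i/T_i <= 1 for periodic tasks with deadlines equal to periods) to a hypothetical task set; the thesis's synchronized-product construction and generated linear constraints are far richer than this.

```python
# Classical EDF schedulability check for one processor (sketch).
# Tasks are (worst-case execution time C, period T) with deadline == period.
# This is only the textbook utilization bound, not the thesis's method.

tasks = [(2.0, 10.0), (3.0, 15.0), (5.0, 30.0)]   # hypothetical task set

utilization = sum(c / t for c, t in tasks)
print(f"total utilization = {utilization:.3f}")
print("EDF-schedulable on one processor" if utilization <= 1.0
      else "not schedulable: reduce load or add a processor")
```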

  8. Determination of optimal conditions of oxytetracyclin production from streptomyces rimosus

    International Nuclear Information System (INIS)

    Zouaghi, Atef

    2007-01-01

    Streptomyces rimosus is an oxytetracycline (OTC)-producing bacterium that exhibits activity against gram-positive and gram-negative bacteria. OTC is widely used not only in medicine but also in industrial production. Antibiotic production by Streptomyces covers a very wide range of conditions; however, antibiotic producers are particularly fastidious and must be cultivated with proper selection of media, such as the carbon source. In the present study we optimized the conditions of OTC production (composition of the production medium, pH, shaking and temperature). The results show that barley bran is the optimal medium for OTC production, at 28°C and pH 5.8, shaken at 150 rpm for 5 days. For antibiotic determination, OTC was extracted with different organic solvents. A thin-layer chromatography system was used for separation and identification of the OTC antibiotic. A high performance liquid chromatography (HPLC) method with ultraviolet detection was applied to determine the purity of the OTC. (Author). 24 refs

  9. Sediment Placement Areas 2012

    Data.gov (United States)

    California Department of Resources — Dredge material placement sites (DMPS), including active, inactive, proposed and historical placement sites. Dataset covers US Army Corps of Engineers San Francisco...

  10. Sediment Placement Areas 2012

    Data.gov (United States)

    California Natural Resource Agency — Dredge material placement sites (DMPS), including active, inactive, proposed and historical placement sites. Dataset covers US Army Corps of Engineers San Francisco...

  11. Optimum distributed generation placement with voltage sag effect minimization

    International Nuclear Information System (INIS)

    Biswas, Soma; Goswami, Swapan Kumar; Chatterjee, Amitava

    2012-01-01

    Highlights: ► A new optimal distributed generation placement algorithm is proposed. ► Optimal number, sizes and locations of the DGs are determined. ► Technical factors like loss and the voltage sag problem are minimized. ► The percentage savings are optimized. - Abstract: The present paper proposes a new formulation for the optimum distributed generator (DG) placement problem which considers a hybrid combination of technical factors, like minimization of the line loss and reduction of the voltage sag problem, and economic factors, like the installation and maintenance cost of the DGs. The new formulation is inspired by the idea that the optimum placement of the DGs can help in reducing and mitigating voltage dips in low voltage distribution networks. The problem is configured as a multi-objective, constrained optimization problem, where the optimal number of DGs, along with their sizes and bus locations, are obtained simultaneously. This problem has been solved using a genetic algorithm, a traditionally popular stochastic optimization algorithm. A few benchmark systems, both radial and networked (the 34-bus radial distribution system, the 30-bus loop distribution system and the IEEE 14-bus system), are considered as case studies, where the effectiveness of the proposed algorithm is aptly demonstrated.

  12. Heuristic Optimization Techniques for Determining Optimal Reserve Structure of Power Generating Systems

    DEFF Research Database (Denmark)

    Ding, Yi; Goel, Lalit; Wang, Peng

    2012-01-01

    Electric power generating systems are typical examples of multi-state systems (MSS). Sufficient reserve is critically important for maintaining generating system reliabilities. The reliability of a system can be increased by increasing the reserve capacity, noting that at the same time the reserve cost of the system will also increase. The reserve structure of a MSS should be determined based on striking a balance between the required reliability and the reserve cost. The objective of reserve management for a MSS is to schedule the reserve at the minimum system reserve cost while maintaining the required level of supply reliability to its customers. In previous research, Genetic Algorithm (GA) has been used to solve most reliability optimization problems. However, the GA is not very computationally efficient in some cases. In this chapter a new heuristic optimization technique—the particle swarm...

  13. The State Fiscal Policy: Determinants and Optimization of Financial Flows

    Directory of Open Access Journals (Sweden)

    Sitash Tetiana D.

    2017-03-01

    Full Text Available The article outlines the determinants of the state fiscal policy at the present stage of global transformations. Using the principles of financial science, it is determined that the regulation of financial flows within the fiscal sphere, namely the centralization and redistribution of GDP, which results in the regulation of the financial capacity of economic agents, is of key importance. It is emphasized that an urgent measure for improving the tax model is re-considering the provision of fiscal incentives, which are used to stimulate the accumulation of capital, investment activity, innovation, an increase in the competitiveness of national products, the expansion of exports, and an increase in the level of employment of the population. The necessity of applying instruments of fiscal regulation of financial flows is substantiated; such regulation should take place on the basis of institutional economics, with its emphasis on the analysis of institutional changes, the evolution of institutions and their impact on the behavior of participants in economic relations. At the same time, it is determined that the maximum effect of fiscal regulation of financial flows is ensured when the application of fiscal instruments is aimed not only at achieving the target values of the parameters of financial flows but also at overcoming institutional deformations. It is determined that the optimal movement of financial flows enables creating favorable conditions for the development and maintenance of financial balance in society and the achievement of the necessary level of competitiveness of the national economy.

  14. Climate, duration, and N placement determine N2O emissions in reduced tillage systems: a meta-analysis.

    Science.gov (United States)

    van Kessel, Chris; Venterea, Rodney; Six, Johan; Adviento-Borbe, Maria Arlene; Linquist, Bruce; van Groenigen, Kees Jan

    2013-01-01

    No-tillage and reduced tillage (NT/RT) management practices are being promoted in agroecosystems to reduce erosion, sequester additional soil C and reduce production costs. The impact of NT/RT on N2O emissions, however, has been variable with both increases and decreases in emissions reported. Herein, we quantitatively synthesize studies on the short- and long-term impact of NT/RT on N2O emissions in humid and dry climatic zones with emissions expressed on both an area- and crop yield-scaled basis. A meta-analysis was conducted on 239 direct comparisons between conventional tillage (CT) and NT/RT. In contrast to earlier studies, averaged across all comparisons, NT/RT did not alter N2O emissions compared with CT. However, NT/RT significantly reduced N2O emissions in experiments >10 years, especially in dry climates. No significant correlation was found between soil texture and the effect of NT/RT on N2O emissions. When fertilizer-N was placed at ≥5 cm depth, NT/RT significantly reduced area-scaled N2O emissions, in particular under humid climatic conditions. Compared to CT under dry climatic conditions, yield-scaled N2O increased significantly (57%) when NT/RT was implemented <10 years, but decreased significantly (27%) after ≥10 years of NT/RT. There was a significant decrease in yield-scaled N2O emissions in humid climates when fertilizer-N was placed at ≥5 cm depth. Therefore, in humid climates, deep placement of fertilizer-N is recommended when implementing NT/RT. In addition, NT/RT practices need to be sustained for a prolonged time, particularly in dry climates, to become an effective mitigation strategy for reducing N2O emissions. © 2012 Blackwell Publishing Ltd.

  15. Overconfidence, Managerial Optimism, and the Determinants of Capital Structure

    Directory of Open Access Journals (Sweden)

    Alexandre di Miceli da Silveira

    2008-12-01

    Full Text Available This research examines the determinants of the capital structure of firms introducing a behavioral perspective that has received little attention in corporate finance literature. The following central hypothesis emerges from a set of recently developed theories: firms managed by optimistic and/or overconfident people will choose more levered financing structures than others, ceteris paribus. We propose different proxies for optimism/overconfidence, based on the manager’s status as an entrepreneur or non-entrepreneur, an idea that is supported by theories and solid empirical evidence, as well as on the pattern of ownership of the firm’s shares by its manager. The study also includes potential determinants of capital structure used in earlier research. We use a sample of Brazilian firms listed in the Sao Paulo Stock Exchange (Bovespa) in the years 1998 to 2003. The empirical analysis suggests that the proxies for the referred cognitive biases are important determinants of capital structure. We also found as relevant explanatory variables: profitability, size, dividend payment and tangibility, as well as some indicators that capture the firms’ corporate governance standards. These results suggest that behavioral approaches based on human psychology research can offer relevant contributions to the understanding of corporate decision making.

  16. Determining of the Optimal Device Lifetime using Mathematical Renewal Models

    Directory of Open Access Journals (Sweden)

    Knežo Dušan

    2016-05-01

    Full Text Available The paper deals with the operation and equipment of machines in the process of organizing production. During operation, machines require maintenance and repairs, while in case of failure or wear it is necessary to replace them with new ones. For the process of replacing old machines with new ones the term renewal is used. The qualitative aspects of the renewal process are studied by renewal theory, which is mainly based on probability theory and mathematical statistics. Device lifetimes are closely related to the renewal of the devices. The presented article focuses on the mathematical derivation of renewal models and on determining the optimal lifetime of devices from the viewpoint of expenditures on the renewal process.
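
    A standard renewal-theory calculation of the kind the article discusses is the age-replacement policy: replace a device preventively at age T or correctively on failure, choosing T to minimize the long-run cost rate c(T) = [c_p R(T) + c_f (1 - R(T))] / integral_0^T R(t) dt. The sketch below evaluates this numerically for an assumed Weibull lifetime; the costs and parameters are illustrative, not taken from the article.

```python
# Age-replacement optimization sketch: minimize the long-run cost rate for an
# assumed Weibull lifetime. c_p = preventive cost, c_f = failure (corrective) cost.
import numpy as np

beta, eta = 2.5, 1000.0        # Weibull shape and scale (hours), assumed
c_p, c_f = 1.0, 5.0            # preventive vs. failure replacement cost

def reliability(t):
    return np.exp(-(t / eta)**beta)

ages = np.linspace(50.0, 2000.0, 400)
cost_rate = []
for T in ages:
    t = np.linspace(0.0, T, 2000)
    r = reliability(t)
    dt = t[1] - t[0]
    mean_cycle_length = dt * (r.sum() - 0.5 * (r[0] + r[-1]))   # E[min(X, T)], trapezoid rule
    expected_cycle_cost = c_p * reliability(T) + c_f * (1.0 - reliability(T))
    cost_rate.append(expected_cycle_cost / mean_cycle_length)

best = int(np.argmin(cost_rate))
print(f"optimal replacement age ~ {ages[best]:.0f} h, "
      f"cost rate ~ {cost_rate[best]:.5f} per hour")
```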

  17. A risk-based sensor placement methodology

    International Nuclear Information System (INIS)

    Lee, Ronald W.; Kulesz, James J.

    2008-01-01

    A risk-based sensor placement methodology is proposed to solve the problem of optimal location of sensors to protect population against the exposure to, and effects of, known and/or postulated chemical, biological, and/or radiological threats. Risk is calculated as a quantitative value representing population at risk from exposure at standard exposure levels. Historical meteorological data are used to characterize weather conditions as the frequency of wind speed and direction pairs. The meteorological data drive atmospheric transport and dispersion modeling of the threats, the results of which are used to calculate risk values. Sensor locations are determined via an iterative dynamic programming algorithm whereby threats detected by sensors placed in prior iterations are removed from consideration in subsequent iterations. In addition to the risk-based placement algorithm, the proposed methodology provides a quantification of the marginal utility of each additional sensor. This is the fraction of the total risk accounted for by placement of the sensor. Thus, the criteria for halting the iterative process can be the number of sensors available, a threshold marginal utility value, and/or a minimum cumulative utility achieved with all sensors
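
    The iterative "place, remove covered risk, repeat" scheme with marginal utilities described above is essentially a greedy selection. The sketch below illustrates it on invented scenario weights that stand in for the population-risk values a dispersion model would produce.

```python
# Greedy risk-based sensor placement sketch. Each candidate location detects a
# set of threat scenarios; scenario weights stand in for modelled population risk.
risk = {"s1": 40.0, "s2": 25.0, "s3": 20.0, "s4": 10.0, "s5": 5.0}
coverage = {                     # scenarios detected by a sensor at each site
    "A": {"s1", "s2"},
    "B": {"s2", "s3", "s4"},
    "C": {"s1", "s5"},
    "D": {"s4", "s5"},
}

total_risk = sum(risk.values())
remaining = set(risk)
placed = []

while remaining and len(placed) < 3:          # e.g. budget of three sensors
    # pick the site with the largest marginal risk reduction
    site = max(coverage, key=lambda s: sum(risk[x] for x in coverage[s] & remaining))
    gain = sum(risk[x] for x in coverage[site] & remaining)
    if gain == 0:
        break
    placed.append((site, gain / total_risk))  # marginal utility as a fraction
    remaining -= coverage[site]

for site, utility in placed:
    print(f"place sensor at {site}: marginal utility {utility:.2f}")
```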

  18. A Monte Carlo simulation technique to determine the optimal portfolio

    Directory of Open Access Journals (Sweden)

    Hassan Ghodrati

    2014-03-01

    Full Text Available During the past few years, there have been several studies on portfolio management. One of the primary concerns on any stock market is to detect the risk associated with various assets. One of the recognized methods for measuring, forecasting, and managing risk is Value at Risk (VaR), which has drawn much attention from financial institutions in recent years. VaR is a method for recognizing and evaluating risk that uses standard statistical techniques, and it has increasingly been used in other fields as well. The present study has measured the value at risk of 26 companies from the chemical industry in the Tehran Stock Exchange over the period 2009-2011 using the Monte Carlo simulation technique at the 95% confidence level. The variable used in the present study is the daily return resulting from daily stock price changes. Moreover, the optimal investment weight for each of the selected stocks has been determined using a hybrid Markowitz and Winker model. The results showed that the maximum loss would not exceed 1,259,432 Rials at the 95% confidence level on the following day.
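
    For readers unfamiliar with the technique, a minimal Monte Carlo VaR computation looks like the sketch below: it assumes normally distributed daily returns estimated from synthetic data and reads the 5th percentile of simulated one-day losses. It is not the study's Markowitz and Winker hybrid weighting model, and all figures are invented.

```python
# Minimal Monte Carlo Value-at-Risk sketch (95% confidence, 1-day horizon).
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily returns for 3 assets (stand-in for the 26 chemical stocks).
hist = rng.normal(loc=[0.0004, 0.0002, 0.0003], scale=[0.012, 0.018, 0.015],
                  size=(750, 3))
weights = np.array([0.5, 0.3, 0.2])          # assumed portfolio weights
portfolio_value = 1_000_000_000              # e.g. 1e9 Rials

mu, cov = hist.mean(axis=0), np.cov(hist, rowvar=False)

sims = rng.multivariate_normal(mu, cov, size=100_000)
pnl = portfolio_value * (sims @ weights)     # simulated 1-day profit/loss

var_95 = -np.percentile(pnl, 5)              # loss exceeded only 5% of the time
print(f"1-day 95% VaR ~ {var_95:,.0f} Rials")
```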

  19. Geometric leaf placement strategies

    International Nuclear Information System (INIS)

    Fenwick, J D; Temple, S W P; Clements, R W; Lawrence, G P; Mayles, H M O; Mayles, W P M

    2004-01-01

    Geometric leaf placement strategies for multileaf collimators (MLCs) typically involve the expansion of the beam's-eye-view contour of a target by a uniform MLC margin, followed by movement of the leaves until some point on each leaf end touches the expanded contour. Film-based dose-distribution measurements have been made to determine appropriate MLC margins (characterized through an index d90) for multileaves set using one particular strategy to straight lines lying at various angles to the direction of leaf travel. Simple trigonometric relationships exist between different geometric leaf placement strategies and are used to generalize the results of the film work into d90 values for several different strategies. Measured d90 values vary both with angle and leaf placement strategy. A model has been derived that explains and describes quite well the observed variations of d90 with angle. The d90 angular variations of the strategies studied differ substantially, and geometric and dosimetric reasoning suggests that the best strategy is the one with the least angular variation. Using this criterion, the best straightforwardly implementable strategy studied is a 'touch circle' approach for which semicircles are imagined to be inscribed within leaf ends, the leaves being moved until the semicircles just touch the expanded target outline

  20. Relay Placement for FSO Multihop DF Systems With Link Obstacles and Infeasible Regions

    KAUST Repository

    Zhu, Bingcheng; Cheng, Julian; Alouini, Mohamed-Slim; Wu, Lenan

    2015-01-01

    Optimal relay placement is studied for free-space optical multihop communication with link obstacles and infeasible regions. An optimal relay placement scheme is proposed to achieve the lowest outage probability, enable the links to bypass obstacles

  1. Boat boarding ladder placement

    Science.gov (United States)

    1998-04-01

    Presented in three volumes; 'Boat Boarding Ladder Placement,' which explores safety considerations including potential for human contact with a rotating propeller; 'Boat Handhold Placement,' which explores essential principles and methods of fall con...

  2. A Robust Optimization Based Energy-Aware Virtual Network Function Placement Proposal for Small Cell 5G Networks with Mobile Edge Computing Capabilities

    OpenAIRE

    Blanco, Bego; Taboada, Ianire; Fajardo, Jose Oscar; Liberal, Fidel

    2017-01-01

    In the context of cloud-enabled 5G radio access networks with network function virtualization capabilities, we focus on the virtual network function placement problem for a multitenant cluster of small cells that provide mobile edge computing services. Under an emerging distributed network architecture and hardware infrastructure, we employ cloud-enabled small cells that integrate microservers for virtualization execution, equipped with additional hardware appliances. We develop an energy-awa...

  3. Topologically determined optimal stochastic resonance responses of spatially embedded networks

    International Nuclear Information System (INIS)

    Gosak, Marko; Marhl, Marko; Korosak, Dean

    2011-01-01

    We have analyzed the stochastic resonance phenomenon on spatial networks of bistable and excitable oscillators, which are connected according to their location and the amplitude of external forcing. By smoothly altering the network topology from a scale-free (SF) network with dominating long-range connections to a network where principally only adjacent oscillators are connected, we reveal that besides an optimal noise intensity, there is also a most favorable interaction topology at which the best correlation between the response of the network and the imposed weak external forcing is achieved. For various distributions of the amplitudes of external forcing, the optimal topology is always found in the intermediate regime between the highly heterogeneous SF network and the strong geometric regime. Our findings thus indicate that a suitable number of hubs and with that an optimal ratio between short- and long-range connections is necessary in order to obtain the best global response of a spatial network. Furthermore, we link the existence of the optimal interaction topology to a critical point indicating the transition from a long-range interactions-dominated network to a more lattice-like network structure.

  4. Use of Simplex Method in Determination of Optimal Rational ...

    African Journals Online (AJOL)

    The optimal rational composition was found to be: Nsu Clay = 47.8%, quartz = 33.7% and CaCO3 = 18.5%. The other clay from Ukpor was found unsuitable at the firing temperature (1000°C) used. It showed bending strength lower than the standard requirement for all compositions studied. To improve the strength an ...

  5. Determination of Pareto frontier in multi-objective maintenance optimization

    International Nuclear Information System (INIS)

    Certa, Antonella; Galante, Giacomo; Lupo, Toni; Passannanti, Gianfranco

    2011-01-01

    The objective of a maintenance policy generally is the global maintenance cost minimization that involves not only the direct costs for both the maintenance actions and the spare parts, but also those due to the system stop for preventive maintenance and the downtime for failure. For some operating systems, the failure event can be dangerous, so they are required to operate with a very high reliability level between two consecutive fixed stops. The present paper attempts to identify the set of elements on which to perform maintenance actions so that the system can assure the required reliability level until the next fixed stop for maintenance, minimizing both the global maintenance cost and the total maintenance time. In order to solve this constrained multi-objective optimization problem, an effective approach is proposed to obtain the best solutions (that is, the Pareto optimal frontier) among which the decision maker will choose the most suitable one. As is well known, describing the whole Pareto optimal frontier generally is a troublesome task. The paper proposes an algorithm able to rapidly overcome this problem and its effectiveness is shown by an application to a case study regarding a complex series-parallel system.
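
    For a bi-objective cost/time setting like the one described, the non-dominated filtering step can be sketched as follows; the candidate maintenance plans are random placeholders rather than the paper's series-parallel system model.

```python
# Pareto-front extraction sketch for two minimization objectives:
# (maintenance cost, maintenance time). Candidates are random placeholders.
import numpy as np

rng = np.random.default_rng(7)
candidates = rng.uniform(0, 100, size=(200, 2))   # columns: cost, time

def is_dominated(p, others):
    # p is dominated if some other point is <= in both objectives and < in one
    return np.any(np.all(others <= p, axis=1) & np.any(others < p, axis=1))

pareto = np.array([p for i, p in enumerate(candidates)
                   if not is_dominated(p, np.delete(candidates, i, axis=0))])

pareto = pareto[np.argsort(pareto[:, 0])]         # sort by cost for readability
print(f"{len(pareto)} non-dominated maintenance plans (cost, time):")
print(np.round(pareto, 1))
```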

  6. Model for determining and optimizing delivery performance in industrial systems

    Directory of Open Access Journals (Sweden)

    Fechete Flavia

    2017-01-01

    Full Text Available Performance means achieving organizational objectives regardless of their nature and variety, and even exceeding them. Improving performance is one of the major goals of any company. Achieving global performance means not only achieving economic performance; other functions must also be taken into account, such as quality, delivery, costs and even employee satisfaction. This paper aims to improve the delivery performance of an industrial system, given its very low results. The delivery performance takes into account all categories of performance indicators, such as on-time delivery, backlog efficiency and transport efficiency. The research focused on optimizing the delivery performance of the industrial system using linear programming. Modeling the delivery function using linear programming led to precise quantities to be produced and delivered each month by the industrial system in order to minimize transport costs, satisfy customer orders and control stock levels. The optimization led to a substantial improvement in all four performance indicators that concern deliveries.
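
    The kind of monthly delivery model described can be sketched as a small transportation LP: minimize transport cost subject to plant capacity and customer demand. The data below are invented and the model is far simpler than the one in the paper.

```python
# Tiny transportation LP sketch: 2 production sites x 3 customers, monthly plan.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],      # unit transport cost: site i -> customer j
                 [5.0, 3.0, 7.0]])
capacity = np.array([80.0, 70.0])      # monthly production capacity per site
demand = np.array([50.0, 60.0, 30.0])  # monthly customer orders

n_sites, n_cust = cost.shape
c = cost.ravel()                       # decision variable x[i, j], flattened row-wise

# Capacity constraints: sum_j x[i, j] <= capacity[i]
A_ub = np.zeros((n_sites, n_sites * n_cust))
for i in range(n_sites):
    A_ub[i, i * n_cust:(i + 1) * n_cust] = 1.0

# Demand constraints: sum_i x[i, j] == demand[j]
A_eq = np.zeros((n_cust, n_sites * n_cust))
for j in range(n_cust):
    A_eq[j, j::n_cust] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * (n_sites * n_cust), method="highs")

print("monthly shipment plan:\n", res.x.reshape(n_sites, n_cust))
print("minimum transport cost:", res.fun)
```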

  7. Determination and optimization of spatial samples for distributed measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Huo, Xiaoming (Georgia Institute of Technology, Atlanta, GA); Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong (Georgia Institute of Technology, Atlanta, GA)

    2010-10-01

    There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.

  8. Determinants of Optimal Adherence to Antiretroviral Therapy among ...

    African Journals Online (AJOL)

    SITWALA COMPUTERS

    medication side effects and adolescence were associated with non-adherence (p ... especially the social determinants of health surrounding ... irrespective of their CD4 cell count. ..... reported were cell phone alarm, radio news hour time, or a.

  9. Optimizing the radioimmunologic determination methods for cortisol and calcitonin

    International Nuclear Information System (INIS)

    Stalla, G.

    1981-01-01

    In order to build up a specific 125-iodine cortisol radioimmunoassay (RIA), pure cortisol-3-(O-carboxymethyl)oxime was synthesized for the production of antigens and tracers. The cortisol derivative was coupled with tyrosine methyl ester and then labelled with 125-iodine. For antigen production the cortisol derivative was coupled by the same method to thyreoglobulin. The major part of the antisera obtained in this way presented high titres. Apart from a high specificity for cortisol, a high affinity was found in the acidic pH range and quantified with a specially developed computer program. An extraction step in the cortisol RIA could thereby be avoided. The assay was carried out with an optimized double-antibody principle: the reaction time between the first and the second antiserum was considerably accelerated by the addition of polyethylene glycol. The assay can be carried out automatically by applying a modular analysis system, which operates fast and provides a large capacity. The required quality and accuracy controls were performed. Comparison of this assay with other cortisol RIAs showed good correlation. The RIA for human calcitonin was improved. For separating bound and free hormone, the optimized double-antibody technique was applied. The antiserum was examined with respect to its affinity for calcitonin. For the production of the 'zero serum' the Florisil extraction method was used. The criteria of the quality and accuracy controls were complied with. Significantly increased calcitonin concentrations were found in a patient group with medullary thyroid carcinoma and in two patients with an additional phaeochromocytoma. (orig./MG) [de

  10. A Demonstration of Optimal Apodization Determination for Proper Lateral Modulation

    Science.gov (United States)

    Sumi, Chikayoshi; Komiya, Yuichi; Uga, Shinya

    2009-07-01

    We have realized effective ultrasound (US) beamforming by the steering of plural beams and apodization for B-mode imaging with a high lateral resolution and for accurate measurement of tissue or blood displacement vectors and/or strain tensors using the multidimensional cross-spectrum phase gradient method (MCSPGM), or multidimensional autocorrelation or Doppler methods (MAM and MDM) using multidimensional analytic signals. For instance, the coherent superposition of steered beams performed in the lateral cosine modulation method (LCM) has a higher potential for realizing a more accurate measurement of a displacement vector than the synthesis of the displacement vector from the accurately measured axial displacements obtained by the multidimensional synthetic aperture method (MDSAM), the multidirectional transmission method (MTM) or the use of plural US transducers. Originally, the apodization function to be used for realizing a designed point spread function (PSF) was obtained by the Fraunhofer approximation (FA). However, to obtain the best approximation of the designed PSF in the least-squares sense, we proposed a linear optimization (LO) method. Furthermore, on the basis of knowledge about the losses of US energy during propagation, we have recently developed a nonlinear optimization (NLO) method, in which the feet of the main lobes in the apodization function are properly truncated. Thus, NLO also allows a decrease in the number of channels or the confinement of the effective aperture. In this study, to gain insight into the ideal shape of the PSF, the accuracies of two-dimensional (2D) displacement vector measurements were compared for typical PSFs with distinct lateral envelope shapes, particularly in terms of full width at half maximum (FWHM) and the length of the feet, i.e., the Gaussian function, Hanning window and parabolic function. It was confirmed that a PSF having a wide FWHM and short feet was ideal. Such a PSF yielded an echo with a high signal
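
    To make the least-squares (LO) idea concrete: under a Fraunhofer-type model the lateral beam profile is a linear map of the element weights, so a designed profile can be fitted by ordinary least squares. The geometry, wavelength and target profile in the sketch below are assumed for illustration and do not reproduce the authors' LO or NLO formulations.

```python
# Least-squares apodization sketch: the lateral profile is modelled as a linear
# (cosine/Fourier-like) map of the element weights, so a designed profile can
# be fitted with lstsq. All parameters are assumed.
import numpy as np

n_elem = 64
pitch = 0.3e-3                      # element pitch [m] (assumed)
wavelength = 0.3e-3                 # [m] (assumed, ~5 MHz in tissue)
focal_depth = 30e-3                 # [m]

u = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch        # element positions
x = np.linspace(-3e-3, 3e-3, 201)                         # lateral field points

# Linear model: profile(x) = sum_n w[n] * cos(2*pi*x*u[n] / (lambda*z))
A = np.cos(2 * np.pi * np.outer(x, u) / (wavelength * focal_depth))

sigma = 0.6e-3                                            # desired Gaussian PSF width
desired = np.exp(-x**2 / (2 * sigma**2))

w, *_ = np.linalg.lstsq(A, desired, rcond=None)           # apodization weights
fit_error = np.linalg.norm(A @ w - desired) / np.linalg.norm(desired)
print(f"relative PSF fit error: {fit_error:.3e}")
```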

  11. A projection method for under determined optimal experimental designs

    KAUST Repository

    Long, Quan; Scavino, Marco; Tempone, Raul; Wang, Suojin

    2014-01-01

    A new implementation, based on the Laplace approximation, was developed in (Long, Scavino, Tempone, & Wang 2013) to accelerate the estimation of the post–experimental expected information gains in the model parameters and predictive quantities of interest. A closed–form approximation of the inner integral and the order of the corresponding dominant error term were obtained in the cases where the parameters are determined by the experiment. In this work, we extend that method to the general cases where the model parameters could not be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the corresponding Jacobian matrix, so that the information gain (Kullback–Leibler divergence) can be reduced to an integration against the marginal density of the transformed parameters which are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the projected posterior covariance matrix. To deal with the issue of dimensionality in a complex problem, we use Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand function. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear under determined numerical examples.
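
    The projection step described here (a Laplace approximation in the directions orthogonal to the null space of the Jacobian) can be illustrated with an SVD-based split of parameter space into data-informed and undetermined directions; the Jacobian below is a random stand-in, not a real experiment model.

```python
# SVD-based split of parameter space into data-informed directions and the
# null space of a (random, stand-in) Jacobian, then projection of a prior
# covariance onto the informed subspace.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_param = 4, 7                     # fewer observations than parameters
J = rng.normal(size=(n_obs, n_param))     # stand-in Jacobian of the observation model

U, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-10 * s[0]))
V_informed = Vt[:rank].T                  # directions constrained by the data
V_null = Vt[rank:].T                      # directions the experiment cannot determine

prior_cov = np.diag([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
projected_cov = V_informed.T @ prior_cov @ V_informed   # covariance in informed coordinates

print("parameter dimension:", n_param, " informed directions:", rank,
      " null-space dimension:", V_null.shape[1])
print("projected prior covariance (informed subspace):\n", np.round(projected_cov, 3))
```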

  12. A projection method for under determined optimal experimental designs

    KAUST Repository

    Long, Quan

    2014-01-09

    A new implementation, based on the Laplace approximation, was developed in (Long, Scavino, Tempone, & Wang 2013) to accelerate the estimation of the post–experimental expected information gains in the model parameters and predictive quantities of interest. A closed–form approximation of the inner integral and the order of the corresponding dominant error term were obtained in the cases where the parameters are determined by the experiment. In this work, we extend that method to the general cases where the model parameters could not be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the corresponding Jacobian matrix, so that the information gain (Kullback–Leibler divergence) can be reduced to an integration against the marginal density of the transformed parameters which are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the projected posterior covariance matrix. To deal with the issue of dimensionality in a complex problem, we use Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand function. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear under determined numerical examples.

  13. Determining optimal pinger spacing for harbour porpoise bycatch mitigation

    DEFF Research Database (Denmark)

    Larsen, Finn; Krog, Carsten; Eigaard, Ole Ritzau

    2013-01-01

    A trial was conducted in the Danish North Sea hake gillnet fishery in July to September 2006 to determine whether the spacing of the Aquatec AQUAmark100 pinger could be increased without reducing the effectiveness of the pinger in mitigating harbour porpoise bycatch. The trial was designed as a c...

  14. Determining the optimal monetary policy instrument for Nigeria

    OpenAIRE

    Udom, Solomon I.; Yaaba, Baba N.

    2015-01-01

    It is considered inapt for central banks to adjust reserve money (the quantity of money) and the interest rate (the price of money) at the same time, and this necessitates the choice of an instrument. Ample evidence exists in microeconomic theory on the undesirability of manipulating both price and quantity simultaneously in a free market structure. The market, in line with the consensus among economists, either controls the price and allows quantity to be determined by market forces, or influence qu...

  15. On the Determinants of Optimal Border Taxes for a Small Open Economy

    DEFF Research Database (Denmark)

    Munk, Knud Jørgen; Rasmussen, Bo Sandemann

    of the primary factor and domestic consumption of the export good cannot be taxed is nevertheless a constraint; this insight provides the key to understanding what determines the optimal tariff structure. The optimal border tax structure is derived for both exogenous and endogenous labour supply, and the results...... are interpreted in the spirit of the Corlett-Hague results for the optimal tax structure in a closed economy and compared with results from CGE models....

  16. Placement by thermodynamic simulated annealing

    International Nuclear Information System (INIS)

    Vicente, Juan de; Lanchares, Juan; Hermida, Roman

    2003-01-01

    Combinatorial optimization problems arise in different fields of science and engineering. There exist some general techniques for coping with these problems, such as simulated annealing (SA). In spite of SA's success, it usually requires costly experimental studies to fine-tune the most suitable annealing schedule. In this Letter, the classical integrated circuit placement problem is addressed by Thermodynamic Simulated Annealing (TSA). TSA provides a new annealing schedule derived from thermodynamic laws. Unlike in SA, the temperature in TSA is free to evolve and its value is continuously updated from the variation of state functions such as the internal energy and entropy. Thereby, TSA achieves the high quality results of SA while providing interesting adaptive features.
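
    For readers unfamiliar with the baseline, a classical simulated-annealing placement loop with a fixed geometric cooling schedule looks like the sketch below. Note that this is ordinary SA, not the thermodynamic temperature update that distinguishes TSA, and the toy netlist is hypothetical.

```python
# Classical simulated-annealing sketch for a toy linear placement problem
# (minimize total wire length of a small netlist). Fixed geometric cooling,
# i.e. the SA baseline, NOT the thermodynamic (TSA) schedule.
import math
import random

random.seed(0)
nets = [(0, 3), (1, 4), (2, 5), (0, 5), (1, 2), (3, 4)]   # hypothetical 2-pin nets
n_cells = 6

def wirelength(order):
    slot = {cell: i for i, cell in enumerate(order)}
    return sum(abs(slot[a] - slot[b]) for a, b in nets)

order = list(range(n_cells))
cost = wirelength(order)
T, alpha = 10.0, 0.95

for _ in range(2000):
    i, j = random.sample(range(n_cells), 2)
    order[i], order[j] = order[j], order[i]               # propose a swap
    new_cost = wirelength(order)
    if new_cost <= cost or random.random() < math.exp((cost - new_cost) / T):
        cost = new_cost                                   # accept the move
    else:
        order[i], order[j] = order[j], order[i]           # reject: undo the swap
    T = max(alpha * T, 1e-3)

print("final placement:", order, "wire length:", cost)
```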

  17. OPTIMIZING THE PLACEMENT OF A WORK-PIECE AT A MULTI-POSITION ROTARY TABLE OF TRANSFER MACHINE WITH VERTICAL MULTI-SPINDLE HEAD

    Directory of Open Access Journals (Sweden)

    N. N. Guschinski

    2015-01-01

    Full Text Available The problem of minimizing the weight of a transfer machine with a multi-position rotary table, by choosing the placement of a work-piece on the table for processing a homogeneous batch of work-pieces, is considered. To solve this problem a mathematical model and a heuristic particle swarm optimization algorithm are proposed. The results of numerical experiments for two real problems of this type are given. The experiments revealed that the particle swarm optimization algorithm is more effective for solving the problem than the random search and LP-search methods.

  18. Determination of an Optimal Control Strategy for a Generic Surface Vehicle

    Science.gov (United States)

    2014-06-18

    Keywords: autonomous vehicles, boundary value problem, dynamic programming, surface vehicles, optimal control, path planning. ... to follow prescribed motion trajectories. In particular, for autonomous vehicles, this motion trajectory is given by the determination of the

  19. Geometry and Topology Optimization of Statically Determinate Beams under Fixed and Most Unfavorably Distributed Load

    Directory of Open Access Journals (Sweden)

    Agata Kozikowska

    Full Text Available The paper concerns topology and geometry optimization of statically determinate beams with an arbitrary number of pin supports. The beams are simultaneously exposed to uniform dead load and arbitrarily distributed live load and are optimized for the absolute maximum bending moment. First, all the beams with fixed topology are subjected to geometrical optimization by a genetic algorithm. Strict mathematical formulas for the calculation of optimal geometrical parameters are found for all topologies and any ratio of dead to live load. Then beams with the same minimal values of the objective function and different topologies are classified into groups called topological classes. The detailed characteristics of these classes are described.

  20. Determination of Optimal Double Sampling Plan using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Sampath Sundaram

    2012-03-01

    Full Text Available Designing a double sampling plan requires identification of sample sizes and acceptance numbers. In this paper a genetic algorithm has been designed for the selection of optimal acceptance numbers and sample sizes for specified producer's and consumer's risks. Implementation of the algorithm has been illustrated numerically for different choices of the quantities involved in a double sampling plan.
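
    A brute-force alternative to the genetic-algorithm search, useful for checking small cases, is to enumerate candidate plans against the binomial OC function of a double sampling plan; the quality levels, risks and the n2 = 2*n1 convention below are illustrative assumptions, not the paper's settings.

```python
# Brute-force search for a double sampling plan (n1, c1, n2, c2) satisfying
# producer's risk at AQL and consumer's risk at LTPD via the binomial OC curve.
from scipy.stats import binom

AQL, LTPD = 0.01, 0.05          # acceptable / limiting quality levels (assumed)
alpha, beta = 0.05, 0.10        # producer's and consumer's risks (assumed)

def accept_prob(p, n1, c1, n2, c2):
    # Accept on the 1st sample if d1 <= c1; take a 2nd sample when c1 < d1 <= c2
    # and accept if d1 + d2 <= c2.
    pa = binom.cdf(c1, n1, p)
    for d1 in range(c1 + 1, c2 + 1):
        pa += binom.pmf(d1, n1, p) * binom.cdf(c2 - d1, n2, p)
    return pa

best = None
for n1 in range(20, 201, 5):
    for c1 in range(0, 4):
        for c2 in range(c1 + 1, c1 + 7):
            n2 = 2 * n1                                   # common simplifying choice
            if (accept_prob(AQL, n1, c1, n2, c2) >= 1 - alpha and
                    accept_prob(LTPD, n1, c1, n2, c2) <= beta):
                total = n1 + n2                           # crude size proxy
                if best is None or total < best[0]:
                    best = (total, n1, c1, n2, c2)

print("plan (n1, c1, n2, c2):", best[1:] if best else "none found in grid")
```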

  1. The Monetary Policy of the NBU and its Impact on the Placement of Households’ Savings

    Directory of Open Access Journals (Sweden)

    Perepolkina Olena О.

    2018-03-01

    Full Text Available The article examines the efficiency of monetary policy implementation in Ukraine in the context of determining the optimal ways to place households' savings. The prospects of making deposits in both national and foreign currency, as the most common directions of savings placement, are considered. The research has identified that the main risks in the placement of household savings as deposits in the national currency are the likelihood of bankruptcy of financial institutions, imperfections in the functioning of the deposit guarantee system, inflationary fluctuations, and devaluation of the national monetary unit. Significant deterrents to the placement of foreign currency deposits are low interest rates, numerous restrictions in currency regulation, and a generally low level of trust in the banking system. Directions for increasing the efficiency of monetary policy are proposed that will not only increase the attractiveness of deposits for households, but also create the basis for macroeconomic stabilization in Ukraine.

  2. CATTLE FEEDER BEHAVIOR AND FEEDER CATTLE PLACEMENTS

    OpenAIRE

    Kastens, Terry L.; Schroeder, Ted C.

    1994-01-01

    Cattle feeders appear irrational when they place cattle on feed when projected profit is negative. Long futures positions appear to offer superior returns to cattle feeding investment. Cattle feeder behavior suggests that they believe a downward bias in live cattle futures persists and that cattle feeders use different expectations than the live cattle futures market price when making placement decisions. This study examines feeder cattle placement determinants, comparing performance of expec...

  3. ESL Placement and Schools

    Science.gov (United States)

    Callahan, Rebecca; Wilkinson, Lindsey; Muller, Chandra; Frisco, Michelle

    2010-01-01

    In this study, the authors explore English as a Second Language (ESL) placement as a measure of how schools label and process immigrant students. Using propensity score matching and data from the Adolescent Health and Academic Achievement Study and the National Longitudinal Study of Adolescent Health, the authors estimate the effect of ESL placement on immigrant achievement. In schools with more immigrant students, the authors find that ESL placement results in higher levels of academic performance; in schools with few immigrant students, the effect reverses. This is not to suggest a one-size-fits-all policy; many immigrant students, regardless of school composition, generational status, or ESL placement, struggle to achieve at levels sufficient for acceptance to a 4-year university. This study offers several factors to be taken into consideration as schools develop policies and practices to provide immigrant students opportunities to learn. PMID:20617111

  4. Spatially Modeling the Impact of Terrain on Wind Speed and Dry Particle Deposition Across Lake Perris in Southern California to Determine In Situ Sensor Placement

    Science.gov (United States)

    Brooks, A. N.

    2014-12-01

    While developed countries have implemented engineering techniques and sanitation technologies to keep water resources clean from runoff and ground contamination, air pollution and its contribution of harmful contaminants to our water resources has yet to be fully understood and managed. Due to the large spatial and temporal extent and subsequent computational intensity required to understand atmospheric deposition as a pollutant source, a geographic information system (GIS) was utilized. This project developed a multi-step workflow to better define the placement of in situ sensors on Lake Perris in Southern California. Utilizing a variety of technologies including ArcGIS 10.1 with 3D and Spatial Analyst extensions and WindNinja, the impact of terrain on wind speed and direction was simulated and the spatial distribution of contaminant deposition across Lake Perris was calculated as flux. Specifically, the flux of particulate matter (PM10) at the air - water interface of a lake surface was quantified by season for the year of 2009. Integrated Surface Hourly (ISH) wind speed and direction data and ground station air quality measurements from the California Air Resources Board were processed and integrated for use within ModelBuilder. Results indicate that surface areas nearest Alessandro Island and the dam of Lake Perris should be avoided when placing in situ sensors. Furthermore, the location of sensor placement is dependent on seasonal fluctuations of PM10 which can be modeled using the techniques used in this study.

  5. Optimization of photometric determination of U with arsenazo III for direct determination of U in steels, soils and waters

    International Nuclear Information System (INIS)

    Kosturiak, A.; Talanova, A.; Rurikova, D.; Kalavska, D.

    1984-01-01

    Conditions were optimized for the reaction of U(VI) with arsenazo III. Recommended as the optimal medium for photometric determination of uranium in the concentration range 0.5 to 50 μg U/ml was the glycine buffer with pH 1.2 to 2.2. The results of the suggested method have better reproducibility than those of the mineral acid procedure used so far. Complexone III should be added to mask the other cations accompanying uranium in steels, waters and rocks. (author)

  6. ECG-Based Detection of Early Myocardial Ischemia in a Computational Model: Impact of Additional Electrodes, Optimal Placement, and a New Feature for ST Deviation.

    Science.gov (United States)

    Loewe, Axel; Schulze, Walther H W; Jiang, Yuan; Wilhelms, Mathias; Luik, Armin; Dössel, Olaf; Seemann, Gunnar

    2015-01-01

    In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed if an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject specifically optimized electrode were similar to those of the BSPM: 2-11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased 12-lead detection rate by 7% for a reasonable threshold.
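
    The exact definition of the ST-segment envelope is not given in the record, so the sketch below is only one plausible reading: on synthetic, baseline-corrected ST-segment samples it computes the classic per-lead ST time integral and a "K point"-style value read at the minimum of an across-lead envelope. The sampling rate, segment length and envelope construction are all assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        fs = 500.0                                        # sampling rate in Hz (assumed)
        st = 0.05 + 0.02 * rng.standard_normal((12, 40))  # 12 leads, 80 ms of ST segment (mV)

        # Classic feature: per-lead ST time integral (mV*s), largest magnitude reported.
        st_integral = st.sum(axis=1) / fs
        feature_integral = st_integral[np.argmax(np.abs(st_integral))]

        # "K point"-style feature: envelope across leads at every time sample, then the
        # baseline deviation read where that envelope is smallest.
        envelope = np.max(np.abs(st), axis=0)
        k_index = int(np.argmin(envelope))
        k_point_deviation = envelope[k_index]

        print(f"ST integral feature: {feature_integral:.4f} mV*s")
        print(f"K point deviation:   {k_point_deviation:.4f} mV at sample {k_index}")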

  7. Optimal design and placement of serpentine heat exchangers for indirect heat withdrawal, inside flat plate integrated collector storage solar water heaters (ICSSWH)

    Energy Technology Data Exchange (ETDEWEB)

    Gertzos, K.P.; Caouris, Y.G.; Panidis, T. [Dept. of Mechanical Engineering and Aeronautics, University of Patras, 265 00 Patras (Greece)

    2010-08-15

    Parameters that affect the temperature at which service hot water (SHW) is offered by an immersed tube heat exchanger (HX), inside a flat plate Integrated Collector Storage Solar Water Heater (ICSSWH), are examined numerically, by means of Computational Fluid Dynamics (CFD) analysis. The storage water is not refreshed and serves for heat accumulation. Service hot water is drawn off indirectly, through an immersed serpentine heat exchanger. To intensify the heat transfer process, the storage water is agitated by recirculation through a pump, which runs only when service water flows inside the heat exchanger. Three main factors which influence the performance are optimized: the position of the HX relative to the tank walls, the HX length and the tube diameter. All three factors are explored so as to maximize the service water outlet temperature. The settling time of the optimum configuration is also computed. Various 3-D CFD models were developed using the FLUENT package. The heat transfer rate between the two circuits of the optimum configuration is maintained at high levels, leading to service water outlet temperatures 1-7 C lower than tank water temperatures, for the examined SHW flow rates. The settling time is kept at sufficiently low values, around 20 s. The optimal position was found to be with the HX in contact with the front and back walls of the tank, with an optimum inner tube diameter of 16 mm, while an acceptable HX length was found to be about 21.5 m. (author)

  8. Human error considerations and annunciator effects in determining optimal test intervals for periodically inspected standby systems

    International Nuclear Information System (INIS)

    McWilliams, T.P.; Martz, H.F.

    1981-01-01

    This paper incorporates the effects of four types of human error in a model for determining the optimal time between periodic inspections which maximizes the steady state availability for standby safety systems. Such safety systems are characteristic of nuclear power plant operations. The system is modeled by means of an infinite state-space Markov chain. The purpose of the paper is to demonstrate techniques for computing the steady-state availability A and the optimal periodic inspection interval tau* for the system. The model can be used to investigate the effects of human error probabilities on optimal availability, to study the benefits of annunciating the standby system, and to determine optimal inspection intervals. Several examples which are representative of nuclear power plant applications are presented.
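
    A minimal numerical sketch of the trade-off the record describes, using a standard textbook approximation rather than the paper's infinite state-space Markov chain: unavailability is modeled as lambda*tau/2 (undetected failures) plus t_test/tau (test outage) plus a constant human-error term. All parameter values are illustrative assumptions.

        import numpy as np

        lam = 1.0e-4        # standby failure rate per hour (assumed)
        t_test = 2.0        # inspection/test duration in hours (assumed)
        p_he = 1.0e-3       # constant human-error contribution to unavailability (assumed)

        tau = np.linspace(10.0, 5000.0, 50000)           # candidate inspection intervals (h)
        unavail = lam * tau / 2.0 + t_test / tau + p_he  # approximate unavailability
        best = int(np.argmin(unavail))

        print(f"grid-search tau* = {tau[best]:7.1f} h, A = {1.0 - unavail[best]:.6f}")
        print(f"analytic    tau* = {np.sqrt(2.0 * t_test / lam):7.1f} h")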

  9. OPTIMIZING CONDITIONS FOR SPECTROPHOTOMETRIC DETERMINATION OF TOTAL POLYPHENOLS IN WINES USING FOLIN-CIOCALTEU REAGENT

    Directory of Open Access Journals (Sweden)

    Daniel Bajčan

    2013-02-01

    Full Text Available Wine is a complex beverage that obtains its properties mainly due to the synergistic effect of alcohol, organic acids, carbohydrates, as well as phenolic and aromatic substances. At present, we can observe an increased interest in the study of polyphenols in wines, which have antioxidant, antimicrobial, anti-inflammatory, anti-cancer and many other beneficial effects. Moderate and regular consumption of red wine especially, with its high content of phenolic compounds, has a beneficial effect on human health. The aim of this work was to optimize conditions for spectrophotometric determination of total polyphenols in wine using Folin-Ciocalteu reagent. Based on several studies, in order to minimize chemical use and optimize analysis time, we have proposed a method for the determination of total polyphenols using 0.25 ml Folin-Ciocalteu reagent, 3 ml of 20% Na2CO3 solution and a color development time of 1.5 hours. We f

  10. Take me where I want to go: Institutional prestige, advisor sponsorship, and academic career placement preferences.

    Directory of Open Access Journals (Sweden)

    Diogo L Pinheiro

    Full Text Available Placement in prestigious research institutions for STEM (science, technology, engineering, and mathematics) PhD recipients is generally considered to be optimal. Yet some doctoral recipients are not interested in intensive research careers and instead seek alternative careers, outside but also within academe (for example teaching positions in Liberal Arts Schools). Recent attention to non-academic pathways has expanded our understanding of alternative PhD careers. However, career preferences and placements are also nuanced along the academic pathway. Existing research on academic careers (mostly research-centric) has found that certain factors have a significant impact on the prestige of both the institutional placement and the salary of PhD recipients. We understand less, however, about the functioning of career preferences and related placements outside of the top academic research institutions. Our work builds on prior studies of academic career placement to explore the impact that prestige of PhD-granting institution, advisor involvement, and cultural capital have on the extent to which STEM PhDs are placed in their preferred academic institution types. What determines whether an individual with a preference for research oriented institutions works at a Research Extensive university? Or whether an individual with a preference for teaching works at a Liberal Arts college? Using survey data from a nationally representative sample of faculty in biology, biochemistry, civil engineering and mathematics at four different Carnegie Classified institution types (Research Extensive, Research Intensive, Master's I & II, and Liberal Arts Colleges), we examine the relative weight of different individual and institutional characteristics on institutional type placement. We find that doctoral institutional prestige plays a significant role in matching individuals with their preferred institutional type, but that advisor involvement only has an impact on those

  11. Take me where I want to go: Institutional prestige, advisor sponsorship, and academic career placement preferences.

    Science.gov (United States)

    Pinheiro, Diogo L; Melkers, Julia; Newton, Sunni

    2017-01-01

    Placement in prestigious research institutions for STEM (science, technology, engineering, and mathematics) PhD recipients is generally considered to be optimal. Yet some doctoral recipients are not interested in intensive research careers and instead seek alternative careers, outside but also within academe (for example teaching positions in Liberal Arts Schools). Recent attention to non-academic pathways has expanded our understanding of alternative PhD careers. However, career preferences and placements are also nuanced along the academic pathway. Existing research on academic careers (mostly research-centric) has found that certain factors have a significant impact on the prestige of both the institutional placement and the salary of PhD recipients. We understand less, however, about the functioning of career preferences and related placements outside of the top academic research institutions. Our work builds on prior studies of academic career placement to explore the impact that prestige of PhD-granting institution, advisor involvement, and cultural capital have on the extent to which STEM PhDs are placed in their preferred academic institution types. What determines whether an individual with a preference for research oriented institutions works at a Research Extensive university? Or whether an individual with a preference for teaching works at a Liberal Arts college? Using survey data from a nationally representative sample of faculty in biology, biochemistry, civil engineering and mathematics at four different Carnegie Classified institution types (Research Extensive, Research Intensive, Master's I & II, and Liberal Arts Colleges), we examine the relative weight of different individual and institutional characteristics on institutional type placement. We find that doctoral institutional prestige plays a significant role in matching individuals with their preferred institutional type, but that advisor involvement only has an impact on those with a

  12. Optimal moment determination in POME-copula based hydrometeorological dependence modelling

    Science.gov (United States)

    Liu, Dengfeng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi

    2017-07-01

    Copula has been commonly applied in multivariate modelling in various fields where marginal distribution inference is a key element. To develop a flexible, unbiased mathematical inference framework in hydrometeorological multivariate applications, the principle of maximum entropy (POME) is being increasingly coupled with copula. However, in previous POME-based studies, determination of optimal moment constraints has generally not been considered. The main contribution of this study is the determination of optimal moments for POME for developing a coupled optimal moment-POME-copula framework to model hydrometeorological multivariate events. In this framework, margins (marginals, or marginal distributions) are derived with the use of POME, subject to optimal moment constraints. Then, various candidate copulas are constructed according to the derived margins, and finally the most probable one is determined, based on goodness-of-fit statistics. This optimal moment-POME-copula framework is applied to model the dependence patterns of three types of hydrometeorological events: (i) single-site streamflow-water level; (ii) multi-site streamflow; and (iii) multi-site precipitation, with data collected from Yichang and Hankou in the Yangtze River basin, China. Results indicate that the optimal-moment POME is more accurate in margin fitting and the corresponding copulas reflect a good statistical performance in correlation simulation. Also, the derived copulas, capturing more patterns which traditional correlation coefficients cannot reflect, provide an efficient way in other applied scenarios concerning hydrometeorological multivariate modelling.

  13. DETERMINATION OF BRAKING OPTIMAL MODE OF CONTROLLED CUT OF DESIGN GROUP

    Directory of Open Access Journals (Sweden)

    A. S. Dorosh

    2015-06-01

    Full Text Available Purpose. The purpose of applying automation systems to the breaking-up process on the gravity hump is to improve the efficiency of their operation, to fully meet the safety demands of train breaking-up, and to improve the working conditions of hump staff. One of the main tasks of such systems is to ensure reliable separation of cuts at all elements of their rolling route to the classification track. This task is a sophisticated optimization problem and has not yet been conclusively solved. Therefore, the task of determining the cut braking mode is quite relevant. The purpose of this research is to find the optimal braking mode of the control cut of a design group. Methodology. To achieve this purpose, direct search methods are used, namely the Box complex method. This method does not require smoothness of the objective function, takes its constraints into account, and does not require calculation of the function derivatives, using only its values. Findings. Using the Box method, an iterative procedure was developed for determining the optimal braking mode of the control cut of a design group. The procedure maximizes the smallest controlled time interval in the group. To evaluate the effectiveness of the designed procedure, a series of simulation experiments on determining the control cut braking mode of a design group was performed. The results confirmed the efficiency of the developed optimization procedure. Originality. The author formalized the task of optimizing the control cut braking mode of a design group, taking into account the separation of the cuts of the design group at all elements (switches, retarders) during their rolling to the classification track. The problem of determining the optimal control cut braking mode of a design group was solved. The developed braking mode ensures reliable separation of the cuts of the group not only at the switches but also at the retarders of the brake position. Practical value. The developed procedure can be
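
    A minimal sketch of the Box complex method named in the record: a population of points inside box bounds is improved by repeatedly reflecting the worst point through the centroid of the others. The objective below is a toy function; in the paper it would be the smallest controlled time interval between cuts, evaluated by simulating the rolling process, and the reflection coefficient and population size used here are conventional assumptions.

        import numpy as np

        def box_complex_maximize(f, lower, upper, n_points=None, alpha=1.3,
                                 iters=200, seed=0):
            rng = np.random.default_rng(seed)
            lower, upper = np.asarray(lower, float), np.asarray(upper, float)
            dim = lower.size
            k = n_points or 2 * dim                      # Box's usual choice: ~2n points
            pts = lower + rng.random((k, dim)) * (upper - lower)
            vals = np.array([f(p) for p in pts])
            for _ in range(iters):
                worst = int(np.argmin(vals))             # maximization: worst = lowest value
                centroid = (pts.sum(axis=0) - pts[worst]) / (k - 1)
                trial = np.clip(centroid + alpha * (centroid - pts[worst]), lower, upper)
                v = f(trial)
                for _ in range(30):                      # retreat toward centroid if still worst
                    if v > vals[worst]:
                        break
                    trial = (trial + centroid) / 2.0
                    v = f(trial)
                pts[worst], vals[worst] = trial, v
            best = int(np.argmax(vals))
            return pts[best], vals[best]

        def toy_objective(x):                            # stands in for the braking-mode model
            return -((x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2)

        x_best, f_best = box_complex_maximize(toy_objective, [0.0, 0.0], [5.0, 5.0])
        print("best point:", x_best, "objective:", f_best)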

  14. Use of multilevel modeling for determining optimal parameters of heat supply systems

    Science.gov (United States)

    Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.

    2017-07-01

    The problem of finding optimal parameters of a heat-supply system (HSS) is in ensuring the required throughput capacity of a heat network by determining pipeline diameters and characteristics and location of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods enable us to determine optimal parameters of various types of piping systems due to flexible adaptability of the calculation procedure to intricate nonlinear mathematical models describing features of used equipment items and methods of their construction and operation. The new and most significant results achieved in developing methodological support and software for finding optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of a heat network model that makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems with reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; the SOSNA software system for determining optimum parameters of intricate heat-supply systems and implementing the developed methodological foundation. The proposed procedure and algorithm enable us to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems having large (real) dimensionality, and are applied in solving urgent problems related to the optimal development and reconstruction of these systems. The developed methodological foundation and software can be used for designing heat supply systems in the Central and the Admiralty regions in

  15. Optimized scalar promotion with load and splat SIMD instructions

    Science.gov (United States)

    Eichenberger, Alexander E; Gschwind, Michael K; Gunnels, John A

    2013-10-29

    Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.

  16. DETERMINATION OF DRESS ROLL OPTIMAL RADIUS WHILE PRODUCING PARTS WITH TROCHOIDAL PROFILE

    Directory of Open Access Journals (Sweden)

    E. N. Yankevich

    2008-01-01

    Full Text Available The paper considers determination of the optimal dress roll radius when producing parts having a trochoidal profile by a grinding method that uses a grinding disk whose profile is cut in by diamond dressing. Two methods for calculating the optimal dress roll radius are proposed in the paper. Using the satellite gear of a planetary pin reducer, whose profile is a trochoid, it is shown that the results obtained by the two proposed methods agree with each other.

  17. Automated Fiber Placement of PEEK/IM7 Composites with Film Interleaf Layers

    Science.gov (United States)

    Hulcher, A. Bruce; Banks, William I., III; Pipes, R. Byron; Tiwari, Surendra N.; Cano, Roberto J.; Johnston, Norman J.; Clinton, R. G., Jr. (Technical Monitor)

    2001-01-01

    The incorporation of thin discrete layers of resin between plies (interleafing) has been shown to improve fatigue and impact properties of structural composite materials. Furthermore, interleafing could be used to increase the barrier properties of composites used as structural materials for cryogenic propellant storage. In this work, robotic heated-head tape placement of PEEK/IM7 composites containing a PEEK polymer film interleaf was investigated. These experiments were carried out at the NASA Langley Research Center automated fiber placement facility. Using the robotic equipment, an optimal fabrication process was developed for the composite without the interleaf. Preliminary interleaf processing trials indicated that a two-stage process was necessary; the film had to be tacked to the partially-placed laminate then fully melted in a separate operation. Screening experiments determined the relative influence of the various robotic process variables on the peel strength of the film-composite interface. Optimization studies were performed in which peel specimens were fabricated at various compaction loads and roller temperatures at each of three film melt processing rates. The resulting data were fitted with quadratic response surfaces. Additional specimens were fabricated at placement parameters predicted by the response surface models to yield high peel strength in an attempt to gage the accuracy of the predicted response and assess the repeatability of the process. The overall results indicate that quality PEEK/IM7 laminates having film interleaves can be successfully and repeatably fabricated by heated-head automated fiber placement.
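
    A minimal sketch of the response-surface step described above: a full quadratic surface is fitted to peel-strength measurements over compaction load and roller temperature, and the fitted surface is scanned for the predicted optimum. The data points and ranges below are made-up placeholders, not measurements from the study.

        import numpy as np

        # (compaction load [N], roller temperature [C]) -> peel strength (placeholder units)
        X = np.array([[300, 700], [300, 800], [300, 900],
                      [500, 700], [500, 800], [500, 900],
                      [700, 700], [700, 800], [700, 900]], float)
        y = np.array([4.1, 5.0, 4.6, 5.2, 6.3, 5.9, 4.8, 5.7, 5.1])

        def design(load, temp):
            """Full quadratic model: 1, L, T, L*T, L^2, T^2."""
            return np.column_stack([np.ones_like(load), load, temp,
                                    load * temp, load**2, temp**2])

        coef, *_ = np.linalg.lstsq(design(X[:, 0], X[:, 1]), y, rcond=None)

        # Evaluate the fitted surface on a fine grid and report the predicted optimum.
        loads = np.linspace(300, 700, 81)
        temps = np.linspace(700, 900, 81)
        L, T = np.meshgrid(loads, temps)
        pred = design(L.ravel(), T.ravel()) @ coef
        i = int(np.argmax(pred))
        print(f"predicted best: load={L.ravel()[i]:.0f} N, temp={T.ravel()[i]:.0f} C, "
              f"peel strength={pred[i]:.2f}")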

  18. Optimal siting and sizing of wind farms

    NARCIS (Netherlands)

    Cetinay-Iyicil, H.; Kuipers, F.A.; Guven, A. Nezih

    2017-01-01

    In this paper, we propose a novel technique to determine the optimal placement of wind farms, thereby taking into account wind characteristics and electrical grid constraints. We model the long-term variability of wind speed using a Weibull distribution according to wind direction intervals, and
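
    A minimal sketch of the wind-characteristics side of such a model, using NumPy only: a Weibull wind speed distribution for one direction sector is combined with a simplified turbine power curve to give the expected power and capacity factor at a candidate site. The Weibull shape/scale and the cut-in, rated and cut-out speeds are illustrative assumptions, and the electrical grid constraints of the placement problem are not modeled.

        import numpy as np

        k, c = 2.0, 8.0                     # Weibull shape and scale (m/s), one direction sector
        v_in, v_rated, v_out = 3.0, 12.0, 25.0
        p_rated = 3.0                       # rated turbine power in MW

        def weibull_pdf(v):
            return (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))

        def power_curve(v):
            """Cubic ramp between cut-in and rated speed, constant to cut-out, else zero."""
            ramp = p_rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3)
            p = np.where((v >= v_in) & (v < v_rated), ramp, 0.0)
            return np.where((v >= v_rated) & (v <= v_out), p_rated, p)

        v = np.linspace(0.0, 30.0, 3001)
        dv = v[1] - v[0]
        expected_power = np.sum(power_curve(v) * weibull_pdf(v)) * dv   # long-run average, MW
        print(f"expected power {expected_power:.2f} MW, "
              f"capacity factor {expected_power / p_rated:.2%}")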

  19. Radiologic placement of Hickman catheters

    International Nuclear Information System (INIS)

    Robertson, L.J.; Mauro, M.A.; Jaques, P.F.

    1988-01-01

    Hickman catheter insertion has previously been accomplished predominantly by surgical means, using venous cutdown or percutaneous placement in the operating room. The authors describe their method and results for 55 consecutive percutaneous placements of Hickman catheters in the interventional radiology suite. Complication rates were comparable to those for surgical techniques. Radiologic placement resulted in increased convenience, decreased time and cost of insertion, and superb fluoroscopic control of catheter placement and any special manipulations. Modern angiographic materials provide safer access to the subclavian vein than traditional methods. The authors conclude that radiologic placement of Hickman catheters offers significant advantages over traditional surgical placement

  20. Placement Design of Changeable Message Signs on Curved Roadways

    Directory of Open Access Journals (Sweden)

    Zhongren Wang, Ph.D. P.E. T.E.

    2015-01-01

    Full Text Available This paper presented a fundamental framework for Changeable Message Sign (CMS placement design along roadways with horizontal curves. This analytical framework determines the available distance for motorists to read and react to CMS messages based on CMS character height, driver's cone of vision, CMS pixel's cone of legibility, roadway horizontal curve radius, and CMS lateral and vertical placement. Sample design charts were developed to illustrate how the analytical framework may facilitate CMS placement design.

  1. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Shengliang Zong

    2017-01-01

    Full Text Available We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval between two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and that the repair time is exponentially distributed with a geometrically increasing mean. Our objective is to minimize the expected average cost under an availability requirement. Based on this average cost function, we propose a genetic algorithm to locate the optimal replacement policy N that minimizes the average cost rate. The results show that the GA is effective and efficient in finding the optimal solutions. The availability of equipment has a significant effect on the optimal replacement policy. Many practical systems fit the model developed in this paper.
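
    The record does not give the cost structure in full, so the sketch below uses a common geometric-process formulation as an assumption: operating times shrink and repair times grow geometrically, policy N replaces the unit at the Nth failure, and the policy is chosen by simple enumeration (standing in for the genetic algorithm) subject to an availability floor. All parameter values are illustrative.

        mu_up, a = 1000.0, 1.05   # mean of the first operating time (h); later ones shrink by 1/a
        mu_rep, b = 20.0, 1.10    # mean of the first repair time (h); later ones grow by b
        c_rep, c_replace = 50.0, 20000.0   # repair cost per hour and replacement cost
        t_replace = 48.0          # replacement duration (h)
        a_min = 0.95              # required steady-state availability

        def policy_metrics(n):
            """Expected average cost rate and availability for 'replace at the n-th failure'."""
            up = sum(mu_up / a**k for k in range(n))         # operating periods 1..n
            rep = sum(mu_rep * b**k for k in range(n - 1))   # repairs 1..n-1
            cycle = up + rep + t_replace
            return (c_rep * rep + c_replace) / cycle, up / cycle

        candidates = [(policy_metrics(n)[0], n) for n in range(1, 51)
                      if policy_metrics(n)[1] >= a_min]
        if candidates:
            best_cost, best_n = min(candidates)
            print(f"optimal N = {best_n}, cost rate = {best_cost:.2f} per hour")
        else:
            print("no policy meets the availability requirement")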

  2. A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters

    Science.gov (United States)

    Wang, Zhihao; Yi, Jing

    2016-01-01

    For the shortcoming of the fuzzy c-means algorithm (FCM) of needing to know the number of clusters in advance, this paper proposed a new self-adaptive method to determine the optimal number of clusters. Firstly, a density-based algorithm was put forward. The algorithm, according to the characteristics of the dataset, automatically determined the possible maximum number of clusters instead of using the empirical rule n and obtained the optimal initial cluster centroids, mitigating the limitation of FCM that randomly selected cluster centroids can lead the convergence result to a local minimum. Secondly, by introducing a penalty function, this paper proposed a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensured that when the number of clusters approaches the number of objects in the dataset, the value of the clustering validity index does not monotonically decrease toward zero, so that the determination of the optimal number of clusters does not lose robustness and decisiveness. Then, based on these studies, a self-adaptive FCM algorithm was put forward to estimate the optimal number of clusters by an iterative trial-and-error process. At last, experiments were done on the UCI, KDD Cup 1999, and synthetic datasets, which showed that the method not only effectively determined the optimal number of clusters, but also reduced the number of FCM iterations while giving a stable clustering result. PMID:28042291
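
    A minimal sketch of the selection loop only, assuming a plain fuzzy c-means and a simple Xie-Beni style compactness/separation index in place of the paper's density-based initialization and penalized validity index: the index is evaluated for each candidate number of clusters and the best value wins.

        import numpy as np

        def fcm(X, c, m=2.0, iters=100, seed=0):
            """Plain fuzzy c-means; returns centers, memberships and point-center distances."""
            rng = np.random.default_rng(seed)
            u = rng.random((c, len(X)))
            u /= u.sum(axis=0)                          # membership columns sum to 1
            for _ in range(iters):
                um = u ** m
                centers = um @ X / um.sum(axis=1, keepdims=True)
                d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
                u = 1.0 / (d ** (2.0 / (m - 1.0)))
                u /= u.sum(axis=0)
            return centers, u, d

        def xie_beni(X, centers, u, d, m=2.0):
            """Compactness over separation; lower is better."""
            compact = np.sum((u ** m) * d**2) / len(X)
            sep = min(np.sum((p - q) ** 2)
                      for i, p in enumerate(centers) for q in centers[i + 1:])
            return compact / sep

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(loc, 0.3, size=(60, 2)) for loc in ([0, 0], [3, 3], [0, 3])])

        scores = {c: xie_beni(X, *fcm(X, c)) for c in range(2, 8)}
        best_c = min(scores, key=scores.get)
        print(scores, "-> optimal number of clusters:", best_c)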

  3. HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.

    Science.gov (United States)

    Juusola, Jessie L; Brandeau, Margaret L

    2016-04-01

    To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
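
    A minimal sketch of the allocation idea as a plain linear program (using scipy.optimize.linprog): choose funding fractions for CBE, ART scale-up and PrEP to maximize health benefit within a budget. The benefit and cost figures are placeholder assumptions, not the paper's estimates, and the diseconomies-of-scale and subadditive-benefit refinements described above are omitted.

        from scipy.optimize import linprog

        programs = ["CBE", "ART scale-up", "PrEP"]
        benefit = [40.0, 120.0, 60.0]   # QALYs gained (thousands) at full implementation (assumed)
        cost = [20.0, 150.0, 180.0]     # cost (millions USD) at full implementation (assumed)
        budget = 200.0

        # linprog minimizes, so the benefit vector is negated; x_i is the funded
        # fraction of program i (0 = not funded, 1 = fully scaled up).
        res = linprog(c=[-b for b in benefit],
                      A_ub=[cost], b_ub=[budget],
                      bounds=[(0.0, 1.0)] * len(programs), method="highs")

        for name, frac in zip(programs, res.x):
            print(f"{name:12s} funded at {frac:5.1%}")
        print(f"total QALYs gained (thousands): {-res.fun:.1f}")

    With these placeholder numbers the solution funds CBE first, then ART, with only the leftover budget going to PrEP, which mirrors the ordering reported above.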

  4. Determination and optimization of the ζ potential in boron electrophoretic deposition on aluminium substrates

    International Nuclear Information System (INIS)

    Oliveira Sampa, M.H. de; Vinhas, L.A.; Pino, E.S.

    1991-05-01

    In this work we present an introduction to the electrophoretic process, followed by a detailed experimental treatment of the technique used in the determination and optimization of the ζ-potential, mainly as a function of the electrolyte concentration, in high purity boron electrophoretic deposition on aluminium substrates used as electrodes in neutron detectors. (author)

  5. Optimization in Activation Analysis by Means of Epithermal Neutrons. Determination of Molybdenum in Steel

    Energy Technology Data Exchange (ETDEWEB)

    Brune, D; Jirlow, J

    1963-12-15

    Optimization in activation analysis by means of selective activation with epithermal neutrons is discussed. This method was applied to the determination of molybdenum in a steel alloy without recourse to radiochemical separations. The sensitivity for this determination is estimated to be 10 ppm. With the common form of activation by means of thermal neutrons, the sensitivity would be about one-tenth of this. The sensitivity estimations are based on evaluation of the photo peak ratios of Mo-99/Fe-59.

  6. On the complexity of determining tolerances for ε-optimal solutions to min-max combinatorial optimization problems

    NARCIS (Netherlands)

    Ghosh, D.; Sierksma, G.

    2000-01-01

    Sensitivity analysis of ε-optimal solutions is the problem of calculating the range within which a problem parameter may lie so that the given solution remains ε-optimal. In this paper we study the sensitivity analysis problem for ε-optimal solutions to combinatorial optimization problems with

  7. A New Method for Determining Optimal Regularization Parameter in Near-Field Acoustic Holography

    Directory of Open Access Journals (Sweden)

    Yue Xiao

    2018-01-01

    Full Text Available Tikhonov regularization method is effective in stabilizing the reconstruction process of near-field acoustic holography (NAH) based on the equivalent source method (ESM), and the selection of the optimal regularization parameter is a key problem that determines the regularization effect. In this work, a new method for determining the optimal regularization parameter is proposed. The transfer matrix relating the source strengths of the equivalent sources to the measured pressures on the hologram surface is augmented by adding a fictitious point source with zero strength. The minimization of the norm of this fictitious point source strength is used as the criterion for choosing the optimal regularization parameter, since the reconstructed value should tend to zero. The original inverse problem of calculating the source strengths is converted into a univariate optimization problem which is solved by a one-dimensional search technique. Two numerical simulations, with a point-driven simply supported plate and a pulsating sphere, are investigated to validate the performance of the proposed method by comparison with the L-curve method. The results demonstrate that the proposed method can determine the regularization parameter correctly and effectively for the reconstruction in NAH.

  8. Determination of optimal electrode positions for transcranial direct current stimulation (tDCS)

    International Nuclear Information System (INIS)

    Im, Chang-Hwan; Jung, Hui-Hun; Choi, Jung-Do; Lee, Soo Yeol; Jung, Ki-Young

    2008-01-01

    The present study introduces a new approach to determining optimal electrode positions in transcranial direct current stimulation (tDCS). Electric field and 3D conduction current density were analyzed using 3D finite element method (FEM) formulated for a dc conduction problem. The electrode positions for minimal current injection were optimized by changing the Cartesian coordinate system into the spherical coordinate system and applying the (2+6) evolution strategy (ES) algorithm. Preliminary simulation studies applied to a standard three-layer head model demonstrated that the proposed approach is promising in enhancing the performance of tDCS. (note)

  9. Determination of optimal electrode positions for transcranial direct current stimulation (tDCS)

    Energy Technology Data Exchange (ETDEWEB)

    Im, Chang-Hwan; Jung, Hui-Hun; Choi, Jung-Do [Department of Biomedical Engineering, Yonsei University, Wonju, 220-710 (Korea, Republic of); Lee, Soo Yeol [Department of Biomedical Engineering, Kyung Hee University, Suwon (Korea, Republic of); Jung, Ki-Young [Korea University Medical Center, Korea University College of Medicine, Seoul (Korea, Republic of)], E-mail: ich@yonsei.ac.kr

    2008-06-07

    The present study introduces a new approach to determining optimal electrode positions in transcranial direct current stimulation (tDCS). Electric field and 3D conduction current density were analyzed using 3D finite element method (FEM) formulated for a dc conduction problem. The electrode positions for minimal current injection were optimized by changing the Cartesian coordinate system into the spherical coordinate system and applying the (2+6) evolution strategy (ES) algorithm. Preliminary simulation studies applied to a standard three-layer head model demonstrated that the proposed approach is promising in enhancing the performance of tDCS. (note)
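
    A minimal sketch of a (2+6) evolution strategy of the kind mentioned in both records above: two parents generate six mutated offspring per generation and the best two of the combined pool survive. The FEM-based tDCS objective is replaced here by a toy function of two spherical-coordinate angles, and the mutation step size and generation count are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        MU, LAM, SIGMA, GENERATIONS = 2, 6, 0.3, 100

        def objective(angles):
            """Stand-in for 'injected current needed to reach the target field'."""
            theta, phi = angles
            return (theta - 1.0) ** 2 + 0.5 * (phi + 0.5) ** 2

        parents = rng.uniform(-np.pi, np.pi, size=(MU, 2))
        for _ in range(GENERATIONS):
            offspring = np.array([parents[rng.integers(MU)] + SIGMA * rng.standard_normal(2)
                                  for _ in range(LAM)])
            pool = np.vstack([parents, offspring])
            fitness = np.array([objective(x) for x in pool])
            parents = pool[np.argsort(fitness)[:MU]]     # plus-selection: keep the best 2

        best = parents[0]
        print(f"best electrode angles (rad): theta={best[0]:.3f}, phi={best[1]:.3f}, "
              f"objective={objective(best):.4f}")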

  10. Experimental determination of optimal clamping torque for AB-PEM Fuel cell

    Directory of Open Access Journals (Sweden)

    Noor Ul Hassan

    2016-04-01

    Full Text Available Polymer electrolyte membrane (PEM) fuel cell is an electrochemical device producing electricity by the reaction of hydrogen and oxygen without combustion. A PEM fuel cell stack is provided with an appropriate clamping torque to prevent leakage of reactant gases and to minimize the contact resistance between the gas diffusion media (GDL) and the bipolar plates. The GDL porous structure and gas permeability are directly affected by the compaction pressure, which consequently drastically changes the fuel cell performance. Various efforts have been made to determine the optimal compaction pressure and pressure distributions through simulations and experimentation. Lower compaction pressure results in increased contact resistance and also chances of leakage. On the other hand, higher compaction pressure decreases the contact resistance but also narrows down the diffusion path for mass transfer from gas channels to the catalyst layers, consequently lowering cell performance. The optimal cell performance is related to the gasket thickness and compression pressure on the GDL. Every stack has a unique assembly pressure due to differences in fuel cell component materials and stack design. Therefore, there is still a need to determine the optimal torque value for obtaining the optimal cell performance. This study has been carried out in continuation of the development of an air breathing PEM fuel cell for small Unmanned Aerial Vehicle (UAV) applications. The compaction pressure at minimum contact resistance was determined and the clamping torque value was calculated accordingly. Single cell performance tests were performed at five different clamping torque values, i.e. 0.5, 1.0, 1.5, 2.0 and 2.5 N m, for achieving optimal cell performance. Clamping pressure distribution tests were also performed at these torque values to verify uniform pressure distribution at the optimal torque value. Experimental and theoretical results were compared for making inferences about optimal cell performance. A

  11. Optimal Fluorescence Waveband Determination for Detecting Defective Cherry Tomatoes Using a Fluorescence Excitation-Emission Matrix

    Directory of Open Access Journals (Sweden)

    In-Suck Baek

    2014-11-01

    Full Text Available A multi-spectral fluorescence imaging technique was used to detect defective cherry tomatoes. The fluorescence excitation-emission matrix was measured for defect, sound surface and stem areas to determine the optimal fluorescence excitation and emission wavelengths for discrimination. Two-way ANOVA revealed the optimal excitation wavelength for detecting defect areas was 410 nm. Principal component analysis (PCA) was applied to the fluorescence emission spectra of all regions at 410 nm excitation to determine the emission wavelengths for defect detection. The major emission wavelengths for detection were 688 nm and 506 nm. Fluorescence images combined with the determined emission wavebands demonstrated the feasibility of detecting defective cherry tomatoes with >98% accuracy. Multi-spectral fluorescence imaging has potential utility in non-destructive quality sorting of cherry tomatoes.

  12. Optimization of determination of 126Sn by ion exchange chromatography method (presentation)

    International Nuclear Information System (INIS)

    Pasteka, L.; Dulanska, S.

    2013-01-01

    The aim of the work is to optimize the uptake of tin on anion exchange resins and to apply this knowledge to the analysis of samples of radioactive waste from the Jaslovske Bohunice and Mochovce facilities in determining 126 Sn. First, a method for the separation of tin on the ion exchange sorbent Anion Exchange Resin (1-X8, Chloride Form) from Eichrom Technologies was optimized. The model sample was prepared in 7 mol dm -3 HCl, because in that environment the sorbent effectively captures tin, which is bound in a chloride complex as SnCl 6 2- . The radiochemical separation yield was monitored by gamma spectrometric measurements on a high purity germanium detector HPGe (E = 391 keV) by adding the isotope 113 Sn to each model solution. The method of tin separation was optimized on model samples.

  13. College Math Assessment: SAT Scores vs. College Math Placement Scores

    Science.gov (United States)

    Foley-Peres, Kathleen; Poirier, Dawn

    2008-01-01

    Many colleges and universities use SAT math scores or math placement tests to place students in the appropriate math course. This study compares the use of math placement scores and SAT scores for 188 freshman students. The students' grades and faculty observations were analyzed to determine if the SAT scores and/or college math assessment scores…

  14. A Case for Faculty Involvement in EAP Placement Testing

    Science.gov (United States)

    James, Cindy; Templeman, Elizabeth

    2009-01-01

    The EAP placement procedure at Thompson Rivers University (TRU) involves multiple measures to assess the language skills of incoming students, some of which are facilitated and all of which are assessed by ESL faculty. In order to determine the effectiveness of this comprehensive EAP placement process and the effect of the faculty factor, a…

  15. PEG Tube Placement

    Directory of Open Access Journals (Sweden)

    Saptarshi Biswas

    2014-01-01

    Full Text Available Percutaneous endoscopic gastrostomy (PEG) has been used for years to provide enteral access to patients who require long-term enteral nutrition. Although generally considered safe, PEG tube placement can be associated with many immediate and delayed complications. Buried bumper syndrome (BBS) is one of the uncommon and late complications of percutaneous endoscopic gastrostomy (PEG) placement. It occurs when the internal bumper of the PEG tube erodes into the gastric wall and lodges itself between the gastric wall and skin. This can lead to a variety of additional complications such as wound infection, peritonitis, and necrotizing fasciitis. We present here a case of buried bumper syndrome which caused extensive necrosis of the anterior abdominal wall.

  16. Determining the optimal number of Kanban in multi-products supply chain system

    Science.gov (United States)

    Widyadana, G. A.; Wee, H. M.; Chang, Jer-Yuan

    2010-02-01

    Kanban, a key element of the just-in-time system, is a re-order card or signboard giving an instruction or triggering the pull system to manufacture or supply a component based on actual usage of material. There are two types of Kanban: production Kanban and withdrawal Kanban. This study uses optimal and meta-heuristic methods to determine the Kanban quantity and withdrawal lot sizes in a supply chain system. Although the mixed integer programming (MIP) method gives an optimal solution, it is not time efficient. For this reason, meta-heuristic methods are suggested. In this study, a genetic algorithm (GA) and a hybrid of genetic algorithm and simulated annealing (GASA) are used. The study compares the performance of GA and GASA with that of the optimal method using MIP. The given problems show that both GA and GASA result in near optimal solutions, and they outdo the optimal method in terms of run time. In addition, the GASA heuristic method gives a better performance than the GA heuristic method.

  17. Optimization of Passive Coherent Receiver System Placement

    Science.gov (United States)

    2013-09-01

    spheroid object with a constant radar cross section (RCS). Additionally, the receiver and transmitters are assumed to be notional isotropic antennae...

  18. Optimizing Restriction Site Placement for Synthetic Genomes

    Science.gov (United States)

    Montes, Pablo; Memelli, Heraldo; Ward, Charles; Kim, Joondong; Mitchell, Joseph S. B.; Skiena, Steven

    Restriction enzymes are the workhorses of molecular biology. We introduce a new problem that arises in the course of our project to design virus variants to serve as potential vaccines: we wish to modify virus-length genomes to introduce large numbers of unique restriction enzyme recognition sites while preserving wild-type function by substitution of synonymous codons. We show that the resulting problem is NP-Complete, give an exponential-time algorithm, and propose effective heuristics, which we show give excellent results for five sample viral genomes. Our resulting modified genomes have several times more unique restriction sites and reduce the maximum gap between adjacent sites by three to nine-fold.

  19. Pragmatic Approach for Multistage Phasor Measurement Unit Placement

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thoegersen, Poul

    2016-01-01

    Effective phasor measurement unit (PMU) placement is a key to the implementation of efficient and economically feasible wide area measurement systems in modern power systems. This paper proposes a pragmatic approach for cost-effective stage-wise deployment of PMUs while considering realistic constraints. Inspired by real-world experience, the proposed approach optimally allocates PMU placement in a stage-wise manner. The proposed approach also considers large-scale wind integration for effective grid state monitoring of wind generation dynamics. The proposed approach is implemented on the Danish power system projected for the year 2040. Furthermore, practical experience learnt from an optimal PMU placement project aimed at PMU placement in the Danish power system is presented, which is expected to provide insight into practical challenges at ground level that could be considered by PMU...

  20. Ubicación óptima de generación distribuida en sistemas de energía eléctrica Optimal placement of distributed generation in electric power system

    Directory of Open Access Journals (Sweden)

    Jesús María López–Lezama

    2009-06-01

    Full Text Available This paper presents a methodology for optimal placement of distributed generation (DG) in electric power systems. The candidate buses for DG placement are identified on the basis of locational marginal prices. These prices are obtained by solving an optimal power flow (OPF) and correspond to the Lagrange multipliers of the active power balance equations at every bus of the system. In order to consider the distributed generation in the OPF model, the DG was modeled as a negative injection of active power. The methodology consists of a nonlinear iterative process in which DG is allocated at the bus with the highest locational marginal price. Three types of DG were considered in the model: 1) internal combustion engines, 2) gas turbines and 3) microturbines. The proposed methodology is tested on the IEEE 30 bus test system. The results obtained

  1. Optimal selection and placement of green infrastructure to reduce impacts of land use change and climate change on hydrology and water quality: An application to the Trail Creek Watershed, Indiana.

    Science.gov (United States)

    Liu, Yaoze; Theller, Lawrence O; Pijanowski, Bryan C; Engel, Bernard A

    2016-05-15

    The adverse impacts of urbanization and climate change on hydrology and water quality can be mitigated by applying green infrastructure practices. In this study, the impacts of land use change and climate change on hydrology and water quality in the 153.2 km(2) Trail Creek watershed located in northwest Indiana were estimated using the Long-Term Hydrologic Impact Assessment-Low Impact Development 2.1 (L-THIA-LID 2.1) model for the following environmental concerns: runoff volume, Total Suspended Solids (TSS), Total Phosphorous (TP), Total Kjeldahl Nitrogen (TKN), and Nitrate+Nitrite (NOx). Using a recent 2001 land use map and 2050 land use forecasts, we found that land use change resulted in increased runoff volume and pollutant loads (8.0% to 17.9% increase). Climate change reduced runoff and nonpoint source pollutant loads (5.6% to 10.2% reduction). The 2050 forecasted land use with current rainfall resulted in the largest runoff volume and pollutant loads. The optimal selection and placement of green infrastructure practices using L-THIA-LID 2.1 model were conducted. Costs of applying green infrastructure were estimated using the L-THIA-LID 2.1 model considering construction, maintenance, and opportunity costs. To attain the same runoff volume and pollutant loads as in 2001 land uses for 2050 land uses, the runoff volume, TSS, TP, TKN, and NOx for 2050 needed to be reduced by 10.8%, 14.4%, 13.1%, 15.2%, and 9.0%, respectively. The corresponding annual costs of implementing green infrastructure to achieve the goals were $2.1, $0.8, $1.6, $1.9, and $0.8 million, respectively. Annual costs of reducing 2050 runoff volume/pollutant loads were estimated, and results show green infrastructure annual cost greatly increased for larger reductions in runoff volume and pollutant loads. During optimization, the most cost-efficient green infrastructure practices were selected and implementation levels increased for greater reductions of runoff and nonpoint source pollutants

  2. Determining optimal interconnection capacity on the basis of hourly demand and supply functions of electricity

    International Nuclear Information System (INIS)

    Keppler, Jan Horst; Meunier, William; Coquentin, Alexandre

    2017-01-01

    Interconnections for cross-border electricity flows are at the heart of the project to create a common European electricity market. At the same time, the increase in production from variable renewables, clustered during a limited number of hours, reduces the availability of existing transport infrastructure. This calls for higher levels of optimal interconnection capacity than in the past. In complement to existing scenario-building exercises such as the TYNDP that respond to the challenge of determining optimal levels of infrastructure provision, the present paper proposes a new empirically based methodology to perform cost-benefit analysis for the determination of optimal interconnection capacity, using French-German cross-border trade as an example. Using a very fine dataset of hourly supply and demand curves (aggregated auction curves) for the year 2014 from the EPEX Spot market, it constructs linearized net export curves (NEC) and net import demand curves (NIDC) for both countries. This allows assessing, hour by hour, the welfare impacts of incremental increases in interconnection capacity. Summing these welfare increases over the 8 760 hours of the year provides the annual total for each step increase in interconnection capacity. Confronting the welfare benefits with the annual cost of augmenting interconnection capacity indicates the socially optimal increase in interconnection capacity between France and Germany on the basis of empirical market micro-data. (authors)
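
    A minimal sketch of the cost-benefit logic on synthetic data: each hour gets linear net-export and net-import curves, the remaining price spread at a given interconnection capacity is the welfare value of one more MW in that hour, and capacity is expanded while the summed annual spread still exceeds the annualized cost of a MW. All numbers are illustrative assumptions, not EPEX Spot data.

        import numpy as np

        rng = np.random.default_rng(42)
        hours = 8760
        a_exp, b_exp = rng.uniform(20, 60, hours), 0.01  # export supply: p = a_exp + b_exp * q
        a_imp, b_imp = rng.uniform(30, 90, hours), 0.01  # import demand: p = a_imp - b_imp * q
        q_star = np.maximum(0.0, (a_imp - a_exp) / (b_exp + b_imp))  # unconstrained hourly flow (MW)

        def spread_at(capacity):
            """Remaining hourly price spread when cross-border flow is capped at `capacity`."""
            flow = np.minimum(capacity, q_star)
            return np.maximum(0.0, (a_imp - b_imp * flow) - (a_exp + b_exp * flow))

        annual_cost_per_mw = 30_000.0                    # EUR per MW of capacity per year (assumed)
        capacity = 0.0
        while np.sum(spread_at(capacity)) > annual_cost_per_mw:
            capacity += 10.0                             # expand in 10 MW steps
        print(f"socially optimal interconnection capacity ~ {capacity:.0f} MW")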

  3. Method to determine the optimal constitutive model from spherical indentation tests

    Directory of Open Access Journals (Sweden)

    Tairui Zhang

    2018-03-01

    Full Text Available The limitation of current indentation theories was investigated and a method to determine the optimal constitutive model through spherical indentation tests was proposed. Two constitutive models, the Power-law and the Linear-law, were used in Finite Element (FE) calculations, and then a set of indentation governing equations was established for each model. The load-depth data from the normal indentation depth was used to fit the best parameters in each constitutive model, while the data from the further loading part was compared with those from FE calculations, and the model that better predicted the further deformation was considered the optimal one. Moreover, a Young's modulus calculation model which took the previous plastic deformation and the phenomenon of pile-up (or sink-in) into consideration was also proposed to revise the original Sneddon-Pharr-Oliver model. The indentation results on six materials, 304, 321, SA508, SA533, 15CrMoR, and Fv520B, were compared with tensile ones, which validated the reliability of the revised E calculation model and the optimal constitutive model determination method in this study. Keywords: Optimal constitutive model, Spherical indentation test, Finite Element calculations, Young's modulus

  4. Impacted material placement plans

    International Nuclear Information System (INIS)

    Hickey, M.J.

    1997-01-01

    Impacted material placement plans (IMPP) are documents identifying the essential elements in placing remediation wastes into disposal facilities. Remediation wastes or impacted material(s) are those components used in the construction of the disposal facility exclusive of the liners and caps. The components might include soils, concrete, rubble, debris, and other regulatory approved materials. The IMPP provides the details necessary for interested parties to understand the management and construction practices at the disposal facility. The IMPP should identify the regulatory requirements from applicable DOE Orders, the ROD(s) (where a part of a CERCLA remedy), closure plans, or any other relevant agreements or regulations. Also, how the impacted material will be tracked should be described. Finally, detailed descriptions of what will be placed and how it will be placed should be included. The placement of impacted material into approved on-site disposal facilities (OSDF) is an integral part of gaining regulatory approval. To obtain this approval, a detailed plan (Impacted Material Placement Plan [IMPP]) was developed for the Fernald OSDF. The IMPP provides detailed information for the DOE, site generators, the stakeholders, regulatory community, and the construction subcontractor placing various types of impacted material within the disposal facility

  5. Sensor Placement for Modal Parameter Subset Estimation

    DEFF Research Database (Denmark)

    Ulriksen, Martin Dalgaard; Bernal, Dionisio; Damkilde, Lars

    2016-01-01

    The present paper proposes an approach for deciding on sensor placements in the context of modal parameter estimation from vibration measurements. The approach is based on placing sensors, the number of which is determined a priori, such that the minimum Fisher information that the frequency resp...

  6. Patterning control strategies for minimum edge placement error in logic devices

    Science.gov (United States)

    Mulkens, Jan; Hanna, Michael; Slachter, Bram; Tel, Wim; Kubis, Michael; Maslow, Mark; Spence, Chris; Timoshkov, Vadim

    2017-03-01

    In this paper we discuss the edge placement error (EPE) for multi-patterning semiconductor manufacturing. In a multi-patterning scheme the creation of the final pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources of the different process steps. We describe the fidelity of the final pattern in terms of EPE, which is defined as the relative displacement of the edges of two features from their intended target position. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As an experimental test vehicle we use the 7-nm logic device patterning process flow as developed by IMEC. This patterning process is based on Self-Aligned-Quadruple-Patterning (SAQP) using ArF lithography, combined with line cut exposures using EUV lithography. The computational metrology method to determine EPE is explained. It will be shown that ArF to EUV overlay, CDU from the individual process steps, and local CD and placement of the individual pattern features, are the important contributors. Based on the error budget, we developed an optimization strategy for each individual step and for the final pattern. Solutions include overlay and CD metrology based on angle resolved scatterometry, scanner actuator control to enable high order overlay corrections and computational lithography optimization to minimize imaging induced pattern placement errors of devices and metrology targets.

  7. DETERMINATION OF THE OPTIMAL CAPITAL INVESTMENTS TO ENSURE THE SUSTAINABLE DEVELOPMENT OF THE RAILWAY

    Directory of Open Access Journals (Sweden)

    O. I. Kharchenko

    2015-04-01

    Full Text Available Purpose. Every year more attention is paid to the theoretical and practical issues of sustainable development of railway transport, yet the mechanisms of financial support for this development remain poorly understood. Therefore, the aim of this article is to determine the optimal investment allocation to ensure sustainable development of railway transport, using State Enterprise «Prydniprovsk Railway» as an example, and to create the preconditions for developing a mathematical model. Methodology. The task of ensuring sustainable development of railway transport is solved on the basis of an integral indicator of sustainable development effectiveness and is formulated as the maximization of this criterion. Technological and technical measures are proposed in order to increase the values of the components of the integral performance measure. The technological measures that enhance the performance criteria include: optimization of the number of train and shunting locomotives, optimization of power handling mechanisms at the stations, and optimization of train flow routes. The technical measures include: modernization of railways towards electrification, modernization of the running gear and coupler drawbars of rolling stock, and mechanization of separators at stations to reduce noise impacts on the environment. Findings. The work resulted in the optimal allocation of investments to ensure the sustainable development of railway transportation at State Enterprise «Prydniprovsk Railway». This provides a form of railway development in which the operation of State Enterprise «Prydniprovsk Railway» is characterized by a maximum value of the integral indicator of efficiency. Originality. The work was reviewed and the new approach was proposed to determine the optimal allocation of capital investments to ensure sustainable

  8. Determination of gallic acid with rhodanine by reverse flow injection analysis using simplex optimization.

    Science.gov (United States)

    Phakthong, Wilaiwan; Liawruangrath, Boonsom; Liawruangrath, Saisunee

    2014-12-01

    A reversed flow injection (rFI) system was designed and constructed for gallic acid determination. Gallic acid was determined based on the formation of a chromogen between gallic acid and rhodanine, resulting in a colored product with a λmax at 520 nm. The optimum conditions for determining gallic acid were also investigated. Optimization of the experimental conditions was first carried out based on the so-called univariate method. The conditions obtained were 0.6% (w/v) rhodanine, 70% (v/v) ethanol, 0.9 mol L(-1) NaOH, 2.0 mL min(-1) flow rate, 75 μL injection loop and 600 cm mixing tubing length, respectively. Comparative optimization of the experimental conditions was also carried out by a multivariate (simplex) optimization method. The conditions obtained were 1.2% (w/v) rhodanine, 70% (v/v) ethanol, 1.2 mol L(-1) NaOH, flow rate 2.5 mL min(-1), 75 μL injection loop and 600 cm mixing tubing length, respectively. It was found that the optimum conditions obtained by the former optimization method were mostly similar to those obtained by the latter method. A linear relationship between peak height and the concentration of gallic acid was obtained over the range of 0.1-35.0 mg L(-1), with a detection limit of 0.081 mg L(-1). The relative standard deviations were found to be in the range 0.46-1.96% for 1, 10 and 30 mg L(-1) of gallic acid (n=11). The method has the advantages of simplicity, extremely high selectivity and high precision. The proposed method was successfully applied to the determination of gallic acid in longan samples without interference from other common phenolic compounds that might be present in the longan samples collected in northern Thailand. Copyright © 2014 Elsevier B.V. All rights reserved.
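
    The simplex (multivariate) optimization mentioned above can be sketched with the Nelder-Mead routine available in SciPy. The response function below is a synthetic stand-in for the measured peak height as a function of rhodanine concentration, NaOH concentration and flow rate; its functional form and optimum are assumptions for illustration, not the experimental response surface.

      import numpy as np
      from scipy.optimize import minimize

      # Synthetic stand-in for the measured FIA peak height as a function of
      # (rhodanine % w/v, NaOH mol/L, flow rate mL/min); illustrative only.
      def peak_height(x):
          rhodanine, naoh, flow = x
          return (np.exp(-((rhodanine - 1.0) / 0.5) ** 2)
                  * np.exp(-((naoh - 1.1) / 0.4) ** 2)
                  * np.exp(-((flow - 2.4) / 0.8) ** 2))

      # Nelder-Mead is a simplex-type direct search; we minimize the negative
      # response, starting from the univariate-method conditions.
      res = minimize(lambda x: -peak_height(x), x0=[0.6, 0.9, 2.0],
                     method="Nelder-Mead")
      print("simplex optimum (rhodanine %, NaOH M, flow):", np.round(res.x, 2))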

  9. Optimization of experimental conditions in uranium trace determination using laser time-resolved fluorimetry

    International Nuclear Information System (INIS)

    Baly, L.; Garcia, M.A.

    1996-01-01

    In the present paper a new sample excitation geometry is presented for uranium trace determination in aqueous solutions by Time-Resolved Laser-Induced Fluorescence. The new design introduces the laser radiation through the top side of the cell, allowing the use of cells with two quartz sides, which are less expensive than those commonly used in this experimental setup. Optimization of the excitation conditions, temporal discrimination and spectral selection are presented

  10. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    OpenAIRE

    Zong, Shengliang; Chai, Guorong; Su, Yana

    2017-01-01

    We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval between two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically non-increasing and that the repair time is exponentially distributed with a geometrically increasing mean. Our objective is to minimize the expected average cost under an availability requi...

  11. Determination of optimal geometry for cylindrical sources for gamma radiation measurements; Odredjivanje optimalne geometrije za mjerenje gama zracenja cilindrichnih izvora

    Energy Technology Data Exchange (ETDEWEB)

    Sinjeri, Lj; Kulisic, P [Elektra - Zagreb, Zagreb (Yugoslavia)

    1990-07-01

    Low-radioactive sources were used for the experimental determination of the optimal dimensions of a cylindrical source measured with a coaxial Ge(Li) detector. A calculational procedure was then used to find the optimal dimensions of the cylindrical source. The results of the calculational procedure agree with the experimental results. In this way the calculational procedure is verified, and it can be used for the determination of the optimal geometry for low-radioactive cylindrical sources. (author)

  12. A new power mapping method based on ordinary kriging and determination of optimal detector location strategy

    International Nuclear Information System (INIS)

    Peng, Xingjie; Wang, Kan; Li, Qing

    2014-01-01

    Highlights: • A new power mapping method based on Ordinary Kriging (OK) is proposed. • Measurements from DayaBay Unit 1 PWR are used to verify the OK method. • The OK method performs better than the CECOR method. • An optimal neutron detector location strategy based on ordinary kriging and simulated annealing is proposed. - Abstract: An Ordinary Kriging (OK) method designed for core power mapping calculations of pressurized water reactors (PWRs) is presented. Measurements from DayaBay Unit 1 PWR are used to verify the accuracy of the OK method. The root mean square (RMS) reconstruction errors are kept at less than 0.35%, and the maximum reconstruction relative errors (RE) are kept at less than 1.02% for the entire operating cycle. The reconstructed assembly power distribution results show that the OK method is fit for core power distribution monitoring. The quality of the power distribution obtained by the OK method is partly determined by the neutron detector locations, and the OK method is also applied to solve the optimal neutron detector location problem. The spatially averaged ordinary kriging variance (AOKV) is minimized using simulated annealing, and the optimal in-core neutron detector locations are then obtained. The result shows that the current neutron detector locations of the DayaBay Unit 1 reactor are near-optimal
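
    The detector-location part of the abstract (minimizing the spatially averaged ordinary-kriging variance with simulated annealing) can be sketched as follows. The spherical variogram, the candidate grid, the number of detectors and the annealing schedule are all illustrative assumptions; only the overall structure (OK variance as the cost, annealing as the search) follows the description above.

      import numpy as np

      rng = np.random.default_rng(1)

      def variogram(h, sill=1.0, vrange=8.0):
          """Spherical variogram model (an illustrative choice)."""
          h = np.minimum(h, vrange)
          return sill * (1.5 * h / vrange - 0.5 * (h / vrange) ** 3)

      def avg_ok_variance(sensors, targets):
          """Spatially averaged ordinary-kriging variance for given detector
          coordinates, i.e. the cost to be minimized (AOKV)."""
          n = len(sensors)
          d = np.linalg.norm(sensors[:, None] - sensors[None, :], axis=-1)
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = variogram(d)
          A[n, n] = 0.0
          var = []
          for t in targets:
              b = np.ones(n + 1)
              b[:n] = variogram(np.linalg.norm(sensors - t, axis=1))
              lam = np.linalg.solve(A, b)
              var.append(lam @ b)            # OK variance at this target point
          return float(np.mean(var))

      # candidate detector positions and reconstruction points on a 10 x 10 grid
      candidates = np.array([(i, j) for i in range(10) for j in range(10)], float)
      targets = candidates.copy()

      # simulated annealing over which 8 candidate positions host a detector
      current = rng.choice(len(candidates), size=8, replace=False)
      cur_cost = avg_ok_variance(candidates[current], targets)
      T = 1.0
      for _ in range(500):
          prop = current.copy()
          prop[rng.integers(8)] = rng.integers(len(candidates))
          if len(set(prop.tolist())) < 8:    # skip proposals with duplicates
              continue
          cost = avg_ok_variance(candidates[prop], targets)
          if cost < cur_cost or rng.random() < np.exp((cur_cost - cost) / T):
              current, cur_cost = prop, cost
          T *= 0.99
      print("selected detector positions:\n", candidates[current])
      print("average OK variance:", round(cur_cost, 4))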

  13. Determination of radial profile of ICF hot spot's state by multi-objective parameters optimization

    International Nuclear Information System (INIS)

    Dong Jianjun; Deng Bo; Cao Zhurong; Ding Yongkun; Jiang Shaoen

    2014-01-01

    A method using multi-objective parameter optimization is presented to determine the radial profiles of hot spot temperature and density. A parameter space containing five variables is used to describe the hot spot radial temperature and density: the temperatures at the center and at the interface between the fuel and the remaining ablator, the maximum density of the remaining ablator, the mass ratio of remaining ablator to initial ablator, and the position of the interface between the fuel and the remaining ablator. Two objective functions are defined as the variances between the normalized intensity profiles from experimental X-ray images and the theoretical calculation. Another objective function is defined as the variance between the experimental average hot spot temperature and the average temperature calculated by the theoretical model. The optimized parameters are obtained by a multi-objective genetic algorithm searching the five-dimensional parameter space, so that the optimized radial temperature and density profiles can be determined. The radial temperature and density profiles of a hot spot obtained from experimental data measured by a KB microscope coupled with X-ray film are presented. It is observed that the temperature profile is strongly correlated to the objective functions. (authors)

  14. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    International Nuclear Information System (INIS)

    Oliveira, Karina B. de; Oliveira, Bras H. de

    2013-01-01

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for determination of RA in sage, using factorial design of experiments for optimizing sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated, and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 deg C for 20 min. The samples were then injected into a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min−1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher when compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  15. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica

    2013-01-15

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for determination of RA in sage, using factorial design of experiments for optimizing sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated, and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 deg C for 20 min. The samples were then injected into a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min-1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher when compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  16. Method to determine the optimal constitutive model from spherical indentation tests

    Science.gov (United States)

    Zhang, Tairui; Wang, Shang; Wang, Weiqiang

    2018-03-01

    The limitation of current indentation theories was investigated and a method to determine the optimal constitutive model through spherical indentation tests was proposed. Two constitutive models, the Power-law and the Linear-law, were used in Finite Element (FE) calculations, and then a set of indentation governing equations was established for each model. The load-depth data from the normal indentation depth were used to fit the best parameters in each constitutive model, while the data from the further loading part were compared with those from FE calculations; the model that better predicted the further deformation was considered the optimal one. Moreover, a Young's modulus calculation model which took the previous plastic deformation and the phenomenon of pile-up (or sink-in) into consideration was also proposed to revise the original Sneddon-Pharr-Oliver model. The indentation results on six materials, 304, 321, SA508, SA533, 15CrMoR, and Fv520B, were compared with tensile ones, which validated the reliability of the revised E calculation model and the optimal constitutive model determination method in this study.

  17. Evaluation of sample preparation methods and optimization of nickel determination in vegetable tissues

    Directory of Open Access Journals (Sweden)

    Rodrigo Fernando dos Santos Salazar

    2011-02-01

    Full Text Available Nickel, although essential to plants, may be toxic to plants and animals. It is mainly assimilated by food ingestion. However, information about the average levels of elements (including Ni) in edible vegetables from different regions is still scarce in Brazil. The objectives of this study were to: (a) evaluate and optimize a method for preparation of vegetable tissue samples for Ni determination; (b) optimize the analytical procedures for determination by Flame Atomic Absorption Spectrometry (FAAS) and by Electrothermal Atomic Absorption (ETAAS) in vegetable samples; and (c) determine the Ni concentration in vegetables consumed in the cities of Lorena and Taubaté in the Vale do Paraíba, State of São Paulo, Brazil. For the analytical determinations by ETAAS and FAAS, the results were validated by analyte addition and recovery tests. The most viable method tested for quantification of this element was HClO4-HNO3 wet digestion. All samples but the carrot tissue collected in Lorena contained Ni levels above those permitted by the Brazilian Ministry of Health. The most disturbing results, requiring more detailed studies, were the Ni concentrations measured in carrot samples from Taubaté, where levels were five times higher than permitted by Brazilian regulations.

  18. FPGA Congestion-Driven Placement Refinement

    Energy Technology Data Exchange (ETDEWEB)

    Vicente de, J.

    2005-07-01

    The routing congestion usually limits the complete proficiency of the FPGA logic resources. A key question can be formulated regarding the benefits of estimating the congestion at the placement stage. In recent years, the idea of a detailed placement that takes congestion into account has been gaining acceptance. In this paper, we resort to the Thermodynamic Simulated Annealing (TSA) algorithm to perform a congestion-driven placement refinement on top of the common Bounding-Box pre-optimized solution. The adaptive properties of TSA allow the search to preserve the solution quality of the pre-optimized solution while improving other fine-grain objectives. Regarding the cost function, two approaches have been considered. In the first one, Expected Occupation (EO), a detailed probabilistic model of channel congestion, is evaluated. We show that in spite of the minute detail of EO, the inherent uncertainty of this probabilistic model prevents relieving congestion beyond what the sole application of the Bounding-Box cost function achieves. In the second approach we resort to the fast Rectilinear Steiner Regions algorithm to perform not an estimation but a measurement of the global routing congestion. This second strategy allows us to successfully reduce the required channel width for a set of benchmark circuits with respect to the widespread Versatile Place and Route (VPR) tool. (Author) 31 refs.

  19. Observability-Based Guidance and Sensor Placement

    Science.gov (United States)

    Hinson, Brian T.

    Control system performance is highly dependent on the quality of sensor information available. In a growing number of applications, however, the control task must be accomplished with limited sensing capabilities. This thesis addresses these types of problems from a control-theoretic point-of-view, leveraging system nonlinearities to improve sensing performance. Using measures of observability as an information quality metric, guidance trajectories and sensor distributions are designed to improve the quality of sensor information. An observability-based sensor placement algorithm is developed to compute optimal sensor configurations for a general nonlinear system. The algorithm utilizes a simulation of the nonlinear system as the source of input data, and convex optimization provides a scalable solution method. The sensor placement algorithm is applied to a study of gyroscopic sensing in insect wings. The sensor placement algorithm reveals information-rich areas on flexible insect wings, and a comparison to biological data suggests that insect wings are capable of acting as gyroscopic sensors. An observability-based guidance framework is developed for robotic navigation with limited inertial sensing. Guidance trajectories and algorithms are developed for range-only and bearing-only navigation that improve navigation accuracy. Simulations and experiments with an underwater vehicle demonstrate that the observability measure allows tuning of the navigation uncertainty.
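
    The core idea of observability-based guidance, that the trajectory itself determines how informative a limited sensor is, can be illustrated with an empirical observability Gramian. The unicycle model, the single range-only beacon and the two candidate trajectories below are assumptions chosen for illustration; they are not the vehicles or algorithms from the thesis.

      import numpy as np

      def simulate(x0, controls, dt=0.1):
          """Unicycle kinematics (x, y, heading); returns the state history."""
          x = np.array(x0, float)
          traj = [x.copy()]
          for v, w in controls:
              x = x + dt * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
              traj.append(x.copy())
          return np.array(traj)

      def range_meas(traj, beacon=(0.0, 0.0)):
          """Range-only measurement to a single fixed beacon."""
          return np.linalg.norm(traj[:, :2] - np.array(beacon), axis=1)

      def empirical_obs_gramian(x0, controls, eps=1e-4):
          """Empirical observability Gramian: central differences of the
          measurement history with respect to the initial state."""
          n = len(x0)
          dys = []
          for i in range(n):
              xp = np.array(x0, float); xp[i] += eps
              xm = np.array(x0, float); xm[i] -= eps
              dy = (range_meas(simulate(xp, controls)) -
                    range_meas(simulate(xm, controls))) / (2 * eps)
              dys.append(dy)
          return np.array([[dys[i] @ dys[j] for j in range(n)] for i in range(n)])

      x0 = [5.0, 0.0, np.pi / 2]
      straight = [(1.0, 0.0)] * 100          # constant-heading trajectory
      circular = [(1.0, 0.3)] * 100          # turning trajectory
      for name, u in [("straight", straight), ("circular", circular)]:
          G = empirical_obs_gramian(x0, u)
          print(name, "min eigenvalue of observability Gramian:",
                np.linalg.eigvalsh(G)[0])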

  20. Mathematics Course Placement Using Holistic Measures: Possibilities for Community College Students

    Science.gov (United States)

    Ngo, Federick; Chi, W. Edward; Park, Elizabeth So Yun

    2018-01-01

    Background/Context: Most community colleges across the country use a placement test to determine students' readiness for college-level coursework, yet these tests are admittedly imperfect instruments. Researchers have documented significant problems stemming from overreliance on placement testing, including placement error and misdiagnosis of…

  1. A New Method for Optimal Regularization Parameter Determination in the Inverse Problem of Load Identification

    Directory of Open Access Journals (Sweden)

    Wei Gao

    2016-01-01

    Full Text Available According to the regularization method in the inverse problem of load identification, a new method for determining the optimal regularization parameter is proposed. Firstly, a quotient function (QF) is defined, using the regularization parameter as a variable, based on the least squares solution of the minimization problem. Secondly, the quotient function method (QFM) is proposed to select the optimal regularization parameter based on quadratic programming theory. To apply the QFM, the behaviour of the QF values with respect to different regularization parameters is taken into consideration. Finally, numerical and experimental examples are utilized to validate the performance of the QFM. Furthermore, the Generalized Cross-Validation (GCV) method and the L-curve method are taken as the comparison methods. The results indicate that the proposed QFM is adaptive to different measuring points, noise levels, and types of dynamic load.
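
    The proposed quotient function method itself is not reproduced here, but one of the comparison methods named in the abstract, Generalized Cross-Validation, is easy to sketch for a Tikhonov-regularized load-identification-style problem. The smoothing kernel, noise level and parameter grid below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      # Ill-posed toy problem y = X f + noise, standing in for load identification.
      n = 80
      X = np.array([[np.exp(-0.1 * abs(i - j)) for j in range(n)] for i in range(n)])
      f_true = np.sin(np.linspace(0, 3 * np.pi, n))
      y = X @ f_true + 0.01 * rng.standard_normal(n)

      def gcv(lmbda):
          """Generalized Cross-Validation score for Tikhonov regularization."""
          H = X @ np.linalg.solve(X.T @ X + lmbda * np.eye(n), X.T)  # hat matrix
          resid = y - H @ y
          return float(resid @ resid) / (n - np.trace(H)) ** 2

      lambdas = np.logspace(-8, 2, 60)
      best = min(lambdas, key=gcv)
      print("GCV-optimal regularization parameter:", best)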

  2. Determination of proline in honey: comparison between official methods, optimization and validation of the analytical methodology.

    Science.gov (United States)

    Truzzi, Cristina; Annibaldi, Anna; Illuminati, Silvia; Finale, Carolina; Scarponi, Giuseppe

    2014-05-01

    The study compares official spectrophotometric methods for the determination of proline content in honey - those of the International Honey Commission (IHC) and the Association of Official Analytical Chemists (AOAC) - with the original Ough method. Results show that the extra time-consuming treatment stages added by the IHC method with respect to the Ough method are pointless. We demonstrate that the AOAC method proves to be the best in terms of accuracy and time saving. The optimized waiting time for the absorbance recording is set at 35 min from the removal of the reaction tubes from the boiling bath used in the sample treatment. The optimized method was validated in the matrix: linearity up to 1800 mg L(-1), limit of detection 20 mg L(-1), limit of quantification 61 mg L(-1). The method was applied to 43 unifloral honey samples from the Marche region, Italy. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Parametrical Method for Determining Optimal Ship Carrying Capacity and Performance of Handling Equipment

    Directory of Open Access Journals (Sweden)

    Michalski Jan P.

    2016-04-01

    Full Text Available The paper presents a method of evaluating the optimal value of the cargo ship's deadweight and the coupled optimal value of its cargo handling capacity. The method may be useful at the stage of establishing the main owner's requirements concerning the ship design parameters, as well as for choosing a proper second-hand ship for a given transportation task. The deadweight and the capacity are determined on the basis of a selected economic measure of the transport effectiveness of the ship, the Required Freight Rate. The mathematical model of the problem is of a deterministic character, and the simplifying assumptions are justified for ships operating in the liner trade. The assumptions are selected so that the solution of the problem is obtained in closed analytical form. The presented method can be useful in preliminary ship design or in the simulation of pre-investment transportation task studies.

  4. Determination of the Optimal Exchange Rate Via Control of the Domestic Interest Rate in Nigeria

    Directory of Open Access Journals (Sweden)

    Virtue U. Ekhosuehi

    2014-01-01

    Full Text Available An economic scenario is considered in which the government seeks to achieve a favourable balance of payments over a fixed planning horizon through exchange rate policy and control of the domestic interest rate. The dynamics of such an economy are considered in terms of a bounded optimal control problem in which the exchange rate is the state variable and the domestic interest rate is the control variable. The idea of the balance of payments was used as a theoretical underpinning to specify the objective function. It was assumed that changes in exchange rates were induced by two effects: the impact of the domestic interest rate on the exchange rate, and the exchange rate system adopted by the government. Instances of both fixed and flexible optimal exchange rate regimes have been determined. The use of the approach is illustrated employing data obtained from the Central Bank of Nigeria (CBN) statistical bulletin. (original abstract)

  5. Determination of the Optimal Tilt Angle for Solar Photovoltaic Panel in Ilorin, Nigeria

    Directory of Open Access Journals (Sweden)

    K.R. Ajao

    2013-06-01

    Full Text Available The optimal tilt angle of a solar photovoltaic panel in Ilorin, Nigeria was determined. The solar panel was first mounted at 0° to the horizontal and, after ten minutes, the voltage and current generated along with the corresponding atmospheric temperature were recorded. The same procedure was repeated from 2° to 30° in increments of 2° at ten-minute intervals over the entire measurement period. The results obtained show that the average optimal tilt angle at which a solar panel should be mounted for maximum power performance at a fixed position in Ilorin is 22°. This optimum tilt angle of the solar panel and the orientation depend on the month of the year and the location of the site of study.

  6. Determining an energy-optimal thermal management strategy for electric driven vehicles

    Energy Technology Data Exchange (ETDEWEB)

    Suchaneck, Andre; Probst, Tobias; Puente Leon, Fernando [Karlsruher Institut fuer Technology (KIT), Karlsruhe (Germany). Inst. of Industrial Information Technology (IIIT)

    2012-11-01

    In electric, hybrid electric and fuel cell vehicles, thermal management may have a significant impact on vehicle range. Therefore, optimal thermal management strategies are required. In this paper a method for determining an energy-optimal control strategy for thermal power generation in electric driven vehicles is presented considering all controlled devices (pumps, valves, fans, and the like) as well as influences like ambient temperature, vehicle speed, motor and battery and cooling cycle temperatures. The method is designed to be generic to increase the thermal management development process speed and to achieve the maximal energy reduction for any electric driven vehicle (e.g., by waste heat utilization). Based on simulations of a prototype electric vehicle with an advanced cooling cycle structure, the potential of the method is shown. (orig.)

  7. Determination of optimal environmental policy for reclamation of land unearthed in lignite mines - Strategy and tactics

    Science.gov (United States)

    Batzias, Dimitris F.; Pollalis, Yannis A.

    2012-12-01

    In this paper, optimal environmental policy for reclamation of land unearthed in lignite mines is defined as a strategic target. The tactics for achieving this target include estimation of the optimal time lag between the complete exploitation of each lignite site (a segment of the whole lignite field) and its reclamation. Subsidizing of reclamation has been determined as a function of this time lag, and the relevant implementation is presented for parameter values valid for the Greek economy. We prove that the methodology we have developed gives reasonable quantitative results within the norms imposed by legislation. Moreover, the interconnection between strategy and tactics becomes evident, since the former causes the latter by deduction and the latter revises the former by induction in the time course of land reclamation.

  8. Combustion characteristics and optimal factors determination with Taguchi method for diesel engines port-injecting hydrogen

    International Nuclear Information System (INIS)

    Wu, Horng-Wen; Wu, Zhan-Yi

    2012-01-01

    This study applies the L9 orthogonal array of the Taguchi method to find the best hydrogen injection timing, hydrogen-energy-share ratio, and percentage of exhaust gas recirculation (EGR) in a single DI diesel engine. The injection timing is controlled by an electronic control unit (ECU) and the quantity of hydrogen is controlled by a hydrogen flow controller. For various engine loads, the authors determine the optimal operating factors for low BSFC (brake specific fuel consumption), NOx, and smoke. Moreover, the net heat-release rate, involving a variable specific heat ratio, is computed from the experimental in-cylinder pressure. In-cylinder pressure, net heat-release rate, A/F ratios, COV (coefficient of variation) of IMEP (indicated mean effective pressure), NOx, and smoke using the optimum condition factors are compared with those of the original baseline diesel engine. The predictions made using Taguchi's parameter design technique agreed with the confirmation results at a 95% confidence interval. At 45% and 60% loads, the optimum factor combination, compared with the original baseline diesel engine, reduces BSFC by 14.52%, NOx by 60.5% and smoke by 42.28%, and improves combustion performance measures such as peak in-cylinder pressure and net heat-release rate. Adding hydrogen and EGR does not generate unstable combustion, owing to the low COV of IMEP. -- Highlights: ► We use a hydrogen injector controlled by an ECU and a cooled EGR system in a diesel engine. ► Optimal factors for low BSFC, NOx and smoke are determined by the Taguchi method. ► The COV of IMEP is lower than 10%, so it will not cause unstable combustion. ► We improve A/F ratio, in-cylinder pressure, and heat release in the optimized engine. ► The decrease is 14.5% for BSFC, 60.5% for NOx, and 42.28% for smoke in the optimized engine.
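
    The Taguchi analysis described above can be sketched with a standard L9(3^4) orthogonal array: each factor's mean signal-to-noise ratio per level is computed and the best level is picked. The smoke readings below are placeholder numbers, and only three of the four L9 columns are used; the real factor levels and responses come from the engine experiments.

      import numpy as np

      # Standard L9(3^4) orthogonal array; the first three columns are used here
      # for injection timing, hydrogen-energy-share ratio and EGR percentage.
      L9 = np.array([[1, 1, 1], [1, 2, 2], [1, 3, 3],
                     [2, 1, 2], [2, 2, 3], [2, 3, 1],
                     [3, 1, 3], [3, 2, 1], [3, 3, 2]])

      # Placeholder smoke readings for the nine runs (smaller is better).
      y = np.array([3.1, 2.8, 2.9, 2.4, 2.2, 2.6, 2.0, 2.3, 2.5])

      # Smaller-the-better signal-to-noise ratio.
      sn = -10 * np.log10(y ** 2)

      for col, name in enumerate(["injection timing", "H2 share", "EGR"]):
          means = np.array([sn[L9[:, col] == lvl].mean() for lvl in (1, 2, 3)])
          print(f"{name}: mean S/N per level = {np.round(means, 2)},"
                f" best level = {means.argmax() + 1}")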

  9. A Novel Scheme for Optimal Control of a Nonlinear Delay Differential Equations Model to Determine Effective and Optimal Administrating Chemotherapy Agents in Breast Cancer.

    Science.gov (United States)

    Ramezanpour, H R; Setayeshi, S; Akbari, M E

    2011-01-01

    Determining an optimal and effective scheme for administering chemotherapy agents in breast cancer is the main goal of this research. The most important issue here is the amount of drug or radiation administered in chemotherapy and radiotherapy for increasing the patient's survival, because in these cases the therapy not only kills the tumor cells but also kills some of the healthy tissue and causes serious damage. In this paper we investigate the effect of optimal drug scheduling for a breast cancer model which consists of nonlinear ordinary differential time-delay equations. A mathematical model of breast cancer tumors is discussed and then optimal control theory is applied to find the optimal drug adjustment as an input control of the system. Finally we use the Sensitivity Approach (SA) to solve the optimal control problem. The goal of this paper is to determine an optimal and effective scheme for administering the chemotherapy agent, so that the tumor is eradicated while the immune system remains above a suitable level. Simulation results confirm the effectiveness of our proposed procedure. In this paper a new scheme is proposed to design a therapy protocol for chemotherapy in breast cancer. In contrast to traditional pulse drug delivery, a continuous process is offered and optimized according to optimal control theory for time-delay systems.

  10. The optimal condition of performing MTT assay for the determination of radiation sensitivity

    International Nuclear Information System (INIS)

    Hong, Semie; Kim, Il Han

    2001-01-01

    The measurement of radiation survival using a clonogenic assay, the established standard, can be difficult and time consuming. In this study, we have used the MTT assay, based on the reduction of a tetrazolium salt to a purple formazan precipitate by living cells, as a substitute for the clonogenic assay, and have examined the optimal conditions for performing this assay in the determination of radiation sensitivity. Four human cancer cell lines (PCI-1, SNU-1066, NCI-H630 and RKO cells) have been used. For each cell line, a clonogenic assay and an MTT assay using Premix WST-1 solution, which is one of the tetrazolium salts and does not require washing or solubilization of the precipitate, were carried out after irradiation with 0, 2, 4, 6, 8 and 10 Gy. For the clonogenic assay, cells in 25 cm² flasks were irradiated after overnight incubation and the resultant colonies containing more than 50 cells were scored after culturing the cells for 10-14 days. For the MTT assay, the relationship between absorbance and cell number, the optimal seeding cell number, and the optimal timing of the assay were determined. The MTT assay was then performed when the irradiated cells had regained exponential growth or when the non-irradiated cells had undergone four or more doubling times. There was minimal variation in the values obtained from these two methods, with the standard deviation generally less than 5%, and there were no statistically significant differences between the two methods according to the t-test at low radiation doses (below 6 Gy). The regression analyses showed a high linear correlation, with R² values of 0.975-0.992, between data from the two different methods. The optimal cell numbers for the MTT assay were found to depend on the plating efficiency of the cell line used. Less than 300 cells/well were appropriate for cells with high plating efficiency (more than 30%). For cells with low plating efficiency (less than 30%), 500 cells/well or more were appropriate for the assay. The optimal time for the MTT assay was after 6

  11. Simplex optimization of the variables influencing the determination of pefloxacin by time-resolved chemiluminescence

    Science.gov (United States)

    Murillo Pulgarín, José A.; Alañón Molina, Aurelia; Jiménez García, Elisa

    2018-03-01

    A new chemiluminescence (CL) detection system combined with flow injection analysis (FIA) for the determination of Pefloxacin is proposed. The determination is based on an energy transfer from Pefloxacin to terbium(III). The metal ion enhances the weak CL signal produced by the KMnO4/H2SO3/Pefloxacin system. A modified simplex method was used to optimize chemical and instrumental variables. The influence of the interaction of the permanganate, Tb(III), sodium sulphite and sulphuric acid concentrations, the flow rate and the injected sample volume was thoroughly investigated by using a modified simplex optimization procedure. The results revealed a strong direct relationship between flow rate and CL intensity throughout the studied range, which was confirmed by a gamma test. The response factor for the CL emission intensity was used to assess performance in order to identify the optimum conditions for maximization of the response. Under such conditions, the CL response was proportional to the Pefloxacin concentration over a wide range. The detection limit, calculated according to Clayton's criterion, was 13.7 μg L-1. The analyte was successfully determined in milk samples with an average recovery of 100.6 ± 9.8%.

  12. Determination of optimal self-drive tourism route using the orienteering problem method

    Science.gov (United States)

    Hashim, Zakiah; Ismail, Wan Rosmanira; Ahmad, Norfaieqah

    2013-04-01

    This study was conducted to determine the optimal travel routes for self-drive tourism based on the allocation of time and expense, by maximizing the total attraction score assigned to the cities involved. Self-drive tourism is a type of tourism in which tourists hire or travel in their own vehicle; it only involves tourist destinations that can be linked by a network of roads. The traveling salesman problem (TSP) and multiple traveling salesman problem (MTSP) methods are normally used for minimization problems, such as determining the shortest travel time or distance. This paper presents an alternative approach for a maximization problem, namely maximizing the attraction scores, tested on tourism data for ten cities in Kedah. A set of priority scores is used to set the attraction score of each city. The classical orienteering problem approach is used to determine the optimal travel route. This approach is extended to the team orienteering problem and the two methods are compared. The two models have been solved using LINGO 12.0 software. The results indicate that the team orienteering problem model provides a more appropriate solution than the orienteering problem model.
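
    The orienteering problem described above (maximize the collected attraction scores subject to a travel budget) can be illustrated with a tiny brute-force search; the abstract's models were solved with LINGO 12.0 instead. The coordinates, scores and distance budget below are random placeholders, not the Kedah data.

      import itertools
      import numpy as np

      rng = np.random.default_rng(3)
      n = 6                                   # attractions; index 0 is the depot
      coords = rng.uniform(0, 100, size=(n + 1, 2))
      score = rng.integers(3, 10, size=n + 1)
      score[0] = 0
      budget = 220.0                          # allowed total travel distance

      def route_length(route):
          pts = coords[[0] + list(route) + [0]]   # start and end at the depot
          return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

      best_route, best_score = (), 0
      for r in range(1, n + 1):
          for route in itertools.permutations(range(1, n + 1), r):
              if route_length(route) <= budget:
                  s = int(score[list(route)].sum())
                  if s > best_score:
                      best_route, best_score = route, s

      print("best route (from/to depot 0):", best_route, "score:", best_score)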

  13. Optimized determination of the radiological inventory during different phases of decommissioning

    International Nuclear Information System (INIS)

    Hillberg, Matthias; Beltz, Detlef; Karschnick, Oliver

    2012-01-01

    The decommissioning of nuclear facilities comprises many activities, such as decontamination, dismantling and demolition of equipment and structures. For these activities, the health and safety of the operational personnel and of the general public, as well as the minimization of radioactive waste, have to be taken into account. An optimized, comprehensible and verifiable determination of the radiological inventory is essential for decommissioning management with respect to safety, time, and costs. For example: right from the start of the post-operational phase, the radiological characterization has to enable the decision whether or not to perform a system decontamination. Furthermore it is necessary, e.g., to determine the relevant nuclides and their composition (nuclide vector) for the release of material and for sustaining radiological health and safety at work (e.g. minimizing the risk of incorporation). Our contribution focuses on the optimization of the radiological characterization with respect to the requisite extent and the best point in time during the decommissioning process. For example: which additional information, besides the operating history, is essential for an adequate amount of sampling and measurements needed to determine the relevant nuclides and their compositions? Furthermore, the characterization of buildings requires a graded approach during the decommissioning process. At the beginning of decommissioning, only a rough estimate of the expected radioactive waste due to the necessary decontamination of the building structures is sufficient. With ongoing decommissioning, a more precise radiological characterization of buildings is needed in order to guarantee an optimized, comprehensible and verifiable decontamination, dismantling and trouble-free clearance. These and other examples will be discussed on the background of and with reference to different decommissioning projects involving direct

  14. Product code optimization for determinate state LDPC decoding in robust image transmission.

    Science.gov (United States)

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  15. Optimization of the Determination Method for Dissolved Cyanobacterial Toxin BMAA in Natural Water.

    Science.gov (United States)

    Yan, Boyin; Liu, Zhiquan; Huang, Rui; Xu, Yongpeng; Liu, Dongmei; Lin, Tsair-Fuh; Cui, Fuyi

    2017-10-17

    There is a serious dispute about the existence of β-N-methylamino-l-alanine (BMAA) in water, a neurotoxin that may cause amyotrophic lateral sclerosis/Parkinsonism-dementia complex (ALS/PDC) and Alzheimer's disease. It is believed that a reliable and sensitive analytical method for the determination of BMAA is urgently required to resolve this dispute. In the present study, the solid phase extraction (SPE) procedure and the analytical method for dissolved BMAA in water were investigated and optimized. The results showed that both derivatized and underivatized methods were suitable for the measurement of BMAA and its isomer in natural water, and the limit of detection and the precision of the two methods were comparable. Cartridge characteristics and SPE conditions could greatly affect the SPE performance, and competition from natural organic matter is the primary factor causing the low recovery of BMAA, which was reduced from approximately 90% in pure water to 38.11% in natural water. The optimized SPE method for BMAA was a combination of rinsed SPE cartridges, controlled loading/elution rates and elution solution, evaporation at 55 °C, reconstitution in a solution mixture, and filtration through a polyvinylidene fluoride membrane. This optimized method achieved > 88% recovery of BMAA in both algal solution and river water. The developed method can provide an efficient way to evaluate the actual concentration levels of BMAA in real water environments and drinking water systems.

  16. Optimization method to determine mass transfer variables in a PWR crud deposition risk assessment tool

    International Nuclear Information System (INIS)

    Do, Chuong; Hussey, Dennis; Wells, Daniel M.; Epperson, Kenny

    2016-01-01

    A numerical optimization method was implemented to determine several mass transfer coefficients in a crud-induced power shift risk assessment code. The approach utilizes a multilevel strategy that targets different model parameters: it first changes the major-order variables, the mass transfer inputs, and then calibrates the minor-order variables, the crud source terms, according to available plant data. In this manner, the mass transfer inputs are effectively treated as dependent on the crud source terms. Two optimization studies were performed using DAKOTA, a design and analysis toolkit, the difference between the runs being the number of BOA model runs allowed for adjusting the crud source terms, and therefore the uncertainty in the calibration. The result of the first case showed that the current best-estimate values for the mass transfer coefficients, which were derived from first-principles analysis, can be considered an optimized set. When the run limit of BOA was increased for the second case, an improvement in the prediction was obtained, with the results deviating slightly from the best-estimate values. (author)

  17. Optimization of the n-type HPGe detector parameters to theoretical determination of efficiency curves

    International Nuclear Information System (INIS)

    Rodriguez-Rodriguez, A.; Correa-Alfonso, C.M.; Lopez-Pino, N.; Padilla-Cabal, F.; D'Alessandro, K.; Corrales, Y.; Garcia-Alvarez, J. A.; Perez-Mellor, A.; Baly-Gil, L.; Machado, A.

    2011-01-01

    A highly detailed characterization of a 130 cm³ n-type HPGe detector, employed in low-background gamma spectrometry measurements, was performed. Precise measured data and several Monte Carlo (MC) calculations have been combined to optimize the detector parameters. The HPGe crystal location inside the aluminum end-cap as well as its dimensions, including the borehole radius and height, were determined from frontal and lateral scans. Additionally, X-ray radiography and Computed Axial Tomography (CT) studies were carried out to complement the information about the detector features. Using seven calibrated point sources (241Am, 133Ba, 57,60Co, 137Cs, 22Na and 152Eu), photo-peak efficiency curves at three different source-detector distances (SDD) were obtained. Taking into account the experimental values, an optimization procedure by means of MC simulations (MCNPX 2.6 code) was performed. MC efficiency curves were calculated by specifying the optimized detector parameters in the MCNPX input files. The efficiency calculation results agree with the empirical data, showing relative deviations of less than 10%. (Author)

  18. Optimization and determination of polycyclic aromatic hydrocarbons in biochar-based fertilizers.

    Science.gov (United States)

    Chen, Ping; Zhou, Hui; Gan, Jay; Sun, Mingxing; Shang, Guofeng; Liu, Liang; Shen, Guoqing

    2015-03-01

    The agronomic benefit of biochar has attracted widespread attention to biochar-based fertilizers. However, the inevitable presence of polycyclic aromatic hydrocarbons in biochar is a matter of concern because of the health and ecological risks of these compounds. The strong adsorption of polycyclic aromatic hydrocarbons to biochar complicates their analysis and extraction from biochar-based fertilizers. In this study, we optimized and validated a method for determining the 16 priority polycyclic aromatic hydrocarbons in biochar-based fertilizers. Results showed that accelerated solvent extraction exhibited high extraction efficiency. Based on a Box-Behnken design with a triplicate central point, accelerated solvent extraction was used under the following optimal operational conditions: extraction temperature of 78°C, extraction time of 17 min, and two static cycles. The optimized method was validated by assessing the linearity of analysis, limit of detection, limit of quantification, recovery, and application to real samples. The results showed that the 16 polycyclic aromatic hydrocarbons exhibited good linearity, with a correlation coefficient of 0.996. The limits of detection varied between 0.001 (phenanthrene) and 0.021 mg/g (benzo[ghi]perylene), and the limits of quantification varied between 0.004 (phenanthrene) and 0.069 mg/g (benzo[ghi]perylene). The relative recoveries of the 16 polycyclic aromatic hydrocarbons were 70.26-102.99%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
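
    A Box-Behnken design with centre points, as used above, can be generated and analysed with a short script: build the coded design, fit a quadratic response surface, and locate the predicted optimum. The response values below are simulated placeholders standing in for the measured PAH recoveries; the factors (temperature, time, static cycles) are taken from the abstract but their coding is an assumption.

      import itertools
      import numpy as np

      def box_behnken_3(n_centre=3):
          """Box-Behnken design for three coded factors plus centre points."""
          pts = []
          for i, j in itertools.combinations(range(3), 2):
              for a, b in itertools.product((-1, 1), repeat=2):
                  p = [0, 0, 0]
                  p[i], p[j] = a, b
                  pts.append(p)
          pts += [[0, 0, 0]] * n_centre
          return np.array(pts, float)

      X = box_behnken_3()          # coded temperature, time, static cycles

      # Placeholder responses (e.g. total PAH recovery); real values are measured.
      rng = np.random.default_rng(7)
      y = 90 - 3 * X[:, 0] ** 2 - 2 * X[:, 1] ** 2 + 1.5 * X[:, 0] \
          + rng.normal(0, 0.5, len(X))

      def quad_features(x):
          a, b, c = x
          return [1, a, b, c, a * b, a * c, b * c, a * a, b * b, c * c]

      F = np.array([quad_features(x) for x in X])
      coef, *_ = np.linalg.lstsq(F, y, rcond=None)     # quadratic surface fit

      grid = np.array(list(itertools.product(np.linspace(-1, 1, 21), repeat=3)))
      pred = np.array([quad_features(g) for g in grid]) @ coef
      print("predicted optimum (coded levels):", grid[pred.argmax()])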

  19. Virtual haptic system for intuitive planning of bone fixation plate placement

    Directory of Open Access Journals (Sweden)

    Kup-Sze Choi

    2017-01-01

    Full Text Available Placement of a pre-contoured fixation plate is a common treatment for bone fracture. The fitting of fixation plates on a fractured bone can be preoperatively planned and evaluated in a 3D virtual environment using virtual reality technology. However, conventional systems usually employ a 2D mouse and a virtual trackball as the user interface, which makes the process inconvenient and inefficient. In this paper, a preoperative planning system equipped with a 3D haptic user interface is proposed to allow users to manipulate the virtual fixation plate intuitively in order to determine the optimal position for placement on the distal medial tibia. The system provides interactive feedback forces and visual guidance based on the geometric requirements. The creation of 3D models from medical imaging data, collision detection, dynamics simulation and haptic rendering are discussed. The system was evaluated by 22 subjects. The results show that the time to achieve optimal placement using the proposed system was shorter than with a 2D mouse and virtual trackball, and the satisfaction rating was also higher. The system shows potential to facilitate the process of fitting fixation plates on fractured bones as well as interactive fixation plate design.

  20. Determination of the optimal tolerance for MLC positioning in sliding window and VMAT techniques

    International Nuclear Information System (INIS)

    Hernandez, V.; Abella, R.; Calvo, J. F.; Jurado-Bruggemann, D.; Sancho, I.; Carrasco, P.

    2015-01-01

    Purpose: Several authors have recommended a 2 mm tolerance for multileaf collimator (MLC) positioning in sliding window treatments. In volumetric modulated arc therapy (VMAT) treatments, however, the optimal tolerance for MLC positioning remains unknown. In this paper, the authors present the results of a multicenter study to determine the optimal tolerance for both techniques. Methods: The procedure used is based on dynalog file analysis. The study was carried out using seven Varian linear accelerators from five different centers. Dynalogs were collected from over 100 000 clinical treatments and in-house software was used to compute the number of tolerance faults as a function of the user-defined tolerance. Thus, the optimal value for this tolerance, defined as the lowest achievable value, was investigated. Results: Dynalog files accurately predict the number of tolerance faults as a function of the tolerance value, especially for low fault incidences. All MLCs behaved similarly and the Millennium120 and the HD120 models yielded comparable results. In sliding window techniques, the number of beams with an incidence of hold-offs >1% rapidly decreases for a tolerance of 1.5 mm. In VMAT techniques, the number of tolerance faults sharply drops for tolerances around 2 mm. For a tolerance of 2.5 mm, less than 0.1% of the VMAT arcs presented tolerance faults. Conclusions: Dynalog analysis provides a feasible method for investigating the optimal tolerance for MLC positioning in dynamic fields. In sliding window treatments, the tolerance of 2 mm was found to be adequate, although it can be reduced to 1.5 mm. In VMAT treatments, the typically used 5 mm tolerance is excessively high. Instead, a tolerance of 2.5 mm is recommended

  1. Extraction optimization and UHPLC method development for determination of the 20-hydroxyecdysone in Sida tuberculata leaves.

    Science.gov (United States)

    da Rosa, Hemerson S; Koetz, Mariana; Santos, Marí Castro; Jandrey, Elisa Helena Farias; Folmer, Vanderlei; Henriques, Amélia Teresinha; Mendez, Andreas Sebastian Loureiro

    2018-04-01

    Sida tuberculata (ST) is a Malvaceae species widely distributed in southern Brazil. In traditional medicine, ST has been employed as a hypoglycemic, hypocholesterolemic, anti-inflammatory and antimicrobial agent. Additionally, this species is chemically characterized mainly by flavonoids, alkaloids and phytoecdysteroids. The present work aimed to optimize the extraction technique and to validate a UHPLC method for the determination of 20-hydroxyecdysone (20HE) in ST leaves. A Box-Behnken Design (BBD) was used in the method optimization. The extraction methods tested were: static and dynamic maceration, ultrasound, ultra-turrax and reflux. In the Box-Behnken design, three parameters were evaluated at three levels (-1, 0, +1): particle size, time and plant:solvent ratio. In the method validation, the parameters of selectivity, specificity, linearity, limits of detection and quantification (LOD, LOQ), precision, accuracy and robustness were evaluated. The results indicate static maceration as the better technique to obtain the 20HE peak area in the ST extract. The optimal extraction from the response surface methodology was achieved with a granulometry of 710 nm, 9 days of maceration and a plant:solvent ratio of 1:54 (w/v). The developed UHPLC-PDA analytical method showed full viability of performance, proving to be selective, linear, precise, accurate and robust for 20HE detection in ST leaves. The average content of 20HE was 0.56% per dry extract. Thus, the optimization of the extraction method for ST leaves increased the concentration of 20HE in the crude extract, and a reliable method was successfully developed according to validation requirements and in agreement with current legislation. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Determination of the optimal number of components in independent components analysis.

    Science.gov (United States)

    Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N

    2018-03-01

    Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
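
    A simplified block-splitting check in the spirit of Random_ICA / ICA_by_blocks can be sketched with scikit-learn's FastICA: split the samples into two random halves, extract the same number of ICs from each, and measure how well the components from the two halves agree. The synthetic four-source mixture and the column-correlation agreement score are illustrative assumptions, not the validation criteria defined in the paper.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)

      # Synthetic mixture: 4 true sources observed through 10 mixed channels.
      t = np.linspace(0, 8, 500)
      S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t)),
                np.cos(5 * t), rng.laplace(size=500)]
      X = S @ rng.standard_normal((4, 10)) + 0.01 * rng.standard_normal((500, 10))

      def block_agreement(n_ics, n_splits=10):
          """Split the samples into two random blocks, extract n_ics from each,
          and score how well the extracted components match up."""
          scores = []
          for _ in range(n_splits):
              idx = rng.permutation(len(X))
              half = len(X) // 2
              A = FastICA(n_components=n_ics, random_state=0).fit(X[idx[:half]]).mixing_
              B = FastICA(n_components=n_ics, random_state=0).fit(X[idx[half:]]).mixing_
              corr = np.abs(np.corrcoef(A.T, B.T)[:n_ics, n_ics:])
              scores.append(corr.max(axis=1).mean())   # best match per IC
          return float(np.mean(scores))

      # agreement typically drops once n_ics exceeds the true number of sources
      for n in range(2, 7):
          print(n, "ICs -> mean block agreement:", round(block_agreement(n), 3))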

  3. Flow injection analysis simulations and diffusion coefficient determination by stochastic and deterministic optimization methods.

    Science.gov (United States)

    Kucza, Witold

    2013-07-25

    Stochastic and deterministic simulations of dispersion in cylindrical channels on the Poiseuille flow have been presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis responses. These methods coupled with the genetic algorithm and the Levenberg-Marquardt optimization methods, respectively, have been applied for determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses that are available in literature. The best-fit results agree with each other and with experimental data thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
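
    The random-walk (stochastic) model of dispersion on the Poiseuille flow can be sketched as follows: particles are advected axially with a parabolic velocity profile and take small diffusive steps, and the distribution of their arrival times at the detector plays the role of the FIA response. The geometry, diffusion coefficient and time step are illustrative values, and the wall treatment (rejecting moves that leave the tube) is a simplification, not the paper's exact model.

      import numpy as np

      rng = np.random.default_rng(42)

      # Illustrative geometry and transport parameters (not from the paper).
      R, L = 0.25e-3, 0.3          # tube radius (m) and length to detector (m)
      u_mean = 0.02                # mean flow velocity (m/s)
      D = 5e-10                    # molecular diffusion coefficient (m^2/s)
      dt = 5e-3                    # time step (s)

      # Inject a thin plug of particles uniformly over the tube cross-section.
      x = rng.uniform(-R, R, 2000)
      y = rng.uniform(-R, R, 2000)
      keep = x ** 2 + y ** 2 <= R ** 2
      x, y = x[keep], y[keep]
      z = np.zeros(x.size)
      arrival = np.full(x.size, np.nan)

      sigma = np.sqrt(2 * D * dt)  # diffusive step size per coordinate
      t = 0.0
      while np.isnan(arrival).any() and t < 120.0:
          t += dt
          u = 2 * u_mean * (1 - (x ** 2 + y ** 2) / R ** 2)   # Poiseuille profile
          z += u * dt + sigma * rng.standard_normal(x.size)   # advection + axial diffusion
          xn = x + sigma * rng.standard_normal(x.size)        # radial random walk;
          yn = y + sigma * rng.standard_normal(x.size)        # moves leaving the tube
          ok = xn ** 2 + yn ** 2 <= R ** 2                    # are simply rejected
          x = np.where(ok, xn, x)
          y = np.where(ok, yn, y)
          newly = np.isnan(arrival) & (z >= L)
          arrival[newly] = t

      # The distribution of arrival times plays the role of the FIA response.
      hist, edges = np.histogram(arrival[~np.isnan(arrival)], bins=30)
      print("simulated FIA response peaks near t =", round(edges[hist.argmax()], 1), "s")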

  4. Product Placement and Brand Equity

    OpenAIRE

    Corniani, Margherita

    2003-01-01

    Product placement is the planned insertion of a brand within a movie, a fiction, etc. It can be used with other communication tools (i.e. advertising, sales promotions, etc.) in order to disseminate brand awareness and characterize brand image, developing brand equity. In global markets, product placement is particularly useful for improving brand equity of brands with a well established brand awareness.

  5. Placement of acid spoil materials

    Energy Technology Data Exchange (ETDEWEB)

    Pionke, H B; Rogowski, A S

    1982-06-01

    Potentially there are several chemical and hydrologic problems associated with placement of acid spoil materials. The rationale for deep placement, well below the soil surface and preferably below a water table, is to prevent or minimize oxidation of pyrite to sulfuric acid and associated salts by reducing the supply of oxygen. If, however, substantial sulfuric acid or associated salts are already contained within the spoil because of present or previous mining, handling and reclamation operations (or if large supplies of indigenous salts exist), placement below a water table may actually increase the rate of acid and salt leaching. Specific placement of acid- and salt-containing spoil should be aimed at preventing contact with percolating water or rising water tables. We recommend placement based on the chemical and physical spoil properties that may affect water percolation and O2 diffusion rates in the profile. Both deeper placement of acid spoil and coarser particle size can substantially reduce the amount of acid drainage. Placement above the water table with emphasis on percolate control may be better for high-sulfate spoils, while placement below the non-fluctuating water table may be better for pyritic spoils.

  6. Optimization of digital image processing to determine quantum dots' height and density from atomic force microscopy.

    Science.gov (United States)

    Ruiz, J E; Paciornik, S; Pinto, L D; Ptak, F; Pires, M P; Souza, P L

    2018-01-01

    An optimized method of digital image processing to interpret quantum dots' height measurements obtained by atomic force microscopy is presented. The method was developed by combining well-known digital image processing techniques and particle recognition algorithms. The properties of quantum dot structures strongly depend on the dots' height, among other features. Determination of their height is sensitive to small variations in the digital image processing parameters, which can generate misleading results. Comparing the results obtained with two image processing techniques - a conventional method and the new method proposed herein - with the data obtained by determining the height of quantum dots one by one within a fixed area showed that the optimized method leads to more accurate results. Moreover, the log-normal distribution, which is often used to represent natural processes, shows a better fit to the quantum dots' height histogram obtained with the proposed method. Finally, the quantum dot heights obtained were used to calculate the predicted photoluminescence peak energies, which were compared with the experimental data. Again, a better match was observed when using the proposed method to evaluate the quantum dots' height. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Algorithm for selection of optimized EPR distance restraints for de novo protein structure determination

    Science.gov (United States)

    Kazmier, Kelli; Alexander, Nathan S.; Meiler, Jens; Mchaourab, Hassane S.

    2010-01-01

    A hybrid protein structure determination approach combining sparse Electron Paramagnetic Resonance (EPR) distance restraints and Rosetta de novo protein folding has been previously demonstrated to yield high quality models (Alexander et al., 2008). However, widespread application of this methodology to proteins of unknown structures is hindered by the lack of a general strategy to place spin label pairs in the primary sequence. In this work, we report the development of an algorithm that optimally selects spin labeling positions for the purpose of distance measurements by EPR. For the α-helical subdomain of T4 lysozyme (T4L), simulated restraints that maximize sequence separation between the two spin labels while simultaneously ensuring pairwise connectivity of secondary structure elements yielded vastly improved models by Rosetta folding. 50% of all these models have the correct fold compared to only 21% and 8% correctly folded models when randomly placed restraints or no restraints are used, respectively. Moreover, the improvements in model quality require a limited number of optimized restraints, the number of which is determined by the pairwise connectivities of T4L α-helices. The predicted improvement in Rosetta model quality was verified by experimental determination of distances between spin labels pairs selected by the algorithm. Overall, our results reinforce the rationale for the combined use of sparse EPR distance restraints and de novo folding. By alleviating the experimental bottleneck associated with restraint selection, this algorithm sets the stage for extending computational structure determination to larger, traditionally elusive protein topologies of critical structural and biochemical importance. PMID:21074624

  8. A format for phylogenetic placements.

    Directory of Open Access Journals (Sweden)

    Frederick A Matsen

    Full Text Available We have developed a unified format for phylogenetic placements, that is, mappings of environmental sequence data (e.g., short reads) into a phylogenetic tree. We are motivated to do so by the growing number of tools for computing and post-processing phylogenetic placements, and the lack of an established standard for storing them. The format is lightweight, versatile, extensible, and is based on the JSON format, which can be parsed by most modern programming languages. Our format is already implemented in several tools for computing and post-processing parsimony- and likelihood-based phylogenetic placements and has worked well in practice. We believe that establishing a standard format for analyzing read placements at this early stage will lead to a more efficient development of powerful and portable post-analysis tools for the growing applications of phylogenetic placement.
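
    For concreteness, the snippet below builds a minimal placement document in the JSON-based style the abstract describes and serializes it with Python's standard library. The key names loosely follow the commonly used "jplace" convention, but the exact fields and tree encoding shown here should be read as an illustrative assumption rather than the normative specification.

```python
import json

# A minimal, illustrative placement document in the JSON-based style described
# above; treat the field names as an assumption, not the authoritative spec.
placement_doc = {
    "version": 3,
    "tree": "((A:0.2{0},B:0.1{1}):0.05{2},C:0.3{3}){4};",  # edge numbers in {}
    "fields": ["edge_num", "likelihood", "like_weight_ratio",
               "distal_length", "pendant_length"],
    "placements": [
        {"p": [[1, -1234.56, 0.87, 0.03, 0.10]],  # one candidate edge per row
         "n": ["read_0001"]}
    ],
    "metadata": {"invocation": "example only"}
}

print(json.dumps(placement_doc, indent=2))
```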

  9. Optimization of wet digestion procedure of blood and tissue for selenium determination by means of 75Se tracer

    International Nuclear Information System (INIS)

    Holynska, B.; Lipinska, K.

    1977-01-01

    A selenium-75 tracer has been used to optimize the analytical procedure for selenium determination in blood and tissue. The wet digestion procedure and the reduction of selenium to its elemental form with tellurium as coprecipitant have been tested. The recovery of selenium obtained with the optimized analytical procedure amounts to 95% and the precision is 4.2%. (author)

  10. Determination of optimal diagnostic criteria for purulent vaginal discharge and cytological endometritis in dairy cows.

    Science.gov (United States)

    Denis-Robichaud, J; Dubuc, J

    2015-10-01

    The objectives of this observational study were to identify the optimal diagnostic criteria for purulent vaginal discharge (PVD) and cytological endometritis (ENDO) using vaginal discharge, endometrial cytology, and leukocyte esterase (LE) tests, and to quantify their effect on subsequent reproductive performance. Data generated from 1,099 untreated Holstein cows (28 herds) enrolled in a randomized clinical trial were used in this study. Cows were examined at 35 (± 7) d in milk for PVD using vaginal discharge scoring and for ENDO using endometrial cytology and LE testing. Optimal combinations of diagnostic criteria were determined based on the lowest Akaike information criterion (AIC) to predict pregnancy status at first service. Once identified, these criteria were used to quantify the effect of PVD and ENDO on pregnancy risk at first service and on pregnancy hazard until 200 d in milk (survival analysis). Predicting ability of these diagnostic criteria was determined using area under the curve (AUC) values. The prevalence of PVD and ENDO was calculated as well as the agreement between endometrial cytology and LE. The optimal diagnostic criteria (lowest AIC) identified in this study were purulent vaginal discharge or worse (≥ 4), ≥ 6% polymorphonuclear leukocytes (PMNL) by endometrial cytology, and small amounts of leukocytes or worse (≥ 1) by LE testing. When using the combination of vaginal discharge and PMNL percentage as diagnostic tools (n = 1,099), the prevalences of PVD and ENDO were 17.1 and 36.2%, respectively. When using the combination of vaginal discharge and LE (n = 915), the prevalences of PVD and ENDO were 17.1 and 48.4%. The optimal strategies for predicting pregnancy status at first service were the use of LE only (AUC = 0.578) and PMNL percentage only (AUC = 0.575). Cows affected by PVD and ENDO had 0.36 and 0.32 times the odds, respectively, of being pregnant at first service when using PMNL percentage compared with that of unaffected

  11. Data-driven sensor placement from coherent fluid structures

    Science.gov (United States)

    Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.

    2017-11-01

    Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
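
    A minimal sketch of the sampling idea described above, assuming the snapshot data are available as a matrix: the leading POD modes are obtained from an SVD, a pivoted QR factorization of the transposed mode matrix supplies candidate point-sensor locations, and the full state is reconstructed from those sensors by least squares. Function and variable names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.linalg import qr

def qr_pivot_sensors(X, r):
    """Pick r point-sensor locations from snapshot data X (n_states x n_snapshots)
    by pivoted QR on the leading POD modes, in the spirit of the QR-based
    sampling the abstract describes (a sketch, not the authors' exact code)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    Psi = U[:, :r]                           # leading POD modes
    _, _, piv = qr(Psi.T, pivoting=True)     # column pivots = candidate sensors
    return np.sort(piv[:r]), Psi

def reconstruct(x_true, sensors, Psi):
    """Least-squares full-state reconstruction from the sensor measurements."""
    a, *_ = np.linalg.lstsq(Psi[sensors, :], x_true[sensors], rcond=None)
    return Psi @ a

# Example with toy synthetic data (500 state points, 40 snapshots):
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 40))
sensors, Psi = qr_pivot_sensors(X, r=10)
x_hat = reconstruct(X[:, 0], sensors, Psi)
```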

  12. Reliability Of A Novel Intracardiac Electrogram Method For AV And VV Delay Optimization And Comparability To Echocardiography Procedure For Determining Optimal Conduction Delays In CRT Patients

    Directory of Open Access Journals (Sweden)

    N Reinsch

    2009-03-01

    Full Text Available Background: Echocardiography is widely used to optimize CRT programming. A novel intracardiac electrogram method (IEGM) was recently developed as an automated programmer-based method, designed to calculate optimal atrioventricular (AV) and interventricular (VV) delays and provide optimized delay values as an alternative to standard echocardiographic assessment. Objective: This study was aimed at determining the reliability of this new method. Furthermore the comparability of IEGM to existing echocardiographic parameters for determining optimal conduction delays was verified. Methods: Eleven patients (age 62.9 ± 8.7; 81% male; 73% ischemic), previously implanted with a cardiac resynchronisation therapy defibrillator (CRT-D), underwent both echocardiographic and IEGM-based delay optimization. Results: Applying the IEGM method, concordance of three consecutively performed measurements was found in 3 (27%) patients for AV delay and in 5 (45%) patients for VV delay. Intra-individual variation between three measurements as assessed by the IEGM technique was up to 20 ms (AV: n=6; VV: n=4). E-wave, diastolic filling time and septal-to-lateral wall motion delay emerged as significantly different between the echo and IEGM optimization techniques (p < 0.05). The final AV delay setting was significantly different between both methods (echo: 126.4 ± 29.4 ms, IEGM: 183.6 ± 16.3 ms; p < 0.001; correlation: R = 0.573, p = 0.066). VV delay showed significant differences for optimized delays (echo: 46.4 ± 23.8 ms, IEGM: 10.9 ± 7.0 ms; p < 0.01; correlation: R = -0.278, p = 0.407). Conclusion: The automated programmer-based IEGM-based method provides a simple and safe method to perform CRT optimization. However, the reliability of this method appears to be limited. Thus, it remains difficult for the examiner to determine the optimal hemodynamic settings. Additionally, as there was no correlation between the optimal AV- and VV-delays calculated by the IEGM method and the echo

  13. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    Science.gov (United States)

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
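
    The sketch below illustrates the core of the stability-ranking idea under simplifying assumptions: ICA is repeated with different random initialisations and each component of a reference run is scored by its best absolute correlation with components from the other runs. The full MSTD procedure additionally varies the effective dimension and clusters components; the function name and toy data here are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_stability_profile(X, n_components, n_runs=10, seed=0):
    """Rough sketch of stability ranking: run ICA several times with different
    random initialisations and score each component of a reference run by its
    average best absolute correlation with the components of the other runs.
    (Illustrative only; the MSTD procedure in the paper is more elaborate.)"""
    runs = []
    for k in range(n_runs):
        ica = FastICA(n_components=n_components, random_state=seed + k,
                      max_iter=1000, tol=1e-4)
        S = ica.fit_transform(X)                      # samples x components
        S = S - S.mean(axis=0)
        runs.append(S / np.linalg.norm(S, axis=0))
    ref = runs[0]
    stability = np.array([
        np.mean([np.max(np.abs(ref[:, j] @ other)) for other in runs[1:]])
        for j in range(n_components)
    ])
    return np.sort(stability)[::-1]                   # ranked stability profile

# Toy mixture: 5 super-Gaussian sources mixed into 50 observed variables.
rng = np.random.default_rng(0)
X = rng.laplace(size=(400, 5)) @ rng.standard_normal((5, 50))
X += 0.05 * rng.standard_normal(X.shape)
print(ica_stability_profile(X, n_components=5).round(3))
```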

  14. Determination of the alpha constant value for the Brazilian reality aiming at radiation protection optimization

    International Nuclear Information System (INIS)

    Teixeira, Pedro Barbosa

    2003-01-01

    This work presents a methodology for the calculation of the alpha constant that takes into account the actual conditions in Brazil. This constant is used for the minimization of worker doses, that is, for the optimization of radiation protection. The alpha constant represents a monetary value assigned to the health detriment associated with stochastic effects per unit of collective dose, and is directly related to the value of human life. Over the years, several methods have been developed to obtain the most appropriate value for the alpha constant, and these methods are analyzed in this work. Two methods for determining the alpha constant are presented: 'human capital', which is based on the GDP of the country, and 'willingness-to-pay', which is based on the amount the population would be willing to pay for the safety of nuclear and radioactive facilities. A new methodology for the calculation of the alpha constant, combining the two methods previously mentioned, is proposed in this study; it recommends a new value of US$ 16,000.00 per man-sievert. Currently, the value established by CNEN is US$ 10,000.00 per man-sievert. This work also presents, in full detail, the main mathematical tools for carrying out the optimization of radiation protection: cost-benefit analysis, extended cost-benefit analysis and multi-attribute utility analysis. An applied example of radiation protection optimization for a uranium mine was used to compare the two values of the alpha constant. (author)

  15. Optimized computational method for determining the beta dose distribution using a multiple-element thermoluminescent dosimeter system

    International Nuclear Information System (INIS)

    Shen, L.; Levine, S.H.; Catchen, G.L.

    1987-01-01

    This paper describes an optimization method for determining the beta dose distribution in tissue, and it describes the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, and thus the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg X cm-2 or at any depth of interest. The doses at 7 mg X cm-2 in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, the advantage of the optimization method is that its performance is not sensitive to the specific method of calibration

  16. Determining the optimal pelvic floor muscle training regimen for women with stress urinary incontinence.

    Science.gov (United States)

    Dumoulin, Chantale; Glazener, Cathryn; Jenkinson, David

    2011-06-01

    Pelvic floor muscle (PFM) training has received Level-A evidence rating in the treatment of stress urinary incontinence (SUI) in women, based on meta-analysis of numerous randomized control trials (RCTs) and is recommended in many published guidelines. However, the actual regimen of PFM training used varies widely in these RCTs. Hence, to date, the optimal PFM training regimen for achieving continence remains unknown and the following questions persist: how often should women attend PFM training sessions and how many contractions should they perform for maximal effect? Is a regimen of strengthening exercises better than a motor control strategy or functional retraining? Is it better to administer a PFM training regimen to an individual or are group sessions equally effective, or better? Which is better, PFM training by itself or in combination with biofeedback, neuromuscular electrical stimulation, and/or vaginal cones? Should we use improvement or cure as the ultimate outcome to determine which regimen is the best? The questions are endless. As a starting point in our endeavour to identify optimal PFM training regimens, the aim of this study is (a) to review the present evidence in terms of the effectiveness of different PFM training regimens in women with SUI and (b) to discuss the current literature on PFM dysfunction in SUI women, including the up-to-date evidence on skeletal muscle training theory and other factors known to impact on women's participation in and adherence to PFM training. Copyright © 2011 Wiley-Liss, Inc.

  17. Is patient size important in dose determination and optimization in cardiology?

    International Nuclear Information System (INIS)

    Reay, J; Chapple, C L; Kotre, C J

    2003-01-01

    Patient dose determination and optimization have become more topical in recent years with the implementation of the Medical Exposures Directive into national legislation, the Ionising Radiation (Medical Exposure) Regulations. This legislation incorporates a requirement for new equipment to provide a means of displaying a measure of patient exposure and introduces the concept of diagnostic reference levels. It is normally assumed that patient dose is governed largely by patient size; however, in cardiology, where procedures are often very complex, the significance of patient size is less well understood. This study considers over 9000 cardiology procedures, undertaken throughout the north of England, and investigates the relationship between patient size and dose. It uses simple linear regression to calculate both correlation coefficients and significance levels for data sorted by both room and individual clinician for the four most common examinations: left ventricular and/or coronary angiography, single vessel stent insertion and single vessel angioplasty. This paper concludes that the correlation between patient size and dose is weak for the procedures considered. It also illustrates the use of an existing method for removing the effect of patient size from dose survey data. This allows typical doses and, therefore, reference levels to be defined for the purposes of dose optimization.

  18. Determining optimal preventive maintenance interval for component of Well Barrier Element in an Oil & Gas Company

    Science.gov (United States)

    Siswanto, A.; Kurniati, N.

    2018-04-01

    An oil and gas company has 2,268 oil and gas wells. Well Barrier Element (WBE) is installed in a well to protect human, prevent asset damage and minimize harm to the environment. The primary WBE component is Surface Controlled Subsurface Safety Valve (SCSSV). The secondary WBE component is Christmas Tree Valves that consist of four valves i.e. Lower Master Valve (LMV), Upper Master Valve (UMV), Swab Valve (SV) and Wing Valve (WV). Current practice on WBE Preventive Maintenance (PM) program is conducted by considering the suggested schedule as stated on manual. Corrective Maintenance (CM) program is conducted when the component fails unexpectedly. Both PM and CM need cost and may cause production loss. This paper attempts to analyze the failure data and reliability based on historical data. Optimal PM interval is determined in order to minimize the total cost of maintenance per unit time. The optimal PM interval for SCSSV is 730 days, LMV is 985 days, UMV is 910 days, SV is 900 days and WV is 780 days. In average of all components, the cost reduction by implementing the suggested interval is 52%, while the reliability is improved by 4% and the availability is increased by 5%.
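
    The optimization step described above is, in essence, an age-replacement problem. The sketch below is a simplified assumption rather than the authors' data or model: it chooses the preventive-maintenance interval that minimises the expected cost per unit time for a Weibull failure distribution with shape beta and scale eta, where c_pm and c_cm stand for the PM and CM costs.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def optimal_pm_interval(beta, eta, c_pm, c_cm):
    """Classic age-replacement sketch: choose the PM interval T that minimises
    expected cost per unit time, assuming Weibull-distributed failures.
    (Illustrative assumption; the paper fits its own failure data.)"""
    R = lambda t: np.exp(-(t / eta) ** beta)          # reliability function
    def cost_rate(T):
        expected_cycle_cost = c_pm * R(T) + c_cm * (1.0 - R(T))
        expected_cycle_length, _ = quad(R, 0.0, T)
        return expected_cycle_cost / expected_cycle_length
    res = minimize_scalar(cost_rate, bounds=(1.0, 5 * eta), method="bounded")
    return res.x, res.fun

# e.g. a wear-out component (beta > 1) with PM much cheaper than CM:
T_opt, c_opt = optimal_pm_interval(beta=2.5, eta=1500.0, c_pm=1.0, c_cm=10.0)
print(f"optimal PM interval ~ {T_opt:.0f} days, cost rate {c_opt:.4f}")
```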

  19. OPTIMIZATION AND VALIDATION OF HPLC METHOD FOR TETRAMETHRIN DETERMINATION IN HUMAN SHAMPOO FORMULATION.

    Science.gov (United States)

    Zeric Stosic, Marina Z; Jaksic, Sandra M; Stojanov, Igor M; Apic, Jelena B; Ratajac, Radomir D

    2016-11-01

    A high-performance liquid chromatography (HPLC) method with diode array detection (DAD) was optimized and validated for the separation and determination of tetramethrin in an antiparasitic human shampoo. In order to optimize the separation conditions, two different columns, different column oven temperatures, and different mobile phase compositions and ratios were tested. The best separation was achieved on the Supelcosil LC-18-DB column (4.6 x 250 mm), particle size 5 μm, with a mobile phase of methanol : water (78 : 22, v/v) at a flow rate of 0.8 mL/min and a temperature of 30 °C. The detection wavelength was set at 220 nm. Under the optimum chromatographic conditions, the standard calibration curve showed good linearity [r2 = 0.9997]. The accuracy of the method, defined as the mean recovery of tetramethrin from the shampoo matrix, was 100.09%. The advantage of this method is that it can easily be used for the routine analysis of tetramethrin in pharmaceutical formulations and in pharmaceutical research involving tetramethrin.

  20. Optimization of the indirect at neutron activation technique for the determination of boron in aqueous solutions

    International Nuclear Information System (INIS)

    Luz, L.C.Q.P. da.

    1984-01-01

    The purpose of this work was the development of an instrumental method for the optimization of the indirect neutron activation analysis of boron in aqueous solutions. The optimization took into account the analytical parameters under laboratory conditions: activation carried out with a 241 Am/Be neutron source and detection of the activity induced in vanadium with two NaI(Tl) gamma spectrometers. A calibration curve was thus obtained for a concentration range of 0 to 5000 ppm B. Later on, experimental models were built in order to study the feasibility of automation. The analysis of boron was finally performed, under the previously established conditions, with an automated system comprising the operations of transport, irradiation and counting. An improvement in the quality of the analysis was observed, with boron concentrations as low as 5 ppm being determined with a precision level better than 0.4%. The experimental model features all basic design elements for an automated device for the analysis of boron in agueous solutions wherever this is required, as in the operation of nuclear reactors. (Author) [pt

  1. Regional gray matter abnormalities in patients with schizophrenia determined with optimized voxel-based morphometry

    Science.gov (United States)

    Guo, XiaoJuan; Yao, Li; Jin, Zhen; Chen, Kewei

    2006-03-01

    This study examined regional gray matter abnormalities across the whole brain in 19 patients with schizophrenia (12 males and 7 females), compared with 11 normal volunteers (7 males and 4 females). Customized brain templates were created in order to improve spatial normalization and segmentation. Automated preprocessing of the magnetic resonance imaging (MRI) data was then conducted using optimized voxel-based morphometry (VBM). The statistical voxel-based analysis was implemented with a two-sample t-test model. Compared with normal controls, regional gray matter concentration in patients with schizophrenia was significantly reduced in the bilateral superior temporal gyrus, bilateral middle frontal and inferior frontal gyrus, right insula, precentral and parahippocampal areas, and left thalamus and hypothalamus; however, significant increases in gray matter concentration were not observed across the whole brain in the patients. This study confirms and extends some earlier findings on gray matter abnormalities in schizophrenic patients. Previous behavioral and fMRI research on schizophrenia has suggested that cognitive capacity is decreased and self-consciousness weakened in schizophrenic patients. These regional gray matter abnormalities determined through structural MRI with optimized VBM may be potential anatomic underpinnings of schizophrenia.

  2. Optimization of quantitative waste volume determination technique for hanford waste tank closure

    International Nuclear Information System (INIS)

    Monts, David L.; Jang, Ping-Rey; Long, Zhiling; Okhuysen, Walter P.; Norton, Olin P.; Gresham, Lawrence L.; Su, Yi; Lindner, Jeffrey S.

    2011-01-01

    The Hanford Site is currently in the process of an extensive effort to empty and close its radioactive single-shell and double-shell waste storage tanks. Before this can be accomplished, it is necessary to know how much residual material is left in a given waste tank and the uncertainty with which that volume is known. The Institute for Clean Energy Technology (ICET) at Mississippi State University is currently developing a quantitative in-tank imaging system based on Fourier Transform Profilometry, FTP. FTP is a non-contact, 3-D shape measurement technique. By projecting a fringe pattern onto a target surface and observing its deformation due to surface irregularities from a different view angle, FTP is capable of determining the height (depth) distribution (and hence volume distribution) of the target surface, thus reproducing the profile of the target accurately under a wide variety of conditions. Hence FTP has the potential to be utilized for quantitative determination of residual wastes within Hanford waste tanks. In this paper, efforts to characterize the accuracy and precision of quantitative volume determination using FTP and the use of these results to optimize the FTP system for deployment within Hanford waste tanks are described. (author)

  3. Congestion management by determining optimal location of TCSC in deregulated power systems

    International Nuclear Information System (INIS)

    Besharat, Hadi; Taher, Seyed Abbas

    2008-01-01

    In a deregulated electricity market, it may always not be possible to dispatch all of the contracted power transactions due to congestion of the transmission corridors. The ongoing power system restructuring requires an opening of unused potentials of transmission system due to environmental, right-of-way and cost problems which are major hurdles for power transmission network expansion. Flexible AC transmission systems (FACTSs) devices can be an alternative to reduce the flows in heavily loaded lines, resulting in an increased loadability, low system loss, improved stability of the network, reduced cost of production and fulfilled contractual requirement by controlling the power flows in the network. A method to determine the optimal location of thyristor controlled series compensators (TCSCs) has been suggested in this paper based on real power performance index and reduction of total system VAR power losses. (author)

  4. Determining the optimal portal blood volume in a shunt before surgery in extrahepatic portal hypertension

    Directory of Open Access Journals (Sweden)

    Yurchuk Vladimir A

    2016-04-01

    Full Text Available The aim of the study: To determine the necessary shunt diameter and assess the optimal portal blood volume in a shunt in children with extrahepatic portal hypertension before the portosystemic shunt surgery. Changes in the liver hemodynamics were studied in 81 children aged from 4 to 7 years with extrahepatic portal hypertension. We established that it is necessary to calculate the shunt diameter and the blood volume in a shunt in patients with extrahepatic portal hypertension before the portosystemic shunt surgery. It allows us to preserve the hepatic portal blood flow and effectively decrease the pressure in the portal system. Portosystemic shunt surgery in patients with extrahepatic portal hypertension performed in accordance with the individualized shunt volume significantly decreases portal pressure, preserves stable hepatic hemodynamics and prevents gastro-esophageal hemorrhage.

  5. Optimal determination of the elastic constants of composite materials from ultrasonic wave-speed measurements

    Science.gov (United States)

    Castagnède, Bernard; Jenkins, James T.; Sachse, Wolfgang; Baste, Stéphane

    1990-03-01

    A method is described to optimally determine the elastic constants of anisotropic solids from wave-speeds measurements in arbitrary nonprincipal planes. For such a problem, the characteristic equation is a degree-three polynomial which generally does not factorize. By developing and rearranging this polynomial, a nonlinear system of equations is obtained. The elastic constants are then recovered by minimizing a functional derived from this overdetermined system of equations. Calculations of the functional are given for two specific cases, i.e., the orthorhombic and the hexagonal symmetries. Some numerical results showing the efficiency of the algorithm are presented. A numerical method is also described for the recovery of the orientation of the principal acoustical axes. This problem is solved through a double-iterative numerical scheme. Numerical as well as experimental results are presented for a unidirectional composite material.

  6. Optimization, Monotonicity and the Determination of Nash Equilibria — An Algorithmic Analysis

    Science.gov (United States)

    Lozovanu, D.; Pickl, S. W.; Weber, G.-W.

    2004-08-01

    This paper is concerned with the optimization of a nonlinear time-discrete model exploiting the special structure of the underlying cost game and the property of inverse matrices. The costs are interlinked by a system of linear inequalities. It is shown that, if the players cooperate, i.e., minimize the sum of all the costs, they achieve a Nash equilibrium. In order to determine Nash equilibria, the simplex method can be applied with respect to the dual problem. An introduction into the TEM model and its relationship to an economic Joint Implementation program is given. The equivalence problem is presented. The construction of the emission cost game and the allocation problem is explained. The assumption of inverse monotony for the matrices leads to a new result in the area of such allocation problems. A generalization of such problems is presented.

  7. Determination of optimal reformer temperature in a reformed methanol fuel cell system using ANFIS models and numerical optimization methods

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl

    2015-01-01

    In this work a method for choosing the optimal reformer temperature for a reformed methanol fuel cell system is presented based on a case study of a H3 350 module produced by Serenergy A/S. The method is based on ANFIS models of the dependence of the reformer output gas composition on the reformer temperature and fuel flow, and the dependence of the fuel cell voltage on the fuel cell temperature, current and anode supply gas CO content. These models are combined to give a matrix of system efficiencies at different fuel cell currents and reformer temperatures. This matrix is then used to find the reformer temperature which gives the highest efficiency for each fuel cell current. The average of this optimal efficiency curve is 32.11% and the average efficiency achieved using the standard constant temperature is 30.64%, an increase of 1.47 percentage points. The gain in efficiency is 4 percentage...
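
    The final selection step can be illustrated in a few lines: given a matrix of system efficiencies over fuel cell currents and reformer temperatures (here filled with made-up numbers standing in for the ANFIS model outputs), the optimal temperature for each current is the column-wise argmax and the optimal efficiency curve is its value. All numbers below are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for the ANFIS-derived efficiency matrix eta[i, j], evaluated at
# fuel cell currents I[i] and reformer temperatures T[j].
I = np.linspace(5, 45, 9)                  # A, illustrative
T = np.linspace(240, 300, 13)              # deg C, illustrative
eta = 0.30 + 0.02 * np.exp(-((T[None, :] - (260 + 0.5 * I[:, None])) / 15) ** 2)

best_idx = np.argmax(eta, axis=1)          # best temperature index per current
T_opt = T[best_idx]                        # optimal reformer temperature curve
eta_opt = eta[np.arange(len(I)), best_idx]
print(np.mean(eta_opt))                    # average of the optimal-efficiency curve
```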

  8. Relay Placement for FSO Multihop DF Systems With Link Obstacles and Infeasible Regions

    KAUST Repository

    Zhu, Bingcheng

    2015-05-19

    Optimal relay placement is studied for free-space optical multihop communication with link obstacles and infeasible regions. An optimal relay placement scheme is proposed to achieve the lowest outage probability, enable the links to bypass obstacles of various geometric shapes, and place the relay nodes in specified available regions. When the number of relay nodes is large, the searching space can grow exponentially, and thus, a grouping optimization technique is proposed to reduce the searching time. We numerically demonstrate that the grouping optimization can provide suboptimal solutions close to the optimal solutions, but the average searching time linearly grows with the number of relay nodes. Two useful theorems are presented to reveal insights into the optimal relay locations. Simulation results show that our proposed optimization framework can effectively provide desirable solution to the problem of optimal relay nodes placement. © 2015 IEEE.

  9. Optimizing aspects of pedestrian traffic in building designs

    KAUST Repository

    Rodriguez, Samuel

    2013-11-01

    In this work, we investigate aspects of building design that can be optimized. Architectural features that we explore include pillar placement in simple corridors, doorway placement in buildings, and agent placement for information dispersement in an evacuation. The metrics utilized are tuned to the specific scenarios we study, which include continuous flow pedestrian movement and building evacuation. We use Multidimensional Direct Search (MDS) optimization with an extreme barrier criteria to find optimal placements while enforcing building constraints. © 2013 IEEE.

  10. Optimizing aspects of pedestrian traffic in building designs

    KAUST Repository

    Rodriguez, Samuel; Yinghua Zhang,; Gans, Nicholas; Amato, Nancy M.

    2013-01-01

    In this work, we investigate aspects of building design that can be optimized. Architectural features that we explore include pillar placement in simple corridors, doorway placement in buildings, and agent placement for information dispersement in an evacuation. The metrics utilized are tuned to the specific scenarios we study, which include continuous flow pedestrian movement and building evacuation. We use Multidimensional Direct Search (MDS) optimization with an extreme barrier criteria to find optimal placements while enforcing building constraints. © 2013 IEEE.

  11. Expectations of Cattle Feeding Investors in Feeder Cattle Placements

    OpenAIRE

    Kastens, Terry L.; Schroeder, Ted C.

    1993-01-01

    Cattle feeders appear irrational when they place cattle on feed when projected profits are negative. Long futures positions appear to offer superior returns to cattle feeding investment. Cattle feeder behavior suggests that they believe a downward bias in live cattle futures persists and that cattle feeders use different information than the live cattle futures market price when making placement decisions. This paper examines feeder cattle placement determinants and compares performance of ex...

  12. Community Resources and Job Placement

    Science.gov (United States)

    Preston, Jim

    1977-01-01

    In cooperation with the chamber of commerce, various businesses, associations, and other community agencies, the Sarasota schools (Florida) supplement their own job placement and follow-up efforts with community job development strategies for placing high school graduates. (JT)

  13. An RTT-Aware Virtual Machine Placement Method

    Directory of Open Access Journals (Sweden)

    Li Quan

    2017-12-01

    Full Text Available Virtualization is a key technology for mobile cloud computing (MCC) and the virtual machine (VM) is a core component of virtualization. A VM provides a relatively independent running environment for different applications. Therefore, the VM placement problem focuses on how to place VMs on optimal physical machines, which ensures efficient use of resources and the quality of service, etc. Most previous work focuses on energy consumption, network traffic between VMs and so on, and rarely considers the delay for end users' requests. In contrast, the latency between requests and VMs is considered in this paper for the scenario of optimal VM placement in MCC. In order to minimize the average RTT for all requests, the round-trip time (RTT) is first used as the metric for the latency of requests. Based on our proposed RTT metric, an RTT-Aware VM placement algorithm is then proposed to minimize the average RTT. Furthermore, the case in which one of the core switches does not work is considered. A VM rescheduling algorithm is proposed to keep the average RTT lower and reduce the fluctuation of the average RTT. Finally, in the simulation study, our algorithm shows its advantage over existing methods, including random placement, the traffic-aware VM placement algorithm and the remaining utilization-aware algorithm.

  14. Using Maximal Isometric Force to Determine the Optimal Load for Measuring Dynamic Muscle Power

    Science.gov (United States)

    Spiering, Barry A.; Lee, Stuart M. C.; Mulavara, Ajitkumar P.; Bentley, Jason R.; Nash, Roxanne E.; Sinka, Joseph; Bloomberg, Jacob J.

    2009-01-01

    Maximal power output occurs when subjects perform ballistic exercises using loads of 30-50% of one-repetition maximum (1-RM). However, performing 1-RM testing prior to power measurement requires considerable time, especially when testing involves multiple exercises. Maximal isometric force (MIF), which requires substantially less time to measure than 1-RM, might be an acceptable alternative for determining the optimal load for power testing. PURPOSE: To determine the optimal load based on MIF for maximizing dynamic power output during leg press and bench press exercises. METHODS: Twenty healthy volunteers (12 men and 8 women; mean +/- SD age: 31+/-6 y; body mass: 72 +/- 15 kg) performed isometric leg press and bench press movements, during which MIF was measured using force plates. Subsequently, subjects performed ballistic leg press and bench press exercises using loads corresponding to 20%, 30%, 40%, 50%, and 60% of MIF presented in randomized order. Maximal instantaneous power was calculated during the ballistic exercise tests using force plates and position transducers. Repeated-measures ANOVA and Fisher LSD post hoc tests were used to determine the load(s) that elicited maximal power output. RESULTS: For the leg press power test, six subjects were unable to be tested at 20% and 30% MIF because these loads were less than the lightest possible load (i.e., the weight of the unloaded leg press sled assembly [31.4 kg]). For the bench press power test, five subjects were unable to be tested at 20% MIF because these loads were less than the weight of the unloaded aluminum bar (i.e., 11.4 kg). Therefore, these loads were excluded from analysis. A trend (p = 0.07) for a main effect of load existed for the leg press exercise, indicating that the 40% MIF load tended to elicit greater power output than the 60% MIF load (effect size = 0.38). A significant (p . 0.05) main effect of load existed for the bench press exercise; post hoc analysis indicated that the effect of

  15. Determination of the optimal area of waste incineration in a rotary kiln using a simulation model.

    Science.gov (United States)

    Bujak, J

    2015-08-01

    The article presents a mathematical model to determine the flux of incinerated waste in terms of its calorific values. The model is applicable in waste incineration systems equipped with rotary kilns. It is based on the known and proven energy flux balances and equations that describe the specific losses of energy flux while considering the specificity of waste incineration systems. The model is universal as it can be used both for the analysis and testing of systems burning different types of waste (municipal, medical, animal, etc.) and for allowing the use of any kind of additional fuel. Types of waste incinerated and additional fuel are identified by a determination of their elemental composition. The computational model has been verified in three existing industrial-scale plants. Each system incinerated a different type of waste. Each waste type was selected in terms of a different calorific value. This allowed the full verification of the model. Therefore the model can be used to optimize the operation of waste incineration system both at the design stage and during its lifetime. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. DETERMINING OPTIMAL CUBE FOR 3D-DCT BASED VIDEO COMPRESSION FOR DIFFERENT MOTION LEVELS

    Directory of Open Access Journals (Sweden)

    J. Augustin Jacob

    2012-11-01

    Full Text Available This paper proposes a new three-dimensional discrete cosine transform (3D-DCT) based video compression algorithm that selects the optimal cube size based on the motion content of the video sequence. The motion content is determined by finding normalized pixel difference (NPD) values, and by categorizing the cubes as “low” or “high” motion a suitable cube size of dimension either [16×16×8] or [8×8×8] is chosen instead of a fixed cube size. To evaluate the performance of the proposed algorithm, test sequences with different motion levels are chosen. Through rate vs. distortion analysis, the level of compression that can be achieved and the quality of the reconstructed video sequence are determined and compared against the fixed cube size algorithm. Peak signal to noise ratio (PSNR) is taken to measure the video quality. Experimental results show that varying the cube size with reference to the motion content of video frames gives better performance in terms of compression ratio and video quality.
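
    The selection logic can be sketched as follows, with the NPD proxy, threshold and cube sizes treated as assumptions rather than the paper's exact definitions: compute a normalized frame-to-frame difference, classify the block as high or low motion, pick the corresponding cube dimensions, and apply a 3-D DCT.

```python
import numpy as np
from scipy.fft import dctn

def normalized_pixel_difference(frames):
    """Mean absolute frame-to-frame difference, normalised to [0, 1].
    A rough proxy for the NPD measure the paper uses (assumption)."""
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    return diffs.mean() / 255.0

def choose_cube(frames, threshold=0.05):
    """Pick the cube size from motion content and return the 3D-DCT of the
    first cube; the threshold and the high/low assignment are illustrative."""
    npd = normalized_pixel_difference(frames)
    cube_shape = (8, 8, 8) if npd > threshold else (8, 16, 16)  # high vs low motion
    d, h, w = cube_shape
    cube = frames[:d, :h, :w]
    return cube_shape, dctn(cube, norm="ortho")

# frames: (n_frames, height, width) uint8 video block (random toy data here)
frames = np.random.randint(0, 256, size=(16, 64, 64), dtype=np.uint8)
print(choose_cube(frames)[0])
```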

  17. Rapid Titration of Measles and Other Viruses: Optimization with Determination of Replication Cycle Length

    Science.gov (United States)

    Grigorov, Boyan; Rabilloud, Jessica; Lawrence, Philip; Gerlier, Denis

    2011-01-01

    Background Measles virus (MV) is a member of the Paramyxoviridae family and an important human pathogen causing strong immunosuppression in affected individuals and a considerable number of deaths worldwide. Currently, measles is a re-emerging disease in developed countries. MV is usually quantified in infectious units as determined by limiting dilution and counting of plaque forming unit either directly (PFU method) or indirectly from random distribution in microwells (TCID50 method). Both methods are time-consuming (up to several days), cumbersome and, in the case of the PFU assay, possibly operator dependent. Methods/Findings A rapid, optimized, accurate, and reliable technique for titration of measles virus was developed based on the detection of virus infected cells by flow cytometry, single round of infection and titer calculation according to the Poisson's law. The kinetics follow up of the number of infected cells after infection with serial dilutions of a virus allowed estimation of the duration of the replication cycle, and consequently, the optimal infection time. The assay was set up to quantify measles virus, vesicular stomatitis virus (VSV), and human immunodeficiency virus type 1 (HIV-1) using antibody labeling of viral glycoprotein, virus encoded fluorescent reporter protein and an inducible fluorescent-reporter cell line, respectively. Conclusion Overall, performing the assay takes only 24–30 hours for MV strains, 12 hours for VSV, and 52 hours for HIV-1. The step-by-step procedure we have set up can be, in principle, applicable to accurately quantify any virus including lentiviral vectors, provided that a virus encoded gene product can be detected by flow cytometry. PMID:21915289
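
    The Poisson step of the titer calculation is compact enough to show directly. In the sketch below, if a fraction f of cells scores positive after a single round of infection, the mean number of infectious units per cell is m = -ln(1 - f), and the stock titer follows from the cell count, inoculum volume and dilution. The numbers in the example are assumptions for illustration.

```python
import numpy as np

def titer_from_infected_fraction(frac_infected, cells_per_well, inoculum_ml, dilution):
    """Infectious-unit titer from a single-round flow-cytometry readout using
    Poisson statistics: if a fraction f of cells is infected, the mean number
    of infectious units per cell is m = -ln(1 - f). (A sketch of the
    calculation principle described in the abstract.)"""
    m = -np.log(1.0 - np.asarray(frac_infected))        # infectious units per cell
    return m * cells_per_well * dilution / inoculum_ml  # IU per ml of undiluted stock

# e.g. 12% reporter-positive cells, 2e5 cells/well, 0.1 ml of a 1:1000 dilution:
print(f"{titer_from_infected_fraction(0.12, 2e5, 0.1, 1000):.2e} IU/ml")
```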

  18. Optimization of a radiochemistry method for plutonium determination in biological samples

    International Nuclear Information System (INIS)

    Cerchetti, Maria L.; Arguelles, Maria G.

    2005-01-01

    Plutonium has been widely used for civilian and military activities. Nevertheless, the methods to control occupational exposure have not evolved in the same way, and this remains one of the major challenges for radiological protection practice. Due to the low acceptable incorporation limit, the usual determination is based on indirect methods in urine samples. Our main objective was to optimize a technique used to monitor internal contamination of workers exposed to plutonium isotopes. Different parameters were modified and their influence on the three steps of the method was evaluated. Those which gave the highest yield and feasibility were selected. The method involves: 1) sample concentration (coprecipitation); 2) plutonium purification; and 3) source preparation by electrodeposition. In the coprecipitation step, changes in temperature and carrier concentration were evaluated. In the ion-exchange separation, changes in the type of resin, the hydroxylamine eluting solution (concentration and volume), column length and column recycling were evaluated. Finally, in the electrodeposition step, we modified the electrolytic solution, pH and time. Measurements were made by liquid scintillation counting and alpha spectrometry (PIPS). We obtained the following yields: 88% for coprecipitation (at 60 °C with 2 ml of CaHPO4), 71% for ion exchange (AG 1x8 Cl- resin, 100-200 mesh, hydroxylamine 0.1 N in HCl 0.2 N as eluent, column length between 4.5 and 8 cm), and 93% for electrodeposition (H2SO4-NH4OH, 100 minutes and pH from 2 to 2.8). The expanded uncertainty was 30% (95% confidence level), the decision threshold (Lc) was 0.102 Bq/L and the minimum detectable activity was 0.218 Bq/L of urine. We obtained an optimized method to screen workers exposed to plutonium. (author)

  19. Optimization and validation of spectrophotometric methods for determination of finasteride in dosage and biological forms

    Science.gov (United States)

    Amin, Alaa S.; Kassem, Mohammed A.

    2012-01-01

    Aim and Background: Three simple, accurate and sensitive spectrophotometric methods for the determination of finasteride in pure, dosage and biological forms, and in the presence of its oxidative degradates, were developed. Materials and Methods: These methods are indirect and involve the addition to finasteride of a known excess of oxidant in acid medium (potassium permanganate for method A; ceric sulfate [Ce(SO4)2] for method B; N-bromosuccinimide (NBS) for method C), followed by determination of the unreacted oxidant by measuring the decrease in absorbance of methylene blue for method A, chromotrope 2R for method B, and amaranth for method C at a suitable maximum wavelength, λmax: 663, 528, and 520 nm, for the three methods, respectively. The reaction conditions for each method were optimized. Results: Regression analysis of the Beer plots showed good correlation in the concentration ranges of 0.12–3.84 μg mL–1 for method A, 0.12–3.28 μg mL–1 for method B and 0.14–3.56 μg mL–1 for method C. The apparent molar absorptivity, Sandell sensitivity, detection and quantification limits were evaluated. The stoichiometric ratio between finasteride and the oxidant was estimated. The validity of the proposed methods was tested by analyzing dosage forms and biological samples containing finasteride, with relative standard deviation ≤ 0.95. Conclusion: The proposed methods could successfully determine the studied drug in the presence of varying excess of its oxidative degradation products, with recovery between 99.0 and 101.4, 99.2 and 101.6, and 99.6 and 101.0% for methods A, B, and C, respectively. PMID:23781478

  20. Directly patching high-level exchange-correlation potential based on fully determined optimized effective potentials

    Science.gov (United States)

    Huang, Chen; Chi, Yu-Chieh

    2017-12-01

    The key element in Kohn-Sham (KS) density functional theory is the exchange-correlation (XC) potential. We recently proposed the exchange-correlation potential patching (XCPP) method with the aim of directly constructing high-level XC potential in a large system by patching the locally computed, high-level XC potentials throughout the system. In this work, we investigate the patching of the exact exchange (EXX) and the random phase approximation (RPA) correlation potentials. A major challenge of XCPP is that a cluster's XC potential, obtained by solving the optimized effective potential equation, is only determined up to an unknown constant. Without fully determining the clusters' XC potentials, the patched system's XC potential is "uneven" in the real space and may cause non-physical results. Here, we developed a simple method to determine this unknown constant. The performance of XCPP-RPA is investigated on three one-dimensional systems: H20, H10Li8, and the stretching of the H19-H bond. We investigated two definitions of EXX: (i) the definition based on the adiabatic connection and fluctuation dissipation theorem (ACFDT) and (ii) the Hartree-Fock (HF) definition. With ACFDT-type EXX, effective error cancellations were observed between the patched EXX and the patched RPA correlation potentials. Such error cancellations were absent for the HF-type EXX, which was attributed to the fact that for systems with fractional occupation numbers, the integral of the HF-type EXX hole is not -1. The KS spectra and band gaps from XCPP agree reasonably well with the benchmarks as we make the clusters large.

  1. Optimization and validation of Folin-Ciocalteu method for the determination of total polyphenol content of Pu-erh tea.

    Science.gov (United States)

    Musci, Marilena; Yao, Shicong

    2017-12-01

    Pu-erh tea is a post-fermented tea that has recently gained popularity worldwide, due to potential health benefits related to the antioxidant activity resulting from its high polyphenolic content. The Folin-Ciocalteu method is a simple, rapid, and inexpensive assay widely applied for the determination of total polyphenol content. Over the past years, it has been subjected to many modifications, often without any systematic optimization or validation. In our study, we sought to optimize the Folin-Ciocalteu method, evaluate quality parameters including linearity, precision and stability, and then apply the optimized model to determine the total polyphenol content of 57 Chinese teas, including green tea and aged and ripened Pu-erh tea. Our optimized Folin-Ciocalteu method reduced the analysis time, allowed for the analysis of a large number of samples, discriminated among the different teas, and made it possible to assess the effect of the post-fermentation process on polyphenol content.
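
    For illustration, the routine data reduction behind such an assay is sketched below: a gallic acid calibration curve is fitted by linear regression and sample absorbances are converted to gallic acid equivalents. All concentrations, absorbances and sample amounts are made-up placeholders, not values from the study.

```python
import numpy as np

# Gallic acid calibration curve (illustrative numbers, not the study's data).
std_conc = np.array([10, 20, 40, 60, 80, 100.0])           # mg/L gallic acid
std_abs  = np.array([0.11, 0.21, 0.42, 0.60, 0.81, 1.00])  # A765, illustrative

slope, intercept = np.polyfit(std_conc, std_abs, 1)         # linear calibration
r2 = np.corrcoef(std_conc, std_abs)[0, 1] ** 2              # linearity check

def gae_mg_per_g(sample_abs, dilution, extract_ml, sample_g):
    """Total polyphenols as mg gallic acid equivalents per g of dry tea."""
    conc = (sample_abs - intercept) / slope * dilution       # mg/L in the extract
    return conc * extract_ml / 1000.0 / sample_g

print(round(r2, 4), round(gae_mg_per_g(0.55, 10, 25, 0.5), 1))
```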

  2. Physical optimization of afterloading techniques

    International Nuclear Information System (INIS)

    Anderson, L.L.

    1985-01-01

    Physical optimization in brachytherapy refers to the process of determining the radioactive-source configuration which yields a desired dose distribution. In manually afterloaded intracavitary therapy for cervix cancer, discrete source strengths are selected iteratively to minimize the sum of squares of differences between trial and target doses. For remote afterloading with a stepping-source device, optimized (continuously variable) dwell times are obtained, either iteratively or analytically, to give least squares approximations to dose at an arbitrary number of points; in vaginal irradiation for endometrial cancer, the objective has included dose uniformity at applicator surface points in addition to a tapered contour of target dose at depth. For template-guided interstitial implants, seed placement at rectangular-grid mesh points may be least squares optimized within target volumes defined by computerized tomography; effective optimization is possible only for (uniform) seed strength high enough that the desired average peripheral dose is achieved with a significant fraction of empty seed locations. (orig.) [de
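
    The least-squares dwell-time optimization mentioned above can be cast as a non-negative least-squares problem: given a matrix of dose rates per unit dwell time at a set of dose points, find the dwell times that best reproduce the prescribed doses. The code below is a generic illustration under that assumption; the dose-rate matrix and prescriptions are toy numbers, not a clinical dose kernel.

```python
import numpy as np
from scipy.optimize import nnls

def optimize_dwell_times(dose_rate_matrix, target_dose):
    """Least-squares dwell-time sketch for a stepping-source afterloader:
    find non-negative dwell times t such that A @ t approximates the
    prescribed doses at the dose points. A[i, j] is the dose rate at point i
    per unit dwell time at position j (assumed given here)."""
    t, residual = nnls(dose_rate_matrix, target_dose)
    return t, residual

# Toy example: 40 dose points, 12 dwell positions.
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(40, 12))      # cGy per second of dwell (toy)
d = np.full(40, 500.0)                        # prescribed dose at each point, cGy
times, res = optimize_dwell_times(A, d)
print(times.round(1), round(res, 1))
```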

  3. Joint sensor placement and power rating selection in energy harvesting wireless sensor networks

    KAUST Repository

    Bushnaq, Osama M.; Al-Naffouri, Tareq Y.; Chepuri, Sundeep Prabhakar; Leus, Geert

    2017-01-01

    In this paper, the focus is on optimal sensor placement and power rating selection for parameter estimation in wireless sensor networks (WSNs). We take into account the amount of energy harvested by the sensing nodes, communication link quality

  4. Placement suitability criteria of composite tape for mould surface in automated tape placement

    Directory of Open Access Journals (Sweden)

    Zhang Peng

    2015-10-01

    Full Text Available Automated tape placement is an important automated process used for fabrication of large composite structures in aeronautical industry. The carbon fiber composite parts realized with this process tend to replace the aluminum parts produced by high-speed machining. It is difficult to determine the appropriate width of the composite tape in automated tape placement. Wrinkling will appear in the tape if it does not suit for the mould surface. Thus, this paper deals with establishing placement suitability criteria of the composite tape for the mould surface. With the assumptions for ideal mapping and by applying some principles and theorems of differential geometry, the centerline trajectory of the composite tape is identified to follow the geodesic. The placement suitability of the composite tape is examined on three different types of non-developable mould surfaces and four criteria are derived. The developed criteria have been used to test the deposition process over several mould surfaces and the appropriate width for each mould surface is obtained by referring to these criteria.

  5. Optimal Parameters to Determine the Apparent Diffusion Coefficient in Diffusion Weighted Imaging via Simulation

    Science.gov (United States)

    Perera, Dimuthu

    Diffusion weighted (DW) Imaging is a non-invasive MR technique that provides information about the tissue microstructure using the diffusion of water molecules. The diffusion is generally characterized by the apparent diffusion coefficient (ADC) parametric map. The purpose of this study is to investigate in silico how the calculation of ADC is affected by image SNR, b-values, and the true tissue ADC. Also, to provide optimal parameter combination depending on the percentage accuracy and precision for prostate peripheral region cancer application. Moreover, to suggest parameter choices for any type of tissue, while providing the expected accuracy and precision. In this research DW images were generated assuming a mono-exponential signal model at two different b-values and for known true ADC values. Rician noise of different levels was added to the DWI images to adjust the image SNR. Using the two DWI images, ADC was calculated using a mono-exponential model for each set of b-values, SNR, and true ADC. 40,000 ADC data were collected for each parameter setting to determine the mean and the standard-deviation of the calculated ADC, as well as the percentage accuracy and precision with respect to the true ADC. The accuracy was calculated using the difference between known and calculated ADC. The precision was calculated using the standard-deviation of calculated ADC. The optimal parameters for a specific study was determined when both the percentage accuracy and precision were minimized. In our study, we simulated two true ADCs (ADC 0.00102 for tumor and 0.00180 mm2/s for normal prostate peripheral region tissue). Image SNR was varied from 2 to 100 and b-values were varied from 0 to 2000s/mm2. The results show that the percentage accuracy and percentage precision were minimized with image SNR. To increase SNR, 10 signal-averagings (NEX) were used considering the limitation in total scan time. The optimal NEX combination for tumor and normal tissue for prostate
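
    A minimal Monte-Carlo sketch of this simulation design is given below: mono-exponential signals at two b-values, Rician noise set by the image SNR, NEX averaging, and the two-point estimate ADC = ln(S1/S2)/(b2 - b1), from which percentage accuracy (bias) and precision (spread) are computed. The SNR definition relative to the b = 0 signal and the example numbers are assumptions.

```python
import numpy as np

def simulate_adc(true_adc, b1, b2, snr, s0=1000.0, n_trials=40000, nex=10, seed=0):
    """Monte-Carlo sketch: mono-exponential DW signals at two b-values,
    Rician noise at a given image SNR (defined here against the b = 0 signal),
    NEX averaging, and the two-point ADC estimate ln(S1/S2)/(b2 - b1)."""
    rng = np.random.default_rng(seed)
    sigma = s0 / snr
    s_true = s0 * np.exp(-np.array([b1, b2]) * true_adc)   # noiseless signals

    def rician(s):   # magnitude signal with complex Gaussian noise, NEX-averaged
        re = s + sigma * rng.standard_normal((n_trials, nex))
        im = sigma * rng.standard_normal((n_trials, nex))
        return np.sqrt(re ** 2 + im ** 2).mean(axis=1)

    s1, s2 = rician(s_true[0]), rician(s_true[1])
    adc = np.log(s1 / s2) / (b2 - b1)
    accuracy = 100 * (adc.mean() - true_adc) / true_adc     # % bias
    precision = 100 * adc.std() / true_adc                  # % spread
    return accuracy, precision

# e.g. tumour-like ADC with b = 0 and 1000 s/mm^2 at SNR 30:
print(simulate_adc(true_adc=0.00102, b1=0, b2=1000, snr=30))
```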

  6. Using a computational model to quantify the potential impact of changing the placement of healthy beverages in stores as an intervention to "Nudge" adolescent behavior choice.

    Science.gov (United States)

    Wong, Michelle S; Nau, Claudia; Kharmats, Anna Yevgenyevna; Vedovato, Gabriela Milhassi; Cheskin, Lawrence J; Gittelsohn, Joel; Lee, Bruce Y

    2015-12-23

    Product placement influences consumer choices in retail stores. While sugar sweetened beverage (SSB) manufacturers expend considerable effort and resources to determine how product placement may increase SSB purchases, the information is proprietary and not available to the public health and research community. This study aims to quantify the effect of non-SSB product placement in corner stores on adolescent beverage purchasing behavior. Corner stores are small privately owned retail stores that are important beverage providers in low-income neighborhoods--where adolescents have higher rates of obesity. Using data from a community-based survey in Baltimore and parameters from the marketing literature, we developed a decision-analytic model to simulate and quantify how placement of healthy beverages (placement in the beverage cooler closest to the entrance, distance from the back of the store, and vertical placement within each cooler) affects the probability of adolescents purchasing non-SSBs. In our simulation, non-SSB purchases were 2.8 times higher when placed in the "optimal location"--on the second or third shelves of the front cooler--compared to the worst location on the bottom shelf of the cooler farthest from the entrance. Based on our model results and survey data, we project that moving non-SSBs from the worst to the optimal location would result in approximately 5.2 million more non-SSBs purchased by Baltimore adolescents annually. Our study is the first to quantify the potential impact of changing the placement of beverages in corner stores. Our findings suggest that this could be a low-cost, yet impactful strategy to nudge this population--highly susceptible to obesity--towards healthier beverage decisions.

  7. Optimization of sampling for the determination of the mean Radium-226 concentration in surface soil

    International Nuclear Information System (INIS)

    Williams, L.R.; Leggett, R.W.; Espegren, M.L.; Little, C.A.

    1987-08-01

    This report describes a field experiment that identifies an optimal method for determination of compliance with the US Environmental Protection Agency's Ra-226 guidelines for soil. The primary goals were to establish practical levels of accuracy and precision in estimating the mean Ra-226 concentration of surface soil in a small contaminated region; to obtain empirical information on composite vs. individual soil sampling and on random vs. uniformly spaced sampling; and to examine the practicality of using gamma measurements in predicting the average surface radium concentration and in estimating the number of soil samples required to obtain a given level of accuracy and precision. Numerous soil samples were collected on each of six sites known to be contaminated with uranium mill tailings. Three types of samples were collected on each site: 10-composite samples, 20-composite samples, and individual or post-hole samples. Ten-composite sampling is the method of choice because it yields a given level of accuracy and precision for the least cost. Gamma measurements can be used to reduce surface soil sampling on some sites. 2 refs., 5 figs., 7 tabs.

  8. Cognitive Development Optimization Algorithm Based Support Vector Machines for Determining Diabetes

    Directory of Open Access Journals (Sweden)

    Utku Kose

    2016-03-01

    Full Text Available The definition, diagnosis and classification of Diabetes Mellitus and its complications are very important. The World Health Organization (WHO), other societies and many scientists have carried out numerous studies on this subject. One of the most important research interests within it is computer-supported decision systems for diagnosing diabetes. In such systems, Artificial Intelligence techniques are often used for disease diagnostics to streamline the diagnostic process in daily routine and to avoid misdiagnosis. In this study, a diabetes diagnosis system formed from both Support Vector Machines (SVM) and the Cognitive Development Optimization Algorithm (CoDOA) is proposed. During SVM training, CoDOA was used to determine the sigma parameter of the Gaussian (RBF) kernel function, and a classification process was then carried out on the Pima Indians diabetes data set. The proposed approach offers an alternative solution in the field of Artificial Intelligence-based diabetes diagnosis and contributes to the related literature on diagnosis processes.
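
    The key computational step is a metaheuristic search for the RBF kernel width. CoDOA is not available as a standard library, so the sketch below substitutes a plain random search over the kernel parameter (scikit-learn's gamma, related to sigma) with cross-validated accuracy as the fitness; the dataset is a bundled stand-in rather than the Pima Indians data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_breast_cancer  # stand-in for the Pima Indians data

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(42)

best_gamma, best_score = None, -np.inf
for _ in range(30):                        # candidate solutions / iterations
    gamma = 10 ** rng.uniform(-4, 1)       # log-uniform search over kernel width
    score = cross_val_score(SVC(kernel="rbf", gamma=gamma), X, y, cv=5).mean()
    if score > best_score:
        best_gamma, best_score = gamma, score

print(f"best gamma = {best_gamma:.4g}, CV accuracy = {best_score:.3f}")
```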

  9. Morphology Analysis and Optimization: Crucial Factor Determining the Performance of Perovskite Solar Cells

    Directory of Open Access Journals (Sweden)

    Wenjin Zeng

    2017-03-01

    Full Text Available This review presents an overall discussion of morphology analysis and optimization for perovskite (PVSK) solar cells. Surface morphology and energy alignment have been proven to play a dominant role in determining device performance. The effects of key parameters such as solution conditions and preparation atmosphere on the crystallization of PVSK, and the characterization of surface morphology and interface distribution in the perovskite layer, are discussed in detail. Furthermore, the analysis of interface energy-level alignment by X-ray photoelectron spectroscopy and ultraviolet photoelectron spectroscopy is presented to reveal the correlation between morphology and charge generation and collection within the perovskite layer, and its influence on device performance. Techniques including architecture modification and solvent annealing are reviewed as efficient approaches to improve the morphology of PVSK. It is expected that further progress will be achieved as more effort is devoted to understanding the mechanism of surface engineering in the field of PVSK solar cells.

  10. Electrodialytic desalination of brackish water: determination of optimal experimental parameters using full factorial design

    Science.gov (United States)

    Gmar, Soumaya; Helali, Nawel; Boubakri, Ali; Sayadi, Ilhem Ben Salah; Tlili, Mohamed; Amor, Mohamed Ben

    2017-12-01

    The aim of this work is to study the desalination of brackish water by electrodialysis (ED). A two-level, three-factor (2³) full factorial design methodology was used to investigate the influence of different physicochemical parameters on the demineralization rate (DR) and the specific power consumption (SPC). The statistical design identifies the factors that have significant effects on ED performance and examines all interactions between the considered parameters. Three significant factors were used: applied potential, salt concentration and flow rate. The experimental results and statistical analysis show that applied potential and salt concentration are the main effects for both DR and SPC. An interaction effect between applied potential and salt concentration was observed for SPC. A maximum DR of 82.24% was obtained under optimum conditions, and the best SPC value obtained was 5.64 Wh L-1. Empirical regression models were also obtained and used to predict the DR and SPC profiles with satisfactory results. The process was applied to the treatment of real brackish water using the optimal parameters.
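
    A minimal sketch of how a two-level, three-factor full factorial design can be built and analysed, assuming coded -1/+1 levels for applied potential (V), salt concentration (C) and flow rate (Q); the response values are hypothetical placeholders, not the paper's measurements.

```python
import itertools
import numpy as np

levels = [-1, 1]
runs = np.array(list(itertools.product(levels, repeat=3)))   # 8 runs over V, C, Q
V, C, Q = runs.T
# design matrix with main effects and all interaction terms
X = np.column_stack([np.ones(8), V, C, Q, V * C, V * Q, C * Q, V * C * Q])

dr = np.array([60.0, 72.0, 65.0, 80.0, 58.0, 75.0, 66.0, 82.0])  # hypothetical DR (%)
coeffs, *_ = np.linalg.lstsq(X, dr, rcond=None)

for name, c in zip(["mean", "V", "C", "Q", "VC", "VQ", "CQ", "VCQ"], coeffs):
    print(f"{name:>4}: {c:+.2f}")   # large |coefficient| = influential factor/interaction
```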

  11. Determination of an optimal priming duration and concentration protocol for pepper seeds (Capsicum annuum L.)

    Directory of Open Access Journals (Sweden)

    Hassen ALOUI

    2015-12-01

    Full Text Available Seed priming is a simple pre-germination method to improve seed performance and to attenuate the effects of stress exposure. The objective of this study was to determine an optimal priming protocol for three pepper cultivars (Capsicum annuum L.): ‘Beldi’, ‘Baklouti’ and ‘Anaheim Chili’. Seeds were primed with three solutions of NaCl, KCl and CaCl2 (0, 10, 20 and 50 mM) for three different durations (12, 24 and 36 h). Control seeds were soaked in distilled water for the same durations. All seeds were then germinated in the laboratory under normal light and controlled temperature. Results indicated that the response to priming depends on concentration, duration and cultivar. The best combinations obtained were KCl priming (10 mM, 36 h) for the ‘Beldi’ cultivar, CaCl2 priming (10 mM, 36 h) for the ‘Baklouti’ cultivar, and NaCl priming (50 mM, 24 h) for the ‘Anaheim Chili’ cultivar. Generally, priming affected the total germination percentage, mean germination time, germination index and coefficient of velocity compared to control seeds. The beneficial effect of seed priming could be used to improve salt tolerance at germination and during early seedling growth for these pepper cultivars.

  12. Common ECG Lead Placement Errors. Part I: Limb Lead Reversals

    Directory of Open Access Journals (Sweden)

    Allison V. Rosen

    2015-10-01

    Full Text Available Background: Electrocardiography (ECG is a very useful diagnostic tool. However, errors in placement of ECG leads can create artifacts, mimic pathologies, and hinder proper ECG interpretation. It is important for members of the health care team to be able to recognize the common patterns resulting from lead placement errors. Methods: 12-lead ECGs were recorded in a single male healthy subject in his mid 20s. Six different limb lead reversals were compared to ECG recordings from correct lead placement. Results: Classic ECG patterns were observed when leads were reversed. Methods of discriminating these ECG patterns from true pathologic findings were described. Conclusion: Correct recording and interpretation of ECGs is key to providing optimal patient care. It is therefore crucial to be able to recognize common ECG patterns that are indicative of lead reversals.

  13. Factors influencing radiation therapy student clinical placement satisfaction

    International Nuclear Information System (INIS)

    Bridge, Pete; Carmichael, Mary-Ann

    2014-01-01

    Introduction: Radiation therapy students at Queensland University of Technology (QUT) attend clinical placements at five different clinical departments with varying resources and support strategies. This study aimed to determine the relative availability and perceived importance of different factors affecting student support while on clinical placement. The purpose of the research was to inform development of future support mechanisms to enhance radiation therapy students’ experience on clinical placement. Methods: This study used anonymous Likert-style surveys to gather data from years 1 and 2 radiation therapy students from QUT and clinical educators from Queensland relating to availability and importance of support mechanisms during clinical placements in a semester. Results: The study findings demonstrated student satisfaction with clinical support and suggested that level of support on placement influenced student employment choices. Staff support was perceived as more important than physical resources; particularly access to a named mentor, a clinical educator and weekly formative feedback. Both students and educators highlighted the impact of time pressures. Conclusions: The support offered to radiation therapy students by clinical staff is more highly valued than physical resources or models of placement support. Protected time and acknowledgement of the importance of clinical education roles are both invaluable. Joint investment in mentor support by both universities and clinical departments is crucial for facilitation of effective clinical learning

  14. Factors influencing radiation therapy student clinical placement satisfaction

    Science.gov (United States)

    Bridge, Pete; Carmichael, Mary-Ann

    2014-01-01

    Introduction: Radiation therapy students at Queensland University of Technology (QUT) attend clinical placements at five different clinical departments with varying resources and support strategies. This study aimed to determine the relative availability and perceived importance of different factors affecting student support while on clinical placement. The purpose of the research was to inform development of future support mechanisms to enhance radiation therapy students’ experience on clinical placement. Methods: This study used anonymous Likert-style surveys to gather data from years 1 and 2 radiation therapy students from QUT and clinical educators from Queensland relating to availability and importance of support mechanisms during clinical placements in a semester. Results: The study findings demonstrated student satisfaction with clinical support and suggested that level of support on placement influenced student employment choices. Staff support was perceived as more important than physical resources; particularly access to a named mentor, a clinical educator and weekly formative feedback. Both students and educators highlighted the impact of time pressures. Conclusions: The support offered to radiation therapy students by clinical staff is more highly valued than physical resources or models of placement support. Protected time and acknowledgement of the importance of clinical education roles are both invaluable. Joint investment in mentor support by both universities and clinical departments is crucial for facilitation of effective clinical learning. PMID:26229635

  15. Factors influencing radiation therapy student clinical placement satisfaction

    Energy Technology Data Exchange (ETDEWEB)

    Bridge, Pete; Carmichael, Mary-Ann [School of Clinical Sciences, Queensland University of Technology, Brisbane (Australia)

    2014-02-15

    Introduction: Radiation therapy students at Queensland University of Technology (QUT) attend clinical placements at five different clinical departments with varying resources and support strategies. This study aimed to determine the relative availability and perceived importance of different factors affecting student support while on clinical placement. The purpose of the research was to inform development of future support mechanisms to enhance radiation therapy students’ experience on clinical placement. Methods: This study used anonymous Likert-style surveys to gather data from years 1 and 2 radiation therapy students from QUT and clinical educators from Queensland relating to availability and importance of support mechanisms during clinical placements in a semester. Results: The study findings demonstrated student satisfaction with clinical support and suggested that level of support on placement influenced student employment choices. Staff support was perceived as more important than physical resources; particularly access to a named mentor, a clinical educator and weekly formative feedback. Both students and educators highlighted the impact of time pressures. Conclusions: The support offered to radiation therapy students by clinical staff is more highly valued than physical resources or models of placement support. Protected time and acknowledgement of the importance of clinical education roles are both invaluable. Joint investment in mentor support by both universities and clinical departments is crucial for facilitation of effective clinical learning.

  16. Value-based distributed generator placements for service quality improvements

    Energy Technology Data Exchange (ETDEWEB)

    Teng, Jen-Hao; Chen, Chi-Fa [Department of Electrical Engineering, I-Shou University, No. 1, Section 1, Syuecheng Road, Dashu Township, Kaohsiung Country 840 (Taiwan); Liu, Yi-Hwa [Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei (Taiwan); Chen, Chia-Yen [Department of Computer Science, The University of Auckland (New Zealand)

    2007-03-15

    Distributed generator (DG) resources are small, self-contained electric generating plants that can provide power to homes, businesses or industrial facilities on distribution feeders. They can be used to reduce power loss and improve service reliability. However, the value of DGs depends largely on their types, sizes and locations as installed in distribution feeders. A value-based method is proposed in this paper to enhance reliability and capture the benefits of DG placement. The benefits of DG placement described in this paper include power cost savings, power loss reduction, and reliability enhancement. The costs of DG placement include the investment, maintenance and operating costs. The proposed value-based method seeks the best tradeoff between the costs and benefits of DG placement and then finds the optimal DG types and their corresponding locations and sizes in distribution feeders. The derived formulations are solved by a genetic algorithm based method. Test results show that, with proper selection of types, sizes and installation sites, DG placement can be used to improve system reliability, reduce customer interruption costs and save power cost, as well as enabling electric utilities to obtain the maximal economic benefit. (author)

  17. Development of the Animal Management and Husbandry Online Placement Tool.

    Science.gov (United States)

    Bates, Lucy; Crowther, Emma; Bell, Catriona; Kinnison, Tierney; Baillie, Sarah

    2013-01-01

    The workplace provides veterinary students with opportunities to develop a range of skills, making workplace learning an important part of veterinary education in many countries. Good preparation for work placements is vital to maximize learning; to this end, our group has developed a series of three computer-aided learning (CAL) packages to support students. The third of this series is the Animal Management and Husbandry Online Placement Tool (AMH OPT). Students need a sound knowledge of animal husbandry and the ability to handle the common domestic species. However, teaching these skills at university is not always practical and requires considerable resources. In the UK, the Royal College of Veterinary Surgeons (RCVS) requires students to complete 12 weeks of pre-clinical animal management and husbandry work placements or extramural studies (EMS). The aims are for students to improve their animal handling skills and awareness of husbandry systems, develop communication skills, and understand their future clients' needs. The AMH OPT is divided into several sections: Preparation, What to Expect, Working with People, Professionalism, Tips, and Frequently Asked Questions. Three stakeholder groups (university EMS coordinators, placement providers, and students) were consulted initially to guide the content and design and later to evaluate previews. Feedback from stakeholders was used in an iterative design process, resulting in a program that aims to facilitate student preparation, optimize the learning opportunities, and improve the experience for both students and placement providers. The CAL is available online and is open-access worldwide to support students during veterinary school.

  18. METHODOLOGY FOR DETERMINING OPTIMAL EXPOSURE PARAMETERS OF A HYPERSPECTRAL SCANNING SENSOR

    Directory of Open Access Journals (Sweden)

    P. Walczykowski

    2016-06-01

    Full Text Available The purpose of the presented research was to establish a methodology that allows the registration of hyperspectral images with a defined spatial resolution on a horizontal plane. The results obtained within this research could then be used to establish the optimum sensor and flight parameters for collecting aerial imagery data using a UAV or other aerial system. The methodology is based on user-selected optimal camera exposure parameters (i.e., exposure time, gain value) and flight parameters (i.e., altitude, velocity). A push-broom hyperspectral imager, the Headwall MicroHyperspec A-series VNIR, was used to conduct this research. The measurement station consisted of the following equipment: the MicroHyperspec A-series VNIR hyperspectral camera, a personal computer with HyperSpec III software, a slider system which guaranteed stable motion of the sensor system, a white reference panel, and a Siemens star used to evaluate the spatial resolution. Hyperspectral images were recorded at different distances between the sensor and the target, from 5 m to 100 m. During the registration of each acquired image, many exposure parameters were changed, such as the aperture value, the exposure time and the speed of the camera's movement on the slider. Based on all of the registered hyperspectral images, dependencies between the chosen parameters were derived: the ground sampling distance (GSD) versus the distance between the sensor and the target; the speed of the camera versus the distance between the sensor and the target; the exposure time versus the gain value; and the density number versus the gain value. The developed methodology allowed us to determine the speed and altitude of an unmanned aerial vehicle on which the sensor would be mounted, ensuring that the registered hyperspectral images have the required spatial resolution.
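
    A minimal sketch of the standard push-broom relations implied by these dependencies (illustrative, not the authors' calibration): the across-track GSD from pixel pitch, focal length and sensor-to-target distance, and the platform speed at which the along-track sampling matches that GSD for a given frame rate; all numeric values are hypothetical.

```python
def across_track_gsd(pixel_pitch_m: float, focal_length_m: float, range_m: float) -> float:
    """Ground footprint of one detector pixel at the given sensor-to-target range."""
    return pixel_pitch_m * range_m / focal_length_m

def max_platform_speed(gsd_m: float, frame_rate_hz: float) -> float:
    """Speed at which consecutive scan lines abut (one line per frame, no gaps)."""
    return gsd_m * frame_rate_hz

gsd = across_track_gsd(pixel_pitch_m=7.4e-6, focal_length_m=0.012, range_m=100.0)
speed = max_platform_speed(gsd, frame_rate_hz=100.0)
print(f"GSD ~ {gsd * 100:.1f} cm, max platform speed ~ {speed:.1f} m/s")
```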

  19. Determination of the optimal stylet strategy for the C-MAC videolaryngoscope.

    LENUS (Irish Health Repository)

    McElwain, J

    2010-04-01

    The C-MAC videolaryngoscope is a novel intubation device that incorporates a camera system at the end of its blade, thereby facilitating obtaining a view of the glottis without alignment of the oral, pharyngeal and tracheal axes. It retains the traditional Macintosh blade shape and can be used as a direct or indirect laryngoscope. We wished to determine the optimal stylet strategy for use with the C-MAC. Ten anaesthetists were allowed up to three attempts to intubate the trachea in one easy and three progressively more difficult laryngoscopy scenarios in a SimMan manikin with four tracheal tube stylet strategies: no stylet; stylet; directional stylet (Parker Flex-It); and hockey-stick stylet. The use of a stylet conferred no advantage in the easy laryngoscopy scenario. In the difficult scenarios, the directional and hockey-stick stylets performed best. In the most difficult scenario, the median (IQR [range]) duration of the successful intubation attempt was lowest with the hockey-stick stylet, 18 (15-22 [12-43]) s; highest with the unstyletted tracheal tube, 60 (60-60 [60-60]) s, and the styletted tracheal tube, 60 (29-60 [18-60]) s; and intermediate with the directional stylet, 21 (15-60 [8-60]) s. The use of a stylet alone does not confer benefit in the setting of easy laryngoscopy. However, in more difficult laryngoscopy scenarios, the C-MAC videolaryngoscope performs best when used with a stylet that angulates the distal tracheal tube. The hockey-stick stylet configuration performed best in the scenarios tested.

  20. Determining the optimal approach to identifying individuals with chronic obstructive pulmonary disease: The DOC study.

    Science.gov (United States)

    Ronaldson, Sarah J; Dyson, Lisa; Clark, Laura; Hewitt, Catherine E; Torgerson, David J; Cooper, Brendan G; Kearney, Matt; Laughey, William; Raghunath, Raghu; Steele, Lisa; Rhodes, Rebecca; Adamson, Joy

    2018-06-01

    Early identification of chronic obstructive pulmonary disease (COPD) results in patients receiving appropriate management for their condition at an earlier stage in their disease. The determining the optimal approach to identifying individuals with chronic obstructive pulmonary disease (DOC) study was a case-finding study to enhance early identification of COPD in primary care, which evaluated the diagnostic accuracy of a series of simple lung function tests and symptom-based case-finding questionnaires. Current smokers aged 35 or more were invited to undertake a series of case-finding tools, which comprised lung function tests (specifically, spirometry, microspirometry, peak flow meter, and WheezoMeter) and several case-finding questionnaires. The effectiveness of these tests, individually or in combination, to identify small airways obstruction was evaluated against the gold standard of spirometry, with the quality of spirometry tests assessed by independent overreaders. The study was conducted with general practices in the Yorkshire and Humberside area, in the UK. Six hundred eighty-one individuals met the inclusion criteria, with 444 participants completing their study appointments. A total of 216 (49%) with good-quality spirometry readings were included in the analysis. The most effective case-finding tools were found to be the peak flow meter alone, the peak flow meter plus WheezoMeter, and microspirometry alone. In addition to the main analysis, where the severity of airflow obstruction was based on fixed ratios and percent of predicted values, sensitivity analyses were conducted by using lower limit of normal values. This research informs the choice of test for COPD identification; case-finding by use of the peak flow meter or microspirometer could be used routinely in primary care for suspected COPD patients. Only those testing positive to these tests would move on to full spirometry, thereby reducing unnecessary spirometric testing. © 2018 John Wiley

  1. Automated beam placement for breast radiotherapy using a support vector machine based algorithm

    International Nuclear Information System (INIS)

    Zhao Xuan; Kong, Dewen; Jozsef, Gabor; Chang, Jenghwa; Wong, Edward K.; Formenti, Silvia C.; Wang Yao

    2012-01-01

    Purpose: To develop an automated beam placement technique for whole breast radiotherapy using tangential beams. We seek to find optimal parameters for tangential beams to cover the whole ipsilateral breast (WB) and minimize the dose to the organs at risk (OARs). Methods: A support vector machine (SVM) based method is proposed to determine the optimal posterior plane of the tangential beams. Relative significances of including/avoiding the volumes of interest are incorporated into the cost function of the SVM. After finding the optimal 3-D plane that separates the whole breast (WB) and the included clinical target volumes (CTVs) from the OARs, the gantry angle, collimator angle, and posterior jaw size of the tangential beams are derived from the separating plane equation. Dosimetric measures of the treatment plans determined by the automated method are compared with those obtained by applying manual beam placement by the physicians. The method can be further extended to use multileaf collimator (MLC) blocking by optimizing posterior MLC positions. Results: The plans for 36 patients (23 prone- and 13 supine-treated) with left breast cancer were analyzed. Our algorithm reduced the volume of the heart that receives >500 cGy dose (V5) from 2.7 to 1.7 cm³ (p = 0.058) on average and the volume of the ipsilateral lung that receives >1000 cGy dose (V10) from 55.2 to 40.7 cm³ (p = 0.0013). The dose coverage as measured by volume receiving >95% of the prescription dose (V95%) of the WB without a 5 mm superficial layer decreases by only 0.74% (p = 0.0002) and the V95% for the tumor bed with 1.5 cm margin remains unchanged. Conclusions: This study has demonstrated the feasibility of using a SVM-based algorithm to determine optimal beam placement without a physician's intervention. The proposed method reduced the dose to OARs, especially for supine treated patients, without any relevant degradation of dose homogeneity and coverage in general.
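
    A minimal sketch of the separating-plane idea, not the authors' implementation: a weighted linear SVM separates synthetic target-volume points from OAR points in 3-D, and the plane normal is read off to derive a beam angle; the point clouds, relative weights and angle convention are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
target = rng.normal([0.0, 0.0, 0.0], 1.0, (500, 3))   # synthetic breast/CTV voxels
oar = rng.normal([4.0, 1.0, 0.0], 1.0, (500, 3))      # synthetic heart/lung voxels
X = np.vstack([target, oar])
y = np.r_[np.ones(500), -np.ones(500)]
weights = np.r_[np.full(500, 5.0), np.ones(500)]      # relative significance: coverage > sparing

svm = SVC(kernel="linear", C=1.0).fit(X, y, sample_weight=weights)
normal = svm.coef_[0] / np.linalg.norm(svm.coef_[0])  # normal of the separating plane
gantry_deg = np.degrees(np.arctan2(normal[1], normal[0]))  # illustrative angle convention
print(f"plane normal = {normal}, derived gantry angle ~ {gantry_deg:.1f} deg")
```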

  2. Determination of Optimal Flow Paths for Safety Injection According to Accident Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Kwae Hwan; Kim, Ju Hyun; Kim, Dong Yeong; Na, Man Gyun [Chosun Univ., Gwangju (Korea, Republic of); Hur, Seop; Kim, Changhwoi [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    When severe accidents occur, the major safety parameters of a nuclear reactor change rapidly, and operators may be unable to respond appropriately; such conditions contributed to the operator errors that led to the Chernobyl accident. In this study, we aimed to develop an algorithm that selects the optimal flow path for reaching cold shutdown during severe accidents, so that a nuclear power plant can be recovered quickly and efficiently. To select the optimal flow path, we applied the Dijkstra algorithm, which finds the path of minimum total length between two given nodes and requires a weight (or length) matrix. In this study, the weight between nodes was calculated from the frictional and minor losses inside the pipes; that is, the optimal flow path is the one that minimizes the pressure drop between a starting node (water source) and a destination node (the position where cooling water is injected). If cooling water is injected through the optimized flow path during a severe accident, the nuclear reactor can be returned safely and effectively to the cold shutdown state. In this study, we analyzed the optimal flow paths for safety injection as a preliminary study for developing an accident recovery system. The optimal flow paths were selected by calculating the head loss according to the path conditions and applying the Dijkstra algorithm.
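
    A minimal sketch of the path-selection step: pipe segments are edges weighted by head loss (frictional plus minor losses), and Dijkstra's algorithm returns the injection path with the smallest total pressure drop; the node names and loss values below are hypothetical.

```python
import heapq

def dijkstra(graph, source, target):
    """graph: {node: [(neighbor, head_loss), ...]}; returns (path, total head loss)."""
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                              # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[target]

# hypothetical network: water source -> pumps -> header -> injection point
pipes = {
    "RWST": [("pumpA", 2.1), ("pumpB", 3.0)],
    "pumpA": [("header", 1.5)],
    "pumpB": [("header", 0.8)],
    "header": [("cold_leg", 2.4)],
}
print(dijkstra(pipes, "RWST", "cold_leg"))
```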

  3. A Method to Determine Supply Voltage of Permanent Magnet Motor at Optimal Design Stage

    Science.gov (United States)

    Matustomo, Shinya; Noguchi, So; Yamashita, Hideo; Tanimoto, Shigeya

    Permanent magnet (PM) motors are widely used in electrical machinery such as air conditioners and refrigerators. In recent years, from the standpoint of energy saving, it has become necessary to improve the efficiency of PM motors through optimization. However, efficiency optimization of a PM motor involves many design variables and many constraints. In this paper, the efficiency optimization of a PM motor with many design variables was performed using voltage-driven finite element analysis with a rotating simulation of the motor and a genetic algorithm.

  4. ‘Employers’ perspectives on maximising undergraduate student learning from the outdoor education centre work placement

    OpenAIRE

    Lawton, Mark

    2017-01-01

    Recognising the growth in provision of vocational undergraduate programmes and the requirement for high quality work placement opportunities, managers from four residential outdoor education centres were interviewed to determine their perceptions on the components necessary to maximise student learning. The findings showed that the managers greatly valued the potential of a work placement; a need for clarity over the expectations for all stakeholders and that the placement remained authentic ...

  5. Influence of the faces relative arrangement on the optimal reloading station location and analytical determination of its coordinates

    Directory of Open Access Journals (Sweden)

    V.К. Slobodyanyuk

    2017-04-01

    Full Text Available The purpose of this study is to develop a methodology for determining the optimal rock mass run-of-mine (RoM) stock point and to examine the influence of the spatial arrangement of faces on this point. The paper reviews current research in which Fermat-Torricelli-Steiner point algorithms are used to minimize logistic processes. Methods of mathematical optimization and analytical geometry were applied, and formulae for determining the coordinates of the optimal point for four faces were established using the latter methods. Mining technology that uses reloading stations is rather common at deep iron ore pits. In most cases, when deciding on the location of the RoM stock, its elevation within the pit is primarily taken into account. However, the location of the reloading station in plan also has a significant influence on the technical and economic parameters of open-pit mining operations. The traditional approach, which treats the centre of gravity as the optimal point for the RoM stock location, does not guarantee minimum haulage. In mathematics, the Fermat-Torricelli point is known to provide the minimum total distance to the vertices of a triangle. It is shown that haulage is minimized when the RoM stock location coincides with the Fermat-Torricelli point. For open-pit mining operations, the development of a method that determines the optimal RoM stock location for a working area from the known coordinates of distinguished points, on the basis of new weight factors, is of particular practical importance. A two-stage solution to the problem of determining the rational RoM stock location (with minimal transport work) for any number of faces is proposed. Such an optimal RoM stock location reduces the transport work by 10-20 %.
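
    A minimal sketch of the underlying computation: the weighted Fermat-Torricelli point (geometric median) can be found with the standard Weiszfeld iteration, with weights standing in for the rock mass hauled from each face; the coordinates and weights below are illustrative, and this is not the authors' two-stage formulae.

```python
import numpy as np

def weiszfeld(points, weights, iters=200, eps=1e-9):
    """Weighted geometric median by Weiszfeld's iteration (minimizes total haulage)."""
    x = np.average(points, axis=0, weights=weights)   # start at the centre of gravity
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - x, axis=1), eps)  # avoid division by zero
        w = weights / d
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < eps:
            break
        x = x_new
    return x

faces = np.array([[0.0, 0.0], [800.0, 120.0], [450.0, 600.0], [150.0, 500.0]])  # face coordinates (m)
volumes = np.array([1.0, 2.5, 1.8, 1.2])              # relative rock mass hauled from each face
print("optimal RoM stock point ~", weiszfeld(faces, volumes))
```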

  6. Determining the optimal mix of federal and contract fire crews: a case study from the Pacific Northwest.

    Science.gov (United States)

    Geoffrey H. Donovan

    2006-01-01

    Federal land management agencies in the United States are increasingly relying on contract crews as opposed to agency fire crews. Despite this increasing reliance on contractors, there have been no studies to determine what the optimal mix of contract and agency fire crews should be. A mathematical model is presented to address this question and is applied to a case...

  7. Optimized Analytical Method to Determine Gallic and Picric Acids in Pyrotechnic Samples by Using HPLC/UV (Reverse Phase)

    International Nuclear Information System (INIS)

    Garcia Alonso, S.; Perez Pastor, R. M.

    2013-01-01

    A study on the optimization and development of a chromatographic method for the determination of gallic and picric acids in pyrotechnic samples is presented. To achieve this, both the HPLC analytical conditions with diode detection and the extraction step for a selected sample were studied. (Author)

  8. DETERMINING A ROBUST D-OPTIMAL DESIGN FOR TESTING FOR DEPARTURE FROM ADDITIVITY IN A MIXTURE OF FOUR PFAAS

    Science.gov (United States)

    Our objective was to determine an optimal experimental design for a mixture of perfluoroalkyl acids (PFAAs) that is robust to the assumption of additivity. Of particular focus to this research project is whether an environmentally relevant mixture of four PFAAs with long half-liv...

  9. Determining a Robust D-Optimal Design for Testing for Departure from Additivity in a Mixture of Four Perfluoroalkyl Acids.

    Science.gov (United States)

    Our objective is to determine an optimal experimental design for a mixture of perfluoroalkyl acids (PFAAs) that is robust to the assumption of additivity. PFAAs are widely used in consumer products and industrial applications. The presence and persistence of PFAAs, especially in ...

  10. Functional Fit Evaluation to Determine Optimal Ease Requirements in Canadian Forces Chemical Protective Gloves

    National Research Council Canada - National Science Library

    Tremblay-Lutter, Julie

    1995-01-01

    A functional fit evaluation of the Canadian Forces (CF) chemical protective lightweight glove was undertaken in order to quantify the amount of ease required within the glove for optimal functional fit...

  11. Continuous determination of optimal cerebral perfusion pressure in traumatic brain injury

    NARCIS (Netherlands)

    Aries, M.J.H.; Czosnyka, Marek; Budohoski, Karol P.; Steiner, Luzius A.; Lavinio, Andrea; Kolias, Angelos G.; Hutchinson, Peter J.; Brady, Ken M.; Menon, David K.; Pickard, John D.; Smielewski, Peter

    Objectives: We have sought to develop an automated methodology for the continuous updating of optimal cerebral perfusion pressure (CPPopt) for patients after severe traumatic head injury, using continuous monitoring of cerebrovascular pressure reactivity. We then validated the CPPopt algorithm by

  12. DEVELOPMENT OF THE METHOD OF DETERMINING THE TARGET FUNCTION OF OPTIMIZATION OF POWER PLANT

    Directory of Open Access Journals (Sweden)

    O. Maksymovа

    2017-08-01

    Full Text Available The application of an optimization criterion based on the properties of target functions drawn from elements of technical, economic and thermodynamic analyses is proposed. Marginal cost indicators of energy for different energy products have also been identified. A target function for power plant optimization is proposed that considers the energy expenditure in the plant under study and in the plants that close the balance between energy generation and consumption.

  13. Transesophageal Echocardiography-Guided Epicardial Left Ventricular Lead Placement by Video-Assisted Thoracoscopic Surgery in Nonresponders to Biventricular Pacing and Previous Chest Surgery.

    Science.gov (United States)

    Schroeder, Carsten; Chung, Jane M; Mackall, Judith A; Cakulev, Ivan T; Patel, Aaron; Patel, Sunny J; Hoit, Brian D; Sahadevan, Jayakumar

    2018-06-14

    The aim of the study was to assess the feasibility, safety, and efficacy of transesophageal echocardiography-guided intraoperative left ventricular lead placement via a video-assisted thoracoscopic surgery approach in patients with failed conventional biventricular pacing. Twelve patients who could not have the left ventricular lead placed conventionally underwent epicardial left ventricular lead placement by video-assisted thoracoscopic surgery. Eight patients (66%) had previous chest surgery. Operative positioning was a modified far lateral supine exposure with 30-degree bed tilt, allowing for groin and sternal access. To determine the optimal left ventricular location for lead placement, the left ventricular surface was divided arbitrarily into nine segments. These segments were paced transpericardially using a hand-held malleable pacing probe, and the optimal site was verified by transesophageal echocardiography. The pacing leads were screwed into position via a limited pericardiotomy. The video-assisted thoracoscopic surgery approach was successful in all patients. Biventricular pacing was achieved in all patients, and all reported symptomatic benefit with a reduction in New York Heart Association class from III to I-II (P = 0.016). Baseline ejection fraction was 23 ± 3%; within 1-year follow-up, the ejection fraction increased to 32 ± 10% (P = 0.05). The mean follow-up was 566 days. The median length of hospital stay was 7 days, with chest tube removal between postoperative days 2 and 5. In patients who are nonresponders to conventional biventricular pacing, intraoperative left ventricular lead placement using anatomical and functional characteristics via a video-assisted thoracoscopic surgery approach is effective in improving heart failure symptoms. This optimized left ventricular lead placement is feasible and safe. Previous chest surgery is no longer an exclusion criterion for a video-assisted thoracoscopic surgery approach.

  14. PRODUCT PLACEMENT IN BRAND PROMOTION

    Directory of Open Access Journals (Sweden)

    Alicja Mikołajczyk

    2015-06-01

    Full Text Available Product placement can have a significant impact on brand awareness and customer purchasing decisions. The article discusses techniques applied in the mass media against the EU legal background and the opportunities it offers in reaching the target audience.

  15. A divide and conquer approach to determine the Pareto frontier for optimization of protein engineering experiments

    Science.gov (United States)

    He, Lu; Friedman, Alan M.; Bailey-Kellogg, Chris

    2016-01-01

    In developing improved protein variants by site-directed mutagenesis or recombination, there are often competing objectives that must be considered in designing an experiment (selecting mutations or breakpoints): stability vs. novelty, affinity vs. specificity, activity vs. immunogenicity, and so forth. Pareto optimal experimental designs make the best trade-offs between competing objectives. Such designs are not “dominated”; i.e., no other design is better than a Pareto optimal design for one objective without being worse for another objective. Our goal is to produce all the Pareto optimal designs (the Pareto frontier), in order to characterize the trade-offs and suggest designs most worth considering, but to avoid explicitly considering the large number of dominated designs. To do so, we develop a divide-and-conquer algorithm, PEPFR (Protein Engineering Pareto FRontier), that hierarchically subdivides the objective space, employing appropriate dynamic programming or integer programming methods to optimize designs in different regions. This divide-and-conquer approach is efficient in that the number of divisions (and thus calls to the optimizer) is directly proportional to the number of Pareto optimal designs. We demonstrate PEPFR with three protein engineering case studies: site-directed recombination for stability and diversity via dynamic programming, site-directed mutagenesis of interacting proteins for affinity and specificity via integer programming, and site-directed mutagenesis of a therapeutic protein for activity and immunogenicity via integer programming. We show that PEPFR is able to effectively produce all the Pareto optimal designs, discovering many more designs than previous methods. The characterization of the Pareto frontier provides additional insights into the local stability of design choices as well as global trends leading to trade-offs between competing criteria. PMID:22180081
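
    A minimal sketch of the Pareto-dominance test that defines the frontier, using a generic non-dominated filter rather than PEPFR's divide-and-conquer subdivision of the objective space; the candidate designs and scores are hypothetical.

```python
def pareto_frontier(designs):
    """designs: list of (name, obj1, obj2); both objectives are maximized."""
    frontier = []
    for name, a1, a2 in designs:
        dominated = any(
            (b1 >= a1 and b2 >= a2) and (b1 > a1 or b2 > a2)   # at least as good, better on one
            for _, b1, b2 in designs
        )
        if not dominated:
            frontier.append((name, a1, a2))
    return frontier

# hypothetical designs scored on, e.g., stability vs. diversity
candidates = [("d1", 0.9, 0.2), ("d2", 0.7, 0.7), ("d3", 0.4, 0.9), ("d4", 0.5, 0.5)]
print(pareto_frontier(candidates))   # d4 is dominated by d2 and is dropped
```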

  16. Realistic Approach for Phasor Measurement Unit Placement

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thøgersen, Paul

    2015-01-01

    This paper presents a realistic cost-effective model for optimal placement of phasor measurement units (PMUs) for complete observability of a power system considering practical cost implications. The proposed model considers hidden or otherwise unaccounted practical costs involved in PMU installation. Consideration of these hidden but significant and integral parts of the total PMU installation cost was inspired by practical experience on a real-life project. The proposed model focuses on the minimization of total realistic costs instead of the widely used theoretical concept of a minimal number of PMUs. The proposed model has been applied to the IEEE 14-bus, IEEE 24-bus, IEEE 30-bus and New England 39-bus systems, a large 300-bus power system and the real-life Danish grid. A comparison of the presented results with those reported by traditional methods has also been shown to justify the effectiveness
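
    A minimal sketch of the observability constraint that underlies PMU placement (not the authors' realistic-cost model): a PMU at a bus observes that bus and its neighbours, and a greedy weighted set cover selects buses until every bus is observed; the 7-bus network and costs are hypothetical.

```python
def greedy_pmu_placement(adjacency, cost):
    """adjacency: {bus: set of neighbour buses}; cost: {bus: installation cost}."""
    unobserved = set(adjacency)
    chosen = []
    while unobserved:
        # pick the bus that covers the most still-unobserved buses per unit cost
        best = max(adjacency,
                   key=lambda b: len((adjacency[b] | {b}) & unobserved) / cost[b])
        chosen.append(best)
        unobserved -= adjacency[best] | {best}
    return chosen

# hypothetical 7-bus system with uniform installation costs
adj = {1: {2, 5}, 2: {1, 3, 5}, 3: {2, 4}, 4: {3, 5, 7}, 5: {1, 2, 4, 6}, 6: {5}, 7: {4}}
print(greedy_pmu_placement(adj, {b: 1.0 for b in adj}))
```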

  17. Balance of Interactions Determines Optimal Survival in Multi-Species Communities.

    Directory of Open Access Journals (Sweden)

    Anshul Choudhary

    Full Text Available We consider a multi-species community modelled as a complex network of populations, where the links are given by a random asymmetric connectivity matrix J, with fraction 1 - C of zero entries, where C reflects the over-all connectivity of the system. The non-zero elements of J are drawn from a Gaussian distribution with mean μ and standard deviation σ. The signs of the elements Jij reflect the nature of density-dependent interactions, such as predator-prey, mutualism or competition, and their magnitudes reflect the strength of the interaction. In this study we try to uncover the broad features of the inter-species interactions that determine the global robustness of this network, as indicated by the average number of active nodes (i.e., non-extinct species) in the network and the total population, reflecting the biomass yield. We find that the network transitions from a completely extinct system to one where all nodes are active, as the mean interaction strength goes from negative to positive, with the transition getting sharper for increasing C and decreasing σ. We also find that the total population displays distinct non-monotonic scaling behaviour with respect to the product μC, implying that survival is dependent not merely on the number of links, but rather on the combination of the sparseness of the connectivity matrix and the net interaction strength. Interestingly, in an intermediate window of positive μC, the total population is maximal, indicating that too little or too much positive interaction is detrimental to survival. Rather, the total population levels are optimal when the network has intermediate net positive connection strengths. At the local level we observe marked qualitative changes in dynamical patterns, ranging from anti-phase clusters of period 2 cycles and chaotic bands, to fixed points, under the variation of mean μ of the interaction strengths. We also study the correlation between synchronization and survival

  18. Automated Fiber Placement of Advanced Materials (Preprint)

    National Research Council Canada - National Science Library

    Benson, Vernon M; Arnold, Jonahira

    2006-01-01

    .... ATK has been working with the Air Force Research Laboratory to foster improvements in the BMI materials and in the fiber placement processing techniques to achieve rates comparable to Epoxy placement rates...

  19. Angioplasty and stent placement - carotid artery

    Science.gov (United States)

    medlineplus.gov/ency/article/002953.htm: Angioplasty and stent placement - carotid artery. Covers surgery to remove plaque buildup (endarterectomy) and carotid angioplasty with stent placement. Description: Carotid angioplasty and stenting (CAS) is ...

  20. DETERMINATION ALGORITHM OF OPTIMAL GEOMETRICAL PARAMETERS FOR COMPONENTS OF FREIGHT CARS ON THE BASIS OF GENERALIZED MATHEMATICAL MODELS

    Directory of Open Access Journals (Sweden)

    O. V. Fomin

    2013-10-01

    Full Text Available Purpose. To present the features and an example of use of the proposed algorithm for determining optimal geometrical parameters of freight car components on the basis of generalized mathematical models, implemented on a computer. Methodology. The developed approach to the search for optimal geometrical parameters can be described as determining the optimal decision from a selected set of possible variants. Findings. The presented application example of the proposed algorithm proved its operational capacity and efficiency of use. Originality. The procedure for determining optimal geometrical parameters of freight car components on the basis of generalized mathematical models was formalized in the paper. Practical value. Practical introduction of the research results for universal open cars allows one to reduce the container (tare) weight of the design and accordingly to increase the carrying capacity by almost 100 kg, with improved strength characteristics. Taking into account the size of the car fleet, this will provide a considerable economic effect in production and operation. The proposed approach is oriented towards widely distributed software packages (for example, Microsoft Excel), which are used by the technical services of most enterprises, and does not require additional capital investment (acquisition of specialized programs and corresponding technical staff training). This confirms the correctness of the research direction. The proposed algorithm can be used for solving other optimization tasks on the basis of generalized mathematical models.

  1. Determination of optimal angiographic viewing angles: Basic principles and evaluation study

    International Nuclear Information System (INIS)

    Dumay, A.C.M.; Reiber, J.H.C.; Gerbrands, J.J.

    1994-01-01

    Foreshortening of vessel segments in angiographic (biplane) projection images may cause misinterpretation of the extent and degree of coronary artery disease. The views in which the object of interest is visualized with minimum foreshortening are called optimal views. In this paper the authors present a complete approach for obtaining such views with computer-assisted techniques. The object of interest is first visualized in two arbitrary views. Two landmarks of the object are manually defined in the two projection images. With complete information on the projection geometry, the vector representation of the object in three-dimensional space is computed. This vector is perpendicular to a plane, and the views lying in this plane are called optimal. The user has one degree of freedom in defining a set of optimal biplane views: the angle between the central beams of the imaging systems can be chosen freely. The computation of the orientation of the object and of the corresponding optimal biplane views has been evaluated with a simple hardware phantom. The mean and standard deviation of the overall errors in the calculation of the optimal angulation angles were 1.8 degrees and 1.3 degrees, respectively, when the user defined a rotation angle
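
    A minimal sketch of the underlying geometry, not the authors' full biplane reconstruction: given the 3-D direction of a vessel segment, the foreshortening of a candidate viewing direction is the projected fraction of the segment lost, and optimal views are those perpendicular to the segment; the vessel direction and angle grid are illustrative.

```python
import numpy as np

def foreshortening(segment_dir, view_dir):
    """0 = no foreshortening (view perpendicular to segment), 1 = fully foreshortened."""
    d = segment_dir / np.linalg.norm(segment_dir)
    v = view_dir / np.linalg.norm(view_dir)
    return abs(d @ v)

def view_vector(rotation_deg, angulation_deg):
    """Unit viewing direction for a pair of gantry angles (illustrative convention)."""
    r, a = np.radians(rotation_deg), np.radians(angulation_deg)
    return np.array([np.cos(a) * np.sin(r), np.sin(a), np.cos(a) * np.cos(r)])

segment = np.array([1.0, 0.5, 0.2])          # hypothetical 3-D vessel direction
best = min(
    ((rot, ang) for rot in range(-90, 91, 5) for ang in range(-45, 46, 5)),
    key=lambda pair: foreshortening(segment, view_vector(*pair)),
)
print("least-foreshortened view (rotation, angulation) ~", best)
```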

  2. Optimization of wet digestion procedure of blood and tissue for selenium determination by means of 75Se tracer

    International Nuclear Information System (INIS)

    Holynska, B.; Lipinska-Kalita, K.

    1977-01-01

    Selenium-75 tracer has been used for optimization of the analytical procedure for selenium determination in blood and tissue. A wet digestion procedure and reduction of selenium to its elemental form with tellurium as coprecipitant have been tested. The use of a mixture of perchloric and sulphuric acid with sodium molybdate for the wet digestion of organic matter, followed by reduction of selenium to its elemental form by a mixture of stannous chloride and hydroxylamine hydrochloride, results in very good recovery of selenium. The recovery of selenium obtained with the optimized analytical procedure amounts to 95% and the precision is 4.2%. (T.I.)

  3. The determination of optimal cells disintegration method of Candida albicans and Candida tropicalis fungals

    Directory of Open Access Journals (Sweden)

    M. V. Rybalkyn

    2014-08-01

    Candida tropicalis fungi were prepared separately on Sabouraud agar. Incubation was carried out at 25 ± 2 °C for 6 days, after which the cultures were washed with 25 ml of sterile 0.9% isotonic sodium chloride solution. The microbiological purity of the cell suspensions of Candida albicans and Candida tropicalis fungi was determined visually and by microscopy. The washings were then centrifuged at 3000 r/min for 10 min. The resulting fungal precipitate was adjusted with sterile 0.9% isotonic sodium chloride solution to (8.5-9)×10^8 cells in 1 ml of standardized suspension, with the cells counted in a Goryaev chamber. For cell disruption of the fungi, ultrasound, grinding with abrasive material and freeze-thawing were used. The key parameters of the ultrasonic disintegration were: frequency 22 kHz, intensity 5 W/cm2, temperature 25 ± 2 °C, time 15 minutes, in 10 ml of sterile 0.9% isotonic sodium chloride solution. Grinding of the fungal cells was performed with a mortar and pestle using quartz sand and biomaterial in a 1:1 ratio and 10 ml of sterile 0.9% isotonic sodium chloride solution. Freezing and thawing were performed in 10 ml of sterile 0.9% isotonic sodium chloride solution at temperatures of -25 ± 2 °C and 25 ± 2 °C. In each case the amount of protein and polysaccharides was calculated, and for a more detailed analysis the monosaccharide composition was determined. This made it possible to establish the optimal method of cell disruption of Candida albicans and Candida tropicalis fungi, namely ultrasonic disintegration. In the future we plan to study the immunological properties of the proteins and polysaccharides in animals.

  4. Determining the optimal load for jump squats: a review of methods and calculations.

    Science.gov (United States)

    Dugan, Eric L; Doyle, Tim L A; Humphries, Brendan; Hasson, Christopher J; Newton, Robert U

    2004-08-01

    There has been an increasing volume of research focused on the load that elicits maximum power output during jump squats. Because of a lack of standardization for data collection and analysis protocols, results of much of this research are contradictory. The purpose of this paper is to examine why differing methods of data collection and analysis can lead to conflicting results for maximum power and associated optimal load. Six topics relevant to measurement and reporting of maximum power and optimal load are addressed: (a) data collection equipment, (b) inclusion or exclusion of body weight force in calculations of power, (c) free weight versus Smith machine jump squats, (d) reporting of average versus peak power, (e) reporting of load intensity, and (f) instructions given to athletes/ participants. Based on this information, a standardized protocol for data collection and reporting of jump squat power and optimal load is presented.
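
    A minimal sketch of point (b), the inclusion or exclusion of the body-weight force: peak power is computed from the same synthetic ground-reaction-force signal with and without the body-weight term; the mass and force waveform are placeholders, not measured data.

```python
import numpy as np

g = 9.81
mass = 85.0                                           # kg, athlete plus bar (hypothetical)
t = np.linspace(0.0, 0.6, 601)
force = mass * g + 1200.0 * np.sin(np.pi * t / 0.6)   # synthetic vertical ground reaction force (N)

accel = (force - mass * g) / mass                     # net force / mass (Newton's second law)
velocity = np.cumsum(accel) * (t[1] - t[0])           # numerical integration of acceleration

power_with_bw = force * velocity                      # body-weight force included
power_without_bw = (force - mass * g) * velocity      # body-weight force excluded
print(f"peak power: {power_with_bw.max():.0f} W (included) vs "
      f"{power_without_bw.max():.0f} W (excluded)")
```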

  5. Two-phase strategy of controlling motor coordination determined by task performance optimality.

    Science.gov (United States)

    Shimansky, Yury P; Rand, Miya K

    2013-02-01

    A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.

  6. [Clinical research of using optimal compliance to determine positive end-expiratory pressure].

    Science.gov (United States)

    Xu, Lei; Feng, Quan-sheng; Lian, Fu; Shao, Xin-hua; Li, Zhi-bo; Wang, Zhi-yong; Li, Jun

    2012-07-01

    To observe the availability and security of optimal compliance strategy to titrate the optimal positive end-expiratory pressure (PEEP), compared with quasi-static pressure-volume curve (P-V curve) traced by low-flow method. Fourteen patients received mechanical ventilation with acute respiratory distress syndrome (ARDS) admitted in intensive care unit (ICU) of Tianjin Third Central Hospital from November 2009 to December 2010 were divided into two groups(n = 7). The quasi-static P-V curve method and the optimal compliance titration were used to set the optimal PEEP respectively, repeated 3 times in a row. The optimal PEEP and the consistency of repeated experiments were compared between groups. The hemodynamic parameters, oxygenation index (OI), lung compliance (C), cytokines and pulmonary surfactant-associated protein D (SP-D) concentration in plasma before and 2, 4, and 6 hours after the experiment were observed in each group. (1) There were no significant differences in gender, age and severity of disease between two groups. (2)The optimal PEEP [cm H(2)O, 1 cm H(2)O=0.098 kPa] had no significant difference between quasi-static P-V curve method group and the optimal compliance titration group (11.53 ± 2.07 vs. 10.57 ± 0.87, P>0.05). The consistency of repeated experiments in quasi-static P-V curve method group was poor, the slope of the quasi-static P-V curve in repeated experiments showed downward tendency. The optimal PEEP was increasing in each measure. There was significant difference between the first and the third time (10.00 ± 1.58 vs. 12.80 ± 1.92, P vs. 93.71 ± 5.38, temperature: 38.05 ± 0.73 vs. 36.99 ± 1.02, IL-6: 144.84 ± 23.89 vs. 94.73 ± 5.91, TNF-α: 151.46 ± 46.00 vs. 89.86 ± 13.13, SP-D: 33.65 ± 8.66 vs. 16.63 ± 5.61, MAP: 85.47 ± 9.24 vs. 102.43 ± 8.38, CCI: 3.00 ± 0.48 vs. 3.81 ± 0.81, OI: 62.00 ± 21.45 vs. 103.40 ± 37.27, C: 32.10 ± 2.92 vs. 49.57 ± 7.18, all P safety and usability.

  7. The same number of optimized parameters scheme for determining intermolecular interaction energies

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Ettenhuber, Patrick; Eriksen, Janus Juul

    2015-01-01

    We propose the Same Number Of Optimized Parameters (SNOOP) scheme as an alternative to the counterpoise method for treating basis set superposition errors in calculations of intermolecular interaction energies. The key point of the SNOOP scheme is to enforce that the number of optimized wave...... as numerically. Numerical results for second-order Møller-Plesset perturbation theory (MP2) and coupled-cluster with single, double, and approximate triple excitations (CCSD(T)) show that the SNOOP scheme in general outperforms the uncorrected and counterpoise approaches. Furthermore, we show that SNOOP...

  8. Mathematics Placement at the University of Illinois

    Science.gov (United States)

    Ahlgren Reddy, Alison; Harper, Marc

    2013-01-01

    Data from the ALEKS-based placement program at the University of Illinois is presented visually in several ways. The placement exam (an ALEKS assessment) contains precise item-specific information and the data show many interesting properties of the student populations of the placement courses, which include Precalculus, Calculus, and Business…

  9. A Cognitive Model of College Mathematics Placement

    Science.gov (United States)

    1989-08-01

    study focused on the precalculus-calculus placement decision. The Cognitive model uses novel, or analysis level, placement test items in an attempt to ... relative to the requirements of a precalculus course. Placement test scores may be partitioned to give analysis and non-analysis subtest scores which can ...

  10. A regulatory adjustment process for the determination of the optimal percentage requirement in an electricity market with Tradable Green Certificates

    International Nuclear Information System (INIS)

    Currier, Kevin M.

    2013-01-01

    A system of Tradable Green Certificates (TGCs) is a market-based subsidy scheme designed to promote electricity generation from renewable energy sources such as wind power. Under a TGC system, the principal policy instrument is the “percentage requirement,” which stipulates the percentage of total electricity production (“green” plus “black”) that must be obtained from renewable sources. In this paper, we propose a regulatory adjustment process that a regulator can employ to determine the socially optimal percentage requirement, explicitly accounting for environmental damages resulting from black electricity generation. - Highlights: • A Tradable Green Certificate (TGC) system promotes energy production from renewable sources. • We consider an electricity oligopoly operated under a TGC system. • Welfare analysis must account for damages from “black” electricity production. • We characterize the welfare maximizing (optimal) “percentage requirement.” • We present a regulatory adjustment process that computes the optimal percentage requirement iteratively

  11. Intraoperative Factors that Predict the Successful Placement of Essure Microinserts.

    Science.gov (United States)

    Arthuis, Chloé J; Simon, Emmanuel G; Hébert, Thomas; Marret, Henri

    To determine whether the number of coils visualized in the uterotubal junction at the end of hysteroscopic microinsert placement predicts successful tubal occlusion. Cohort retrospective study (Canadian Task Force classification II-2). Department of obstetrics and gynecology in a teaching hospital. One hundred fifty-three women underwent tubal microinsert placement for permanent birth control from 2010 through 2014. The local institutional review board approved this study. Three-dimensional transvaginal ultrasound (3D TVU) was routinely performed 3 months after hysteroscopic microinsert placement to check position in the fallopian tube. The correlation between the number of coils visible at the uterotubal junction at the end of the hysteroscopic microinsert placement procedure and the device position on the 3-month follow-up 3D TVU in 141 patients was evaluated. The analysis included 276 microinserts placed during hysteroscopy. The median number of coils visible after the hysteroscopic procedure was 4 (interquartile range, 3-5). Devices for 30 patients (21.3%) were incorrectly positioned according to the 3-month follow-up 3D TVU, and hysterosalpingography was recommended. In those patients the median number of coils was in both the right (interquartile range, 2-4) and left (interquartile range, 1-3) uterotubal junctions. The number of coils visible at the uterotubal junction at the end of the placement procedure was the only factor that predicted whether the microinsert was well positioned at the 3-month 3D TVU confirmation (odds ratio, .44; 95% confidence interval, .28-.63). When 5 or more coils were visible, no incorrectly placed microinsert could be seen on the follow-up 3D TVU; the negative predictive value was 100%. No pregnancies were reported. The number of coils observed at the uterotubal junction at the time of microinsert placement should be considered a significant predictive factor of accurate and successful microinsert placement. Copyright © 2017

  12. Determination of optimal LWR containment design, excluding accidents more severe than Class 8

    International Nuclear Information System (INIS)

    Cave, L.; Min, T.K.

    1980-04-01

    Information is presented concerning the restrictive effect of existing NRC requirements; definition of possible targets for containment; possible containment systems for LWR; optimization of containment design for class 3 through class 8 accidents (PWR); estimated costs of some possible containment arrangements for PWR relative to the standard dry containment system; estimated costs of BWR containment

  13. Fetal porcine ventral mesencephalon graft. Determination of the optimal gestational age for implantation in Parkinsonian patients

    NARCIS (Netherlands)

    HogenEsch, RI; Koopmans, J; Copray, JCVM; van Roon, WMC; Kema, [No Value; Molenaar, G; Go, KG; Staal, MJ

    Human fetal ventral mesencephalon tissue has been used as dopaminergic striatal implants in Parkinsonian patients, so far with variable effects. Fetuses from animals that breed in large litters, e.g., pigs, have been considered as alternative donors of dopaminergic tissue. The optimal gestational

  14. From determinism and probability to chaos: chaotic evolution towards philosophy and methodology of chaotic optimization.

    Science.gov (United States)

    Pei, Yan

    2015-01-01

    We present and discuss philosophy and methodology of chaotic evolution that is theoretically supported by chaos theory. We introduce four chaotic systems, that is, logistic map, tent map, Gaussian map, and Hénon map, in a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previous proposed CE algorithm with logistic map and two canonical differential evolution (DE) algorithms, we analyse and discuss optimization performance of CE algorithm. An investigation on the relationship between optimization capability of CE algorithm and distribution characteristic of chaotic system is conducted and analysed. From evaluation result, we find that distribution of chaotic system is an essential factor to influence optimization performance of CE algorithm. We propose a new interactive EC (IEC) algorithm, interactive chaotic evolution (ICE) that replaces fitness function with a real human in CE algorithm framework. There is a paired comparison-based mechanism behind CE search scheme in nature. A simulation experimental evaluation is conducted with a pseudo-IEC user to evaluate our proposed ICE algorithm. The evaluation result indicates that ICE algorithm can obtain a significant better performance than or the same performance as interactive DE. Some open topics on CE, ICE, fusion of these optimization techniques, algorithmic notation, and others are presented and discussed.
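    The record above describes search driven by chaotic maps such as the logistic map. As a rough illustration of the idea only (not the authors' algorithm), the sketch below perturbs a candidate population with a logistic-map sequence and keeps improvements; the objective function, population size, and step scaling are all illustrative assumptions.

```python
import numpy as np

def sphere(x):
    """Toy objective to minimize (stand-in for a real fitness function)."""
    return float(np.sum(x ** 2))

def chaotic_evolution(obj, dim=5, pop=20, iters=300, mu=4.0, lo=-5.0, hi=5.0, seed=1):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(pop, dim))       # candidate population
    z = rng.uniform(0.1, 0.9, size=(pop, dim))     # logistic-map states in (0, 1)
    fit = np.array([obj(x) for x in X])
    for _ in range(iters):
        z = mu * z * (1.0 - z)                     # logistic map drives the search
        step = 2.0 * z - 1.0                       # rescale chaotic value to (-1, 1)
        trial = np.clip(X + 0.5 * step, lo, hi)    # perturbation scale is an assumption
        trial_fit = np.array([obj(x) for x in trial])
        better = trial_fit < fit                   # greedy one-to-one selection
        X[better], fit[better] = trial[better], trial_fit[better]
    best = int(np.argmin(fit))
    return X[best], float(fit[best])

if __name__ == "__main__":
    x_best, f_best = chaotic_evolution(sphere)
    print("best objective value:", round(f_best, 6))
```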

  15. Determining the Optimal Number of Spinal Manipulation Sessions for Chronic Low-Back Pain

    Science.gov (United States)

    Findings from the largest and ... study of spinal manipulative therapy (SMT) for chronic low-back pain suggest that 12 sessions of SMT may be the ...

  16. Determining the Optimal Values of Exponential Smoothing Constants--Does Solver Really Work?

    Science.gov (United States)

    Ravinder, Handanhal V.

    2013-01-01

    A key issue in exponential smoothing is the choice of the values of the smoothing constants used. One approach that is becoming increasingly popular in introductory management science and operations management textbooks is the use of Solver, an Excel-based non-linear optimizer, to identify values of the smoothing constants that minimize a measure…
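    The record above concerns choosing smoothing constants by nonlinear optimization. A minimal sketch of the same idea outside Excel, assuming simple exponential smoothing, a made-up demand series, and the sum of squared one-step-ahead errors as the criterion:

```python
import numpy as np
from scipy.optimize import minimize_scalar

demand = np.array([42., 40., 43., 41., 45., 47., 46., 50., 49., 51.])  # illustrative series

def sse(alpha, y):
    """Sum of squared one-step-ahead errors of simple exponential smoothing."""
    forecast = y[0]            # common initialization choice (an assumption)
    total = 0.0
    for obs in y[1:]:
        total += (obs - forecast) ** 2
        forecast = alpha * obs + (1 - alpha) * forecast
    return total

res = minimize_scalar(sse, bounds=(0.0, 1.0), method="bounded", args=(demand,))
print(f"optimal alpha ~ {res.x:.3f}, SSE = {res.fun:.2f}")
```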

  17. From Determinism and Probability to Chaos: Chaotic Evolution towards Philosophy and Methodology of Chaotic Optimization

    Science.gov (United States)

    2015-01-01

    We present and discuss philosophy and methodology of chaotic evolution that is theoretically supported by chaos theory. We introduce four chaotic systems, that is, logistic map, tent map, Gaussian map, and Hénon map, in a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previous proposed CE algorithm with logistic map and two canonical differential evolution (DE) algorithms, we analyse and discuss optimization performance of CE algorithm. An investigation on the relationship between optimization capability of CE algorithm and distribution characteristic of chaotic system is conducted and analysed. From evaluation result, we find that distribution of chaotic system is an essential factor to influence optimization performance of CE algorithm. We propose a new interactive EC (IEC) algorithm, interactive chaotic evolution (ICE) that replaces fitness function with a real human in CE algorithm framework. There is a paired comparison-based mechanism behind CE search scheme in nature. A simulation experimental evaluation is conducted with a pseudo-IEC user to evaluate our proposed ICE algorithm. The evaluation result indicates that ICE algorithm can obtain a significant better performance than or the same performance as interactive DE. Some open topics on CE, ICE, fusion of these optimization techniques, algorithmic notation, and others are presented and discussed. PMID:25879067

  18. From Determinism and Probability to Chaos: Chaotic Evolution towards Philosophy and Methodology of Chaotic Optimization

    Directory of Open Access Journals (Sweden)

    Yan Pei

    2015-01-01

    Full Text Available We present and discuss philosophy and methodology of chaotic evolution that is theoretically supported by chaos theory. We introduce four chaotic systems, that is, logistic map, tent map, Gaussian map, and Hénon map, in a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE algorithms. By comparing our previous proposed CE algorithm with logistic map and two canonical differential evolution (DE algorithms, we analyse and discuss optimization performance of CE algorithm. An investigation on the relationship between optimization capability of CE algorithm and distribution characteristic of chaotic system is conducted and analysed. From evaluation result, we find that distribution of chaotic system is an essential factor to influence optimization performance of CE algorithm. We propose a new interactive EC (IEC algorithm, interactive chaotic evolution (ICE that replaces fitness function with a real human in CE algorithm framework. There is a paired comparison-based mechanism behind CE search scheme in nature. A simulation experimental evaluation is conducted with a pseudo-IEC user to evaluate our proposed ICE algorithm. The evaluation result indicates that ICE algorithm can obtain a significant better performance than or the same performance as interactive DE. Some open topics on CE, ICE, fusion of these optimization techniques, algorithmic notation, and others are presented and discussed.

  19. A model based on stochastic dynamic programming for determining China's optimal strategic petroleum reserve policy

    International Nuclear Information System (INIS)

    Zhang Xiaobing; Fan Ying; Wei Yiming

    2009-01-01

    China's Strategic Petroleum Reserve (SPR) is currently being prepared. But how large the optimal stockpile size for China should be, what the best acquisition strategies are, how to release the reserve if a disruption occurs, and other related issues still need to be studied in detail. In this paper, we develop a stochastic dynamic programming model based on a total potential cost function of establishing SPRs to evaluate the optimal SPR policy for China. Using this model, empirical results are presented for the optimal size of China's SPR and the best acquisition and drawdown strategies for a few specific cases. The results show that with comprehensive consideration, the optimal SPR size for China is around 320 million barrels. This size is equivalent to about 90 days of net oil import amount in 2006 and should be reached in the year 2017, three years earlier than the national goal, which implies that the need for China to fill the SPR is probably more pressing; the best stockpile release action in a disruption is related to the disruption levels and expected continuation probabilities. The information provided by the results will be useful for decision makers.
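    As a loose illustration of the stochastic-dynamic-programming idea described above (not the paper's cost function), the toy model below chooses per-period reserve purchases against a fixed disruption probability; all grid sizes, prices, penalties, and probabilities are invented for the sketch.

```python
import numpy as np

# Toy finite-horizon stochastic DP for a strategic reserve (illustrative only).
# Each period we choose how much to buy; with probability P a disruption of
# size D draws on the reserve and any unmet amount is penalised.
T, STEP, S_MAX, D = 12, 10, 120, 40          # periods, grid step, capacity, disruption size
P, PRICE, HOLD, PEN = 0.1, 70.0, 1.0, 250.0  # prob., unit buy cost, holding cost, shortage penalty
levels = np.arange(0, S_MAX + STEP, STEP)
idx = {s: i for i, s in enumerate(levels)}

V = np.zeros(len(levels))                    # terminal value function
policy = np.zeros((T, len(levels)))
for t in reversed(range(T)):
    V_new = np.empty_like(V)
    for i, s in enumerate(levels):
        feasible = levels[levels + s <= S_MAX]       # feasible purchase amounts
        costs = []
        for a in feasible:
            s_after = s + a
            hit = PEN * max(D - s_after, 0) + V[idx[max(s_after - D, 0)]]
            calm = V[idx[s_after]]
            costs.append(PRICE * a + HOLD * s_after + P * hit + (1 - P) * calm)
        best = int(np.argmin(costs))
        V_new[i], policy[t, i] = costs[best], feasible[best]
    V = V_new

print("first-period purchase from an empty reserve:", policy[0, 0], "units")
```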

  20. Determination of optimal parameters for the composting of solid organic wastes

    Energy Technology Data Exchange (ETDEWEB)

    Verdonck, O.; Penninck, R.; Boodt, M. De (Laboratory of Soil Physics, Soil Conditioning and Horticultural, Soil Science, Faculty of Agriculture, State University of Gent, Belgium); Vleeschauwer, D. De (Public Waste Company, Mechelen, Belgium); Berthelsen, L.; Wood Pedersen, J. (eds.)

    1983-01-01

    In order to obtain the best possible conditions for composting, carbon-rich wastes should be mixed with nitrogen-rich materials to achieve an equilibrated nitrogen concentration. Urea and ammonia addition results in optimal composting; phosphates are not necessary. The ideal pH is about 7, and the moisture content must be in equilibrium with aeration in the compost so that excess moisture does not decrease microbiological activity.

  1. Multivariate optimization of headspace trap for furan and furfural simultaneous determination in sponge cake.

    Science.gov (United States)

    Cepeda-Vázquez, Mayela; Blumenthal, David; Camel, Valérie; Rega, Barbara

    2017-03-01

    Furan, a possibly carcinogenic compound to humans, and furfural, a naturally occurring volatile contributing to aroma, can both be found in thermally treated foods. These process-induced compounds, formed by close reaction pathways, play an important role as markers of food safety and quality. A method capable of simultaneously quantifying both molecules is thus highly relevant for developing mitigation strategies and preserving the sensory properties of food at the same time. We have developed a unique reliable and sensitive headspace trap (HS trap) extraction method coupled to GC-MS for the simultaneous quantification of furan and furfural in a solid processed food (sponge cake). HS trap extraction has been optimized using an optimal design of experiments (O-DOE) approach, considering four instrumental and two sample preparation variables, as well as a blocking factor identified during preliminary assays. Multicriteria and multiple response optimization was performed based on a desirability function, yielding the following conditions: thermostatting temperature, 65°C; thermostatting time, 15 min; number of pressurization cycles, 4; dry purge time, 0.9 min; water/sample amount ratio (dry basis), 16; and total amount (water + sample amount, dry basis), 10 g. The performances of the optimized method were also assessed: repeatability (RSD: ≤3.3% for furan and ≤2.6% for furfural), intermediate precision (RSD: 4.0% for furan and 4.3% for furfural), linearity (R²: 0.9957 for furan and 0.9996 for furfural), LOD (0.50 ng furan g⁻¹ sample dry basis and 10.2 ng furfural g⁻¹ sample dry basis), LOQ (0.99 ng furan g⁻¹ sample dry basis and 41.1 ng furfural g⁻¹ sample dry basis). A matrix effect was observed mainly for furan. Finally, the optimized method was applied to other sponge cakes with different matrix characteristics and levels of analytes. Copyright © 2016. Published by Elsevier B.V.
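    The record above reports multi-response optimization via a desirability function. A minimal sketch of Derringer-type desirabilities and their geometric-mean aggregation, with made-up response values and limits standing in for the peak-area, resolution, and run-time criteria:

```python
import numpy as np

def d_larger_is_better(y, low, target, weight=1.0):
    """Derringer-type desirability for a response to be maximized."""
    d = (y - low) / (target - low)
    return np.clip(d, 0.0, 1.0) ** weight

def d_smaller_is_better(y, target, high, weight=1.0):
    """Desirability for a response to be minimized (e.g. analysis time)."""
    d = (high - y) / (high - target)
    return np.clip(d, 0.0, 1.0) ** weight

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, dtype=float)
    return float(np.prod(ds) ** (1.0 / len(ds)))

# Illustrative responses for one candidate extraction setting (made-up numbers):
d_area = d_larger_is_better(y=8.2e5, low=1e5, target=1e6)      # peak area
d_res  = d_larger_is_better(y=1.8,   low=1.0, target=2.0)      # resolution factor
d_time = d_smaller_is_better(y=22.0, target=15.0, high=30.0)   # run time, min
print("overall desirability:", round(overall_desirability([d_area, d_res, d_time]), 3))
```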

  2. How do we facilitate international clinical placements for nursing students: A cross-sectional exploration of the structure, aims and objectives of placements.

    Science.gov (United States)

    Browne, Caroline A; Fetherston, Catherine M

    2018-07-01

    International clinical placements provide undergraduate students with a unique and complex clinical learning environment, to explore cultural awareness, experience different health care settings and achieve clinical competencies. Higher education institutions need to consider how to structure these placements to ensure appropriate and achievable aims and learning outcomes. In this study we described the structure, aims and learning outcomes associated with international clinical placement opportunities currently undertaken by Australian undergraduate nursing students in the Asia region. Forty eight percent (n = 18) of the institutions invited responded. Eight institutions met the inclusion criteria, one of which offered three placements in the region, resulting in 10 international placements for which data were provided. An online survey tool was used to collect data during August and September 2015 on international clinical placements conducted by the participating universities. Descriptive data on type and numbers of placements is presented, along with results from the content analysis conducted to explore data from open ended questions on learning aims and outcomes. One hundred students undertook 10 International Clinical Placements offered in the Asian region by eight universities. Variations across placements were found in the length of placement, the number of students participating, facilitator to student ratios and assessment techniques used. Five categories related to the aims of the programs were identified: 'becoming culturally aware through immersion', 'working with the community to promote health', 'understanding the role of nursing within the health care setting', 'translating theory into professional clinical practice', and 'developing relationships in international learning environments'. Four categories related to learning outcomes were identified: 'understanding healthcare and determinants of health', 'managing challenges', 'understanding the

  3. Optimized and validated high-performance liquid chromatography method for the determination of deoxynivalenol and aflatoxins in cereals.

    Science.gov (United States)

    Skendi, Adriana; Irakli, Maria N; Papageorgiou, Maria D

    2016-04-01

    A simple, sensitive and accurate analytical method was optimized and developed for the determination of deoxynivalenol and aflatoxins in cereals intended for human consumption using high-performance liquid chromatography with diode array and fluorescence detection and a photochemical reactor for enhanced detection. A response surface methodology, using a fractional central composite design, was carried out for optimization of the water percentage at the beginning of the run (X1, 80-90%), the level of acetonitrile at the end of gradient system (X2, 10-20%) with the water percentage fixed at 60%, and the flow rate (X3, 0.8-1.2 mL/min). The studied responses were the chromatographic peak area, the resolution factor and the time of analysis. Optimal chromatographic conditions were: X1 = 80%, X2 = 10%, and X3 = 1 mL/min. Following a double sample extraction with water and a mixture of methanol/water, mycotoxins were rapidly purified by an optimized solid-phase extraction protocol. The optimized method was further validated with respect to linearity (R(2) >0.9991), sensitivity, precision, and recovery (90-112%). The application to 23 commercial cereal samples from Greece showed contamination levels below the legally set limits, except for one maize sample. The main advantages of the developed method are the simplicity of operation and the low cost. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. DETERMINATION OF OPTIMAL CONTOURS OF OPEN PIT MINE DURING OIL SHALE EXPLOITATION, BY MINEX 5.2.3. PROGRAM

    Directory of Open Access Journals (Sweden)

    Miroslav Ignjatović

    2013-04-01

    Full Text Available By examining and determining the optimal solution for the technological processes of exploitation and oil shale processing at the Aleksinac site, and based on the adopted technical solution for oil shale exploitation, a technical solution was derived that optimizes the contour of the newly defined open pit mine. Worldwide, this problem is solved using computer programs that have become the established standard for quick and efficient solutions. One such program, which can be used for determination of the optimal contours of open pit mines, is Minex 5.2.3, produced by the Surpac Minex Group Pty Ltd in Australia and applied at the Mining and Metallurgy Institute Bor (license numbers SSI - 24765 and SSI - 24766). In this study, the authors performed 11 optimizations of deposit geo-models in Minex 5.2.3, based on test results obtained in the soil mechanics laboratory of the Mining and Metallurgy Institute Bor on samples from the Aleksinac deposit.

  5. Experimental investigation for determination of optimal X-ray beam tube voltages in a newly developed digital breast tomosynthesis system

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hye-Suk, E-mail: radiosugar@yonsei.ac.kr [Department of Radiological Science and Research Institute of Health Science, Yonsei University, Wonju, Gangwon 220-710 (Korea, Republic of); Kim, Ye-Seul, E-mail: radiohesugar@gmail.com [Department of Radiological Science and Research Institute of Health Science, Yonsei University, Wonju, Gangwon 220-710 (Korea, Republic of); Choi, Young-Wook, E-mail: ywchoi@keri.re.kr [Korea Electrotechnology Research Institute (KERI), Ansan, Geongki 426-170 (Korea, Republic of); Choi, JaeGu, E-mail: jgchoi88@paran.com [Korea Electrotechnology Research Institute (KERI), Ansan, Geongki 426-170 (Korea, Republic of); Rhee, Yong-Chun, E-mail: ycrhee@yonsei.ac.kr [Department of Radiological Science and Research Institute of Health Science, Yonsei University, Wonju, Gangwon 220-710 (Korea, Republic of); Kim, Hee-Joung, E-mail: hjk1@yonsei.ac.kr [Department of Radiological Science and Research Institute of Health Science, Yonsei University, Wonju, Gangwon 220-710 (Korea, Republic of)

    2014-11-01

    Our purpose was to investigate optimal tube voltages (kVp) for a newly developed digital breast tomosynthesis (DBT) process and to determine tube current–exposure time products (mA s) for the average glandular dose (AGD), which is similar to that of the two views in conventional mammography (CM). In addition, the optimal acquisition parameters for this system were compared with those of CM. The analysis was based on the contrast-to-noise ratio (CNR) from the simulated micro-calcifications on homogeneous phantoms, and the figure of merit (FOM) was retrieved from the CNR and AGD at X-ray tube voltages ranging from 24 to 40 kVp at intervals of 2 kV. The optimal kVp increased more than 2 kV with increasing glandularity for thicker (≥50 mm) breast phantoms. The optimal kVp for DBT was found to be 4–7 kV higher than that calculated for CM with breast phantoms thicker than 50 mm. This is likely due to the greater effect of noise and dose reduction by kVp increment when using the lower dose per projection in DBT. It is important to determine optimum acquisition conditions for a maximally effective DBT system. The results of our study provide useful information to further improve DBT for high quality imaging.
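    The record above optimizes tube voltage by balancing CNR against AGD. The exact definition used by the authors is not given in the record, but a dose-normalized figure of merit commonly used in such imaging-optimization studies takes the form:

```latex
\mathrm{FOM} = \frac{\mathrm{CNR}^{2}}{\mathrm{AGD}},
\qquad
\mathrm{CNR} = \frac{\left|\bar{S}_{\mathrm{signal}} - \bar{S}_{\mathrm{background}}\right|}{\sigma_{\mathrm{background}}}
```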

  6. Use of Debye's series to determine the optimal edge-effect terms for computing the extinction efficiencies of spheroids.

    Science.gov (United States)

    Lin, Wushao; Bi, Lei; Liu, Dong; Zhang, Kejun

    2017-08-21

    The extinction efficiencies of atmospheric particles are essential to determining radiation attenuation and thus are fundamentally related to atmospheric radiative transfer. The extinction efficiencies can also be used to retrieve particle sizes or refractive indices through particle characterization techniques. This study first uses the Debye series to improve the accuracy of high-frequency extinction formulae for spheroids in the context of Complex angular momentum theory by determining an optimal number of edge-effect terms. We show that the optimal edge-effect terms can be accurately obtained by comparing the results from the approximate formula with their counterparts computed from the invariant imbedding Debye series and T-matrix methods. An invariant imbedding T-matrix method is employed for particles with strong absorption, in which case the extinction efficiency is equivalent to two plus the edge-effect efficiency. For weakly absorptive or non-absorptive particles, the T-matrix results contain the interference between the diffraction and higher-order transmitted rays. Therefore, the Debye series was used to compute the edge-effect efficiency by separating the interference from the transmission on the extinction efficiency. We found that the optimal number strongly depends on the refractive index and is relatively insensitive to the particle geometry and size parameter. By building a table of optimal numbers of edge-effect terms, we developed an efficient and accurate extinction simulator that has been fully tested for randomly oriented spheroids with various aspect ratios and a wide range of refractive indices.

  7. IDEA 2004: Section 615 (k) (Placement in Alternative Educational Setting). PHP-c111

    Science.gov (United States)

    PACER Center, 2005

    2005-01-01

    School personnel may consider any unique circumstances on a case-by-case basis when determining whether to order a change in placement for a child with a disability who violates a code of student conduct. This article describes IDEA 2004: Section 615 (k), which discusses the placement of special needs children in alternative educational settings.…

  8. Determining the Optimal Protocol for Measuring an Albuminuria Class Transition in Clinical Trials in Diabetic Kidney Disease

    DEFF Research Database (Denmark)

    Kröpelin, Tobias F; de Zeeuw, Dick; Remuzzi, Giuseppe

    2016-01-01

    Albuminuria class transition (normo- to micro- to macroalbuminuria) is used as an intermediate end point to assess renoprotective drug efficacy. However, definitions of such class transition vary between trials. To determine the most optimal protocol, we evaluated the approaches used in four...... effect increased (decreased precision) with stricter end point definitions, resulting in a loss of statistical significance. In conclusion, the optimal albuminuria transition end point for use in drug intervention trials can be determined with a single urine collection for albuminuria assessment per...... clinical trials testing the effect of renin-angiotensin-aldosterone system intervention on albuminuria class transition in patients with diabetes: the BENEDICT, the DIRECT, the ALTITUDE, and the IRMA-2 Trial. The definition of albuminuria class transition used in each trial differed from the definitions...

  9. Observability-Enhanced PMU Placement Considering Conventional Measurements and Contingencies

    Directory of Open Access Journals (Sweden)

    M. Esmaili

    2014-12-01

    Full Text Available Phasor Measurement Units (PMUs) are attracting growing attention in modern power systems because of their paramount abilities in state estimation. PMUs are placed in existing power systems where conventional measurements are already installed, which can be helpful if they are considered in optimal PMU placement. In this paper, a method is proposed for optimal placement of PMUs incorporating conventional measurements of zero-injection buses and branch flow measurements using a permutation matrix. Furthermore, the effect of a single branch outage and a single PMU failure is included in the proposed method. When a branch with a flow measurement goes out, the network loses one observability path (the branch) and one conventional measurement (the flow measurement). The permutation matrix proposed here is able to model the outage of a branch equipped with a flow measurement or connected to a zero-injection bus. Also, measurement redundancy, and consequently measurement reliability, is enhanced without increasing the number of PMUs; this implies a more efficient usage of PMUs than in previous methods. The PMU placement problem is formulated as a mixed-integer linear program that yields the globally optimal solution. Results obtained from testing the proposed method on four well-known test systems in diverse situations confirm its efficiency.
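    The record above builds on the standard PMU observability rule: a bus is observed if it or an adjacent bus hosts a PMU. The sketch below brute-forces a minimum PMU set for a small made-up 7-bus topology under that basic rule only; zero-injection buses, flow measurements, and contingencies from the paper are not modelled here.

```python
from itertools import combinations

# Made-up 7-bus topology (edges between bus numbers).
edges = [(1, 2), (2, 3), (2, 4), (4, 5), (5, 6), (6, 7), (4, 7)]
buses = sorted({b for e in edges for b in e})
nbrs = {b: {b} for b in buses}          # a PMU observes its own bus and its neighbours
for a, b in edges:
    nbrs[a].add(b)
    nbrs[b].add(a)

def observable(pmus):
    covered = set().union(*(nbrs[p] for p in pmus)) if pmus else set()
    return covered == set(buses)

# Smallest PMU set by exhaustive search (fine for tiny systems; real methods use MILP).
best = None
for k in range(1, len(buses) + 1):
    feasible = [c for c in combinations(buses, k) if observable(c)]
    if feasible:
        best = feasible[0]
        break
print("minimum PMU set:", best)
```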

  10. An analytical method to determine the optimal size of a photovoltaic plant

    Energy Technology Data Exchange (ETDEWEB)

    Barra, L; Catalanotti, S; Fontana, F; Lavorante, F

    1984-01-01

    In this paper, a simplified method for the optimal sizing of a photovoltaic system is presented. The results have been obtained for Italian meteorological data, but the methodology can be applied to any geographical area. The system studied is composed of a photovoltaic array, power tracker, battery storage, inverter and load. Computer simulation was used to obtain the performance of this system for many values of field area, battery storage value, solar flux and load by keeping constant the efficiencies. A simple fit was used to achieve a formula relating the system variables to the performance. Finally, the formulae for the optimal values of the field area and the battery storage value are shown.

  11. Determining optimal selling price and lot size with process reliability and partial backlogging considerations

    Science.gov (United States)

    Hsieh, Tsu-Pang; Cheng, Mei-Chuan; Dye, Chung-Yuan; Ouyang, Liang-Yuh

    2011-01-01

    In this article, we extend the classical economic production quantity (EPQ) model by proposing imperfect production processes and quality-dependent unit production cost. The demand rate is described by any convex decreasing function of the selling price. In addition, we allow for shortages and a time-proportional backlogging rate. For any given selling price, we first prove that the optimal production schedule not only exists but also is unique. Next, we show that the total profit per unit time is a concave function of price when the production schedule is given. We then provide a simple algorithm to find the optimal selling price and production schedule for the proposed model. Finally, we use a couple of numerical examples to illustrate the algorithm and conclude this article with suggestions for possible future research.

  12. Subsurface water parameters: optimization approach to their determination from remotely sensed water color data.

    Science.gov (United States)

    Jain, S C; Miller, J R

    1976-04-01

    A method, using an optimization scheme, has been developed for the interpretation of spectral albedo (or spectral reflectance) curves obtained from remotely sensed water color data. This method uses a two-flow model of the radiation flow and solves for the albedo. Optimization fitting of predicted to observed reflectance data is performed by a quadratic interpolation method for the variables chlorophyll concentration and scattering coefficient. The technique is applied to airborne water color data obtained from the Kawartha Lakes, the Sargasso Sea, and the Nova Scotia coast. The modeled spectral albedo curves are compared to those obtained experimentally, and the computed optimum water parameters are compared to ground truth values. It is shown that the backscattered spectral signal contains information that can be interpreted to give quantitative estimates of the chlorophyll concentration and turbidity in the waters studied.

  13. A systematic approach to determine optimal composition of gel used in radiation therapy

    International Nuclear Information System (INIS)

    Chang, Yuan-Jen; Hsieh, Bor-Tsung; Liang, Ji-An

    2011-01-01

    The design of experiment was used to find the optimal composition of N-isopropyl acrylamide (NIPAM) gel. Optical computed tomography was used to scan the polymer gel dosimeter, which was irradiated from 0 to 20 Gy. The study was conducted following a statistical method using a two-level fractional factorial plan involving four variables (gelatin-5% and 6%, NIPAM-3% and 5%, Bis-2.5% and 3%, and THPC-5 and 10 mM). We produced three batches of gels of the same composition to replicate the experiments. Based on the statistical analysis, a regression model was built. The optimal gel composition for the dose range 0-15 Gy with linearity up to 1.000 is as follows: gelatin (5.67%), NIPAM (5%), Bis (2.56%), and THPC (10 mM). The dose response of the NIPAM polymer gel attains stability about 24 h after irradiation and remains stable up to 3 months.

  14. Optimization Extracting Technology of Cynomorium songaricum Rupr. Saponins by Ultrasonic and Determination of Saponins Content in Samples with Different Source

    OpenAIRE

    Xiaoli Wang; Qingwei Wei; Xinqiang Zhu; Chunmei Wang; Yonggang Wang; Peng Lin; Lin Yang

    2015-01-01

    The extraction process was optimized by single-factor and orthogonal experiments (L9(3⁴)). Moreover, the content determination was studied in methodology. The optimum ultrasonic extraction conditions were: ethanol concentration of 75%, ultrasonic power of 420 W, solid-liquid ratio of 1:15, extraction duration of 45 min, extraction temperature of 90°C and extraction for 2 times. Saponins content in Guazhou samples was significantly higher than that in Xinjiang and Inner Mongolia samples. Meanwhile, G...

  15. The Determination of the Optimal Material Proportion in Natural Fiber-Cement Composites Using Design of Mixture Experiments

    OpenAIRE

    Aramphongphun Chuckaphun; Ungtawondee Kampanart; Chaysuwan Duangrudee

    2016-01-01

    This research aims to determine the optimal material proportion in a natural fiber-cement composite as an alternative to an asbestos fiber-cement composite, while the material cost is minimized and the properties still comply with the Thai Industrial Standard (TIS) for applications of profile sheet roof tiles. Two experimental sets were studied in this research. First, a three-component mixture of (i) virgin natural fiber, (ii) synthetic fiber and (iii) cement was studied while the proportion of c...

  16. Optimal Testing Intervals in the Squatting Test to Determine Baroreflex Sensitivity

    OpenAIRE

    Ishitsuka, S.; Kusuyama, N.; Tanaka, M.

    2014-01-01

    The recently introduced “squatting test” (ST) utilizes a simple postural change to perturb the blood pressure and to assess baroreflex sensitivity (BRS). In our study, we estimated the reproducibility of and the optimal testing interval between the STs in healthy volunteers. Thirty-four subjects free of cardiovascular disorders and taking no medication were instructed to perform the repeated ST at 30-sec, 1-min, and 3-min intervals in duplicate in a random sequence, while the systolic blood p...

  17. Optimization of focused ultrasonic extraction of propellant components determined by gas chromatography/mass spectrometry.

    Science.gov (United States)

    Fryš, Ondřej; Česla, Petr; Bajerová, Petra; Adam, Martin; Ventura, Karel

    2012-09-15

    A method for focused ultrasonic extraction of nitroglycerin, triphenyl amine and acetyl tributyl citrate present in double-base propellant samples, followed by gas chromatography/mass spectrometry analysis, was developed. A face-centered central composite design of the experiments and response surface modeling was used for optimization of the time, amplitude and sample amount. Dichloromethane was used as the extraction solvent. The optimal extraction conditions with respect to the maximum yield of the least abundant compound, triphenyl amine, were found at a 20 min extraction time, 35% amplitude of ultrasonic waves and 2.5 g of the propellant sample. The results obtained under optimal conditions were compared with the results achieved with a validated Soxhlet extraction method, which is typically used for isolation and pre-concentration of compounds from samples of explosives. The extraction yields for acetyl tributyl citrate using both extraction methods were comparable; however, the yield of ultrasonic extraction of nitroglycerin and triphenyl amine was lower than that using Soxhlet extraction. The possible sources of the different extraction yields are estimated and discussed. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of accuracy compensation implementation are closely related to the choice of sampling points. Therefore, based on the error similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  19. Product placement and its application in foreign film

    OpenAIRE

    Vaněk, Tomáš

    2010-01-01

    Marketing and commercial communication and the position of product placement within it, legislation governing product placement and its application, history of product placement, forms of product placement, use of product placement within a marketing campaign, and application of product placement in the movie Casino Royale.

  20. Humanitarian engineering placements in our own communities

    Science.gov (United States)

    VanderSteen, J. D. J.; Hall, K. R.; Baillie, C. A.

    2010-05-01

    There is an increasing interest in the humanitarian engineering curriculum, and a service-learning placement could be an important component of such a curriculum. International placements offer some important pedagogical advantages, but also have some practical and ethical limitations. Local community-based placements have the potential to be transformative for both the student and the community, although this potential is not always seen. In order to investigate the role of local placements, qualitative research interviews were conducted. Thirty-two semi-structured research interviews were conducted and analysed, resulting in a distinct outcome space. It is concluded that local humanitarian engineering placements greatly complement international placements and are strongly recommended if international placements are conducted. More importantly it is seen that we are better suited to address the marginalised in our own community, although it is often easier to see the needs of an outside populace.

  1. Multivariate optimization of an ultrasound-assisted extraction procedure for Cu, Mn, Ni and Zn determination in ration to chickens

    Directory of Open Access Journals (Sweden)

    JOELIA M. BARROS

    2013-09-01

    Full Text Available In this work, multivariate optimization techniques were used to develop a method based on ultrasound-assisted extraction for copper, manganese, nickel and zinc determination in rations for chicken nutrition using flame atomic absorption spectrometry. The proportions of the extracting components (2.0 mol.L–1 nitric, hydrochloric and acetic acid solutions) were optimized using a centroid-simplex mixture design. The optimum proportions of this mixture, taken as percentages of each component, were respectively 20%, 37% and 43%. The method variables (sample mass, sonication time and final acid concentration) were optimized using a Doehlert design. The optimum values found for these variables were respectively 0.24 g, 18 s and 3.6 mol.L–1. The developed method allows copper, manganese, nickel and zinc determination with quantification limits of 2.82, 4.52, 10.7 and 9.69 µg.g–1, and precision expressed as relative standard deviation (%RSD, 25 µg.g–1, N = 5) of 5.30, 2.13, 0.88 and 0.83%, respectively. This method was applied to the determination of the analytes in chicken rations collected from specialized commerce in Jequié city (Bahia State, Brazil). Application of the paired t-test to the obtained results, at a confidence level of 95%, showed no significant difference between the proposed method and microwave-assisted digestion.

  2. Spectrophotometric determination of fluoxetine by molecularly imprinted polypyrrole and optimization by experimental design, artificial neural network and genetic algorithm

    Science.gov (United States)

    Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira

    2018-02-01

    A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE) using UV-Vis spectrophotometry as a detection technique was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as a functional monomer in the presence of FLU as a template molecule. The factors affecting the preparation and extraction ability of the MIP, such as amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, take shaking rate, taking time and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. The other optimization processes were performed using central composite design (CCD), artificial neural network (ANN) and genetic algorithm (GA). Under optimal conditions the calibration curve showed linearity over a concentration range of 10⁻⁷-10⁻⁸ M with a correlation coefficient (R²) of 0.9970. The limit of detection (LOD) for FLU was 6.56 × 10⁻⁹ M. The repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully used for the determination of FLU in pharmaceutical, serum and plasma samples.

  3. Spectrophotometric determination of fluoxetine by molecularly imprinted polypyrrole and optimization by experimental design, artificial neural network and genetic algorithm.

    Science.gov (United States)

    Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira

    2018-02-05

    A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE) using UV-Vis spectrophotometry as a detection technique was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as a functional monomer in the presence of FLU as a template molecule. The factors affecting the preparation and extraction ability of the MIP, such as amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, take shaking rate, taking time and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. The other optimization processes were performed using central composite design (CCD), artificial neural network (ANN) and genetic algorithm (GA). Under optimal conditions the calibration curve showed linearity over a concentration range of 10⁻⁷-10⁻⁸ M with a correlation coefficient (R²) of 0.9970. The limit of detection (LOD) for FLU was 6.56 × 10⁻⁹ M. The repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully used for the determination of FLU in pharmaceutical, serum and plasma samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Using an optimal CC-PLSR-RBFNN model and NIR spectroscopy for the starch content determination in corn

    Science.gov (United States)

    Jiang, Hao; Lu, Jiangang

    2018-05-01

    Corn starch is an important material which has been traditionally used in the fields of food and chemical industry. In order to enhance the rapidness and reliability of the determination for starch content in corn, a methodology is proposed in this work, using an optimal CC-PLSR-RBFNN calibration model and near-infrared (NIR) spectroscopy. The proposed model was developed based on the optimal selection of crucial parameters and the combination of correlation coefficient method (CC), partial least squares regression (PLSR) and radial basis function neural network (RBFNN). To test the performance of the model, a standard NIR spectroscopy data set was introduced, containing spectral information and chemical reference measurements of 80 corn samples. For comparison, several other models based on the identical data set were also briefly discussed. In this process, the root mean square error of prediction (RMSEP) and coefficient of determination (Rp²) in the prediction set were used to make evaluations. As a result, the proposed model presented the best predictive performance with the smallest RMSEP (0.0497%) and the highest Rp² (0.9968). Therefore, the proposed method combining NIR spectroscopy with the optimal CC-PLSR-RBFNN model can be helpful to determine starch content in corn.
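    The record above evaluates models by RMSEP and Rp². A small sketch of how those two prediction-set metrics are computed, using invented reference and predicted starch values (not the paper's data):

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination in the prediction set."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

ref  = [62.1, 63.4, 64.0, 65.2, 66.1]   # illustrative reference starch contents (%)
pred = [62.0, 63.6, 63.9, 65.0, 66.3]   # illustrative model predictions (%)
print(f"RMSEP = {rmsep(ref, pred):.4f} %, Rp2 = {r2(ref, pred):.4f}")
```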

  5. Parameters Determination of Yoshida Uemori Model Through Optimization Process of Cyclic Tension-Compression Test and V-Bending Springback

    Directory of Open Access Journals (Sweden)

    Serkan Toros

    Full Text Available In recent years, studies on enhancing the prediction capability of sheet metal forming simulations have increased remarkably. Among the models used in finite element simulations, the yield criteria and hardening models are of great importance for the prediction of formability and springback. The required model parameters are determined using several test results, i.e. tensile, compression, biaxial stretching (bulge) and cyclic (tension-compression) tests. In this study, the Yoshida-Uemori (combined isotropic and kinematic hardening) model is used to determine the performance of the springback prediction. The model parameters are determined by optimization processes of the cyclic test using finite element simulations. In addition, the model parameters are also evaluated by an optimization process over both the cyclic and V-die bending simulations. The springback angle predictions with the model parameters obtained by the optimization of both cyclic and V-die bending simulations are found to mimic the experimental results better than those obtained from the cyclic tests only. Nevertheless, the cyclic-only simulation results are found to be close enough to the experimental results.

  6. Fathers: A Placement Resource for Abused and Neglected Children?

    Science.gov (United States)

    Greif, Geoffrey L.; Zuravin, Susan J.

    1989-01-01

    Investigated 17 custodial and 18 noncustodial fathers of abused or neglected children to determine: (1) how fathers get custody; (2) how situations in which fathers get custody differ from those in which they do not; and (3) the degree to which father placements are satisfactory. (SAK)

  7. Improved Relay Node Placement Algorithm for Wireless Sensor Networks Application in Wind Farm

    DEFF Research Database (Denmark)

    Chen, Qinyin; Hu, Y.; Chen, Zhe

    2013-01-01

    -tolerance. Each wind turbine has a potentially large number of data points needing to be monitored and collected, as farms continue to increase in scale; distances between turbines can reach several hundred meters. Optimal placement of relays in a large farm requires an efficient algorithmic solution. A relay...... node placement algorithm is proposed in this paper to approximate the optimal position for relays connecting each turbine. However, constraints are then required to prevent relay nodes being overloaded in 3-dimensions. The algorithm is extended to 3-dimensional Euclidean space for this optimal relay...

  8. Consumer Buying Behaviour; A Factor of Compulsive Buying Prejudiced by Windowsill Placement

    OpenAIRE

    Hameed, Irfan; Soomro, Yasir

    2012-01-01

    This empirical research investigates the impact of windowsill placement on the compulsive buying behavior of consumers on three different types of products i.e., convenience products, shopping products, and specialty products. Positive effect of windowsill placement on all three types of product categories has been hypothesized. The categorical regression (Optimal scaling) was used to test the hypotheses. The data was collected via self administered questionnaire from Pakistan through systema...

  9. PMU Placement Based on Heuristic Methods, when Solving the Problem of EPS State Estimation

    OpenAIRE

    I. N. Kolosok; E. S. Korkina; A. M. Glazunova

    2014-01-01

    Creation of satellite communication systems gave rise to a new generation of measurement equipment – Phasor Measurement Unit (PMU). Integrated into the measurement system WAMS, the PMU sensors provide a real picture of state of energy power system (EPS). The issues of PMU placement when solving the problem of EPS state estimation (SE) are discussed in many papers. PMU placement is a complex combinatorial problem, and there is not any analytical function to optimize its variables. Therefore,...

  10. Optimization of the determinant of the Vandermonde matrix and related matrices

    Energy Technology Data Exchange (ETDEWEB)

    Lundengård, Karl; Österberg, Jonas; Silvestrov, Sergei [Division of Applied Mathematics, School of Education, Culture and Communication, Mälardalen University, Box 883, SE-721 23 Västerås (Sweden)

    2014-12-10

    Various techniques for interpolation of data, moment matching in stochastic applications and various methods in numerical analysis can be described using Vandermonde matrices. For this reason the properties of the determinant of the Vandermonde matrix and related matrices are interesting. Here the extreme points of the Vandermonde determinant, and related determinants, on some simple surfaces such as the unit sphere are analyzed, both numerically and analytically. Some results are also visualized in various dimensions. The extreme points of the Vandermonde determinant are also related to the roots of certain orthogonal polynomials such as the Hermite polynomials.
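    For reference, the Vandermonde determinant discussed above satisfies the classical product identity, which the short sketch below checks numerically (the sample points are arbitrary):

```python
import numpy as np
from itertools import combinations

# Classical identity: det V(x) = prod_{i<j} (x_j - x_i), where V[i, j] = x_i**j.
x = np.array([0.3, -1.2, 2.5, 0.9])
V = np.vander(x, increasing=True)
det_direct = np.linalg.det(V)
det_product = np.prod([x[j] - x[i] for i, j in combinations(range(len(x)), 2)])
print(det_direct, det_product)   # the two values agree up to rounding
```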

  11. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Optimal right heart filling pressure in acute respiratory distress syndrome determined by strain echocardiography.

    Science.gov (United States)

    Garcia-Montilla, Romel; Imam, Faryal; Miao, Mi; Stinson, Kathryn; Khan, Akram; Heitner, Stephen

    2017-06-01

    Right ventricular (RV) systolic dysfunction is common in acute respiratory distress syndrome (ARDS). While preload optimization is crucial in its management, dynamic fluid responsiveness indices lack reliability, and there is no consensus on target central venous pressure (CVP). We analyzed the utility of RV free wall longitudinal strain (RVFWS) in the estimation of optimal RV filling pressure in ARDS. A retrospective cross-sectional analysis of clinical data and echocardiograms of patients with ARDS was performed. Tricuspid annular plane systolic excursion (TAPSE), tricuspid peak systolic velocity (S'), RV fractional area change (RVFAC), RVFWS, CVP, systolic pulmonary artery pressure (SPAP), and left ventricular ejection fraction (LVEF) were measured. Fifty-one patients with moderate-severe ARDS were included. There were inverse correlations between CVP and TAPSE, S', RVFAC, RVFWS, and LVEF. The most significant was with RVFWS (r: .74, R²: .55, P: .00001). Direct correlations with creatinine and lactate were noted. Receiver operating characteristic analysis showed that RVFWS -21% (normal reference value) was associated with CVP: 13 mm Hg (AUC: 0.92, 95% CI: 0.83-1.00). Regression model analysis of CVP and RVFWS interactions established an RVFWS range from -18% to -24%. RVFWS -24% corresponded to CVP: 11 mm Hg and RVFWS -18% to CVP: 15 mm Hg. Beyond a CVP of 15 mm Hg, biventricular systolic dysfunction rapidly ensues. Our data are the first to show that an RV filling pressure of 13 ± 2 mm Hg, as measured by CVP, correlates with optimal RV mechanics as evaluated by strain echocardiography in patients with moderate-severe ARDS. © 2017, Wiley Periodicals, Inc.

  13. Determination of selenium in urine by inductively coupled plasma mass spectrometry: interferences and optimization

    DEFF Research Database (Denmark)

    Gammelgaard, Bente; Jons, O.

    1999-01-01

    when the nebulizer gas flow rate was optimized for each solute. The influences of sample uptake rate, nebulizer flow rate and rf power were examined in multivariate experiments. The nebulizer gas flow rate and rf power were found to be interdependent, but the sample pump flow rate was independent......, ethanol, propanol, butanol, glycerol, acetonitrile and acetic acid) were examined for their sensitivity enhancement effect. Enhancement factors up to six were obtained and were dependent on the nebulizer gas flow and rf power. There was no important difference in the enhancement effects of these solutes...

  14. Development and optimization of the determination of pharmaceuticals in water samples by SPE and HPLC with diode-array detection.

    Science.gov (United States)

    Pavlović, Dragana Mutavdžić; Ašperger, Danijela; Tolić, Dijana; Babić, Sandra

    2013-09-01

    This paper describes the development, optimization, and validation of a method for the determination of five pharmaceuticals from different therapeutic classes (antibiotics, anthelmintics, glucocorticoids) in water samples. Water samples were prepared using SPE and extracts were analyzed by HPLC with diode-array detection. The efficiency of 11 different SPE cartridges to extract the investigated compounds from water was tested in preliminary experiments. Then, the pH of the water sample, elution solvent, and sorbent mass were optimized. In addition to optimization of the SPE procedure, selection of the optimal HPLC column with different stationary phases from different manufacturers was performed. The developed method was validated using spring water samples spiked with appropriate concentrations of pharmaceuticals. Good linearity was obtained in the range of 2.4-200 μg/L, depending on the pharmaceutical, with correlation coefficients >0.9930 in all cases, except for ciprofloxacin (0.9866). The method also showed low LODs (0.7-3.9 μg/L), good precision (intra- and inter-day) with RSD below 17%, and recoveries above 98% for all pharmaceuticals. The method has been successfully applied to the analysis of production wastewater samples from the pharmaceutical industry. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Concise Approach for Determining the Optimal Annual Capacity Shortage Percentage using Techno-Economic Feasibility Parameters of PV Power System

    Science.gov (United States)

    Alghoul, M. A.; Ali, Amer; Kannanaikal, F. V.; Amin, N.; Sopian, K.

    2017-11-01

    PV power systems have been commercially available and widely used for decades. A reliable PV system that fulfils expectations requires correct input data and careful design. Inaccurate techno-economic feasibility input data would affect the size, cost, stability and performance of the PV power system in the long run. The annual capacity shortage is one of the main inputs that should be selected with careful attention. The aim of this study is to reveal the effect of different annual capacity shortages on the techno-economic feasibility parameters and to determine the optimal value for the Baghdad city location using the HOMER simulation tool. Six annual capacity shortage percentages (0%, 1%, 2%, 3%, 4%, and 5%) and a wide daily load profile range (10 kWh - 100 kWh) are implemented. The optimal annual capacity shortage is the value that always "wins" when each techno-economic feasibility parameter is at its optimal or reasonable criterion. The results showed that the optimal annual capacity shortage that significantly reduces the cost of the PV power system while keeping it technically feasible is 3%. This capacity shortage value can be carried as a reference value in future work for the Baghdad city location. Using this approach at other locations, an annual capacity shortage reference value can likewise be obtained for those locations.

  16. Issues in the determination of the optimal portfolio of electricity supply options

    International Nuclear Information System (INIS)

    Hickey, Emily A.; Lon Carlson, J.; Loomis, David

    2010-01-01

    In recent years a growing amount of attention has been focused on the need to develop a cost-effective portfolio of electricity supply options that provides society with a measure of protection from such factors as fuel price volatility and supply interruptions. A number of strategies, including portfolio theory, real options theory, and different measures of diversity have been suggested. In this paper we begin by first considering how we might characterize an optimal portfolio of supply options and identify a number of constraints that must be satisfied as part of the optimization process. We then review the strengths and limitations of each approach listed above. The results of our review lead us to conclude that, of the strategies we consider, using the concept of diversity to assess the viability of an electricity supply portfolio is most appropriate. We then provide an example of how a particular measure of diversity, the Shannon-Wiener Index, can be used to assess the diversity of the electricity supply portfolio in the state of Illinois, the region served by the Midwest Independent System Operator (MISO), and the continental United States.
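
    For reference, the Shannon-Wiener index used in this kind of diversity assessment is commonly defined as H = -Σ p_i ln p_i, where p_i is the share of supply option i. The sketch below applies it to a purely hypothetical generation mix; the shares are not the Illinois, MISO, or US figures from the study.

      # Shannon-Wiener diversity index H = -sum(p_i * ln p_i) for a supply portfolio.
      # The fuel shares below are hypothetical, for illustration only.
      import math

      shares = {"nuclear": 0.48, "coal": 0.40, "gas": 0.08, "wind": 0.04}

      def shannon_wiener(shares):
          return -sum(p * math.log(p) for p in shares.values() if p > 0)

      print(round(shannon_wiener(shares), 3))  # larger H indicates a more diverse portfolio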

  17. Han's model parameters for microalgae grown under intermittent illumination: Determined using particle swarm optimization.

    Science.gov (United States)

    Pozzobon, Victor; Perre, Patrick

    2018-01-21

    This work provides a model, and the associated set of parameters, for computing microalgae population growth under intermittent illumination. Han's model is coupled with a simple microalgae growth model to yield a relationship between illumination and population growth. The model parameters were obtained by fitting a dataset available in the literature using the Particle Swarm Optimization method. In that work, the authors grew microalgae in excess of nutrients under flashing conditions. The light/dark cycles used in those experiments are close to those found in photobioreactors, i.e. ranging from several seconds to one minute. In addition to producing the set of parameters, the robustness of Particle Swarm Optimization was assessed. To do so, two different swarm initialization techniques were used, i.e. uniform and random distribution throughout the search space; both yielded the same results. In addition, analysis of the swarm distribution reveals that the swarm converges to a unique minimum. Thus, the produced set of parameters can be used with confidence to link light intensity to population growth rate. Furthermore, the parameter set captures the effect of photodamage on population growth, thereby accounting for light overexposure in algal growth. Copyright © 2017 Elsevier Ltd. All rights reserved.
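
    As a reminder of how such a fit works, the sketch below shows a minimal particle swarm optimizer applied to a generic least-squares objective. The objective function, bounds and PSO coefficients are stand-ins; it does not reproduce Han's model or the published parameter values.

      # Minimal particle swarm optimization sketch for least-squares parameter fitting.
      # The objective below is a placeholder, not Han's model.
      import random

      def objective(params):
          return sum((p - 1.0) ** 2 for p in params)  # stand-in fitting error

      def pso(dim=3, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
          lo, hi = bounds
          pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
          vel = [[0.0] * dim for _ in range(n_particles)]
          pbest = [p[:] for p in pos]
          pbest_val = [objective(p) for p in pos]
          g = min(range(n_particles), key=lambda i: pbest_val[i])
          gbest_pos, gbest_val = pbest[g][:], pbest_val[g]
          for _ in range(iters):
              for i in range(n_particles):
                  for d in range(dim):
                      r1, r2 = random.random(), random.random()
                      vel[i][d] = (w * vel[i][d]
                                   + c1 * r1 * (pbest[i][d] - pos[i][d])
                                   + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                      pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                  val = objective(pos[i])
                  if val < pbest_val[i]:
                      pbest[i], pbest_val[i] = pos[i][:], val
                      if val < gbest_val:
                          gbest_pos, gbest_val = pos[i][:], val
          return gbest_pos, gbest_val

      print(pso())  # best parameter vector found and its fitting error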

  18. A Determination Method of Optimal Customization Degree of Logistics Service Supply Chain with Mass Customization Service

    Directory of Open Access Journals (Sweden)

    Weihua Liu

    2014-01-01

    Full Text Available Customization degree is a very important aspect of mass customization. Increasing it can enhance customer satisfaction and thus customer demand, but it also raises the service price, which in turn reduces satisfaction and demand. This paper therefore discusses how to deal with this trade-off in a logistics service supply chain (LSSC) consisting of a logistics service integrator (LSI) and a customer. With the establishment of a customer demand function for logistics services and profit functions for the LSI and the customer, three different decision modes are proposed (i.e., customization degree dominated by the LSI, customization degree dominated by the customer, and customization degree decided by the centralized supply chain), and several findings are obtained. Firstly, to achieve customization cooperation between the LSI and the customer, measures should be taken to keep the unit cost increase of the customized logistics services below a certain value. Secondly, the optimal customization degree dominated by the LSI differs from that dominated by the customer, and in both cases the dominant party realizes more profit than the follower. Thirdly, with a secondary profit-distribution strategy, the modified decentralized decision mode can achieve the maximum profit of the centralized decision mode while also reaching the optimal customization degree.

  19. Multiple responses optimization in the development of a headspace gas chromatography method for the determination of residual solvents in pharmaceuticals

    Directory of Open Access Journals (Sweden)

    Carla M. Teglia

    2015-10-01

    Full Text Available An efficient generic static headspace gas chromatography (HSGC) method was developed, optimized and validated for the routine determination of several residual solvents (RS) in drug substance, using a strategy with two sets of calibration. Dimethylsulfoxide (DMSO) was selected as the sample diluent, and internal standards were used to minimize signal variations due to the preparative step. An Agilent 6890 gas chromatograph equipped with a flame ionization detector (FID) and a DB-624 column (30 m×0.53 mm i.d., 3.00 µm film thickness) was used. The inlet split ratio was 5:1. The factors influencing the chromatographic separation of the analytes were determined through a fractional factorial experimental design. The significant variables, the initial temperature (IT) and final temperature (FT) of the oven and the carrier gas flow rate (F), were optimized using a central composite design. Response transformation and a desirability function were applied to find the optimal combination of the chromatographic variables that achieves adequate resolution of the analytes in a short analysis time. These conditions were 30 °C for IT, 158 °C for FT and 1.90 mL/min for F. The method was proven to be accurate, linear over a wide range and very sensitive for the analyzed solvents through a comprehensive validation according to the ICH guidelines. Keywords: Headspace gas chromatography, Residual solvents, Pharmaceuticals, Surface response methodology, Desirability function
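
    The desirability-function step mentioned above is usually implemented by mapping each response onto a 0-1 scale and combining the individual desirabilities through their geometric mean. The sketch below illustrates that idea with placeholder targets and responses; these are not the published optimization settings.

      # Derringer-style desirability sketch: map each response to [0, 1], then take
      # the geometric mean. Targets, limits and response values are placeholders.
      def desirability_smaller_is_better(y, y_best, y_worst):
          if y <= y_best:
              return 1.0
          if y >= y_worst:
              return 0.0
          return (y_worst - y) / (y_worst - y_best)

      def overall_desirability(ds):
          prod = 1.0
          for d in ds:
              prod *= d
          return prod ** (1.0 / len(ds))

      d_time = desirability_smaller_is_better(18.0, 10.0, 30.0)  # analysis time, min
      d_res = desirability_smaller_is_better(0.60, 0.40, 1.00)   # 1/resolution proxy
      print(round(overall_desirability([d_time, d_res]), 3))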

  20. Computer realization of an algorithm for determining the optimal arrangement of a fast power reactor core with hexagonal assemblies

    International Nuclear Information System (INIS)

    Karpov, V.A.; Rybnikov, A.F.

    1983-01-01

    An algorithm for solving problems associated with fast nuclear reactor computer-aided design is suggested. The discrete optimization problem is formulated: choosing the first loading arrangement, determining the functional purpose of the control elements and the order of their rearrangement during reactor operation, and choosing the operations for core reloading. An algorithm for the computerized solution of this optimization problem, based on variational methods and realized as the DESIGN program complex written in FORTRAN for the BEhSM-6 computer, is proposed. To carry out the necessary neutron-physics calculations for a reactor in hexagonal geometry, a fast program for solving the diffusion equations of a two-dimensional reactor was developed, permitting the optimization problem to be solved in a reasonable time. The DESIGN program can be included in a computer-aided design system to automate the procedure of determining the fast power reactor core arrangement. Application of the DESIGN program avoids routine calculations to substantiate the neutron-physics and thermal-hydraulic characteristics of the reactor core, freeing operators from a considerable waste of time and increasing the efficiency of their work.

  1. PMU Placement Methods in Power Systems based on Evolutionary Algorithms and GPS Receiver

    Directory of Open Access Journals (Sweden)

    M. R. Mosavi

    2013-06-01

    Full Text Available In this paper, optimal placement of Phasor Measurement Units (PMUs) using the Global Positioning System (GPS) is discussed. Ant Colony Optimization (ACO), Simulated Annealing (SA), Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA) are used for this problem. A pheromone evaporation coefficient and the probability of an ant moving from state x to state y are introduced into the ACO. The modified algorithm outperforms the basic ACO in obtaining the global optimal solution and in convergence speed when applied to the PMU placement problem. We also compare it with SA, PSO and GA to assess the capability of ACO in searching for the optimal solution. The fitness function includes observability, redundancy and the number of PMUs. The Logarithmic Least Square Method (LLSM) is used to calculate the weights of the fitness function. The suggested optimization method is applied to the IEEE 30-bus system, and the simulation results show that the modified ACO finds better results than PSO and SA, and the same result as GA.
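
    The pheromone evaporation coefficient and the state-transition probability mentioned above are the two standard ACO ingredients. The sketch below shows generic versions of both rules; the alpha, beta and rho values and the candidate lists are illustrative assumptions, not those of the cited study.

      # Generic ACO building blocks: transition probabilities and pheromone update.
      # alpha, beta, rho and the example values are illustrative assumptions.
      def transition_probabilities(tau, eta, alpha=1.0, beta=2.0):
          """tau: pheromone on each candidate move, eta: heuristic desirability."""
          weights = [(t ** alpha) * (e ** beta) for t, e in zip(tau, eta)]
          total = sum(weights)
          return [w / total for w in weights]

      def update_pheromone(tau, deposits, rho=0.1):
          """tau <- (1 - rho) * tau + pheromone deposited by the ants."""
          return [(1.0 - rho) * t + d for t, d in zip(tau, deposits)]

      print(transition_probabilities([1.0, 0.5, 2.0], [0.8, 1.0, 0.4]))
      print(update_pheromone([1.0, 0.5, 2.0], [0.2, 0.0, 0.1]))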

  2. Determination of transport properties and optimization of lithium-ion batteries

    Science.gov (United States)

    Stewart, Sarah Grace

    We have adapted the method of restricted diffusion to measure diffusion coefficients in lithium-battery electrolytes using Ultraviolet-Visible (UV-Vis) absorption. The use of UV-Vis absorption reduces the likelihood of side reactions. Here we describe the measurement of the diffusion coefficient in lithium-battery electrolytic solutions. The diffusion coefficient is seen to decrease with increasing concentration according to the following: D = 3.018·10⁻⁵ exp(-0.357c) for LiPF6 in acetonitrile and D = 2.582·10⁻⁵ exp(-2.856c) for LiPF6 in EC:DEC (with D in cm²/s and c in moles per liter). This technique may be useful for any liquid solution with a UV-active species and D greater than 10⁻⁶ cm²/s. Activity coefficients were measured in concentration-cell and melting-point-depression experiments. Results from concentration-cell experiments are presented for solutions of lithium hexafluorophosphate (LiPF6) in propylene carbonate (PC) as well as in a 1:1 by weight solution of ethylene carbonate (EC) and ethyl methyl carbonate (EMC). Heat capacity results are also presented. The thermodynamic factor of LiPF6 solutions in EC varies between ca. 1.33 and ca. 6.10 in the concentration range ca. 0.06 to 1.25 M (which appears to be a eutectic point). We show that the solutions of LiPF6 investigated are not ideal but that an assumption of ideality for these solutions may overestimate the specific energy of a lithium-ion cell by only 0.6%. The thermodynamic and transport properties that we have measured are used in a system model. We have used this model to optimize the design of an asymmetric-hybrid system. This technology attempts to bridge the gap in energy density between a battery and a supercapacitor. In this system, the positive electrode stores charge through a reversible, nonfaradaic adsorption of anions on the surface. The negative electrode is nanostructured Li4Ti5O12, which reversibly intercalates lithium. We use the properties that we have measured in a system
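
    The two fitted expressions quoted above can be evaluated directly; the short sketch below simply transcribes them (D in cm²/s, c in mol/L) and prints the values at 1 M.

      # Direct transcription of the fitted diffusion-coefficient expressions above.
      import math

      def D_LiPF6_acetonitrile(c):
          return 3.018e-5 * math.exp(-0.357 * c)   # cm^2/s, c in mol/L

      def D_LiPF6_EC_DEC(c):
          return 2.582e-5 * math.exp(-2.856 * c)   # cm^2/s, c in mol/L

      print(D_LiPF6_acetonitrile(1.0))  # about 2.1e-5 cm^2/s at 1 M
      print(D_LiPF6_EC_DEC(1.0))        # about 1.5e-6 cm^2/s at 1 M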

  3. METHODOLOGY FOR DETERMINING THE OPTIMAL CLEANING PERIOD OF HEAT EXCHANGERS BY USING THE CRITERIA OF MINIMUM COST

    Directory of Open Access Journals (Sweden)

    Yanileisy Rodríguez Calderón

    2015-04-01

    Full Text Available One of the most serious problems in the process industry is that the maintenance of heat exchangers is often planned without applying methodologies based on economic criteria to optimize the surface-cleaning periods, resulting in additional costs for the company and for the country. This work develops and proposes a methodology based on the minimum-cost criterion for determining the optimal cleaning period. An application example is given for the intercoolers of a centrifugal compressor with a high fouling level. This fouling occurs because sea water containing many microorganisms is used as the cooling agent, which severely fouls the water-side transfer surfaces. The methodology employed can be generalized to other applications.
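
    The minimum-cost idea described above amounts to minimizing the total cost per unit time, i.e. the cleaning cost plus the extra cost accumulated as fouling builds up, divided by the cleaning period. The sketch below uses a purely illustrative quadratic fouling-cost model; it is not the methodology or data of the paper.

      # Illustrative minimum-cost criterion: choose the cleaning period that minimizes
      # total cost per unit time. The cost model and numbers are placeholders.
      def cost_rate(period_days, cleaning_cost=5000.0, fouling_coeff=2.0):
          fouling_cost = fouling_coeff * period_days ** 2   # assumed growth of losses
          return (cleaning_cost + fouling_cost) / period_days

      best_period = min(range(1, 366), key=cost_rate)
      print(best_period, round(cost_rate(best_period), 2))  # e.g. 50 days at 200.0 per day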

  4. Determination of the Optimal Operating Parameters for Jefferson Laboratory's Cryogenic Cold Compressor Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Jr., Joe D. [Christopher Newport Univ., Newport News, VA (United States)

    2003-01-01

    The technology of Jefferson Laboratory's (JLab) Continuous Electron Beam Accelerator Facility (CEBAF) and Free Electron Laser (FEL) requires cooling from one of the world's largest 2K helium refrigerators known as the Central Helium Liquefier (CHL). The key characteristic of CHL is the ability to maintain a constant low vapor pressure over the large liquid helium inventory using a series of five cold compressors. The cold compressor system operates with a constrained discharge pressure over a range of suction pressures and mass flows to meet the operational requirements of CEBAF and FEL. The research topic is the prediction of the most thermodynamically efficient conditions for the system over its operating range of mass flows and vapor pressures with minimum disruption to JLab operations. The research goal is to find the operating points for each cold compressor for optimizing the overall system at any given flow and vapor pressure.

  5. Optimal parameters determination of the orbital weld technique using microstructural and chemical properties of welded joint

    International Nuclear Information System (INIS)

    Miranda, A.; Echevarria, J.F.; Rondon, S.; Leiva, P.; Sendoya, F.A.; Amalfi, J.; Lopez, M.; Dominguez, H.

    1999-01-01

    The paper studies the main thermal-cycle parameters of automatic orbital welding, a particular variant of the GTAW technique. It also investigates the microstructural and mechanical properties of orbital welded joints in SA 210 steel, an alloy widely used in the construction of power plant economizers. Several PC software packages were used to predict the main mechanical and structural characteristics of the weld metal and the heat affected zone (HAZ). The results may also be of value for selecting optimal welding parameters to produce sound, high-quality welds during the construction and assembly of structural components in demanding industrial sectors, and for making reliable predictions of weld properties.

  6. Determination of Optimal Opening Scheme for Electromagnetic Loop Networks Based on Fuzzy Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Yang Li

    2016-01-01

    Full Text Available Studying optimization and decision-making for opening electromagnetic loop networks plays an important role in the planning and operation of power grids. First, the basic principle of the fuzzy analytic hierarchy process (FAHP) is introduced, and then an improved FAHP-based scheme evaluation method is proposed for decoupling electromagnetic loop networks, based on a set of indicators reflecting the performance of the candidate schemes. The proposed method combines the advantages of the analytic hierarchy process (AHP) and fuzzy comprehensive evaluation. On the one hand, AHP effectively combines qualitative and quantitative analysis to ensure the rationality of the evaluation model; on the other hand, the judgment matrix and qualitative indicators are expressed with trapezoidal fuzzy numbers to make decision-making more realistic. The effectiveness of the proposed method is validated by application to the real power system of Liaoning province, China.

  7. Determining a sustainable and economically optimal wastewater treatment and discharge strategy.

    Science.gov (United States)

    Hardisty, Paul E; Sivapalan, Mayuran; Humphries, Robert

    2013-01-15

    Options for treatment and discharge of wastewater in regional Western Australia (WA) are examined from the perspective of overall sustainability and social net benefit. Current practice in the state has typically involved a basic standard of treatment deemed to be protective of human health, followed by discharge to surface water bodies. Community and regulatory pressure to move to higher standards of treatment is based on the presumption that a higher standard of treatment is more protective of the environment and society, and thus is more sustainable. This analysis tests that hypothesis for Western Australian conditions. The merits of various wastewater treatment and discharge strategies are examined by quantifying financial costs (capital and operations), and by monetising the wider environmental and social costs and benefits of each option over an expanded planning horizon (30 years). Six technical treatment-disposal options were assessed at a test site, all of which met the fundamental criterion of protecting human health. From a financial perspective, the current business-as-usual option is preferred - it is the least cost solution. However, valuing externalities such as water, greenhouse gases, ecological impacts and community amenity, the status quo is revealed as sub-optimal. Advanced secondary treatment with stream disposal improves water quality and provides overall net benefit to society. All of the other options were net present value (NPV) negative. Sensitivity analysis shows that the favoured option outperforms all of the others under a wide range of financial and externality values and assumptions. Expanding the findings across the state reveals that moving from the identified socially optimal level of treatment to higher (tertiary) levels of treatment would result in a net loss to society equivalent to several hundred million dollars. In other words, everyone benefits from improving treatment to the optimum point. But society, the environment, and

  8. Determination of optimal parameters for three-dimensional reconstruction images of central airways using helical CT

    International Nuclear Information System (INIS)

    Hirose, Takahumi; Akata, Soichi; Matsuno, Naoto; Nagao, Takeshi; Abe, Kimihiko

    2002-01-01

    Three-dimensional (3D) image reconstruction of central airways using helical CT requires several user-defined parameters beyond those of conventional CT. The purpose of this study was to evaluate the optimal parameters for 3D images of the central airways using helical CT. In an experimental study using a piglet immediately after sacrifice, 3D images of the central airway were evaluated while varying the 3D imaging parameters: detector collimation (1, 2, 3 and 6 mm), table speed (1, 2, 3 and 5 mm/sec), tube current (50, 100, 150, 200 and 250 mA), reconstruction interval (0.3, 0.5, 1, 2 and 3 mm), algorithm (mediastinum and lung) and interpolation method (180 deg and 360 deg). Minimizing detector collimation, table speed and reconstruction interval provided the best 3D images of the central airway. Stair-step artifacts could also be reduced with a slow table speed. However, decreasing the collimation and table speed decreases not only the effective section thickness but also the scan coverage that can be achieved with helical CT. For routine diagnosis, we conclude that the optimal parameters for 3D images of the central airway are to minimize the table speed necessary to cover the volume of interest and to set the detector collimation to 1/2 of the table speed. The reconstruction interval should also be selected at up to 1/2 of the detector collimation, with trade-offs of increased image processing time, data storage requirements, and physician time for image review. Regarding tube current, 200 mA or more was necessary. Pixel noise increased with the lung algorithm. The 180 deg interpolation is better than 360 deg interpolation because of its thinner effective section thickness. (author)
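
    As a worked example of the rules of thumb above, assume a table speed chosen to cover the volume of interest; the collimation and reconstruction interval then follow. The figures below are illustrative, not values reported by the authors.

      # Worked example of the stated rules: collimation = table speed / 2,
      # reconstruction interval <= collimation / 2. The table speed is assumed.
      table_speed_mm_per_s = 2.0                       # chosen to cover the volume
      collimation_mm = table_speed_mm_per_s / 2        # -> 1.0 mm
      recon_interval_mm = collimation_mm / 2           # -> 0.5 mm
      print(collimation_mm, recon_interval_mm)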

  9. Determination of optimal wet ethanol composition as a fuel in spark ignition engine

    International Nuclear Information System (INIS)

    Fagundez, J.L.S.; Sari, R.L.; Mayer, F.D.; Martins, M.E.S.; Salau, N.P.G.

    2017-01-01

    Highlights: • Batch distillation to produce HEF and fuel blends of wet ethanol. • Conversion efficiency of an SI engine operating with HEF and wet ethanol. • NEF as a new metric to calculate the energy efficiency of HEF and wet ethanol. • Optimal wet ethanol composition as a fuel in an SI engine based on NEF. - Abstract: Studies are unanimous that the greatest fraction of the energy necessary to produce hydrous ethanol fuel (HEF), i.e. above 95%v/v of ethanol in water, is spent on water removal (distillation). Previous works have assessed the energy efficiency of HEF, but few, if any, have done the same for wet ethanol fuel (sub-azeotropic hydrous ethanol). Hence, a new metric called the net energy factor (NEF) is proposed to calculate the energy efficiency of wet ethanol and HEF. The NEF is the ratio of the lower heating value (LHV) of the ethanol fuel (total energy out) to the energy used to obtain the ethanol fuel as distillate (total energy in). Distillation tests were performed batchwise to obtain HEF and four different wet ethanol fuel blends ranging from 60%v/v to 90%v/v of ethanol as distillate, and the amount of energy spent to distil each ethanol fuel was calculated. The efficiency parameters of an SI engine operating with the produced ethanol fuels were measured to calculate their respective conversion efficiencies. The net energy factors show a clear advantage of wet ethanol fuels over HEF; the optimum was wet ethanol fuel with 70%v/v of ethanol.
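
    The NEF defined above is simply an energy-out over energy-in ratio. The sketch below evaluates it for placeholder figures; the mass, LHV and distillation-energy values are assumptions for illustration, not the measured results of the study.

      # Net energy factor (NEF) = LHV of distilled fuel / energy spent on distillation.
      # The mass, LHV and distillation-energy figures below are placeholders.
      def net_energy_factor(distillate_kg, lhv_MJ_per_kg, distillation_energy_MJ):
          return (distillate_kg * lhv_MJ_per_kg) / distillation_energy_MJ

      print(round(net_energy_factor(1.0, 21.0, 6.0), 2))  # hypothetical wet ethanol blend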

  10. Determination of Optimal Parameters for Diffusion Bonding of Semi-Solid Casting Aluminium Alloy by Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Kaewploy Somsak

    2015-01-01

    Full Text Available The liquid-state welding techniques available are prone to gas porosity problems. To avoid this, solid-state bonding is usually the preferred alternative. Among solid-state bonding techniques, diffusion bonding is often employed in welding aluminium alloy automotive parts in order to enhance their mechanical properties. However, there has been no standard procedure nor any definitive criterion for setting the welding parameters judiciously. It is thus important to find the set of optimal parameters for effective diffusion bonding. This work proposes the use of response surface methodology to determine such a set of optimal parameters. Response surface methodology is more efficient in dealing with complex processes than other available techniques. Of its two variations, the one adopted in this work is the central composite design approach, because even when the initial upper and lower bounds of the desired parameters are exceeded, the central composite design approach is still capable of yielding optimal parameter values that lie outside the initially preset range. Results from the experiments show that the pressing pressure and the holding time affect the tensile strength of the joint. The data obtained from the experiment fit well to a quadratic equation with a high coefficient of determination (R² = 94.21%). The optimal parameters for joining semi-solid cast aluminium alloy by diffusion bonding are found to be a pressing pressure of 2.06 MPa and a holding time of 214 minutes, achieving the highest tensile strength of 142.65 MPa.
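
    The quadratic response surface implied above can be fitted by ordinary least squares on the full second-order model in pressure and time. The sketch below does this on synthetic placeholder data; the design points, coefficients and R² it prints are not the published results.

      # Fit a full quadratic response surface y = f(pressure, time) by least squares.
      # The data points below are synthetic placeholders, not the study's measurements.
      import numpy as np

      # columns: pressure (MPa), holding time (min), tensile strength (MPa)
      data = np.array([
          [1.0, 120, 110.0],
          [1.0, 240, 121.0],
          [2.0, 120, 128.0],
          [2.0, 240, 140.0],
          [3.0, 180, 131.0],
          [2.0, 180, 139.0],
          [1.5, 200, 126.0],
      ])
      p, t, y = data[:, 0], data[:, 1], data[:, 2]
      X = np.column_stack([np.ones_like(p), p, t, p * t, p ** 2, t ** 2])
      coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
      y_hat = X @ coeffs
      r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
      print(np.round(coeffs, 3), round(float(r2), 3))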

  11. Comparison of fluorescence-enhancing reagents and optimization of laser fluorimetric technique for the determination of dissolved uranium

    International Nuclear Information System (INIS)

    Ceren Kuetahyali; Joaquin Cobos; Rondinella, V.V.

    2011-01-01

    Results from tests aimed at optimizing an instrumental procedure for the direct and fast determination of uranium in solution by laser fluorescence are presented. A comparison of sample fluorescence measured using different fluorescence-enhancing reagents was performed: sodium pyrophosphate, orthophosphoric acid, sulphuric acid and a commercially available fluorescence enhancer were tested for the determination of uranium. From the experimental results, 0.01 M Na4P2O7·10H2O showed the best performance. Effects of reagent pH, different matrices, different concentrations of dissolved Th, and sample volume were investigated. Applications of the improved procedure for the determination of uranium in samples arising from UO2-based high level nuclear waste dissolution studies are described. (author)

  12. Percutaneous placement of ureteral stent

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Hyup; Park, Jae Hyung; Han, Joon Koo; Han, Man Chung [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    1990-12-15

    Antegrade placement of ureteral stents was successfully achieved in 41 of 46 ureters. When it was difficult to advance the ureteral stent through the lesion, advancement was facilitated by a retrograde guide-wire snare technique through the urethra. Complications associated with the procedure were non-function of the ureteral stent due to occlusion, upward migration, and spontaneous fracture of the stent. These complications were managed by percutaneous nephrostomy, removal of the ureteral stent with a guide-wire snare technique, and insertion of a new ureteral stent. Blood cells in urine were markedly increased in about 50% of patients following the procedure.

  13. Determination of the Optimal Position of Pendulums of an Active Self-balancing Device

    Science.gov (United States)

    Ziyakaev, G. R.; Kazakova, O. A.; Yankov, V. V.; Ivkina, O. P.

    2017-04-01

    The demand of the modern manufacturing industry for machines with high motion speeds leads to increased loads and vibration activity in the main elements of rotor systems. Vibration reduces the operating life of bearings, has adverse effects on the human body, and can cause accidents. One way to compensate for a rotating rotor's imbalance is the use of active self-balancing devices. The aim of this work is to determine the position of their pendulums at which the imbalance is minimized. As a result of the study, a formula for determining the angle of the pendulums was obtained.

  14. Determination of optimal pollution levels through multiple-criteria decision making: an application to the Spanish electricity sector

    International Nuclear Information System (INIS)

    Linares, P.

    1999-01-01

    Efficient pollution management requires the harmonisation of often conflicting economic and environmental aspects. A compromise has to be found in which social welfare is maximised. The determination of this social optimum has been attempted with different tools, of which the most correct according to neo-classical economics may be the one based on the economic valuation of the externalities of pollution. However, this approach is still controversial, and few decision makers trust the results enough to apply them. A very powerful alternative exists, which avoids the problem of monetizing physical impacts: multiple-criteria decision making provides methodologies for dealing with impacts in different units and for incorporating the preferences of decision makers or society as a whole, thus allowing the determination of social optima under heterogeneous criteria, which is usually the case in pollution management decisions. In this paper, a compromise programming model is presented for the determination of the optimal pollution levels for the electricity industry in Spain for carbon dioxide, sulphur dioxide, nitrogen oxides, and radioactive waste. The preferences of several sectors of society are incorporated explicitly into the model, so that the solution obtained represents the optimal pollution level from a social point of view. Results show that cost minimisation is still the main objective for society, but the simultaneous consideration of the other criteria achieves large pollution reductions at a low cost increment. (Author)
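
    Compromise programming, as used above, typically scores each alternative by its weighted distance to an ideal point across the criteria. The sketch below shows that generic scoring rule with hypothetical criteria, weights and portfolio values; it is not the model or data of the paper.

      # Generic compromise-programming score: weighted L_p distance to the ideal point.
      # Criteria (cost, CO2, SO2, NOx, radioactive waste), weights and values are
      # hypothetical, normalised placeholders.
      def lp_distance(values, ideal, anti_ideal, weights, p=2):
          total = 0.0
          for v, best, worst, w in zip(values, ideal, anti_ideal, weights):
              total += (w * abs(best - v) / abs(best - worst)) ** p
          return total ** (1.0 / p)

      ideal      = [100, 10, 1, 1, 0.1]
      anti_ideal = [300, 80, 9, 8, 1.0]
      weights    = [0.4, 0.2, 0.15, 0.15, 0.1]
      portfolios = {"mix A": [180, 40, 5, 4, 0.6], "mix B": [220, 25, 3, 3, 0.4]}
      best = min(portfolios, key=lambda k: lp_distance(portfolios[k], ideal, anti_ideal, weights))
      print(best)  # portfolio closest to the ideal point under these assumptions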

  15. Determination Of Optimal Stope Strike Length On Steep Orebodies Through Laser Scanning At Lubambe Copper Zambia

    Directory of Open Access Journals (Sweden)

    Kalume H

    2017-08-01

    Full Text Available Lubambe Copper Mine is located in Chililabombwe, Zambia, and is a joint copper mining venture between three partners: African Rainbow Minerals (40%), Vale (40%) and the Government of Zambia (20%). The current mining method utilises longitudinal room and pillar (LRP) mining on panels with a 70 m strike length. However, these long panels have resulted in unprecedented levels of dilution, mainly from the collapse of the hanging wall laminated ore shale (OS2), leading to reduced recoveries. Observations made underground show high variability in the geological and geotechnical conditions of the rock mass, with factors such as weathering on joints, lamina-spaced joints and stress changes induced by mining all contributing to weakening and early collapse of the hanging wall. Therefore a study was undertaken to establish the optimal stope strike length for steep ore bodies at Lubambe. The exercise involved scanning with a Faro laser scanner after every four stope rings blasted, with the time of each scan recorded. The spatial coherence of lasers makes them ideal measuring tools where measurements need to be taken in inaccessible areas. Recent advances in laser scanning, coupled with the exponential increase in processing power, have greatly improved the methods used to estimate tonnages extracted from massive inaccessible stopes. The collected data were then used to construct digital three-dimensional models of the stope contents. Sections were cut every metre, with deformations measured and analysed with respect to time. Hanging wall deformation rates decreased from 0.14 t/hr to 0.07 t/hr between rings 1 and 8. This reduction was a result of slot blasting, which involved drilling and blasting a number of holes at the same time. Between rings 8 and 25 the deformation rate was roughly constant, averaging 0.28 t/hr, and between rings 26 and 28 a sharp increase in deformation rate was experienced, from as low as 0.16 t/hr to 6.33 t/hr. This sharp increase defines the optimal stope length.

  16. Optimization and comparison of three different methods for the determination of Rn-222 in water

    International Nuclear Information System (INIS)

    Belloni, P.; Ingrao, G.; Cavaioli, M.; Notaro, M.; Torri, G.; Vasselli, R.; Mancini, C.; Santaroni, P.

    1995-01-01

    Three different systems for the determination of radon in water have been examined: liquid scintillation counting (LSC), degassification followed by Lucas cell counting (LCC) and gamma counting (GC). Particular care has been devoted to the sampling methodologies of the water. Comparative results for several environmental samples are given. A critical evaluation is also given on the basis of the final aim of the measurements

  17. The optimal scheme of self blood pressure measurement as determined from ambulatory blood pressure recordings

    NARCIS (Netherlands)

    Verberk, Willem J.; Kroon, Abraham A.; Kessels, Alfons G. H.; Lenders, Jacques W. M.; Thien, Theo; van Montfrans, Gert A.; Smit, Andries J.; de Leeuw, Peter W.

    Objective: To determine how many self-measurements of blood pressure (BP) should be taken at home in order to obtain a reliable estimate of a patient's BP. Design: Participants performed self blood pressure measurement (SBPM) for 7 days (triplicate morning and evening readings). In all of them, office

  18. An innovative approach to determine economically optimal coastal setback lines for risk informed coastal zone management

    NARCIS (Netherlands)

    Ranasinghe, R.; Jongejan, R.B.; Callaghan, D.; Vrijling, J.K.

    2012-01-01

    Current methods used to determine coastal setback lines have several limitations. Furthermore, the historical practice of defining setback lines based on a single deterministic estimate is also proving inadequate with the emergence of risk management style coastal planning frameworks which require

  19. Optimization and comparison of three different methods for the determination of Rn-222 in water

    Energy Technology Data Exchange (ETDEWEB)

    Belloni, P.; Ingrao, G. [ENEA CRE, Casaccia AMB-BIO, Roma (Italy); Cavaioli, M.; Notaro, M.; Torri, G.; Vasselli, R. [ANPA, National Environmental Protection Agency, DISP ARA MET, Roma (Italy); Mancini, C. [Nuclear Engineering Department, University 'La Sapienza', Roma (Italy); Santaroni, P. [National Institute of Nutrition, Roma (Italy)

    1995-10-19

    Three different systems for the determination of radon in water have been examined: liquid scintillation counting (LSC), degassification followed by Lucas cell counting (LCC) and gamma counting (GC). Particular care has been devoted to the sampling methodologies of the water. Comparative results for several environmental samples are given. A critical evaluation is also given on the basis of the final aim of the measurements.

  20. Peculiarities of product placement in Lithuanian movies

    OpenAIRE

    Pilelienė, Lina; Jurgilaitė, Sigita

    2013-01-01

    The scientific problem analysed in the article is formulated as follows: how is product placement used in Lithuanian movies? The object of the article is product placement in Lithuanian movies, and the aim is to analyse the peculiarities of product placement in Lithuanian movies. The following methods were used to reveal the problem and reach the aim. A theoretical analysis of the scientific literature was carried out to construct the framework for the research. The analysis of current usage of product pl...