WorldWideScience

Sample records for determine optimal placement

  1. Optimal placement of capacitors

    Directory of Open Access Journals (Sweden)

    N. Gnanasekaran

    2016-06-01

    Optimal size and location of shunt capacitors in the distribution system play a significant role in minimizing the energy loss and the cost of reactive power compensation. This paper presents a new efficient technique to find the optimal size and location of shunt capacitors with the objective of minimizing the cost due to energy loss and reactive power compensation of the distribution system. A new Shark Smell Optimization (SSO) algorithm is proposed to solve the optimal capacitor placement problem while satisfying the operating constraints. The SSO algorithm is a recently developed metaheuristic optimization algorithm conceptualized from the shark’s hunting ability. It uses a momentum-incorporated gradient search and a rotational-movement-based local search for optimization. To demonstrate the applicability of the proposed method, it is tested on IEEE 34-bus and 118-bus radial distribution systems. The simulation results obtained are compared with previous methods reported in the literature and found to be encouraging.
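
    The objective named in this record combines the cost of energy loss with the cost of reactive compensation. As a rough illustration only (the energy price, the capacitor cost per kvar, and the loss figure below are assumptions, not data from the paper), such an objective can be sketched as:

```python
# Hypothetical yearly cost for shunt capacitor placement. The energy price,
# installed capacitor cost per kvar and the loss figure are illustrative assumptions.

def annual_cost(loss_kw, capacitor_kvar, ke=0.06, hours=8760, kc=5.0):
    """Yearly cost = energy-loss cost + reactive-compensation cost.

    ke    : assumed energy price in $/kWh
    hours : hours per year
    kc    : assumed installed capacitor cost in $/kvar
    """
    energy_loss_cost = ke * loss_kw * hours
    compensation_cost = kc * capacitor_kvar
    return energy_loss_cost + compensation_cost

# Example: 120 kW of residual feeder loss after installing a 900 kvar bank
print(annual_cost(loss_kw=120.0, capacitor_kvar=900.0))
```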

  2. Derivative load voltage and particle swarm optimization to determine optimum sizing and placement of shunt capacitor in improving line losses

    Directory of Open Access Journals (Sweden)

    Mohamed Milad Baiek

    2016-12-01

    The purpose of this research is to study the optimal size and placement of a shunt capacitor in order to minimize line loss. The derivative of the load bus voltage was calculated to identify the sensitive load buses that would benefit most from shunt capacitor placement. Particle swarm optimization (PSO) was demonstrated on the IEEE 14-bus power system to find the optimum size of the shunt capacitor for reducing line loss. The objective function was applied to determine the proper placement of the capacitor and obtain solutions that satisfy the constraints. The simulation was run in Matlab under two scenarios, namely the base case and a 100% load increase. The load bus voltage derivative was used to determine the most sensitive load bus, and PSO was carried out to determine the optimum sizing of the shunt capacitor at that bus. The results show that the most sensitive bus was bus 14 for both the base case and the 100% load increase. The optimum sizing was 8.17 Mvar for the base case and 23.98 Mvar for the 100% load increase. Line losses were reduced by approximately 0.98% for the base case and by about 3.16% for the 100% load increase. The proposed method also gave better results than the harmony search algorithm (HSA): HSA recorded a loss reduction ratio of about 0.44% for the base case and 2.67% when the load was increased by 100%, while PSO achieved loss reduction ratios of about 1.12% and 4.02% for the base case and the 100% load increase, respectively. The results of this study support previous work, and it is concluded that PSO can solve such engineering problems and determine shunt capacitor sizing on the power system simply and accurately compared with other evolutionary optimization methods.
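
    Record 2 uses continuous PSO to size a single capacitor bank. A minimal, self-contained sketch of that idea follows; the quadratic loss proxy, the bounds and the PSO constants are invented stand-ins for the paper's IEEE 14-bus load-flow evaluation.

```python
import random

# Minimal continuous PSO for sizing one shunt capacitor. The loss model is a
# toy quadratic proxy (minimum near 8 Mvar), not a real load-flow calculation.

def line_loss(q_mvar):
    return 0.5 + 0.002 * (q_mvar - 8.0) ** 2

def pso(n_particles=20, iters=100, q_min=0.0, q_max=30.0, w=0.7, c1=1.5, c2=1.5):
    pos = [random.uniform(q_min, q_max) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                              # personal best positions
    pbest_val = [line_loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], q_min), q_max)
            val = line_loss(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

print(pso())  # converges near 8 Mvar for the toy loss model
```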

  3. Comparison of metaheuristic techniques to determine optimal placement of biomass power plants

    International Nuclear Information System (INIS)

    Reche-Lopez, P.; Ruiz-Reyes, N.; Garcia Galan, S.; Jurado, F.

    2009-01-01

    This paper deals with the application and comparison of several metaheuristic techniques to optimize the placement and supply area of biomass-fueled power plants. Both trajectory-based and population-based methods are applied. In particular, two well-known trajectory methods, Simulated Annealing (SA) and Tabu Search (TS), and two commonly used population-based methods, Genetic Algorithms (GA) and Particle Swarm Optimization (PSO), are considered. In addition, a new binary PSO algorithm is proposed, which incorporates an inertia weight factor, as in the classical continuous approach. The fitness function for the metaheuristics is the profitability index, defined as the ratio between the net present value and the initial investment. In this work, forest residues are considered as the biomass source, and the problem constraints are that the generation system must be located inside the supply area and that its maximum electric power is 5 MW. The comparative results obtained by all considered metaheuristics are discussed. A random walk has also been assessed for the problem.
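
    Records 3 and 4 use the profitability index, the ratio of net present value to initial investment, as the metaheuristic fitness. A small sketch of that fitness follows; the discount rate, cash flows and investment figure are illustrative assumptions, not values from the study.

```python
# Profitability index used as the fitness function in records 3 and 4.
# Cash flows, discount rate and initial investment below are placeholders.

def net_present_value(cash_flows, rate):
    """NPV of yearly cash flows (years 1..n) at the given discount rate."""
    return sum(cf / (1.0 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

def profitability_index(cash_flows, rate, initial_investment):
    return net_present_value(cash_flows, rate) / initial_investment

# Example: a 5 MW biomass plant with assumed yearly net revenues over 15 years
yearly = [1.2e6] * 15
print(profitability_index(yearly, rate=0.08, initial_investment=7.5e6))
```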

  4. Comparison of metaheuristic techniques to determine optimal placement of biomass power plants

    Energy Technology Data Exchange (ETDEWEB)

    Reche-Lopez, P.; Ruiz-Reyes, N.; Garcia Galan, S. [Telecommunication Engineering Department, University of Jaen Polytechnic School, C/ Alfonso X el Sabio 28, 23700 Linares, Jaen (Spain); Jurado, F. [Electrical Engineering Department, University of Jaen Polytechnic School, C/ Alfonso X el Sabio 28, 23700 Linares, Jaen (Spain)

    2009-08-15

    This paper deals with the application and comparison of several metaheuristic techniques to optimize the placement and supply area of biomass-fueled power plants. Both trajectory-based and population-based methods are applied. In particular, two well-known trajectory methods, Simulated Annealing (SA) and Tabu Search (TS), and two commonly used population-based methods, Genetic Algorithms (GA) and Particle Swarm Optimization (PSO), are considered. In addition, a new binary PSO algorithm is proposed, which incorporates an inertia weight factor, as in the classical continuous approach. The fitness function for the metaheuristics is the profitability index, defined as the ratio between the net present value and the initial investment. In this work, forest residues are considered as the biomass source, and the problem constraints are that the generation system must be located inside the supply area and that its maximum electric power is 5 MW. The comparative results obtained by all considered metaheuristics are discussed. A random walk has also been assessed for the problem. (author)

  5. Optimization of portal placement for endoscopic calcaneoplasty

    NARCIS (Netherlands)

    van Sterkenburg, Maayke N.; Groot, Minke; Sierevelt, Inger N.; Spennacchio, Pietro A.; Kerkhoffs, Gino M. M. J.; van Dijk, C. Niek

    2011-01-01

    The purpose of our study was to determine an anatomic landmark to help locate portals in endoscopic calcaneoplasty. The device for optimal portal placement (DOPP) was developed to measure the distance from the distal fibula tip to the calcaneus (DFC) in 28 volunteers to determine the location of the

  6. Optimal Placement of Cerebral Oximeter Monitors to Avoid the Frontal Sinus as Determined by Computed Tomography.

    Science.gov (United States)

    Gregory, Alexander J; Hatem, Muhammed A; Yee, Kevin; Grocott, Hilary P

    2016-01-01

    To determine the optimal location to place cerebral oximeter optodes to avoid the frontal sinus, using the orbit of the skull as a landmark. Retrospective observational study. Academic hospital. Fifty adult patients with previously acquired computed tomography angiography scans of the head. The distance between the superior orbit of the skull and the most superior edge of the frontal sinus was measured using imaging software. The mean (SD) frontal sinus height was 16.4 (7.2) mm. There was a nonsignificant trend toward larger frontal sinus height in men compared with women (p = 0.12). Age, height, and body surface area did not correlate with frontal sinus height. Head circumference was positively correlated (r = 0.32; p = 0.03) to frontal sinus height, with a low level of predictability based on linear regression (R(2) = 0.10; p = 0.02). Placing cerebral oximeter optodes >3 cm from the superior rim of the orbit will avoid the frontal sinus in >98% of patients. Predicting the frontal sinus height based on common patient variables is difficult. Additional studies are required to evaluate the recommended height in pediatric populations and patients of various ethnic backgrounds. The clinical relevance of avoiding the frontal sinus also needs to be further elucidated. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Determination of optimal placements of markers on the thigh during walking and landing

    Directory of Open Access Journals (Sweden)

    Pain M.T.G.

    2010-06-01

    Kinematics of skin markers are affected by skin tissue artefact with respect to the bone during sports activities and locomotion. The purpose of this study is to determine the least disturbed marker locations for walking and landing. Twenty-six markers were placed on the thigh of nine male subjects. Each subject performed a static trial, a setup movement for determining a functional hip joint centre, and five walking and landing trials. The marker displacements were obtained by comparing recorded marker positions with solidified marker positions based on the geometry of the static acquisition. The markers were subsequently ranked from the most to the least deformed. The rankings across trials for each subject were analysed with Kendall's coefficient of concordance, and descriptive statistics were used to determine the most and the least disturbed markers. The results show reproducibility between trials for each subject for the two movements. Statistical analysis shows that the most deformed markers during walking were located close to the hip and knee joints, whereas the least disturbed were on the mid-thigh. The landing analysis did not allow the best markers to be distinguished from the worst.

  8. Sensor Placement Optimization using Chama

    Energy Technology Data Exchange (ETDEWEB)

    Klise, Katherine A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Geotechnology and Engineering Dept.; Nicholson, Bethany L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Discrete Math and Optimization Dept.; Laird, Carl Damon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Discrete Math and Optimization Dept.

    2017-10-01

    Continuous or regularly scheduled monitoring has the potential to quickly identify changes in the environment. However, even with low-cost sensors, only a limited number of sensors can be deployed. The physical placement of these sensors, along with the sensor technology and operating conditions, can have a large impact on the performance of a monitoring strategy. Chama is an open source Python package which includes mixed-integer, stochastic programming formulations to determine sensor locations and technology that maximize monitoring effectiveness. The methods in Chama are general and can be applied to a wide range of applications. Chama is currently being used to design sensor networks to monitor airborne pollutants and to monitor water quality in water distribution systems. The following documentation includes installation instructions and examples, description of software features, and software license. The software is intended to be used by regulatory agencies, industry, and the research community. It is assumed that the reader is familiar with the Python Programming Language. References are included for additional background on software components. Online documentation, hosted at http://chama.readthedocs.io/, will be updated as new features are added. The online version includes API documentation.
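
    For orientation only, the kind of scenario-coverage placement problem that Chama formulates as a mixed-integer program can be illustrated with a tiny greedy sketch. This is not Chama's API; the candidate locations and detectable-scenario sets below are invented.

```python
# Greedy maximum-coverage stand-in for a sensor placement formulation: pick,
# within a sensor budget, the candidates that detect the most so-far-uncovered
# incident scenarios. Data are invented placeholders.

def greedy_placement(detects, budget):
    """detects[s] = set of incident scenarios sensor candidate s can detect."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(detects, key=lambda s: len(detects[s] - covered))
        if not detects[best] - covered:
            break                      # nothing new can be covered
        chosen.append(best)
        covered |= detects[best]
    return chosen, covered

candidates = {
    "node_A": {"leak1", "leak2"},
    "node_B": {"leak2", "leak3", "leak4"},
    "node_C": {"leak5"},
}
print(greedy_placement(candidates, budget=2))
```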

  9. Brocade: Optimal flow placement in SDN networks

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Today's networks pose several challenges to network providers. These challenges fall into a variety of areas, ranging from determining efficient utilization of network bandwidth to finding out which user applications consume the majority of network resources, and how to protect a given network from volumetric and botnet attacks. Optimal placement of flows deals with identifying network issues and addressing them in real time. The overall solution helps in building new services where the network is more secure and more efficient. The resulting benefits are increased network efficiency due to better capacity and resource planning, better security with real-time threat mitigation, and improved user experience as a result of increased service velocity.

  10. Optimal PMU Placement with Uncertainty Using Pareto Method

    Directory of Open Access Journals (Sweden)

    A. Ketabi

    2012-01-01

    This paper proposes a method for optimal placement of Phasor Measurement Units (PMUs) in state estimation considering uncertainty. State estimation is first turned into an optimization exercise in which the objective function is the number of unobservable buses, determined using Singular Value Decomposition (SVD). For the normal condition, a Differential Evolution (DE) algorithm is used to find the optimal placement of PMUs. By considering uncertainty, a multiobjective optimization exercise is then formulated, and a DE algorithm based on the Pareto optimum method is proposed to solve it. The suggested strategy is applied to the IEEE 30-bus test system in several case studies to evaluate the optimal PMU placement.
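
    Record 10's objective counts unobservable buses using SVD. A toy version of that count is sketched below; the simplified measurement model (a PMU observes its own bus and its immediate neighbours) and the 4-bus example are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

# SVD-based observability count: rank of a simplified measurement matrix built
# from the PMU set, compared against the number of buses. Data are invented.

def measurement_matrix(adjacency, pmu_buses):
    n = adjacency.shape[0]
    rows = []
    for b in pmu_buses:
        row = np.zeros(n); row[b] = 1.0
        rows.append(row)                        # own bus phasor
        for j in np.nonzero(adjacency[b])[0]:
            row = np.zeros(n); row[j] = 1.0
            rows.append(row)                    # neighbours via branch currents
    return np.array(rows)

def unobservable_count(adjacency, pmu_buses, tol=1e-9):
    H = measurement_matrix(adjacency, pmu_buses)
    rank = int(np.sum(np.linalg.svd(H, compute_uv=False) > tol)) if H.size else 0
    return adjacency.shape[0] - rank            # buses not spanned by measurements

# 4-bus chain 0-1-2-3 with a single PMU at bus 1: bus 3 stays unobserved -> 1
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(unobservable_count(A, pmu_buses=[1]))
```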

  11. Optimal PMU Placement By Improved Particle Swarm Optimization

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Liu, Leo; Chen, Zhe

    2013-01-01

    This paper presents an improved binary particle swarm optimization (IBPSO) technique for optimal phasor measurement unit (PMU) placement in a power network for complete system observability. Various improvements are proposed to enhance the efficiency and convergence rate of the conventional particle swarm optimization method. The proposed IBPSO method ensures optimal PMU placement with and without consideration of zero injection measurements. The proposed method has been applied to standard test systems such as a 17-bus system and the IEEE 24-bus, IEEE 30-bus, New England 39-bus, and IEEE 57-bus systems...
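
    Record 11 builds on binary PSO. Below is a minimal, textbook binary PSO position update (inertia weight, velocity clamping, sigmoid transfer) to show the baseline the paper improves on; it does not reproduce the paper's specific IBPSO modifications, and all constants are assumptions.

```python
import math
import random

# One binary-PSO update step for a PMU-placement particle: each bit says
# whether a PMU is installed at that candidate bus. Constants are assumptions.

def update_particle(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0, v_max=4.0):
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        r1, r2 = random.random(), random.random()
        vel = w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
        vel = max(-v_max, min(v_max, vel))        # velocity clamping
        prob = 1.0 / (1.0 + math.exp(-vel))       # sigmoid transfer function
        new_x.append(1 if random.random() < prob else 0)
        new_v.append(vel)
    return new_x, new_v

# One particle over 5 candidate buses (1 = PMU installed at that bus)
x, v = [0, 1, 0, 0, 1], [0.0] * 5
print(update_particle(x, v, pbest=[0, 1, 1, 0, 1], gbest=[1, 1, 0, 0, 1]))
```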

  12. Kinematically optimal robot placement for minimum time coordinated motion

    Energy Technology Data Exchange (ETDEWEB)

    Feddema, J.T.

    1995-10-01

    This paper describes an algorithm for determining the optimal placement of a robotic manipulator within a workcell for minimum time coordinated motion. The algorithm uses a simple principle of coordinated motion to estimate the time of a joint interpolated motion. Specifically, the coordinated motion profile is limited by the slowest axis. Two and six degree of freedom (DOF) examples are presented. In experimental tests on a FANUC S-800 arm, the optimal placement of the robot can improve cycle time of a robotic operation by as much as 25%. In high volume processes where the robot motion is currently the limiting factor, this increased throughput can result in substantial cost savings.
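
    The timing principle stated in record 12, that a joint-interpolated move is limited by the slowest axis, can be written down directly. The constant-velocity joint model and the numbers below are simplifying assumptions, not the paper's FANUC S-800 data.

```python
# Coordinated-motion timing sketch: with joint interpolation, the time of each
# move is set by the slowest axis; a candidate base placement is scored by the
# total cycle time over the waypoint sequence. Values are placeholders.

def move_time(q_start, q_goal, max_joint_speed):
    """Time of a joint-interpolated move, limited by the slowest axis."""
    return max(abs(g - s) / v
               for s, g, v in zip(q_start, q_goal, max_joint_speed))

def cycle_time(waypoints, max_joint_speed):
    """Total time to visit a sequence of joint-space waypoints in order."""
    return sum(move_time(a, b, max_joint_speed)
               for a, b in zip(waypoints, waypoints[1:]))

# 2-DOF example (radians, rad/s): evaluating one candidate base placement
path = [(0.0, 0.0), (1.2, 0.4), (0.3, 1.5)]
print(cycle_time(path, max_joint_speed=(1.0, 2.0)))
```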

  13. New strategy for optimizing wavelength converter placement

    Science.gov (United States)

    Foo, Y. C.; Chien, S. F.; Low, Andy L. Y.; Teo, C. F.; Lee, Youngseok

    2005-01-01

    This paper proposes a new strategic alternate-path routing to be combined with the particle swarm optimization (PSO) algorithm to better solve the wavelength converters placement problem. The strategic search heuristic is designed to provide network connectivity topologies for the converters to be placed more effectively. The new strategy is applied to the 14-node NSFNET to examine its efficiency in reducing the blocking probability in sparse wavelength conversion network. Computed results show that, when applied to the identical optimization framework, our search method outperforms both the equal-cost multipath routing and traffic-engineering-aware shortest-path routing.

  14. Optimal capacitor sizing and placement based on real time analysis ...

    African Journals Online (AJOL)

    In this paper, optimal capacitor sizing and placement method was used to improve energy efficiency. It involves the placement of capacitors in a specific location with suitable sizing based on the current load of the electrical system. The optimization is done in real time scenario where the sizing and placement of the ...

  15. Optimal PMU placement using Iterated Local Search

    Energy Technology Data Exchange (ETDEWEB)

    Hurtgen, M.; Maun, J.-C. [Universite Libre de Bruxelles, Avenue F. Roosevelt 50, B-1050 Brussels (Belgium)

    2010-10-15

    An essential tool for power system monitoring is state estimation. Using PMUs can greatly improve the state estimation process. However, for state estimation, the PMUs should be placed appropriately in the network. The problem of optimal PMU placement for full observability is analysed in this paper. The objective of the paper is to minimise the size of the PMU configuration while allowing full observability of the network. The method proposed initially suggests a PMU distribution which makes the network observable. The Iterated Local Search (ILS) metaheuristic is then used to minimise the size of the PMU configuration needed to observe the network. The algorithm is tested on IEEE test networks with 14, 57 and 118 nodes and compared to the results obtained in previous publications. (author)

  16. Pose optimization and port placement for robot-assisted minimally invasive surgery in cholecystectomy.

    Science.gov (United States)

    Feng, Mei; Jin, Xingze; Tong, Weihua; Guo, Xiaoyu; Zhao, Ji; Fu, Yili

    2017-12-01

    Pose optimization and port placement are critical issues for preoperative preparation in robot-assisted minimally invasive surgery (RMIS), and affect the robot performance and surgery quality. This paper proposes a method for pose optimization and port placement for RMIS in cholecystectomy that considers both the robot and surgery requirements. The robot pose optimization was divided into optimization of the positioning joint configuration and optimization of the end effector configuration. To determine the optimal location for the trocar port placement, the operational workspace was defined as the evaluation index. The port area was divided into many sub-areas, and that with the maximum operational workspace was selected as the location for the port placement. Considering the left robotic arm as an example, the location for the port placement and joints angles for robotic arm configuration were discussed and simulated using the proposed method. This research can provide guidelines for surgeons in preoperative preparation. Copyright © 2017 John Wiley & Sons, Ltd.

  17. MVMO-based approach for optimal placement and tuning of ...

    African Journals Online (AJOL)

    Variance Mapping Optimization (MVMO-S) to solve the multi-scenario formulation of the optimal placement and coordinated tuning of power system supplementary damping controllers (POCDCs). The effectiveness of the approach is evaluated ...

  18. Network placement optimization for large-scale distributed system

    Science.gov (United States)

    Ren, Yu; Liu, Fangfang; Fu, Yunxia; Zhou, Zheng

    2018-01-01

    The network geometry strongly influences the performance of a distributed system, i.e., its coverage capability, measurement accuracy and overall cost. Network placement optimization is therefore an urgent issue in distributed measurement, even in large-scale metrology. This paper presents an effective computer-assisted network placement optimization procedure for large-scale distributed systems and illustrates it with the example of a multi-tracker system. To obtain an optimal placement, the coverage capability and the coordinate uncertainty of the network are quantified, and a placement optimization objective function is developed in terms of coverage capability, measurement accuracy and overall cost. A novel grid-based encoding approach for the genetic algorithm is proposed, so that the network placement is optimized by a global rough search followed by a local detailed search. An obvious advantage is that no specific initial placement is required. Finally, a specific application illustrates that this placement optimization procedure can simulate the measurement results of a specific network and design the optimal placement efficiently.

  19. Optimal sensor placement in integrated gasification combined cycle power systems

    International Nuclear Information System (INIS)

    Lee, Adrian J.; Diwekar, Urmila M.

    2012-01-01

    Highlights: Addresses the sensor placement problem in advanced power systems. Presents the problem as a stochastic programming problem. Considers Fisher-information-based objectives along with the economics of sensors. For the first time addresses the problem of sensor placement in advanced power systems. Abstract: The optimal sensor placement problem involves determining the most effective locations to place a network of sensors across an array of measurable signals, in accordance with a set of specified objectives and constraints, such as cost, performance, and sensitivity to variations in uncertain environments. In advanced power systems, such as pulverized coal and integrated gasification combined cycle power plants, the placement of sensors on-line within the power generation process can be expensive or technically infeasible due to certain harsh environments. This paper uses advanced modeling techniques to simulate the system’s steady state behavior, and to capture the variability in unknown process variables using the accuracy information from a given set of online sensors. This variability and measurement error is analyzed using a technique from information theory to determine the most cost-effective network of on-line sensors by formulating a nonlinear, stochastic binary integer problem. The solution is achieved by using an efficient sampling technique, the Better Optimization algorithm for Nonlinear Uncertain Systems. The key contribution of using Fisher information as a metric for observation order is that it generalizes the Gaussian assumption on representing process and measurement variability for systems governed by nonlinear dynamics.

  20. Optimal Sensor Placement for Latticed Shell Structure Based on an Improved Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Xun Zhang

    2014-01-01

    Optimal sensor placement is a key issue in the structural health monitoring of large-scale structures. However, some aspects of existing approaches require improvement, such as the empirical and unreliable selection of mode and sensor numbers and time-consuming computation. A novel improved particle swarm optimization (IPSO) algorithm is proposed to address these problems. The approach first employs the cumulative effective modal mass participation ratio to select the mode number. Three strategies are then adopted to improve the PSO algorithm. Finally, the IPSO algorithm is utilized to determine the optimal sensor number and configurations. A case study of a latticed shell model is implemented to verify the feasibility of the proposed algorithm and four different PSO algorithms. The effective independence method is also taken as a contrast experiment. The comparison results show that the optimal placement schemes obtained by the PSO algorithms are valid, and the proposed IPSO algorithm achieves better convergence speed and precision.

  1. Optimal placement of distributed generation in distribution networks ...

    African Journals Online (AJOL)

    This paper proposes the application of Particle Swarm Optimization (PSO) technique to find the optimal size and optimum location for the placement of DG in the radial distribution networks for active power compensation by reduction in real power losses and enhancement in voltage profile. In the first segment, the optimal ...

  2. Computer modeling for optimal placement of gloveboxes

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.; Olivas, J.D. [Los Alamos National Lab., NM (United States); Finch, P.R. [New Mexico State Univ., Las Cruces, NM (United States)

    1997-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.

  3. Computer modeling for optimal placement of gloveboxes

    International Nuclear Information System (INIS)

    Hench, K.W.; Olivas, J.D.; Finch, P.R.

    1997-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.

  4. Determining the brand awareness of product placement in video games

    OpenAIRE

    Král, Marek

    2015-01-01

    This bachelor thesis focuses on determining the brand awareness of product placement in video games. The theoretical part includes information about marketing, product placement and video games. The practical part consists of an evaluation of market research about product placements in video games. The conclusion suggests the most important factors influencing the level of brand awareness.

  5. MVMO-based approach for optimal placement and tuning of ...

    African Journals Online (AJOL)

    ... optimal placement and coordinated tuning of power system supplementary damping controllers (POCDCs). The effectiveness of the approach is evaluated based on the classical IEEE 39-bus (New England) test system. Numerical results include performance comparisons with other metaheuristic optimization techniques, ...

  6. Optimal caliper placement: manual vs automated methods.

    Science.gov (United States)

    Yazdi, B; Zanker, P; Wanger, P; Sonek, J; Pintoffl, K; Hoopmann, M; Kagan, K O

    2014-02-01

    To examine the inter- and intra-operator repeatability of manual placement of callipers in the assessment of basic biometric measurements and to compare the results to an automated calliper placement system. Stored ultrasound images of 95 normal fetuses between 19 and 25 weeks' gestation were used. Five operators (two experts, one resident and two students) were asked to measure the BPD, OFD and FL twice manually and automatically. For each operator, intra-operator repeatability of the manual and automated measurements was assessed by the within-operator standard deviation. For the assessment of the inter-operator repeatability, the mean of the four manual measurements by the two experts was used as the gold standard. The relative bias of the manual measurements of the three non-expert operators and the operator-independent automated measurement were compared with the gold standard measurement by means and 95% confidence intervals. In 88.4% of the 95 cases, the automated measurement algorithm was able to obtain appropriate measurements of the BPD, OFD, AC and FL. Within-operator standard deviations of the manual measurements ranged between 0.15 and 1.56, irrespective of the experience of the operator. Using the automated biometric measurement system, there was no difference between the measurements of each operator. As far as the inter-operator repeatability is concerned, the difference between the manual measurements of the two students, the resident, and the gold standard was between -0.10 and 2.53 mm. The automated measurements tended to be closer to the gold standard but did not reach statistical significance. In about 90% of the cases, it was possible to obtain basic biometric measurements with an automated system. The use of automated measurements resulted in a significant improvement of the intra-operator but not of the inter-operator repeatability, but measurements were not significantly closer to the gold standard of expert examiners.

  7. Bond graph to digraph conversion: A sensor placement optimization ...

    Indian Academy of Sciences (India)

    Sadhana, Volume 39, Issue 5. Bond graph to digraph conversion: A sensor placement optimization for fault detection and isolation. Authors: Alem Saïd, Benazzouz Djamel; Solid Mechanics and Systems Laboratory (LMSS), University M'Hamed Bougara Boumerdes, Boumerdes 35000, Algeria.

  8. Optimal Placement of Phasor Measurement Units with New Considerations

    DEFF Research Database (Denmark)

    Su, Chi; Chen, Zhe

    2010-01-01

    of these factors is taken into account in the proposed PMU placement method in this paper, which is the number of adjacent branches to the PMU located buses. The concept of full topological observability is adopted and a version of binary particle swarm optimization (PSO) algorithm is utilized. Results from...

  9. Optimal capacitor placement in smart distribution systems to improve ...

    African Journals Online (AJOL)

    An energy efficient power distribution network can provide cost-effective and collaborative platform for supporting present and future smart distribution system requirements. Energy efficiency in distribution systems is achieved through reconfiguration of distributed generation and optimal capacitor placement. Though several ...

  10. Optimal Trajectories Generation in Robotic Fiber Placement Systems

    Science.gov (United States)

    Gao, Jiuchun; Pashkevich, Anatol; Caro, Stéphane

    2017-06-01

    The paper proposes a methodology for optimal trajectory generation in robotic fiber placement systems. A strategy to tune the parameters of the optimization algorithm at hand is also introduced. The presented technique transforms the original continuous problem into a discrete one in which time-optimal motions are generated by dynamic programming. The developed tuning strategy allows the computing time to be reduced substantially and trajectories satisfying industrial constraints to be obtained. The feasibility and advantages of the proposed methodology are confirmed by an application example.

  11. [Determinants of urgency of nursing home placement].

    Science.gov (United States)

    Kishida, Kensaku; Tanizaki, Shizuko

    2008-05-01

    The aim of this paper was to identify factors affecting the urgency of nursing home placement after the introduction of public long-term care insurance. The subjects were families including at least one disabled elderly person and at least one other family member in two cities in Chugoku Prefecture. The measure of the urgency of placement was 0 if the family had not submitted any application for placement, 1 if the care managers judged that the elderly person should enter in the future when she/he really needs placement, 2 if the care managers judged that she/he might be able to wait for a short while, and 3 if the care managers judged that she/he should enter as early as possible. The estimation used an ordered logit model in which the dependent variable was the measure of urgency and the independent variables were several attributes of the families. In the estimation, we considered the possibility that the coefficients depend on the categories of the dependent variable. We obtained data for 146 waiting families and 494 others (total 640). There were differences in the urgency of placement among the waiting elderly, as follows: "she/he should enter as early as possible" (28.8%); "she/he can wait for a while" (32.2%); "she/he should enter in the future when she/he really needs placement" (39.0%). The results of the multivariate analyses showed that the urgency of placement correlated significantly with the severity of the elderly person's disabilities, the number of the primary caregivers' own symptoms, the family members' negative attitude toward caregiving, residing in city A, not having one's own house, and limited use of short-stay facilities due to the circumstances of the providers. When judging the urgency of placement, we should consider not only whether the applicant has submitted a request for a nursing home, but also differences among the waiting families. The urgency of placement correlates significantly with the severity of disability of the elderly person, the number of primary

  12. Optimization of well placement in geothermal reservoirs using artificial intelligence

    Science.gov (United States)

    Akın, Serhat; Kok, Mustafa V.; Uraz, Irtek

    2010-06-01

    This research proposes a framework for determining the optimum location of an injection well using an inference method, artificial neural networks, and a search algorithm to create a search space and locate the global maxima. The production history of a complex carbonate geothermal reservoir (Kizildere geothermal field, Turkey) is used to evaluate the proposed framework. Neural networks are used as a tool to replicate the behavior of commercial simulators by capturing the response of the field given a limited number of parameters such as temperature, pressure, injection location, and injection flow rate. A study on different network designs indicates that a combination of a neural network and an optimization algorithm (explicit search with variable stepping) to capture local maxima can be used to locate a region or a location for optimum well placement. Results also indicate shortcomings and possible pitfalls associated with the approach. With the flexibility of the proposed workflow, it is possible to incorporate various parameters including injection flow rate, temperature, and location. For the field studied, the optimum injection well location is found to be in the southeastern part of the field. Specific locations resulting from the workflow indicated a consistent search space, having higher values in that particular region. When studied with fixed flow rates (2500 and 4911 m3/day), a search run through the whole field located two locations in the very same region, giving consistent predictions. Further study carried out by incorporating the effect of different flow rates indicates that the algorithm can be run in a particular region of interest and that different flow rates may yield different locations. This analysis resulted in a new location in the same region and an optimum injection rate of 4000 m3/day. It is observed that the use of a neural network as a proxy to a numerical simulator is viable for narrowing down or locating the area of interest for
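
    Record 12 couples a trained surrogate with an explicit search over candidate injection locations. The sketch below illustrates that workflow with an invented analytic stand-in for the neural network and an exhaustive grid search; none of the numbers are from the Kizildere study.

```python
# Surrogate-plus-search sketch: a cheap response model stands in for the
# reservoir simulator, and a grid search locates the best injection point.

def surrogate_response(x_km, y_km, rate_m3_day):
    # Invented stand-in for the trained network: peak benefit near (2.5, 1.0)
    # at roughly 4000 m3/day of injection.
    return (-((x_km - 2.5) ** 2 + (y_km - 1.0) ** 2)
            - 1e-7 * (rate_m3_day - 4000.0) ** 2)

def grid_search(step_km=0.25, rate_m3_day=4000.0):
    best, best_val = None, float("-inf")
    x = 0.0
    while x <= 4.0:
        y = 0.0
        while y <= 2.0:
            val = surrogate_response(x, y, rate_m3_day)
            if val > best_val:
                best, best_val = (x, y), val
            y += step_km
        x += step_km
    return best, best_val

print(grid_search())   # lands near (2.5, 1.0) for the toy surrogate
```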

  13. Optimizing robot placement for visit-point tasks

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Y.K.; Watterberg, P.A.

    1996-06-01

    We present a manipulator placement algorithm for minimizing the length of the manipulator motion performing a visit-point task such as spot welding. Given a set of points for the tool of a manipulator to visit, our algorithm finds the shortest robot motion required to visit the points from each possible base configuration. The base configuration resulting in the shortest motion is selected as the optimal robot placement. The shortest robot motion required for visiting multiple points from a given base configuration is computed using a variant of the traveling salesman algorithm in the robot joint space and a point-to-point path planner that plans collision-free robot paths between two configurations. Our robot placement algorithm is expected to reduce the robot cycle time during visit-point tasks, as well as to speed up the robot set-up process when building a manufacturing line.
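
    Record 13 evaluates each candidate base placement by solving a travelling-salesman-type ordering over the visit points. A nearest-neighbour ordering is a crude stand-in for that step; it ignores the joint-space metric and the collision-free planner used in the paper, and the points are invented.

```python
import math

# Nearest-neighbour visiting order for a set of tool points, as a simple proxy
# for the TSP step when scoring one candidate robot base placement.

def nn_tour(points, start=0):
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

weld_points = [(0.0, 0.0), (0.4, 0.1), (0.1, 0.5), (0.6, 0.6)]
print(nn_tour(weld_points))   # visiting order evaluated for one base placement
```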

  14. Optimization of the Document Placement in the RFID Cabinet

    Directory of Open Access Journals (Sweden)

    Kiedrowicz Maciej

    2016-01-01

    The study is devoted to the issue of optimizing the placement of documents in a single RFID cabinet. It is assumed that the optimization problem means reducing the time needed to archive information on all documents with RFID tags. Since the explicit form of the criterion function remains unknown, the regression analysis method has been used to approximate it, using data from a computer simulation of the document archiving process. To solve the optimization problem, the modified gradient projection method has been used.

  15. Sensor Placement Optimization of Vibration Test on Medium-Speed Mill

    Directory of Open Access Journals (Sweden)

    Lihua Zhu

    2015-01-01

    Condition assessment and decision making are important tasks of vibration testing on dynamic machines, and accurate dynamic response data depend on sensors being placed on the structure appropriately. The common methods and evaluation criteria of optimal sensor placement (OSP) are summarized. In order to test the vibration characteristics of a medium-speed mill in a thermal power plant, the optimal placement of 12 candidate measuring points in the X, Y, and Z directions on the mill was discussed for different targeted modal shapes. The OSP of the medium-speed mill was conducted using the effective independence method (EfI) and the QR decomposition algorithm. The results showed that the order of modal shapes had an important influence on the optimization results, and the difference between these two methods in sensor placement optimization became smaller as the number of target modes decreased. The final OSP scheme was determined based on the optimal results and the actual test requirements. The field test results were basically consistent with the finite element analysis results, which indicated that the sensor placement optimization for vibration testing on the medium-speed mill was feasible.
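
    The effective independence (EfI) method used in record 15 iteratively removes the candidate degree of freedom that contributes least to the linear independence of the target mode shapes. A compact sketch with placeholder mode-shape data follows; the real mill model is of course not reproduced here.

```python
import numpy as np

# Effective independence (EfI) selection: keep the rows of the mode shape
# matrix that contribute most to the projection matrix phi (phi^T phi)^-1 phi^T.

def efi_selection(phi, n_sensors):
    """phi: (candidates x modes) mode shape matrix; keep n_sensors rows."""
    keep = list(range(phi.shape[0]))
    while len(keep) > n_sensors:
        p = phi[keep]
        # Effective independence distribution = diagonal of the projection matrix
        ed = np.einsum("ij,ji->i", p, np.linalg.solve(p.T @ p, p.T))
        keep.pop(int(np.argmin(ed)))          # drop the least informative DOF
    return keep

rng = np.random.default_rng(0)
phi = rng.normal(size=(12, 3))                # 12 candidate DOFs, 3 target modes
print(efi_selection(phi, n_sensors=5))
```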

  16. Simulation based optimization on automated fibre placement process

    Science.gov (United States)

    Lei, Shi

    2018-02-01

    In this paper, a software-simulation-based method (Autodesk TruPlan & TruFiber) is proposed to optimize the automated fibre placement (AFP) process. Different types of manufacturability analysis are introduced to predict potential defects. Advanced fibre path generation algorithms are compared for geometrically different parts. Major manufacturing data are taken into consideration prior to tool path generation to achieve a high success rate of manufacturing.

  17. Optimal accelerometer placement on a robot arm for pose estimation

    Science.gov (United States)

    Wijayasinghe, Indika B.; Sanford, Joseph D.; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Das, Sumit K.; Popa, Dan O.

    2017-05-01

    The performance of robots to carry out tasks depends in part on the sensor information they can utilize. Usually, robots are fitted with angle joint encoders that are used to estimate the position and orientation (or the pose) of its end-effector. However, there are numerous situations, such as in legged locomotion, mobile manipulation, or prosthetics, where such joint sensors may not be present at every, or any joint. In this paper we study the use of inertial sensors, in particular accelerometers, placed on the robot that can be used to estimate the robot pose. Studying accelerometer placement on a robot involves many parameters that affect the performance of the intended positioning task. Parameters such as the number of accelerometers, their size, geometric placement and Signal-to-Noise Ratio (SNR) are included in our study of their effects for robot pose estimation. Due to the ubiquitous availability of inexpensive accelerometers, we investigated pose estimation gains resulting from using increasingly large numbers of sensors. Monte-Carlo simulations are performed with a two-link robot arm to obtain the expected value of an estimation error metric for different accelerometer configurations, which are then compared for optimization. Results show that, with a fixed SNR model, the pose estimation error decreases with increasing number of accelerometers, whereas for a SNR model that scales inversely to the accelerometer footprint, the pose estimation error increases with the number of accelerometers. It is also shown that the optimal placement of the accelerometers depends on the method used for pose estimation. The findings suggest that an integration-based method favors placement of accelerometers at the extremities of the robot links, whereas a kinematic-constraints-based method favors a more uniformly distributed placement along the robot links.

  18. A Framework for Optimizing the Placement of Tidal Turbines

    Science.gov (United States)

    Nelson, K. S.; Roberts, J.; Jones, C.; James, S. C.

    2013-12-01

    Power generation with marine hydrokinetic (MHK) current energy converters (CECs), often in the form of underwater turbines, is receiving growing global interest. Because of reasonable investment, maintenance, reliability, and environmental friendliness, this technology can contribute to national (and global) energy markets and is worthy of research investment. Furthermore, in remote areas, small-scale MHK energy from river, tidal, or ocean currents can provide a local power supply. However, little is known about the potential environmental effects of CEC operation in coastal embayments, estuaries, or rivers, or of the cumulative impacts of these devices on aquatic ecosystems over years or decades of operation. There is an urgent need for practical, accessible tools and peer-reviewed publications to help industry and regulators evaluate environmental impacts and mitigation measures, while establishing best siting and design practices. Sandia National Laboratories (SNL) and Sea Engineering, Inc. (SEI) have investigated the potential environmental impacts and performance of individual tidal energy converters (TECs) in Cobscook Bay, ME; TECs are a subset of CECs that are specifically deployed in tidal channels. Cobscook Bay is the first deployment location of Ocean Renewable Power Company's (ORPC) TidGen™ unit. One unit is currently in place with four more to follow. Together, SNL and SEI built a coarse-grid, regional-scale model that included Cobscook Bay and all other landward embayments using the modeling platform SNL-EFDC. Within SNL-EFDC, tidal turbines are represented using a unique set of momentum extraction, turbulence generation, and turbulence dissipation equations at TEC locations. The global model was then coupled to a local-scale model that was centered on the proposed TEC deployment locations. An optimization framework was developed that used the refined model to determine optimal device placement locations that maximized array performance. Within the

  19. Efficient Sensor Placement Optimization Using Gradient Descent and Probabilistic Coverage

    Directory of Open Access Journals (Sweden)

    Vahab Akbarzadeh

    2014-08-01

    We are proposing an adaptation of the gradient descent method to optimize the position and orientation of sensors for the sensor placement problem. The novelty of the proposed method lies in the combination of gradient descent optimization with a realistic model, which considers both the topography of the environment and a set of sensors with directional probabilistic sensing. The performance of this approach is compared with two other black box optimization methods over area coverage and processing time. Results show that our proposed method produces competitive results on smaller maps and superior results on larger maps, while requiring much less computation than the other optimization methods to which it has been compared.

  20. An Optimization Model for Product Placement on Product Listing Pages

    Directory of Open Access Journals (Sweden)

    Yan-Kwang Chen

    2014-01-01

    The design of product listing pages is a key component of Website design because it has significant influence on the sales volume on a Website. This study focuses on product placement in designing product listing pages. Product placement concerns how vendors of online stores place their products over the product listing pages for maximization of profit. This problem is very similar to the offline shelf management problem. Since product information sources on a Web page are typically communicated through text and images, visual stimuli such as color, shape, size, and spatial arrangement often have an effect on the visual attention of online shoppers and, in turn, influence their eventual purchase decisions. In view of the above, this study synthesizes the visual attention literature and the theory of shelf-space allocation to develop a mathematical programming model with genetic algorithms for finding optimal solutions to the focused issue. The validity of the model is illustrated with example problems.

  1. Optimal placement of FACTS devices using optimization techniques: A review

    Science.gov (United States)

    Gaur, Dipesh; Mathew, Lini

    2018-03-01

    Modern power systems face overloading problems, especially in transmission networks that operate at their maximum limits, and tend to become unstable and prone to collapse under disturbances. Flexible AC Transmission System (FACTS) devices provide solutions to problems such as line overloading, voltage stability, losses, and power flow, and can play an important role in improving the static and dynamic performance of a power system. FACTS devices require a high initial investment; therefore, their location, type, and rating are vital and should be optimized to obtain the maximum benefit from placement in the network. In this paper, different optimization methods such as Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) are discussed and compared for selecting the optimal location, type, and rating of devices. FACTS devices such as the Thyristor Controlled Series Compensator (TCSC), Static Var Compensator (SVC) and Static Synchronous Compensator (STATCOM) are considered here. The effects of these FACTS controllers on different IEEE bus network parameters such as generation cost, active power loss, and voltage stability are analyzed and compared among the devices.

  2. Optimal Line and Tube Placement in Very Preterm Neonates: An Audit of Practice.

    Science.gov (United States)

    Finn, Daragh; Kinoshita, Hannah; Livingstone, Vicki; Dempsey, Eugene M

    2017-11-17

    Placement of endotracheal tubes (ETTs) and umbilical catheters (UCs) is essential in very preterm infant care. The aim of this study was to assess the effect of an educational initiative to optimize correct placement of ETTs and UCs in very preterm infants. A pre-post study design, evaluating optimal radiological position of ETTs and UCs in the first 72 h of life in infants <32 weeks gestational age, was performed. Baseline data was obtained from a preceding 34-month period. The study intervention consisted of information from the pre-intervention audit, surface anatomy images of the newborn for optimal UC positioning, and weight-based calculations to estimate insertion depths for endotracheal intubation. A prospective evaluation of radiological placement of ETTs and UCs was then conducted over a 12-month period. During the study period, 211 infants had at least one of the three procedures performed. One hundred and fifty-seven infants were included in the pre-education group, and 54 in the post-education group. All three procedures were performed in 50.3% (79/157) of the pre-education group, and 55.6% (30/54) of the post-education group. There was no significant difference in accurate placement following the introduction of the educational sessions: depth of ETTs (50% vs. 47%), umbilical arterial catheter (UAC) (40% vs. 43%), and umbilical venous catheter (UVC) (14% vs. 23%). Despite education of staff on methods for appropriate ETT, UVC and UAC insertion length, the rate of accurate initial insertion depth remained suboptimal. Newer methods of determining optimal position need to be evaluated.

  3. Spatial and temporal optimization in habitat placement for a threatened plant: the case of the western prairie fringed orchid

    Science.gov (United States)

    John Hof; Carolyn Hull Sieg; Michael Bevers

    1999-01-01

    This paper investigates an optimization approach to determining the placement and timing of habitat protection for the western prairie fringed orchid. This plant’s population dynamics are complex, creating a challenging optimization problem. The sensitivity of the orchid to random climate conditions is handled probabilistically. The plant’s seed, protocorm and above-...

  4. Optimal placement and sizing of multiple distributed generating units in distribution

    Directory of Open Access Journals (Sweden)

    D. Rama Prabha

    2016-06-01

    Distributed generation (DG) is becoming more important due to the increase in demand for electrical energy. DG plays a vital role in reducing real power losses and operating cost and in enhancing voltage stability, which form the objective function in this problem. This paper proposes a multi-objective technique for optimally determining the location and sizing of multiple DG units in the distribution network with different load models. The loss sensitivity factor (LSF) determines the optimal placement of DGs. Invasive weed optimization (IWO), a population-based metaheuristic algorithm based on the behavior of weeds, is used to find the optimal sizing of the DGs. The proposed method has been tested for different load models on IEEE 33-bus and 69-bus radial distribution systems and compared with other nature-inspired optimization methods. The simulated results illustrate the good applicability and performance of the proposed method.
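
    Record 4 screens candidate buses with a loss sensitivity factor (LSF) before sizing the DG units with IWO. A minimal LSF ranking is sketched below using the common two-bus expression dP_loss/dQ = 2·Q_eff·R/|V|²; the feeder data are invented and the paper's IEEE 33/69-bus models are not reproduced.

```python
# Loss sensitivity factor screening: rank buses by d(Ploss)/dQ of the branch
# feeding each bus. All feeder values are placeholder assumptions.

def loss_sensitivity(q_eff_kvar, r_ohm, v_kv):
    """d(Ploss)/dQ for the feeding branch: 2 * Q_eff * R / |V|^2 (SI units)."""
    return 2.0 * (q_eff_kvar * 1e3) * r_ohm / (v_kv * 1e3) ** 2

buses = {
    "bus6":  dict(q_eff_kvar=600, r_ohm=0.4, v_kv=12.2),
    "bus14": dict(q_eff_kvar=200, r_ohm=0.9, v_kv=11.8),
    "bus28": dict(q_eff_kvar=350, r_ohm=0.7, v_kv=11.9),
}
ranked = sorted(buses, key=lambda b: loss_sensitivity(**buses[b]), reverse=True)
print(ranked)   # buses with the highest LSF are candidates for DG placement
```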

  5. Automatic, optimized interface placement in forward flux sampling simulations

    Science.gov (United States)

    Kratzer, Kai; Arnold, Axel; Allen, Rosalind J.

    2013-04-01

    Forward flux sampling (FFS) provides a convenient and efficient way to simulate rare events in equilibrium or non-equilibrium systems. FFS ratchets the system from an initial state to a final state via a series of interfaces in phase space. The efficiency of FFS depends sensitively on the positions of the interfaces. We present two alternative methods for placing interfaces automatically and adaptively in their optimal locations, on-the-fly as an FFS simulation progresses, without prior knowledge or user intervention. These methods allow the FFS simulation to advance efficiently through bottlenecks in phase space by placing more interfaces where the probability of advancement is lower. The methods are demonstrated both for a single-particle test problem and for the crystallization of Yukawa particles. By removing the need for manual interface placement, our methods both facilitate the setting up of FFS simulations and improve their performance, especially for rare events which involve complex trajectories through phase space, with many bottlenecks.

  6. Optimal Node Placement in Underwater Wireless Sensor Networks

    KAUST Repository

    Felamban, M.

    2013-03-25

    Wireless Sensor Networks (WSNs) are expected to play a vital role in the exploration and monitoring of underwater areas which are not easily reachable by humans. However, underwater communication via acoustic waves is subject to several performance limitations that are very different from those of terrestrial networks. In this paper, we investigate node placement for building an initial underwater WSN infrastructure. We formulate this problem as a nonlinear mathematical program with the objective of minimizing the total transmission loss under a given number of sensor nodes and targeted coverage volume. The obtained solution is the location of each node, represented via a truncated octahedron to fill out the 3D space. Experiments are conducted to verify the proposed formulation, which is solved using the Matlab optimization tool. Simulation is also conducted using an ns-3 simulator, and the simulation results are consistent with the results obtained from the mathematical model, with less than 10% error.

  7. Application of flower pollination algorithm for optimal placement and sizing of distributed generation in Distribution systems

    Directory of Open Access Journals (Sweden)

    P. Dinakara Prasad Reddy

    2016-05-01

    Distributed generator (DG) resources are small, self-contained electric generating plants that can provide power to homes, businesses or industrial facilities on distribution feeders. Optimal placement of DG can reduce power loss and improve the voltage profile. However, the value of DGs depends largely on their types, sizes and locations as installed in distribution feeders. The main contribution of the paper is to find the optimal locations and sizes of DG units. The index vector method is used for optimal DG locations, and a new optimization algorithm, the flower pollination algorithm, is proposed to determine the optimal DG size. Three different types of DG units are used for compensation. The proposed methods have been tested on 15-bus, 34-bus, and 69-bus radial distribution systems. MATLAB version 8.3 is used for simulation.

  8. Optimal Line and Tube Placement in Very Preterm Neonates: An Audit of Practice

    Directory of Open Access Journals (Sweden)

    Daragh Finn

    2017-11-01

    Background: Placement of endotracheal tubes (ETTs) and umbilical catheters (UCs) is essential in very preterm infant care. The aim of this study was to assess the effect of an educational initiative to optimize correct placement of ETTs and UCs in very preterm infants. Methods: A pre-post study design, evaluating optimal radiological position of ETTs and UCs in the first 72 h of life in infants <32 weeks gestational age (GA), was performed. Baseline data was obtained from a preceding 34-month period. The study intervention consisted of information from the pre-intervention audit, surface anatomy images of the newborn for optimal UC positioning, and weight-based calculations to estimate insertion depths for endotracheal intubation. A prospective evaluation of radiological placement of ETTs and UCs was then conducted over a 12-month period. Results: During the study period, 211 infants had at least one of the three procedures performed. One hundred and fifty-seven infants were included in the pre-education group, and 54 in the post-education group. All three procedures were performed in 50.3% (79/157) in the pre-education group, and 55.6% (30/54) in the post-education group. There was no significant difference in accurate placement following the introduction of the educational sessions; depth of ETTs (50% vs. 47%), umbilical arterial catheter (UAC) (40% vs. 43%), and umbilical venous catheter (UVC) (14% vs. 23%). Conclusion: Despite education of staff on methods for appropriate ETT, UVC and UAC insertion length, the rate of accurate initial insertion depth remained suboptimal. Newer methods of determining optimal position need to be evaluated.

  9. Placement optimization of actuators and sensors for gyroelastic body

    Directory of Open Access Journals (Sweden)

    Quan Hu

    2015-03-01

    Full Text Available Gyroelastic body refers to a flexible structure with a distribution of stored angular momentum provided by flywheels or control moment gyroscopes. The angular momentum devices can exert active torques on the structure for vibration suppression or shape control. This article mainly focuses on the placement optimization of the actuators and sensors on the gyroelastic body. Control moment gyroscopes and angular rate sensors are adopted as actuators and sensors, respectively. The equations of motion of the gyroelastic body incorporating the detailed actuator dynamics are linearized to a loosely coupled state-space model. Two optimization approaches are developed for both constrained and unconstrained gyroelastic bodies. The first is based on the controllability and observability matrices of the system. It is only applicable to collocated actuator and sensor pairs. The second criterion is formulated from the concept of controllable and observable subspaces. It is capable of handling both collocated and noncollocated actuator and sensor pairs. The illustrative examples of a cantilevered beam and an unconstrained plate demonstrate the clear physical meaning and rationality of the two proposed methods.

  10. Determining Optimal Decision Version

    Directory of Open Access Journals (Sweden)

    Olga Ioana Amariei

    2014-06-01

    Full Text Available In this paper we start from the calculation of the product cost, applying the machine-hour cost calculation method (THM) to each of the three cutting machines, namely: the plasma cutting machine, the combined cutting machine (plasma and water jet) and the water jet cutting machine. Following the cost calculation, and taking into account the manufacturing precision of each machine as well as the quality of the processed surface, the optimal decision version for manufacturing the product needs to be determined. To determine the optimal decision version, we first calculate the optimal version for each criterion, and then overall, using multi-attribute decision methods.

  11. Optimal Node Placement in Underwater Acoustic Sensor Network

    KAUST Repository

    Felemban, Muhamad

    2011-10-01

    Almost 70% of planet Earth is covered by water. A large percentage of the underwater environment is unexplored. In the past two decades, there has been an increase in interest in exploring and monitoring underwater life among scientists and in industry. Underwater operations are extremely difficult due to the lack of cheap and efficient means. Recently, Wireless Sensor Networks have been introduced in underwater environment applications. However, underwater communication via acoustic waves is subject to several performance limitations, which makes the relevant research issues very different from those on land. In this thesis, we investigate node placement for building an initial Underwater Wireless Sensor Network infrastructure. Firstly, we formulated the problem as a nonlinear mathematical program with the objective of minimizing the total transmission loss under a given number of sensor nodes and targeted volume. We conducted experiments to verify the proposed formulation, which is solved using the Matlab optimization tool. We represent each node by a truncated octahedron to fill out the 3D space. The truncated octahedra are tiled in the 3D space with each node at the center, and the locations of the nodes are given using 3D coordinates. Results are supported using the ns-3 simulator. Results from simulation are consistent with those obtained from the mathematical model, with less than 10% error.
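    As a concrete illustration of the quantity such formulations typically minimize, here is a hedged sketch of a standard underwater acoustic transmission-loss model combining spreading loss with Thorp's absorption formula. Whether this is the exact loss model used in the thesis is an assumption; the spreading factor and operating frequency below are illustrative only.

```python
# A sketch of a commonly used underwater acoustic transmission-loss model: spreading
# loss plus Thorp's empirical absorption. Whether this is the exact loss model of the
# thesis is an assumption; spreading factor and frequency are illustrative.
import math

def thorp_absorption_db_per_km(f_khz):
    f2 = f_khz ** 2
    return 0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2) + 2.75e-4 * f2 + 0.003

def transmission_loss_db(distance_m, f_khz, k=1.5):
    # k = 1 cylindrical, 1.5 practical, 2 spherical spreading
    spreading = k * 10 * math.log10(distance_m)
    absorption = thorp_absorption_db_per_km(f_khz) * distance_m / 1000.0
    return spreading + absorption

# e.g. two nodes 500 m apart communicating at 20 kHz
print(f"{transmission_loss_db(500, 20):.1f} dB")
```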

  12. A New Optimization Framework To Solve The Optimal Feeder Reconfiguration And Capacitor Placement Problems

    Directory of Open Access Journals (Sweden)

    Mohammad-Reza Askari

    2015-07-01

    Full Text Available Abstract This paper introduces a new stochastic optimization framework based on the bat algorithm (BA) to solve the optimal distribution feeder reconfiguration (DFR) as well as shunt capacitor placement and sizing in distribution systems. The objective functions investigated are minimization of the active power losses and minimization of the total network costs. In order to consider the uncertainties of the active and reactive loads in the problem, the point estimate method (PEM) with the 2m scheme is employed as the stochastic tool. The feasibility and good performance of the proposed method are examined on the IEEE 69-bus test system.

  13. An approach for optimal PMU placement using binary particle ...

    African Journals Online (AJOL)

    This paper presents an approach to determine the optimal PMU locations in order to render complete observability to a given power system using BPSO. A quadratic programming approach is used in BPSO, and the results so obtained are compared with the GAMS-MIP solver. A method for pseudo observability is also proposed ...

  14. The effects of initial conditions and control time on optimal actuator placement via a max-min Genetic Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Redmond, J. [Sandia National Labs., Albuquerque, NM (United States); Parker, G. [State Univ. of New York, Buffalo, NY (United States)

    1993-07-01

    This paper examines the role of the control objective and the control time in determining fuel-optimal actuator placement for structural vibration suppression. A general theory is developed that can be easily extended to include alternative performance metrics such as energy and time-optimal control. The performance metric defines a convex admissible control set which leads to a max-min optimization problem expressing optimal location as a function of initial conditions and control time. A solution procedure based on a nested Genetic Algorithm is presented and applied to an example problem. Results indicate that the optimal locations vary widely as a function of control time and initial conditions.

  15. Appropriate depth of placement of oral endotracheal tube and its possible determinants in Indian adult patients

    Directory of Open Access Journals (Sweden)

    Manu Varshney

    2011-01-01

    Full Text Available Background: Optimal depth of endotracheal tube (ET) placement has been a serious concern because of the complications associated with its malposition. Aims: To find the optimal depth of placement of the oral ET in Indian adult patients and its possible determinants, viz. height, weight, arm span and vertebral column length. Settings and Design: This study was conducted in 200 ASA I and II patients requiring general anaesthesia and orotracheal intubation. Methods: After placing the ET with the designated black mark at the vocal cords, various airway distances were measured from the right angle of the mouth using a fibre optic bronchoscope. Statistical Analysis: The power of the study is 0.9. Mean (SD) and median (range) of various parameters and the Pearson correlation coefficient were calculated. Results: The mean (SD) lip-carina distance, i.e., total airway length, was 24.32 (1.81) cm and 21.62 (1.34) cm in males and females, respectively. With the black mark of the ET between the vocal cords, the mean (SD) ET tip-carina distance of 3.69 (1.65) cm in males and 2.28 (1.55) cm in females was found to be considerably less than the recommended safe distance. Conclusions: Fixing the tube at the recommended 23 cm in males and 21 cm in females will lead to carinal stimulation or endobronchial placement in many Indian patients. The lip to carina distance correlates best with the patient's height. Positioning the ET tip 4 cm above the carina as recommended will result in placement of the tube cuff inside the cricoid ring with currently available tubes. Optimal depth of ET placement can be estimated by the formula "(Height in cm)/7 - 2.5".
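    A minimal sketch of the quoted estimate, purely to make the arithmetic concrete; the height used is an arbitrary example and this is not clinical guidance.

```python
# The depth estimate quoted in the abstract, made explicit (illustrative only,
# not clinical guidance).
def et_depth_cm(height_cm):
    """Estimated optimal oral ET insertion depth at the lips, in cm."""
    return height_cm / 7 - 2.5

print(et_depth_cm(168))   # a 168 cm patient -> 21.5 cm
```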

  16. Development of a trauma system and optimal placement of trauma centers using geospatial mapping.

    Science.gov (United States)

    Horst, Michael A; Jammula, Shreya; Gross, Brian W; Bradburn, Eric H; Cook, Alan D; Altenburg, Juliet; Morgan, Madison; Von Nieda, Danielle; Rogers, Frederick B

    2018-03-01

    The care of patients at individual trauma centers (TCs) has been carefully optimized, but not the placement of TCs within the trauma systems. We sought to objectively determine the optimal placement of trauma centers in Pennsylvania using geospatial mapping. We used the Pennsylvania Trauma Systems Foundation (PTSF) and Pennsylvania Health Care Cost Containment Council (PHC4) registries for adult (age ≥15) trauma between 2003 and 2015 (n = 377,540 and n = 255,263). TCs and zip codes outside of PA were included to account for edge effects with trauma cases aggregated to the Zip Code Tabulation Area centroid of residence. Model assumptions included no previous TCs (clean slate); travel time intervals of 45, 60, 90, and 120 minutes; TC capacity based on trauma cases per bed size; and candidate hospitals ≥200 beds. We used Network Analyst Location-Allocation function in ArcGIS Desktop to generate models optimally placing 1 to 27 TCs (27 current PA TCs) and assessed model outcomes. At a travel time of 60 minutes and 27 sites, optimally placed models for PTSF and PHC4 covered 95.6% and 96.8% of trauma cases in comparison with the existing network reaching 92.3% or 90.6% of trauma cases based on PTSF or PHC4 inclusion. When controlled for existing coverage, the optimal numbers of TCs for PTSF and PHC4 were determined to be 22 and 16, respectively. The clean slate model clearly demonstrates that the optimal trauma system for the state of Pennsylvania differs significantly from the existing system. Geospatial mapping should be considered as a tool for informed decision-making when organizing a statewide trauma system. Epidemiological study/Care management, level III.

  17. Optimal Sensor Placement for Leak Location in Water Distribution Networks using Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Myrna V. Casillas

    2015-11-01

    Full Text Available In this paper, a sensor placement approach to improve the leak location in water distribution networks is proposed when the leak signature space (LSS) method is used. The sensor placement problem is formulated as an integer optimization problem where the criterion to be minimized is the number of overlapping signature domains computed from the original LSS representation. First, a semi-exhaustive search approach based on a lazy evaluation mechanism ensures optimal placement in the case of low complexity scenarios. For more complex cases, a stochastic optimization process is proposed, based on either genetic algorithms (GAs) or particle swarm optimization (PSO). Experiments on two different networks are used to evaluate the performance of the resolution methods, as well as the efficiency achieved in the leak location when using the sensor placement results.

  18. Optimal Sensor Placement with Terrain-Based Constraints and Signal Propagation Effects

    National Research Council Canada - National Science Library

    Vecherin, Sergey N; Wilson, D. K; Pettit, Chris L

    2008-01-01

    The optimal sensor placement problem, as considered here, is to select the types and locations of sensors providing coverage at high value terrain locations while minimizing a specified cost function...

  19. Application Development for Optimizing Patient Placement on Aeromedical Evacuation Flights: Proof-of-Concept

    Science.gov (United States)

    2018-01-12

    Report AFRL-SA-WP-SR-2018-0001. ...distribution of these plans would streamline the plan development process. Thus, as a proof-of-concept, the study team conducted a multi-phased effort...

  20. Optimal Placement and Sizing of TCSC using Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Ontoseno Penangsang

    2012-12-01

    Full Text Available This paper presents the GSA, which can be used to determine the optimal location and rating of FACTS devices. These devices are used to regulate and improve the power flow in the power system. The method used in this study was the GSA. The FACTS type used was the TCSC, implemented on the 500 kV Java-Bali power system. Load flow results before optimization showed that the active power loss was 297.607 MW. The load flow results after optimization using the GSA gave an active power loss of 287.926 MW with 5 TCSCs and 281.143 MW with 10 TCSCs. In addition, with 15 TCSCs, the active power loss obtained was 279.405 MW. The GSA method can be used to minimize power losses in the transmission lines as well as to improve the bus voltages into the range of 0.95-1.05 pu compared with the load flow results before optimization.

  1. Exploration of Objective Functions for Optimal Placement of Weather Stations

    Science.gov (United States)

    Snyder, A.; Dietterich, T.; Selker, J. S.

    2016-12-01

    Many regions of Earth lack ground-based sensing of weather variables. For example, most countries in Sub-Saharan Africa do not have reliable weather station networks. This absence of sensor data has many consequences ranging from public safety (poor prediction and detection of severe weather events), to agriculture (lack of crop insurance), to science (reduced quality of world-wide weather forecasts, climate change measurement, etc.). The Trans-African Hydro-Meteorological Observatory (TAHMO.org) project seeks to address these problems by deploying and operating a large network of weather stations throughout Sub-Saharan Africa. To design the TAHMO network, we must determine where to locate each weather station. We can formulate this as the following optimization problem: Determine a set of N sites that jointly optimize the value of an objective function. The purpose of this poster is to propose and assess several objective functions. In addition to standard objectives (e.g., minimizing the summed squared error of interpolated values over the entire region), we consider objectives that minimize the maximum error over the region and objectives that optimize the detection of extreme events. An additional issue is that each station measures more than 10 variables—how should we balance the accuracy of our interpolated maps for each variable? Weather sensors inevitably drift out of calibration or fail altogether. How can we incorporate robustness to failed sensors into our network design? Another important requirement is that the network should make it possible to detect failed sensors by comparing their readings with those of other stations. How can this requirement be met? Finally, we provide an initial assessment of the computational cost of optimizing these various objective functions. We invite everyone to join the discussion at our poster by proposing additional objectives, identifying additional issues to consider, and expanding our bibliography of relevant

  2. Bond graph to digraph conversion: A sensor placement optimization ...

    Indian Academy of Sciences (India)

    fault detection and isolation using a novel structural and qualitative approach. This approach is based on ... quantitative or incorrect values due to measurement errors, this last aspect being unavoidable when modelling physical ... method of solving the sensor placement problem for fault detection and isolation in the system.

  3. Optimal capacitor placement and sizing in radial electric powe

    Directory of Open Access Journals (Sweden)

    Ahmed Elsheikh

    2014-12-01

    Full Text Available The use of capacitors in power systems has many well-known benefits that include improvement of the system power factor, improvement of the system voltage profile, increasing the maximum flow through cables and transformers and reduction of losses due to the compensation of the reactive component of power flow. By decreasing the flow through cables, the system's loads can be increased without adding any new cables or overloading the existing cables. These benefits depend greatly on how capacitors are placed in the system. In this paper, the problem of how to optimally determine the locations and sizes of capacitors to be installed in the buses of radial distribution systems is addressed. The proposed methodology uses loss sensitivity factors to identify the buses requiring compensation, and then a discrete particle swarm optimization (PSO) algorithm is used to determine the sizes of the capacitors to be installed. The proposed algorithm deals directly with the discrete nature of the design variables. The results obtained are superior to those reported in Prakash and Sydulu (2007).
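    To make the bus-screening step concrete, here is a hedged sketch of a loss sensitivity factor of the common form 2*Q_eff*R/V^2 used in many capacitor-placement studies to rank candidate buses. The bus data below are invented for illustration, not taken from the paper's test feeders, and the subsequent discrete PSO sizing step is omitted.

```python
# A sketch of a loss sensitivity factor of the form dPloss/dQ = 2 * Q_eff * R / V^2,
# used to rank candidate buses for compensation. Bus data are invented placeholders;
# the subsequent discrete PSO sizing step of the paper is not shown.
def loss_sensitivity(q_eff_kvar, r_ohm, v_pu, v_base_kv=11.0):
    v_volt = v_pu * v_base_kv * 1e3
    return 2 * (q_eff_kvar * 1e3) * r_ohm / v_volt ** 2   # roughly kW saved per kvar injected

buses = {  # bus id: (effective reactive load kvar, upstream branch resistance ohm, voltage pu)
    6: (180.0, 0.7, 0.95),
    9: (120.0, 1.1, 0.93),
    12: (60.0, 0.4, 0.97),
}
ranked = sorted(buses, key=lambda b: loss_sensitivity(*buses[b]), reverse=True)
print("candidate buses, most sensitive first:", ranked)
```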

  4. A mixed integer linear programming approach for optimal DER portfolio, sizing, and placement in multi-energy microgrids

    International Nuclear Information System (INIS)

    Mashayekh, Salman; Stadler, Michael; Cardoso, Gonçalo; Heleno, Miguel

    2017-01-01

    Highlights: • This paper presents a MILP model for optimal design of multi-energy microgrids. • Our microgrid design includes optimal technology portfolio, placement, and operation. • Our model includes microgrid electrical power flow and heat transfer equations. • The case study shows advantages of our model over aggregate single-node approaches. • The case study shows the accuracy of the integrated linearized power flow model. - Abstract: Optimal microgrid design is a challenging problem, especially for multi-energy microgrids with electricity, heating, and cooling loads as well as sources, and multiple energy carriers. To address this problem, this paper presents an optimization model formulated as a mixed-integer linear program, which determines the optimal technology portfolio, the optimal technology placement, and the associated optimal dispatch, in a microgrid with multiple energy types. The developed model uses a multi-node modeling approach (as opposed to an aggregate single-node approach) that includes electrical power flow and heat flow equations, and hence, offers the ability to perform optimal siting considering physical and operational constraints of electrical and heating/cooling networks. The new model is founded on the existing optimization model DER-CAM, a state-of-the-art decision support tool for microgrid planning and design. The results of a case study that compares single-node vs. multi-node optimal design for an example microgrid show the importance of multi-node modeling. It has been shown that single-node approaches are not only incapable of optimal DER placement, but may also result in sub-optimal DER portfolio, as well as underestimation of investment costs.

  5. Optimal Sensor Placement for Health Monitoring of High-Rise Structure Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Ting-Hua Yi

    2011-01-01

    Full Text Available Optimal sensor placement (OSP) technique plays a key role in the structural health monitoring (SHM) of large-scale structures. Based on the criterion of the OSP for the modal test, an improved genetic algorithm, called "generalized genetic algorithm (GGA)", is adopted to find the optimal placement of sensors. The dual-structure coding method, instead of the binary coding method, is proposed to code the solution. Accordingly, the dual-structure coding-based selection scheme, crossover strategy and mutation mechanism are given in detail. The tallest building in the north of China is used to demonstrate the feasibility and effectiveness of the GGA. The sensor placements obtained by the GGA are compared with those from the existing genetic algorithm, which shows that the GGA can improve the convergence of the algorithm and obtain a better placement scheme.

  6. A new placement optimization method for viscoelastic dampers: Energy dissipation method

    Science.gov (United States)

    Qu, Ji-Ting

    2012-09-01

    A new mathematical model of location optimization for viscoelastic dampers is established through energy analysis based on the force analogy method. Three working conditions (three lower limits of the new location index) as well as four ground motions are considered in this study, using MATLAB and SAP2000 for programming and verification. This paper deals with the optimal placement of viscoelastic dampers, and step-by-step time history analyses are carried out. Numerical analysis is presented to verify the effectiveness and feasibility of the new mathematical model for structural control. In addition, the optimal placement method based on the force analogy method not only determines the dampers' locations all at once, accurate to each span, but also avoids iterative calculation. Finally, a few helpful conclusions on the optimal placement of viscoelastic dampers are drawn.

  7. Optimal placement of distributed generation in distribution networks

    African Journals Online (AJOL)

    user

    International Journal of Engineering, Science and Technology, Vol. 3, No. 3, 2011, pp. 47-55. Particle swarm optimization (PSO) is a population-based optimization method first proposed by Kennedy and Eberhart in 1995, inspired by the social behavior of bird flocking or fish ...

  8. Optimal sensor placement for deployable antenna module health monitoring in SSPS using genetic algorithm

    Science.gov (United States)

    Yang, Chen; Zhang, Xuepan; Huang, Xiaoqi; Cheng, ZhengAi; Zhang, Xinghua; Hou, Xinbin

    2017-11-01

    The concept of the space solar power satellite (SSPS) is an advanced system for collecting solar energy in space and transmitting it wirelessly to earth. However, due to the long service life, in-orbit damage may occur in the structural system of the SSPS. Therefore, sensor placement layouts for structural health monitoring should be considered first in this concept. In this paper, an optimal sensor placement method for deployable antenna module health monitoring in the SSPS, based on a genetic algorithm, is proposed. According to the characteristics of the deployable antenna module, the sensor placement design considerations are listed. Furthermore, based on the effective independence method and an effective interval index, a combined fitness function is defined to maximize linear independence of the targeted modes while simultaneously avoiding redundant information at nearby positions. In addition, by considering the reliability of sensors located at deployable mechanisms, another fitness function is constructed. Moreover, the solution process of optimal sensor placement using the genetic algorithm is clearly demonstrated. Finally, a numerical example of the sensor placement layout in a deployable antenna module of the SSPS, which synthetically considers all the above-mentioned performance measures, is presented. The results illustrate the effectiveness and feasibility of the proposed sensor placement method for the SSPS.

  9. Optimal capacitor placement and sizing using combined fuzzy ...

    African Journals Online (AJOL)

    Then the sizing of the capacitors is modeled as an optimization problem and the objective function (loss minimization) is solved using Hybrid Particle Swarm Optimization (HPSO) technique. A case study with an IEEE 34 bus distribution feeder is presented to illustrate the applicability of the algorithm. A comparison is made ...

  10. A Dynamic Programming Algorithm for Finding the Optimal Placement of a Secondary Structure Topology in Cryo-EM Data.

    Science.gov (United States)

    Biswas, Abhishek; Ranjan, Desh; Zubair, Mohammad; He, Jing

    2015-09-01

    The determination of secondary structure topology is a critical step in deriving the atomic structures from the protein density maps obtained from the electron cryomicroscopy technique. This step often relies on matching the secondary structure traces detected from the protein density map to the secondary structure sequence segments predicted from the amino acid sequence. Due to inaccuracies in both sources of information, a pool of possible secondary structure positions needs to be sampled. One way to approach the problem is to first derive a small number of possible topologies using existing matching algorithms, and then find the optimal placement for each possible topology. We present a dynamic programming method of Θ(Nq^2 h) to find the optimal placement for a secondary structure topology. We show that our algorithm requires significantly less computational time than the brute force method, which is in the order of Θ(q^N h).
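    The abstract does not spell out the recurrence, so the following is a hedged sketch of the kind of ordered-placement dynamic program it describes: each secondary-structure element has a list of candidate positions with match scores, placements must advance monotonically along the sequence, and the total score is maximized. Positions and scores are toy values, not cryo-EM data, and details such as element lengths and gap penalties are omitted.

```python
# A sketch of an ordered-placement dynamic program: each secondary-structure element
# has candidate (position, score) pairs, placements must advance along the sequence,
# and the total score is maximized. Toy numbers only; element lengths and gap
# penalties from the real problem are omitted.
def best_placement(candidates):
    """candidates[i] = list of (position, score) pairs for structure element i."""
    prev = {-1: 0.0}      # sentinel: nothing placed yet, at pseudo-position -1
    backs = []            # one back-pointer dict per element
    for cand in candidates:
        cur, back = {}, {}
        for pos, score in cand:
            # best already-placed predecessor strictly to the left of this position
            feasible = [(p, s) for p, s in prev.items() if p < pos]
            if not feasible:
                continue
            best_pos, best_score = max(feasible, key=lambda t: t[1])
            cur[pos] = best_score + score
            back[pos] = best_pos
        backs.append(back)
        prev = cur
    if not prev:
        return None, float("-inf")
    end = max(prev, key=prev.get)
    placement = [end]
    for back in reversed(backs):
        placement.append(back[placement[-1]])
    placement.pop()       # drop the -1 sentinel
    return list(reversed(placement)), prev[end]

toy = [[(2, 0.9), (5, 0.4)], [(4, 0.7), (8, 0.8)], [(9, 0.6), (12, 0.5)]]
print(best_placement(toy))    # -> ([2, 8, 9], 2.3)
```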

  11. Optimal capacitor placement and sizing using combined fuzzy ...

    African Journals Online (AJOL)

    user

    ... Hybrid Particle Swarm Optimization. Shunt capacitors are installed at suitable locations in large distribution systems for the improvement of the voltage profile and to reduce power losses in the distribution system. The studies have ...

  12. On the use of PGD for optimal control applied to automated fibre placement

    Science.gov (United States)

    Bur, N.; Joyot, P.

    2017-10-01

    Automated Fibre Placement (AFP) is an emerging manufacturing process for composite structures. Despite its conceptual simplicity, it involves many complexities related to the need to melt the thermoplastic at the tape-substrate interface, to ensure consolidation, which requires the diffusion of molecules, and to control the build-up of residual stresses responsible for the residual deformations of the formed parts. The optimisation of the process and the determination of the process window cannot be achieved in a traditional way, since that requires a plethora of trials/errors or numerical simulations, because many parameters are involved in the characterisation of the material and the process. Using reduced order modelling such as the so-called Proper Generalised Decomposition method allows the construction of multi-parametric solutions taking many parameters into account. This leads to virtual charts that can be explored on-line in real time in order to perform process optimisation or on-line simulation-based control. Thus, for a given set of parameters, determining the power leading to an optimal temperature becomes easy. However, instead of controlling the power knowing the temperature field by particularizing an abacus, we propose here an approach based on optimal control: we solve by PGD a dual problem from the heat equation and optimality criteria. To circumvent numerical issues due to the ill-conditioned system, we propose an algorithm based on Uzawa's method. In this way, we are able to solve the dual problem, setting the desired state as an extra coordinate in the PGD framework. In a single computation, we get both the temperature field and the required heat flux to reach a parametric optimal temperature on a given zone.

  13. Optimization Placement of Static Var Compensator (Svc) on Electrical Transmission System 150 kV Based on Smart Computation

    Science.gov (United States)

    Hasbullah; Mulyadi, Y.; Febriana, Y.; Abdullah, A. G.

    2018-02-01

    To improve the voltage profile, FACTS equipment can be used. One such device is the SVC (Static Var Compensator). This study aims to determine the optimal location and capacity of the SVC and to determine the effect after SVC installation. This research was conducted on the 150 kV transmission system in the West Java load regulator area, South Bandung and New Ujungberung subsystem. The power flow simulation uses the Newton-Raphson method, and the optimal position and capacity of the SVC are determined using a genetic algorithm in MATLAB R2014. After the SVC placement is optimized, system performance improves: the voltage at all buses is at the standard level and power losses decrease.

  14. Multi-objective PSO based optimal placement of solar power DG in radial distribution system

    Directory of Open Access Journals (Sweden)

    Mahesh Kumar

    2017-06-01

    Full Text Available The ever-increasing electricity demand, fossil fuel depletion and environmental issues call for the integration of renewable energy into the distribution system. The optimal planning of renewable distributed generation (DG) is essential for ensuring maximum benefits. Hence, this paper proposes the optimal placement of probabilistic solar power DG into the distribution system. Two objective functions, power loss reduction and voltage stability index improvement, are optimized. The power balance and voltage limits are kept as constraints of the problem. The non-dominated sorting, Pareto-front-based multi-objective particle swarm optimization (MOPSO) technique is applied to the standard IEEE 33-bus radial distribution test system.

  15. Determining Student Competency in Field Placements: An Emerging Theoretical Model

    Science.gov (United States)

    Salm, Twyla L.; Johner, Randy; Luhanga, Florence

    2016-01-01

    This paper describes a qualitative case study that explores how twenty-three field advisors, representing three human service professions including education, nursing, and social work, experience the process of assessment with students who are struggling to meet minimum competencies in field placements. Five themes emerged from the analysis of…

  16. Evaluation of Effective Factors on Travel Time in Optimization of Bus Stops Placement Using Genetic Algorithm

    Science.gov (United States)

    Bargegol, Iraj; Ghorbanzadeh, Mahyar; Ghasedi, Meisam; Rastbod, Mohammad

    2017-10-01

    In congested cities, locating and properly designing bus stops according to the unequal distribution of passengers is a crucial issue, both economically and functionally, since it plays an important role in the use of the bus system by passengers. The location of bus stops is a complicated subject; by reducing the distances between stops, walking time decreases, but the total travel time may increase. In this paper, a specified corridor in the city of Rasht in the north of Iran is studied. Firstly, a new formula is presented to calculate the travel time, by which the number of stops and, consequently, the travel time can be optimized. A corridor with a specified number of stops and distances between them is considered, the related travel time formulas are set up, and its travel time is calculated. Then the corridor is modelled using a meta-heuristic method so that the placement and optimal spacing of the bus stops are determined. It was found that alighting and boarding time, along with bus capacity, are the factors that most affect travel time. Consequently, these factors deserve the most attention when improving the efficiency of the bus system.
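    The paper's travel-time formula is not reproduced in the abstract, so the sketch below is only a generic stand-in showing the trade-off it optimizes: more stops reduce walking time but add dwell time. All speeds, dwell times and the corridor length are invented assumptions, not the calibrated values from the Rasht corridor.

```python
# A generic door-to-door travel-time model illustrating the stop-spacing trade-off:
# walk time to the nearest stop + in-vehicle time + dwell time at stops. All speeds,
# dwell times and the corridor length are invented assumptions, not the paper's
# calibrated Rasht values.
def corridor_travel_time_min(length_km, n_stops, bus_speed_kmh=25.0,
                             walk_speed_kmh=5.0, dwell_s_per_stop=20.0):
    spacing_km = length_km / n_stops
    walk_h = (spacing_km / 4) / walk_speed_kmh      # average walk ~ a quarter of the spacing
    ride_h = length_km / bus_speed_kmh
    dwell_h = n_stops * dwell_s_per_stop / 3600.0
    return (walk_h + ride_h + dwell_h) * 60.0

# More stops cut walking time but add dwell time; a simple scan exposes the optimum.
best_n = min(range(4, 31), key=lambda n: corridor_travel_time_min(6.0, n))
print(best_n, round(corridor_travel_time_min(6.0, best_n), 2))
```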

  17. Optimal Placement of Energy Storage and Wind Power under Uncertainty

    Directory of Open Access Journals (Sweden)

    Pilar Meneses de Quevedo

    2016-07-01

    Full Text Available Due to the rapid growth in the amount of wind energy connected to distribution grids, they are exposed to higher network constraints, which poses additional challenges to system operation. Based on regulation, the system operator has the right to curtail wind energy in order to avoid any violation of system constraints. Energy storage systems (ESS) are considered to be a viable solution to solve this problem. The aim of this paper is to provide the best locations of both ESS and wind power by optimizing distribution system costs, taking into account network constraints and the uncertainty associated with the nature of wind, load and price. To do that, we use a mixed integer linear programming (MILP) approach consisting of loss reduction, voltage improvement and minimization of generation costs. An alternating current (AC) linear optimal power flow (OPF), which employs binary variables to define the location of the generation, is implemented. The proposed stochastic MILP approach has been applied to the IEEE 69-bus distribution network and the results show the performance of the model under different values of installed capacities of ESS and wind power.

  18. Proposal for optimal placement platform of bikes using queueing networks.

    Science.gov (United States)

    Mizuno, Shinya; Iwamoto, Shogo; Seki, Mutsumi; Yamaki, Naokazu

    2016-01-01

    In recent social experiments, rental motorbikes and rental bicycles have been arranged at nodes, and environments where users can ride these bikes have been improved. When people borrow bikes, they return them to nearby nodes. Some experiments have been conducted using the models of Hamachari of Yokohama, the Niigata Rental Cycle, and Bicing. However, from these experiments, the effectiveness of distributing bikes was unclear, and many models were discontinued midway. Thus, we need to consider whether these models are effectively designed to represent the distribution system. Therefore, we construct a model to arrange the nodes for distributing bikes using a queueing network. To adopt realistic values for our model, we use the Google Maps application programming interface. Thus, we can easily obtain values of distance and transit time between nodes in various places in the world. Moreover, we apply the distribution of a population to a gravity model and compute the effective transition probabilities for this queueing network. If the arrangement of the nodes and the number of bikes at each node are known, we can precisely design the system. We illustrate our system using convenience stores as nodes and optimize the node configuration. As a result, we can simultaneously optimize the number of nodes, the node locations, and the number of bikes at each node, and we can construct a base for a rental cycle business to use our system.
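    The gravity-model step lends itself to a short illustration. The hedged sketch below takes destination attractiveness proportional to population divided by a power of distance and normalizes each row into transition probabilities for the queueing network; the populations, distances and exponent are placeholders, not values pulled from the Google Maps API as in the paper.

```python
# A sketch of gravity-model transition probabilities for the bike queueing network:
# the chance of riding from node i to node j is taken proportional to the destination
# population divided by a power of the distance, then normalized per row. Populations,
# distances and the exponent are placeholders, not Google Maps data.
import numpy as np

population = np.array([5000, 12000, 8000])          # people near each node
distance_km = np.array([[0.0, 2.0, 4.5],
                        [2.0, 0.0, 3.0],
                        [4.5, 3.0, 0.0]])

def gravity_transition(pop, dist, beta=2.0):
    attract = np.where(dist > 0, pop / np.maximum(dist, 1e-9) ** beta, 0.0)
    return attract / attract.sum(axis=1, keepdims=True)

P = gravity_transition(population, distance_km)
print(np.round(P, 3))    # row-stochastic routing matrix for the queueing network
```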

  19. Utility of transesophageal electrocardiography to guide optimal placement of a transesophageal pacing catheter in dogs.

    Science.gov (United States)

    Sanders, Robert A; Chapel, Emily; Garcia-Pereira, Fernando L; Venet, Katherine E

    2015-01-01

    To determine if the transesophageal atrial (A) wave amplitude or ventricular (V) wave amplitude can be used to guide optimal positioning of a transesophageal pacing catheter in dogs. Prospective clinical study. Fourteen client-owned healthy dogs with a median weight of 15.4 kg (IQR = 10.6-22.4) and a median age of 12 months (IQR = 6-12). Transesophageal atrial pacing (TAP) using a 6 Fr pacing catheter was attempted in dogs under general anesthesia. The pacing catheter was inserted orally into the esophagus to a position caudal to the heart. With the pulse generator set at a rate 20 beats per minute above the intrinsic sinus rate, the catheter was slowly withdrawn until atrial pacing was noted on a surface electrocardiogram (ECG). Then the catheter was withdrawn in 1 cm increments until atrial capture was lost. Minimum pacing threshold (MPT) and transesophageal ECG were recorded at each site. Amplitudes of the A and V waves on the transesophageal ECG were then measured and their relationship to MPT was evaluated. TAP was achieved in all dogs. In 9/14 dogs the site of lowest overall MPT was the same as the site of maximal A wave deflection. In dogs with at least three data points, linear regression analysis of the relationship between the estimated site of the lowest overall MPT and the estimated site of the maximal A and V waveform amplitudes demonstrated a strong correlation (R^2 = 0.99). Transesophageal ECG A and V waveforms were correlated to MPT and could be used to direct the placement of a pacing catheter. However, the technique was technically challenging and was not considered to be clinically useful to guide the placement of a pacing catheter. © 2014 Association of Veterinary Anaesthetists and the American College of Veterinary Anesthesia and Analgesia.

  20. Optimal placement and sizing of wind / solar based DG sources in distribution system

    Science.gov (United States)

    Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng

    2017-06-01

    Proper placement and sizing of Distributed Generation (DG) in a distribution system can obtain maximum potential benefits. This paper proposes a quantum particle swarm optimization (QPSO) based wind turbine generation unit (WTGU) and photovoltaic (PV) array placement and sizing approach for real power loss reduction and voltage stability improvement of the distribution system. Performance models of wind and solar generation systems are described and classified into PQ, PQ(V) and PI type models for power flow. Considering that WTGU and PV based DGs in a distribution system are geographically restricted, the optimal area and the DG capacity limits of each bus in the designated area need to be set before optimization; an area optimization method is therefore proposed. The method has been tested on the IEEE 33-bus radial distribution system to demonstrate the performance and effectiveness of the proposed method.

  1. Optimal Placement and Sizing of Fault Current Limiters in Distributed Generation Systems Using a Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    N. Bayati

    2017-02-01

    Full Text Available Distributed Generation (DG) connection in a power system tends to increase the short circuit level in the entire system which, in turn, could eliminate the protection coordination between the existing relays. Fault Current Limiters (FCLs) are often used to reduce the short-circuit level of the network to a desirable level, provided that they are duly placed and appropriately sized. In this paper, a method is proposed for optimal placement of FCLs and optimal determination of their impedance values, by which the relay operation time and the number and size of the FCLs are minimized while maintaining the relay coordination before and after DG connection. The proposed method adopts the removal of low-impact FCLs and uses a hybrid Genetic Algorithm (GA) optimization scheme to determine the optimal placement of FCLs and the values of their impedances. The suitability of the proposed method is demonstrated by examining the results of relay coordination in a typical DG network before and after DG connection.

  2. Optimized Virtual Machine Placement with Traffic-Aware Balancing in Data Center Networks

    Directory of Open Access Journals (Sweden)

    Tao Chen

    2016-01-01

    Full Text Available Virtualization has been an efficient method to fully utilize computing resources such as servers. The way of placing virtual machines (VMs) among a large pool of servers greatly affects the performance of data center networks (DCNs). As network resources have become a main bottleneck of the performance of DCNs, we concentrate on VM placement with Traffic-Aware Balancing to evenly utilize the links in DCNs. In this paper, we first proposed the Virtual Machine Placement Problem with Traffic-Aware Balancing (VMPPTB), proved it to be NP-hard, and designed a Longest Processing Time Based Placement algorithm (LPTBP algorithm) to solve it. To take advantage of communication locality, we proposed the Locality-Aware Virtual Machine Placement Problem with Traffic-Aware Balancing (LVMPPTB), which is a multiobjective optimization problem of simultaneously minimizing the maximum number of VM partitions of requests and minimizing the maximum bandwidth occupancy on uplinks of Top of Rack (ToR) switches. We also proved it to be NP-hard and designed a heuristic algorithm (Least-Load First Based Placement algorithm, LLBP algorithm) to solve it. Through extensive simulations, the proposed heuristic algorithm is proven to significantly balance the bandwidth occupancy on uplinks of ToR switches, while keeping the number of VM partitions of each request small enough.
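    The longest-processing-time idea behind the LPTBP algorithm can be illustrated compactly. The hedged sketch below applies plain LPT scheduling to spread VM traffic demands across uplinks: sort demands in decreasing order and always assign the next one to the least-loaded link. The demands and link count are toy values, and the paper's additional locality and partitioning constraints are not modeled.

```python
# Plain longest-processing-time (LPT) assignment as an illustration of the LPTBP idea:
# sort VM traffic demands in decreasing order and always place the next one on the
# least-loaded uplink. Demands and link count are toy values; the paper's locality
# and partitioning constraints are not modeled here.
import heapq

def lpt_assign(demands, n_links):
    heap = [(0.0, link) for link in range(n_links)]      # (current load, link id)
    heapq.heapify(heap)
    assignment = {}
    for vm, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
        load, link = heapq.heappop(heap)                  # least-loaded link so far
        assignment[vm] = link
        heapq.heappush(heap, (load + demand, link))
    max_load = max(load for load, _ in heap)
    return assignment, max_load

demands = {"vm1": 40, "vm2": 10, "vm3": 35, "vm4": 25, "vm5": 30}
print(lpt_assign(demands, n_links=2))
```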

  3. Optimal Placement of A Heat Pump in An Integrated Power and Heat Energy System

    DEFF Research Database (Denmark)

    Klyapovskiy, Sergey; You, Shi; Bindner, Henrik W.

    2017-01-01

    With the present trend towards Smart Grids and Smart Energy Systems it is important to look for opportunities for integrated development between different energy sectors, such as electricity, heating, gas and transportation. This paper investigates the problem of optimal placement of a heat pump ... with the help of mathematical optimization that minimizes the investments of both the electric and heating utilities, achieving a reduction of the total investment. The optimization is performed in Matlab using the built-in Genetic Algorithm function and the Matpower software package for calculating the power flow equations.

  4. Spatial Model for Determining the Optimum Placement of Logistics Centers in a Predefined Economic Area

    Directory of Open Access Journals (Sweden)

    Ramona Iulia Țarțavulea (Dieaconescu)

    2016-08-01

    Full Text Available The process of globalization has stimulated demand for logistics services with greater speed and efficiency, which involves the use of modern techniques, tools, technologies and models in supply chain management. The aim of this research paper is to present a model that can be used in order to achieve an optimized supply chain, associated with minimum transportation costs. The utilization of spatial modeling for determining the optimal locations for logistics centers in a predefined economic area is proposed in this paper. The principal methods used to design the model are mathematical optimization and linear programming. The output data of the model are the precise placements of one up to ten logistics centers, in terms of minimum operational costs for delivery from the optimal locations to consumer points. The results of the research indicate that by using the proposed model, an efficient supply chain that is consistent with the optimization of transport can be designed, in order to streamline the delivery process and thus reduce operational costs.

  5. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Luis E. Garza-Castañón

    2013-11-01

    Full Text Available This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed: using a time horizon analysis, a distance-based scoring, and considering different leak sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.

  6. Optimal sensor placement for leak location in water distribution networks using genetic algorithms.

    Science.gov (United States)

    Casillas, Myrna V; Puig, Vicenç; Garza-Castañón, Luis E; Rosich, Albert

    2013-11-04

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed: using a time horizon analysis, a distance-based scoring, and considering different leak sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.
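    The isolability objective used in records 5 and 6 can be made concrete with a small example. In the hedged sketch below, two leaks are treated as non-isolable under a sensor set if their binarized sensitivity signatures, restricted to the chosen sensors, coincide; a tiny exhaustive search stands in for the GA, and the random signature matrix is a toy stand-in for a hydraulic model.

```python
# Counting non-isolable leak pairs for a candidate sensor set: two leaks are treated
# as non-isolable if their binarized sensitivity signatures, restricted to the chosen
# sensors, coincide. The random signature matrix and the tiny exhaustive search are
# toy stand-ins for the hydraulic model and the GA used in the paper.
from itertools import combinations
import numpy as np

def non_isolable_pairs(signatures, sensors):
    proj = signatures[:, sensors]
    return sum(np.array_equal(proj[i], proj[j])
               for i, j in combinations(range(len(proj)), 2))

rng = np.random.default_rng(7)
signatures = (rng.normal(size=(12, 6)) > 0.4).astype(int)   # 12 leaks x 6 candidate sensors

best = min(combinations(range(6), 3),
           key=lambda s: non_isolable_pairs(signatures, list(s)))
print("best 3-sensor set:", best,
      "| non-isolable pairs:", non_isolable_pairs(signatures, list(best)))
```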

  7. A risk-based multi-objective model for optimal placement of sensors in water distribution system

    Science.gov (United States)

    Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein

    2018-02-01

    In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in a water distribution system (WDS). This model minimizes the risk caused by simultaneous multi-point contamination injection in the WDS using the CVaR approach. The CVaR considers the uncertainties of contamination injection in the form of a probability distribution function and captures low-probability extreme events. In this approach, extreme losses occur in the tail of the loss distribution function. A four-objective optimization model based on the NSGA-II algorithm is developed to minimize the losses of contamination injection (through the CVaR of affected population and detection time) and also to minimize the two other main criteria of optimal placement of sensors, namely the probability of undetected events and cost. Finally, to determine the best solution, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), as a subgroup of Multi Criteria Decision Making (MCDM) approaches, is utilized to rank the alternatives on the trade-off curve among the objective functions. Also, a sensitivity analysis is done to investigate the importance of each criterion on the PROMETHEE results considering three relative weighting scenarios. The effectiveness of the proposed methodology is examined through applying it to the Lamerd WDS in the southwestern part of Iran. The PROMETHEE suggests 6 sensors with a suitable distribution that approximately covers all regions of the WDS. Optimal values related to the CVaR of affected population and detection time as well as the probability of undetected events for the best optimal solution are equal to 17,055 persons, 31 mins and 0.045%, respectively. The obtained results of the proposed methodology in the Lamerd WDS show the applicability of the CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme value
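    Since CVaR is the central risk measure here, a hedged numerical sketch may help: CVaR at level alpha is the mean of the losses in the worst (1 - alpha) tail, i.e. the expected loss given that the Value at Risk is exceeded. The lognormal "affected population" samples below are synthetic placeholders, not output of any water-quality simulation.

```python
# CVaR at level alpha = mean loss in the worst (1 - alpha) tail, i.e. the expected
# loss given that the Value at Risk is exceeded. The lognormal "affected population"
# samples are synthetic placeholders, not output of a water-quality simulation.
import numpy as np

def cvar(losses, alpha=0.95):
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)          # Value at Risk: the alpha-quantile
    return losses[losses >= var].mean()       # average loss in the worst tail

rng = np.random.default_rng(0)
affected_population = rng.lognormal(mean=7.0, sigma=1.0, size=10_000)
print(f"VaR95  = {np.quantile(affected_population, 0.95):,.0f} persons")
print(f"CVaR95 = {cvar(affected_population, 0.95):,.0f} persons")
```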

  8. Optimal placement of switching equipment in reconfigurable distribution systems

    Directory of Open Access Journals (Sweden)

    Mijailović Vladica

    2011-01-01

    Full Text Available This paper presents a comparative analysis of some measures that can improve the reliability of a medium-voltage (MV) distribution feeder. Specifically, the impact of certain types of switching equipment installed on the feeder and the possibilities of backup supply from the adjacent feeders were analyzed. For each analyzed case, equations for the calculation of the System Average Interruption Duration Index and the energy not delivered to the customers are given. The effects of certain measures are calculated for one real MV feeder, for radial supply to customers and for cases of possible backup supply to the customers. Installation locations of certain types of switching equipment for the given concept of energy supply are determined according to the criterion of the minimum value of the System Average Interruption Duration Index and according to the criterion of the minimum value of energy not delivered to the customers.

  9. Optimal power flow with optimal placement TCSC device on 500 kV Java-Bali electrical power system using genetic Algorithm-Taguchi method

    Science.gov (United States)

    Apribowo, Chico Hermanu Brillianto; Ibrahim, Muhammad Hamka; Wicaksono, F. X. Rian

    2018-02-01

    The growing load burden and the complexity of the power system have increased the need for optimization of power system operation. Optimal power flow (OPF) with optimal placement and rating of thyristor controlled series capacitors (TCSC) is an effective solution used to determine the economic cost of operating the plant and to regulate the power flow in the power system. The purpose of this study is to minimize the total cost of generation by optimizing the location and rating of TCSCs using genetic algorithm-design of experiments techniques (GA-DOE). In a simulation on the 500 kV Java-Bali system with 5 TCSC compensators, the proposed method can reduce the generation cost by 0.89% compared to OPF without using TCSC.

  10. Optimal pressure sensor placement in water distribution networks minimizing leak location uncertainty

    OpenAIRE

    Nejjari Akhi-Elarab, Fatiha; Sarrate Estruch, Ramon; Blesa Izquierdo, Joaquim

    2015-01-01

    In this paper an optimal sensor placement strategy based on pressure sensitivity matrix analysis and an exhaustive search strategy that maximizes some diagnosis specifications for a water distribution network is presented. An average worst leak expansion distance as a new leak location performance measure has been proposed. This metric is later used to assess the leak location uncertainty provided by a sensor configuration. The method is combined with a clustering technique in order to reduce...

  11. Temperature Simulation of Greenhouse with CFD Methods and Optimal Sensor Placement

    Directory of Open Access Journals (Sweden)

    Yanzheng Liu

    2014-03-01

    Full Text Available The accuracy of information monitoring is important for increasing the effectiveness of greenhouse environment control. In this paper, taking simulation of the temperature field in the greenhouse as an example, a CFD (Computational Fluid Dynamics) simulation model for the greenhouse microclimate environment was established based on the principle of thermal environment formation, and the temperature distribution under mechanical ventilation was simulated. The results showed that the CFD model and its solution for the greenhouse thermal environment could describe the changing process of the temperature environment within the greenhouse; the most suitable turbulence model was the standard k-ε model. Under mechanical ventilation, the average deviation between the simulated and measured values was 0.6, which was 4.5 percent of the measured value. The distribution of the temperature field had an obvious layered structure, and the temperature in the greenhouse model decreased gradually from the periphery to the center. Based on these results, the sensor number and the optimal sensor placement were determined with the CFD simulation method.

  12. Optimizing virtual machine placement for energy and SLA in clouds using utility functions

    Directory of Open Access Journals (Sweden)

    Abdelkhalik Mosa

    2016-10-01

    Full Text Available Abstract Cloud computing provides on-demand access to a shared pool of computing resources, which enables organizations to outsource their IT infrastructure. Cloud providers are building data centers to handle the continuous increase in cloud users’ demands. Consequently, these cloud data centers consume, and have the potential to waste, substantial amounts of energy. This energy consumption increases the operational cost and the CO2 emissions. The goal of this paper is to develop an optimized energy and SLA-aware virtual machine (VM placement strategy that dynamically assigns VMs to Physical Machines (PMs in cloud data centers. This placement strategy co-optimizes energy consumption and service level agreement (SLA violations. The proposed solution adopts utility functions to formulate the VM placement problem. A genetic algorithm searches the possible VMs-to-PMs assignments with a view to finding an assignment that maximizes utility. Simulation results using CloudSim show that the proposed utility-based approach reduced the average energy consumption by approximately 6 % and the overall SLA violations by more than 38 %, using fewer VM migrations and PM shutdowns, compared to a well-known heuristics-based approach.

  13. Fast Optimal Replica Placement with Exhaustive Search Using Dynamically Reconfigurable Processor

    Directory of Open Access Journals (Sweden)

    Hidetoshi Takeshita

    2011-01-01

    Full Text Available This paper proposes a new replica placement algorithm that expands the exhaustive search limit with reasonable calculation time. It combines a new type of parallel data-flow processor with an architecture tuned for fast calculation. The replica placement problem is to find a replica-server set satisfying service constraints in a content delivery network (CDN). It is derived from the set cover problem, which is known to be NP-hard. It is impractical to use exhaustive search to obtain optimal replica placement in large-scale networks, because calculation time increases with the number of combinations. To reduce calculation time, heuristic algorithms have been proposed, but it is known that no heuristic algorithm is assured of finding the optimal solution. The proposed algorithm suits parallel processing and pipeline execution and is implemented on DAPDNA-2, a dynamically reconfigurable processor. Experiments show that the proposed algorithm expands the exhaustive search limit by a factor of 18.8 compared to the conventional algorithm running on a von Neumann-type processor.
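    To show what "exhaustive search" means for this set-cover formulation, here is a hedged, purely software sketch: every candidate replica subset of increasing size is tested until all clients are covered within the service constraint. The coverage sets are toy data, not a real CDN topology, and the hardware pipelining that the paper actually contributes is of course not represented.

```python
# Brute-force replica placement as set cover: try every candidate server subset of
# increasing size until all clients are covered within the service constraint.
# Coverage sets are toy data; the hardware pipelining of the paper is not represented.
from itertools import combinations

def exhaustive_replica_placement(coverage, clients):
    """coverage[server] = set of clients that server can serve within the delay bound."""
    servers = list(coverage)
    for k in range(1, len(servers) + 1):
        for subset in combinations(servers, k):       # cost grows combinatorially with k
            covered = set().union(*(coverage[s] for s in subset))
            if covered >= clients:
                return subset                          # smallest feasible replica set
    return None

coverage = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6}, "s4": {1, 5}}
print(exhaustive_replica_placement(coverage, clients={1, 2, 3, 4, 5, 6}))
```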

  14. Optimal Placement and Sizing of PV-STATCOM in Power Systems Using Empirical Data and Adaptive Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Reza Sirjani

    2018-03-01

    Full Text Available Solar energy is a source of free, clean energy which avoids the destructive effects on the environment that have long been caused by power generation. Solar energy technology rivals fossil fuels, and its development has increased recently. Photovoltaic (PV) solar farms can only produce active power during the day, while at night they are completely idle. At the same time, though, active power should be supported by reactive power. Reactive power compensation in power systems improves power quality and stability. The use during the night of a PV solar farm inverter as a static synchronous compensator (or PV-STATCOM) device has recently been proposed, which can improve system performance and increase the utility of a PV solar farm. In this paper, a method for optimal PV-STATCOM placement and sizing is proposed using empirical data. Considering the objectives of power loss and cost minimization as well as voltage improvement, the two sub-problems of placement and sizing are solved by a power loss index and adaptive particle swarm optimization (APSO), respectively. Test results show that APSO not only performs better in finding optimal solutions but also converges faster compared with bee colony optimization (BCO) and the lightning search algorithm (LSA). Installation of a PV solar farm, a STATCOM, and a PV-STATCOM in a system are each evaluated in terms of efficiency and cost.

  15. Optimal placement of water-lubricated rubber bearings for vibration reduction of flexible multistage rotor systems

    Science.gov (United States)

    Liu, Shibing; Yang, Bingen

    2017-10-01

    Flexible multistage rotor systems with water-lubricated rubber bearings (WLRBs) have a variety of engineering applications. Filling a technical gap in the literature, this effort proposes a method of optimal bearing placement that minimizes the vibration amplitude of a WLRB-supported flexible rotor system with a minimum number of bearings. In the development, a new model of WLRBs and a distributed transfer function formulation are used to define a mixed continuous-and-discrete optimization problem. To deal with the case of an uncertain number of WLRBs in rotor design, a virtual bearing method is devised. Solution of the optimization problem by a real-coded genetic algorithm yields the locations and lengths of the water-lubricated rubber bearings by which the prescribed operational requirements for the rotor system are satisfied. The proposed method is applicable either to the preliminary design of a new rotor system with the number of bearings not known in advance, or to the redesign of an existing rotor system with a given number of bearings. Numerical examples show that the proposed optimal bearing placement is efficient, accurate and versatile in different design cases.

  16. Optimal placement of wind turbines in a wind park using Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Marmidis, Grigorios; Lazarou, Stavros; Pyrgioti, Eleftheria [High Voltage Laboratory, Department of Electrical and Computer Engineering, University of Patras, 26500 Rio, Patras (Greece)

    2008-07-15

    In the present study, a novel procedure is introduced for the optimal placement and arrangement of wind turbines in a wind park. In this approach a statistical and mathematical method is used, called the 'Monte Carlo simulation method'. The optimization is performed against the criteria of maximum energy production and minimum installation cost. As a test case, a square site is subdivided into 100 square cells that are possible turbine locations, and the program presents the optimal arrangement of the wind turbines in the wind park based on the Monte Carlo simulation method. The results of this study are compared to the results of previous studies that handle the same issue. (author)
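
    The record describes a plain Monte Carlo search over candidate layouts; the sketch below illustrates that idea on a 10 x 10 cell grid with a toy scoring function (the wake and installation-cost models of the study are not reproduced, and all values are assumptions).

        # Illustrative sketch only: Monte Carlo search over turbine placements on a
        # 10 x 10 cell grid, with a toy objective (not the paper's wake/cost model).
        import numpy as np

        rng = np.random.default_rng(1)
        n_cells, n_turbines, n_trials = 100, 20, 5000

        def toy_score(cells):
            # Hypothetical objective: each turbine yields 1 unit of energy, reduced
            # when another turbine occupies an adjacent cell (a crude wake proxy).
            rows, cols = cells // 10, cells % 10
            score = float(len(cells))
            for i in range(len(cells)):
                d = np.abs(rows - rows[i]) + np.abs(cols - cols[i])
                score -= 0.25 * np.sum((d > 0) & (d <= 1))   # penalize close neighbours
            return score

        best_cells, best_score = None, -np.inf
        for _ in range(n_trials):
            cells = rng.choice(n_cells, size=n_turbines, replace=False)
            s = toy_score(cells)
            if s > best_score:
                best_cells, best_score = np.sort(cells), s

        print("best layout (cell indices):", best_cells, " score:", round(best_score, 2))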

  17. Optimal sensor placement for maximum area coverage (MAC) for damage localization in composite structures

    Science.gov (United States)

    Thiene, M.; Sharif Khodaei, Z.; Aliabadi, M. H.

    2016-09-01

    In this paper an optimal sensor placement algorithm for attaining the maximum area coverage (MAC) within a sensor network is presented. The proposed novel approach takes into account physical properties of Lamb wave propagation (attenuation profile, direction-dependent group velocity due to material anisotropy) and geometrical complexities (boundary reflections, presence of openings) of the structure. A feature of the proposed optimization approach lies in the fact that it is independent of the characteristics of the damage detection algorithm (e.g. probability of detection), making it readily up-scalable to large complex composite structures such as stiffened aircraft composite panels. The proposed fitness function (MAC) is independent of damage parameters (type, severity, location). Statistical analysis shows that the proposed optimum sensor network with MAC results in a high probability of damage localization. A genetic algorithm is coupled with the fitness function to provide an efficient optimization strategy.

  18. Strain sensors optimal placement for vibration-based structural health monitoring. The effect of damage on the initially optimal configuration

    Science.gov (United States)

    Loutas, T. H.; Bourikas, A.

    2017-12-01

    We revisit the optimal sensor placement problem for engineering structures with an emphasis on in-plane dynamic strain measurements, aimed at modal identification as well as vibration-based damage detection for structural health monitoring purposes. The approach utilized is based on maximizing a norm of the Fisher Information Matrix built with numerically obtained mode shapes of the structure, while at the same time prohibiting the sensorization of neighboring degrees of freedom as well as those carrying similar information, in order to obtain satisfactory coverage. A new convergence criterion of the Fisher Information Matrix (FIM) norm is proposed in order to deal with the issue of choosing an appropriate sensor redundancy threshold, a concept recently introduced but not further investigated concerning its choice. The sensor configurations obtained via a forward sequential placement algorithm are sub-optimal in terms of FIM norm values, but the selected sensors are not allowed to be placed in neighboring degrees of freedom, providing a better coverage of the structure and a subsequently better identification of the experimental mode shapes. The issue of how service-induced damage affects the initially nominated optimal sensor configuration is also investigated and reported. The numerical model of a composite sandwich panel serves as a representative aerospace structure upon which our investigations are based.
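
    A minimal sketch of the forward sequential idea, assuming a random stand-in for the mode-shape matrix: at each step the candidate degree of freedom that most increases a Fisher Information Matrix norm (here, its trace) is added, while immediate neighbours of already-selected DOFs are excluded. The paper's redundancy threshold and convergence criterion are not modeled.

        # Illustrative sketch only: forward sequential sensor selection maximizing the
        # trace of the Fisher Information Matrix built from mode shapes, while skipping
        # immediate neighbours of already-selected DOFs.
        import numpy as np

        rng = np.random.default_rng(2)
        n_dof, n_modes, n_sensors = 60, 5, 8
        phi = rng.standard_normal((n_dof, n_modes))   # stand-in for FE mode shapes

        selected = []
        for _ in range(n_sensors):
            best_dof, best_val = None, -np.inf
            for dof in range(n_dof):
                if dof in selected or any(abs(dof - s) <= 1 for s in selected):
                    continue                          # exclude neighbouring DOFs
                rows = phi[selected + [dof], :]
                fim = rows.T @ rows                   # Fisher Information Matrix
                val = np.trace(fim)
                if val > best_val:
                    best_dof, best_val = dof, val
            selected.append(best_dof)

        print("selected DOFs:", sorted(selected))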

  19. Optimal placement of active braces by using PSO algorithm in near- and far-field earthquakes

    Science.gov (United States)

    Mastali, M.; Kheyroddin, A.; Samali, B.; Vahdani, R.

    2016-03-01

    One of the most important issues in tall buildings is lateral resistance of the load-bearing systems against applied loads such as earthquake, wind and blast. Dual systems comprising core wall systems (single or multi-cell core) and moment-resisting frames are used as resistance systems in tall buildings. In addition to the adequate stiffness provided by the dual system, most tall buildings may have to rely on various control systems to reduce the level of unwanted motions stemming from severe dynamic loads. One of the main challenges to effectively controlling the motion of a structure is the limitation in optimally distributing the required control along the structure height. In this paper, concrete shear walls are used as a secondary resistance system at three different heights, together with actuators installed in the braces. The optimal actuator positions are found using the PSO algorithm as well as arbitrarily. The control performance of buildings whose actuator placement is determined by the PSO algorithm is assessed and compared with arbitrary placement of controllers, using both near- and far-field ground motions of the Kobe and Chi-Chi earthquakes.

  20. [Method for optimal sensor placement in water distribution systems with nodal demand uncertainties].

    Science.gov (United States)

    Liu, Shu-Ming; Wu, Xue; Ouyang, Le-Yan

    2013-08-01

    The notion of identification fitness was proposed for optimizing sensor placement in water distribution systems. The Nondominated Sorting Genetic Algorithm II was used to find the Pareto front between minimum overlap of possible detection times of two events and the best probability of detection, taking nodal demand uncertainties into account. This methodology was applied to an example network. The solutions show that the probability of detection and the number of possible locations are not remarkably affected by nodal demand uncertainties, but the source identification accuracy declines with nodal demand uncertainties.

  1. Optimal Placement of Piezoelectric Plates to Control Multimode Vibrations of a Beam

    Directory of Open Access Journals (Sweden)

    Fabio Botta

    2013-01-01

    Full Text Available Damping of vibrations is often required to improve both the performance and the integrity of engineering structures, for example, gas turbine blades. In this paper, we explore the possibility of using piezoelectric plates to control the multimode vibrations of a cantilever beam. To develop an effective control strategy and optimize the placement of the active piezoelectric elements in terms of vibration amplitude reduction, a procedure has been developed and a new analytical solution has been proposed. The results obtained have been corroborated by comparison with the results from a multiphysics finite element package (COMSOL), results available in the literature, and experimental investigations carried out by the authors.

  2. Optimal Meter Placement for Distribution Network State Estimation: A Circuit Representation Based MILP Approach

    DEFF Research Database (Denmark)

    Chen, Xiaoshuang; Lin, Jin; Wan, Can

    2016-01-01

    State estimation (SE) in distribution networks is not as accurate as that in transmission networks. Traditionally, distribution networks (DNs) lack direct measurements due to the limitations of investments and the difficulties of maintenance. Therefore, it is critical to improve the accuracy...... of SE in distribution networks by placing additional physical meters. For state-of-the-art SE models, it is difficult to clearly quantify measurements' influences on SE errors, so the problems of optimal meter placement for reducing SE errors are mostly solved by heuristic or suboptimal algorithms...

  3. Optimal Sensor Placement in Bridge Structure Based on Immune Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Zhen-Rui PENG

    2014-10-01

    Full Text Available For the problem of optimal sensor placement (OSP), this paper introduces an immune genetic algorithm (IGA), which combines the advantages of the genetic algorithm (GA) and the immune algorithm (IA), to minimize the number of sensors placed in the structure and to obtain more information about structural characteristics. The OSP model is formulated, and an integer coding method is proposed to code an antibody in order to reduce the computational complexity of the affinity. Additionally, taking an arch bridge as an example, the results indicate that the problem can be solved with the IGA method, and that IGA guarantees higher calculation accuracy compared with the genetic algorithm (GA).

  4. Neural network for optimal capacitor placement and its impact on power quality in electric distribution systems

    International Nuclear Information System (INIS)

    Mohamed, A.A.E.S.

    2013-01-01

    Capacitors are widely installed in distribution systems for reactive power compensation to achieve power and energy loss reduction, voltage regulation and system capacity release. The extent of these benefits depends greatly on how the capacitors are placed on the system. The problem of how to place capacitors on the system such that these benefits are achieved and maximized against the cost associated with the capacitor placement is termed the general capacitor placement problem. The capacitor placement problem has been formulated as the maximization of the savings resulting from reduction in both peak power and energy losses, considering capacitor installation cost and maintaining the bus voltages within acceptable limits. After an appropriate analysis, the optimization problem was formulated in a quadratic form. For solving capacitor placement, a new combinatorial heuristic and quadratic programming technique has been presented and applied in the MATLAB software. The proposed strategy was applied on two different radial distribution feeders. The results have been compared with previous works. The comparison showed the validity and the effectiveness of this strategy. Secondly, two artificial intelligence techniques for predicting the capacitor switching state in radial distribution feeders have been investigated; one is based on a Radial Basis Neural Network (RBNN) and the other on an Adaptive Neuro-Fuzzy Inference System (ANFIS). The ANFIS technique gives better results with a minimum total error compared to RBNN. The learning duration of ANFIS was much shorter than that of the neural network, implying that ANFIS reaches the target faster than the neural network. Thirdly, an artificial intelligence (RBNN) approach for estimation of transient overvoltage during capacitor switching has been studied. The artificial intelligence approach estimated the transient overvoltages with a minimum error in a short computational time. Finally, a capacitor switching

  5. Determining Student Competency in Field Placements: An Emerging Theoretical Model

    Directory of Open Access Journals (Sweden)

    Twyla L. Salm

    2016-06-01

    Full Text Available This paper describes a qualitative case study that explores how twenty-three field advisors, representing three human service professions including education, nursing, and social work, experience the process of assessment with students who are struggling to meet minimum competencies in field placements. Five themes emerged from the analysis of qualitative interviews. The field advisors' primary concern was the level of professional competency achieved by practicum students. Related to competency were themes concerning the field advisor's role in being accountable and protecting the reputation of his/her profession as well as the reputation of the professional program affiliated with the practicum student's professional education. The final theme – teacher-student relationship – emerged from the data, both as a stand-alone theme and as a global or umbrella theme. As an umbrella theme, teacher-student relationship permeated each of the other themes as the participants interpreted their experiences of the process of assessment through the mentor relationships. A theoretical model was derived from these findings and a description of the model is presented.

  6. On the Design of Smart Parking Networks in the Smart Cities: An Optimal Sensor Placement Model.

    Science.gov (United States)

    Bagula, Antoine; Castelli, Lorenzo; Zennaro, Marco

    2015-06-30

    Smart parking is a typical IoT application that can benefit from advances in sensor, actuator and RFID technologies to provide many services to its users and parking owners of a smart city. This paper considers a smart parking infrastructure where sensors are laid down on the parking spots to detect car presence and RFID readers are embedded into parking gates to identify cars and help in the billing of the smart parking. Both types of devices are endowed with wired and wireless communication capabilities for reporting to a gateway where the situation recognition is performed. The sensor devices are tasked to play one of the three roles: (1) slave sensor nodes located on the parking spot to detect car presence/absence; (2) master nodes located at one of the edges of a parking lot to detect presence and collect the sensor readings from the slave nodes; and (3) repeater sensor nodes, also called "anchor" nodes, located strategically at specific locations in the parking lot to increase the coverage and connectivity of the wireless sensor network. While slave and master nodes are placed based on geographic constraints, the optimal placement of the relay/anchor sensor nodes in smart parking is an important parameter upon which the cost and efficiency of the parking system depends. We formulate the optimal placement of sensors in smart parking as an integer linear programming multi-objective problem optimizing the sensor network engineering efficiency in terms of coverage and lifetime maximization, as well as its economic gain in terms of the number of sensors deployed for a specific coverage and lifetime. We propose an exact solution to the node placement problem using single-step and two-step solutions implemented in the Mosel language based on the Xpress-MPsuite of libraries. Experimental results reveal the relative efficiency of the single-step compared to the two-step model on different performance parameters. These results are consolidated by simulation results
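
    The sketch below conveys only the coverage part of such a formulation: a brute-force search for the smallest set of anchor positions that keeps every parking spot within an assumed radio range. It is not the integer linear program of the paper, and the coordinates, candidate grid and range are hypothetical.

        # Illustrative sketch only: brute-force search for the smallest set of anchor
        # positions that covers every parking spot within a given radio range. The
        # paper formulates this as an integer linear program (coverage + lifetime);
        # this toy enumeration just conveys the coverage idea.
        import itertools
        import numpy as np

        rng = np.random.default_rng(3)
        spots = rng.uniform(0, 50, size=(30, 2))      # parking-spot coordinates (m)
        candidates = np.array([(x, y) for x in range(0, 51, 10)
                                       for y in range(0, 51, 10)], dtype=float)
        radio_range = 25.0                            # assumed anchor radio range (m)

        def covers(anchor_idx):
            d = np.linalg.norm(spots[:, None, :] - candidates[list(anchor_idx)][None, :, :], axis=2)
            return bool(np.all(d.min(axis=1) <= radio_range))

        solution = None
        for k in range(1, len(candidates) + 1):       # fewest anchors first
            for combo in itertools.combinations(range(len(candidates)), k):
                if covers(combo):
                    solution = combo
                    break
            if solution:
                break

        print("anchors used:", [tuple(candidates[i]) for i in solution])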

  7. On the Design of Smart Parking Networks in the Smart Cities: An Optimal Sensor Placement Model

    Directory of Open Access Journals (Sweden)

    Antoine Bagula

    2015-06-01

    Full Text Available Smart parking is a typical IoT application that can benefit from advances in sensor, actuator and RFID technologies to provide many services to its users and parking owners of a smart city. This paper considers a smart parking infrastructure where sensors are laid down on the parking spots to detect car presence and RFID readers are embedded into parking gates to identify cars and help in the billing of the smart parking. Both types of devices are endowed with wired and wireless communication capabilities for reporting to a gateway where the situation recognition is performed. The sensor devices are tasked to play one of the three roles: (1) slave sensor nodes located on the parking spot to detect car presence/absence; (2) master nodes located at one of the edges of a parking lot to detect presence and collect the sensor readings from the slave nodes; and (3) repeater sensor nodes, also called “anchor” nodes, located strategically at specific locations in the parking lot to increase the coverage and connectivity of the wireless sensor network. While slave and master nodes are placed based on geographic constraints, the optimal placement of the relay/anchor sensor nodes in smart parking is an important parameter upon which the cost and efficiency of the parking system depends. We formulate the optimal placement of sensors in smart parking as an integer linear programming multi-objective problem optimizing the sensor network engineering efficiency in terms of coverage and lifetime maximization, as well as its economic gain in terms of the number of sensors deployed for a specific coverage and lifetime. We propose an exact solution to the node placement problem using single-step and two-step solutions implemented in the Mosel language based on the Xpress-MPsuite of libraries. Experimental results reveal the relative efficiency of the single-step compared to the two-step model on different performance parameters. These results are consolidated by

  8. Optimal Sequential Diagnostic Strategy Generation Considering Test Placement Cost for Multimode Systems

    Directory of Open Access Journals (Sweden)

    Shigang Zhang

    2015-10-01

    Full Text Available Sequential fault diagnosis is an approach that realizes fault isolation by executing the optimal test step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the process of diagnosis system construction, which has been studied extensively in the literature. However, previous algorithms either are designed for single mode systems or do not consider test placement cost. They are not suitable to solve the sequential diagnostic strategy generation problem considering test placement cost for multimode systems. Therefore, this problem is studied in this paper. A formulation is presented. Two algorithms are proposed, one of which is realized by system transformation and the other is newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms. A real-world system is also presented. All the results show that both of them have the ability to solve the diagnostic strategy generation problem, and they have different characteristics.

  9. Efficient heuristic algorithm used for optimal capacitor placement in distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Segura, Silvio; Rider, Marcos J. [Department of Electric Energy Systems, University of Campinas, Campinas, Sao Paulo (Brazil); Romero, Ruben [Faculty of Engineering of Ilha Solteira, Paulista State University, Ilha Solteira, Sao Paulo (Brazil)

    2010-01-15

    An efficient heuristic algorithm is presented in this work to solve the optimal capacitor placement problem in radial distribution systems. The proposal uses the solution of the mathematical model after relaxing the integrality of the discrete variables as a strategy to identify the most attractive bus at which to add capacitors at each step of the heuristic algorithm. The relaxed mathematical model is a non-linear programming problem and is solved using a specialized interior point method. The algorithm also incorporates an additional local search strategy that enables finding a group of quality solutions after small alterations in the optimization strategy. The proposed solution methodology has been implemented and tested on electric systems known in the specialized literature, and the results reveal a satisfactory outcome compared with metaheuristic methods. (author)

  10. Optimizing Virtual Network Functions Placement in Virtual Data Center Infrastructure Using Machine Learning

    Science.gov (United States)

    Bolodurina, I. P.; Parfenov, D. I.

    2018-01-01

    We have elaborated a neural network model of virtual network flow identification based on the statistical properties of flows circulating in the data center network and on characteristics that describe the content of packets transmitted through network objects. This enabled us to establish the optimal set of attributes to identify virtual network functions. Using the data obtained in our research, we have developed an algorithm for optimizing the placement of virtual network functions. Our approach uses a hybrid method of virtualization based on virtual machines and containers, which makes it possible to reduce the infrastructure load and the response time in the network of the virtual data center. The algorithmic solution is based on neural networks, which allows it to scale to any number of network function copies.

  11. Optimal placement and sizing of fixed and switched capacitor banks under non sinusoidal operating conditions

    International Nuclear Information System (INIS)

    Ladjevardi, M.; Masoum, M.A.S.; Fuchs, E.F.

    2004-01-01

    An iterative nonlinear algorithm is generated for optimal sizing and placement of fixed and switched capacitor banks on radial distribution lines in the presence of linear and nonlinear loads. The HARMFLOW algorithm and the maximum sensitivities selection method are used to solve the constrained optimization problem with discrete variables. To limit the burden of calculations and improve convergence, the problem is decomposed into two subproblems. Objective functions include minimum system losses and capacitor cost, while IEEE 519 power quality limits are used as constraints. Results are presented and analyzed for the 18-bus IEEE distorted system. The advantage of the proposed algorithm compared to previous work is the consideration of harmonic couplings and reactions of actual nonlinear loads of the distribution system.

  12. Wiring economy and volume exclusion determine neuronal placement in the Drosophila brain.

    Science.gov (United States)

    Rivera-Alba, Marta; Vitaladevuni, Shiv N; Mishchenko, Yuriy; Mischenko, Yuriy; Lu, Zhiyuan; Takemura, Shin-Ya; Scheffer, Lou; Meinertzhagen, Ian A; Chklovskii, Dmitri B; de Polavieja, Gonzalo G

    2011-12-06

    Wiring economy has successfully explained the individual placement of neurons in simple nervous systems like that of Caenorhabditis elegans [1-3] and the locations of coarser structures like cortical areas in complex vertebrate brains [4]. However, it remains unclear whether wiring economy can explain the placement of individual neurons in brains larger than that of C. elegans. Indeed, given the greater number of neuronal interconnections in larger brains, simply minimizing the length of connections results in unrealistic configurations, with multiple neurons occupying the same position in space. Avoiding such configurations, or volume exclusion, repels neurons from each other, thus counteracting wiring economy. Here we test whether wiring economy together with volume exclusion can explain the placement of neurons in a module of the Drosophila melanogaster brain known as lamina cartridge [5-13]. We used newly developed techniques for semiautomated reconstruction from serial electron microscopy (EM) [14] to obtain the shapes of neurons, the location of synapses, and the resultant synaptic connectivity. We show that wiring length minimization and volume exclusion together can explain the structure of the lamina microcircuit. Therefore, even in brains larger than that of C. elegans, at least for some circuits, optimization can play an important role in individual neuron placement. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Development of Decision-Making Automated System for Optimal Placement of Physical Access Control System’s Elements

    Science.gov (United States)

    Danilova, Olga; Semenova, Zinaida

    2018-04-01

    The objective of this study is a detailed analysis of the development of physical protection systems for information resources. Optimization theory and the mathematical apparatus of decision-making are used to correctly formulate and create an algorithm for selecting the optimal configuration of a security system, considering the locations of the secured object's access points and zones. The result of this study is a software implementation scheme of a decision-making system for optimal placement of the physical access control system's elements.

  14. Sensor Placement via Optimal Experiment Design in EMI Sensing of Metallic Objects

    Directory of Open Access Journals (Sweden)

    Lin-Ping Song

    2016-01-01

    Full Text Available This work, under the optimal experimental design framework, investigates the sensor placement problem that aims to guide electromagnetic induction (EMI) sensing of multiple objects. We use the linearized model covariance matrix as a measure of estimation error to present a sequential experimental design (SED) technique. The technique recursively minimizes data misfit to update model parameters and maximizes an information gain function for a future survey relative to previous surveys. The fundamental process of the SED seeks to increase weighted sensitivities to targets when placing sensors. The synthetic and field experiments demonstrate that SED can be used to guide the sensing process for an effective interrogation. It can also serve as a theoretical basis to improve empirical survey operation. We further study the sensitivity of the SED to the number of objects within the sensing range. The tests suggest that an appropriately overrepresented model of the expected anomalies might be a feasible choice.
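
    A minimal sketch of the sequential design idea for a generic linear model d = Gm + noise: at each step the candidate sensor position that most reduces the trace of the linearized (posterior) model covariance is selected. The EMI sensitivities and the data-misfit update of the paper are replaced by random stand-ins.

        # Illustrative sketch only: greedy sequential experiment design for a linear
        # model d = G m + noise, adding at each step the candidate sensor that most
        # reduces the trace of the linearized model covariance (A-optimality).
        import numpy as np

        rng = np.random.default_rng(4)
        n_params, n_candidates, n_picks = 4, 40, 6
        G_all = rng.standard_normal((n_candidates, n_params))  # stand-in sensitivities
        noise_var, prior_var = 0.1, 10.0

        chosen = []
        for _ in range(n_picks):
            best_j, best_trace = None, np.inf
            for j in range(n_candidates):
                if j in chosen:
                    continue
                G = G_all[chosen + [j], :]
                # Posterior covariance for a Gaussian linear inverse problem.
                cov = np.linalg.inv(G.T @ G / noise_var + np.eye(n_params) / prior_var)
                if np.trace(cov) < best_trace:
                    best_j, best_trace = j, np.trace(cov)
            chosen.append(best_j)
            print(f"pick sensor {best_j:2d}, trace(cov) = {best_trace:.4f}")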

  15. Optimal Placement and Sizing of Renewable Distributed Generations and Capacitor Banks into Radial Distribution Systems

    Directory of Open Access Journals (Sweden)

    Mahesh Kumar

    2017-06-01

    Full Text Available In recent years, renewable types of distributed generation in the distribution system have been much appreciated due to their enormous technical and environmental advantages. This paper proposes a methodology for optimal placement and sizing of renewable distributed generation(s) (i.e., wind, solar and biomass) and capacitor banks into a radial distribution system. The intermittency of wind speed and solar irradiance is handled with multi-state modeling using suitable probability distribution functions. The three objective functions, i.e., power loss reduction, voltage stability improvement, and voltage deviation minimization, are optimized using an advanced Pareto-front non-dominated sorting multi-objective particle swarm optimization method. First, a set of non-dominated Pareto-front solutions is obtained from the algorithm. Later, a fuzzy decision technique is applied to extract the trade-off solution set. The effectiveness of the proposed methodology is tested on the standard IEEE 33 test system. The overall results reveal that the combination of renewable distributed generations and capacitor banks is dominant in power loss reduction, voltage stability and voltage profile improvement.

  16. Photovoltaic and Wind Turbine Integration Applying Cuckoo Search for Probabilistic Reliable Optimal Placement

    OpenAIRE

    R. A. Swief; T. S. Abdel-Salam; Noha H. El-Amary

    2018-01-01

    This paper presents an efficient Cuckoo Search Optimization technique to improve the reliability of electrical power systems. Various reliability objective indices, such as Energy Not Supplied, the System Average Interruption Frequency Index, and the System Average Interruption Duration Index, are the main indices indicating reliability. The Cuckoo Search Optimization (CSO) technique is applied to optimally place the protection devices, install the distributed generators, and to determine the size of ...

  17. Rise and Shock: Optimal Defibrillator Placement in a High-rise Building.

    Science.gov (United States)

    Chan, Timothy C Y

    2017-01-01

    Out-of-hospital cardiac arrests (OHCA) in high-rise buildings experience lower survival and longer delays until paramedic arrival. Use of publicly accessible automated external defibrillators (AED) can improve survival, but "vertical" placement has not been studied. We aim to determine whether elevator-based or lobby-based AED placement results in a shorter vertical distance travelled ("response distance") to OHCAs in a high-rise building. We developed a model of a single-elevator, n-floor high-rise building. We calculated and compared the average distance from AED to floor of arrest for the two AED locations. We modeled OHCA occurrences using floor-specific Poisson processes, with rate λ1 for OHCA on the ground floor and rate λ on any above-ground floor. The elevator was modeled with an override function enabling direct travel to the target floor. The elevator location upon override was modeled as a discrete uniform random variable. Calculations used the laws of probability. Elevator-based AED placement had a shorter average response distance if the number of floors (n) in the building exceeded three quarters of the ratio of ground-floor OHCA risk to above-ground floor risk plus one half, i.e., n ≥ 3λ1/(4λ) + 0.5. Otherwise, a lobby-based AED had a shorter average response distance. If the OHCA risk on each floor was equal, an elevator-based AED had a shorter average response distance. Elevator-based AEDs travel less vertical distance to OHCAs in tall buildings or those with uniform vertical risk, while lobby-based AEDs travel less vertical distance in buildings with substantial lobby, underground, and nearby street-level traffic and OHCA risk.
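
    The break-even rule quoted above is easy to evaluate numerically; the snippet below plugs in made-up ground-floor and above-ground OHCA rates purely to show the arithmetic.

        # Worked example of the stated break-even rule: an elevator-based AED wins when
        # n >= 3*lambda1/(4*lambda) + 0.5. The risk values below are made up purely to
        # show the arithmetic.
        lambda_ground = 6.0   # hypothetical OHCA rate on the ground floor (events/yr)
        lambda_above = 0.5    # hypothetical OHCA rate on each above-ground floor

        threshold = 3.0 * lambda_ground / (4.0 * lambda_above) + 0.5
        for n_floors in (5, 10, 20):
            better = "elevator" if n_floors >= threshold else "lobby"
            print(f"{n_floors:2d} floors: threshold = {threshold:.1f} -> {better}-based AED")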

  18. A Simultaneous Biogeography based Optimal Placement of DG Units and Capacitor Banks in Distribution Systems with Nonlinear Loads

    Science.gov (United States)

    Sadeghi, Hassan; Ghaffarzadeh, Navid

    2016-09-01

    This paper uses a new algorithm, namely biogeography-based optimization (BBO), for the simultaneous placement of distributed generation (DG) units and capacitor banks in the distribution network. The optimization has been conducted in the presence of nonlinear loads (a cause of harmonic injection). The purpose of simultaneous optimal placement of the DG and the capacitor is the reduction of active and reactive losses. The differences in the values of loss reduction at different load levels have been included in the objective function, and the considered objective function includes constraints on voltage, on the size and number of DG units and capacitor banks, and on the allowable range of the total harmonic distortion (THD) of the voltage in accordance with the IEEE 519 standard. In this paper the placement has been performed for two load types, i.e., constant and mixed power; moreover, the effects of load models on the results and the effects of optimal placement on reduction of the THD levels have also been analyzed. The mentioned cases have been studied on a 33-bus radial distribution system.

  19. Optimization of Sound Absorbers Number and Placement in an Enclosed Room by Finite Element Simulation

    Science.gov (United States)

    Lau, S. F.; Zainulabidin, M. H.; Yahya, M. N.; Zaman, I.; Azmir, N. A.; Madlan, M. A.; Ismon, M.; Kasron, M. Z.; Ismail, A. E.

    2017-10-01

    Giving a room proper acoustic treatment is both art and science. Acoustic design brings comfort to the built environment and reduces noise levels by using sound absorbers. A room needs acoustic treatment with absorbers in order to decrease the reverberant sound. However, absorbers are usually expensive to purchase and install, and there is no systematic way to locate the optimum number and placement of sound absorbers. It would be wasteful if the room were overly treated with absorbers, and the treatment would be inadequate if the room had insufficient absorbers. This study aims to determine the amount of sound absorbers needed and the optimum locations for their placement in order to reduce the overall sound pressure level in a specified room, using the ANSYS APDL software. The area of sound absorbers needed is found to be 11 m² using the Sabine equation, and different sets of absorbers, each with the same total area, are applied on the walls to investigate the best configurations. All three sets (single absorber, 11 absorbers and 44 absorbers) successfully treat the room by reducing the overall sound pressure level. The greatest reduction in overall sound pressure level is achieved by 44 absorbers evenly distributed around the walls, which reduce the level by as much as 24.2 dB, and the least effective configuration is the single absorber, which reduces the overall sound pressure level by 18.4 dB.
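
    The Sabine relation mentioned above (RT60 = 0.161 V / A, with V in cubic metres and A in square-metre sabins) can be used to back out the absorber area; the worked example below uses an assumed room volume, reverberation times and absorption coefficient, not the values of the study.

        # Worked example of the Sabine relation RT60 = 0.161 * V / A used to size the
        # absorber area. Room volume, target reverberation time and absorption
        # coefficient below are assumed for illustration; they are not the paper's.
        volume = 120.0            # room volume in m^3 (assumed)
        rt_current = 1.8          # measured reverberation time in s (assumed)
        rt_target = 0.8           # desired reverberation time in s (assumed)
        alpha_absorber = 0.85     # absorption coefficient of the added panels (assumed)

        a_current = 0.161 * volume / rt_current          # existing absorption, m^2 sabins
        a_target = 0.161 * volume / rt_target            # absorption needed for target
        extra_panel_area = (a_target - a_current) / alpha_absorber

        print(f"extra absorption needed: {a_target - a_current:.1f} m^2 sabins")
        print(f"panel area to install:   {extra_panel_area:.1f} m^2")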

  20. Integrated method to optimize well connection and platform placement on a multi-reservoir scenario

    Energy Technology Data Exchange (ETDEWEB)

    Sousa, Sergio Henrique Guerra de; Madeira, Marcelo Gomes; Franca, Martha Salles [Halliburton, Rio de Janeiro, RJ (Brazil); Mota, Rosane Oliveira; Silva, Edilon Ribeiro da; King, Vanessa Pereira Spear [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2012-07-01

    This paper describes a workflow created to optimize the platform placement and well-platform connections in a multi-reservoir scenario using an integrated reservoir simulator paired with an optimization engine. The proposed methodology describes how a new platform, being incorporated into a pre-existing asset, can be better used to develop newly-discovered fields, while helping increase the production of existing fields by sharing their production load. The sharing of production facilities is highly important in Brazilian offshore assets because of their high price (a few billion dollars per facility) and the fact that total production is usually limited to the installed liquid-processing capacity, which is an important constraint on the high water-cut well production rates typical of this region. The case study asset used to present the workflow consists of two deep water oil fields, each one developed by its own production platform, and a newly-discovered field with strong aquifer support that will be entirely developed with a new production platform. Because this new field should not include injector wells owing to the strong aquifer presence, the idea is to consider reconnecting existing wells from the two pre-existing fields to better use the production resources. In this scenario, the platform location is an important optimization issue, as a balance must be reached between supporting the production of the planned wells on the new field and the production of re-routed wells from the existing fields, to achieve improved overall asset production. If the new platform is too far away from any interconnected production well, pressure-drop issues along the pipeline might actually decrease production from the existing fields rather than augment it. The main contribution of this work is giving the reader insights on how to model and optimize these complex decisions to generate high-quality scenarios. (author)

  1. Field-Based Optimal Placement of Antennas for Body-Worn Wireless Sensors

    Directory of Open Access Journals (Sweden)

    Łukasz Januszkiewicz

    2016-05-01

    Full Text Available We investigate a case of automated energy-budget-aware optimization of the physical position of nodes (sensors) in a Wireless Body Area Network (WBAN). This problem has not been presented in the literature yet, as opposed to antenna and routing optimization, which are relatively well-addressed. In our research, which was inspired by a safety-critical application for firefighters, the sensor network consists of three nodes located on the human body. The nodes communicate over a radio link operating in the 2.4 GHz or 5.8 GHz ISM frequency band. Two sensors have a fixed location: one on the head (earlobe pulse oximetry) and one on the arm (with accelerometers, temperature and humidity sensors, and a GPS receiver), while the position of the third sensor can be adjusted within a predefined region on the wearer’s chest. The path loss between each node pair strongly depends on the location of the nodes and is difficult to predict without performing a full-wave electromagnetic simulation. Our optimization scheme employs evolutionary computing. The novelty of our approach lies not only in the formulation of the problem but also in linking a fully automated optimization procedure with an electromagnetic simulator and a simplified human body model. This combination turns out to be a computationally effective solution, which, depending on the initial placement, has a potential to improve performance of our example sensor network setup by up to about 20 dB with respect to the path loss between selected nodes.

  2. Multiobjective optimal placement of switches and protective devices in electric power distribution systems using ant colony optimization

    Energy Technology Data Exchange (ETDEWEB)

    Tippachon, Wiwat; Rerkpreedapong, Dulpichet [Department of Electrical Engineering, Kasetsart University, 50 Phaholyothin Rd., Ladyao, Jatujak, Bangkok 10900 (Thailand)

    2009-07-15

    This paper presents a multiobjective optimization methodology to optimally place switches and protective devices in electric power distribution networks. Identifying the type and location of these devices is a combinatorial optimization problem described by a nonlinear and nondifferentiable function. The multiobjective ant colony optimization (MACO) has been applied to this problem to minimize the total cost while simultaneously minimizing two distribution network reliability indices, the system average interruption frequency index (SAIFI) and the system average interruption duration index (SAIDI). Actual distribution feeders are used in the tests, and test results have shown that the algorithm can determine the set of optimal nondominated solutions. It allows the utility to obtain the optimal type and location of devices to achieve the best system reliability with the lowest cost. (author)

  3. Application of genetic algorithms to optimize burnable poison placement in pressurized water reactors

    International Nuclear Information System (INIS)

    Yilmaz, Serkan; Ivanov, Kostadin; Levine, Samuel; Mahgerefteh, Moussa

    2006-01-01

    An efficient and practical genetic algorithm (GA) tool was developed and applied successfully to the burnable poison (BP) placement optimization problem in the reference Three Mile Island-1 (TMI-1) core. The core BP optimization problem means developing a BP loading map for a given core loading pattern that minimizes the total gadolinium (Gd) amount in the core without violating any design constraints. The number of UO2/Gd2O3 pins and the Gd2O3 concentrations for each fresh fuel location in the core are the decision variables. The objective function was to minimize the total amount of Gd in the core together with the residual Gd reactivity binding at the End-of-Cycle (EOC). The constraints are to keep the maximum peak pin power during core depletion and the soluble boron (SOB) concentration at the Beginning of Cycle (BOC) both below their limit values. The innovation of this study was to search all of the possible UO2/Gd2O3 fuel assembly designs with a variable number of UO2/Gd2O3 fuel pins and Gd2O3 concentrations in the overall decision space. The use of different fitness functions guided the solution towards the desired (good solutions) region in the solution space, which accelerated the GA solution. The main objective of this study was to develop a practical and efficient GA tool and to apply this tool to designing an optimum BP pattern for a given core loading
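
    As an illustration of the kind of search involved, the sketch below runs a tiny genetic algorithm over integer decision variables (poisoned pins per fresh-assembly location) against a toy surrogate fitness; the coupled core-physics evaluation, Gd concentrations and TMI-1 constraints of the study are not modeled.

        # Illustrative sketch only: a tiny genetic algorithm over integer decision
        # variables (number of poisoned pins per fresh assembly location), with a toy
        # fitness that penalizes total poison while enforcing a fake "peaking" limit.
        import numpy as np

        rng = np.random.default_rng(7)
        n_locations, pop_size, n_generations = 12, 30, 60
        max_pins = 8                                   # allowed poisoned pins per location

        def fitness(individual):
            total_poison = individual.sum()
            # Hypothetical surrogate: too little poison raises the "peaking" penalty.
            peaking_penalty = max(0.0, 40.0 - total_poison) ** 2
            return -(total_poison + 5.0 * peaking_penalty)   # higher is better

        pop = rng.integers(0, max_pins + 1, size=(pop_size, n_locations))
        for _ in range(n_generations):
            scores = np.array([fitness(ind) for ind in pop])
            # Tournament selection.
            parents = pop[[max(rng.choice(pop_size, 2), key=lambda i: scores[i])
                           for _ in range(pop_size)]]
            # One-point crossover.
            cut = rng.integers(1, n_locations, size=pop_size // 2)
            children = parents.copy()
            for k, c in enumerate(cut):
                children[2 * k, c:], children[2 * k + 1, c:] = (
                    parents[2 * k + 1, c:].copy(), parents[2 * k, c:].copy())
            # Mutation.
            mutate = rng.random(children.shape) < 0.05
            children[mutate] = rng.integers(0, max_pins + 1, size=mutate.sum())
            pop = children

        best = max(pop, key=fitness)
        print("best pin map:", best, " total poisoned pins:", best.sum())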

  4. Optimal placement of capacitors in a radial network using conic and mixed integer linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box: 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)

    2008-06-15

    This paper considers the problem of optimally placing fixed and switched type capacitors in a radial distribution network. The aim of this problem is to minimize the costs associated with capacitor banks, peak power, and energy losses whilst satisfying a pre-specified set of physical and technical constraints. The proposed solution is obtained using a two-phase approach. In phase-I, the problem is formulated as a conic program in which all nodes are candidates for placement of capacitor banks whose sizes are considered as continuous variables. A global solution of the phase-I problem is obtained using an interior-point based conic programming solver. Phase-II seeks a practical optimal solution by considering capacitor sizes as discrete variables. The problem in this phase is formulated as a mixed integer linear program based on minimizing the L1-norm of deviations from the phase-I state variable values. The solution to the phase-II problem is obtained using a mixed integer linear programming solver. The proposed method is validated via extensive comparisons with previously published results. (author)

  5. Solve: a non linear least-squares code and its application to the optimal placement of torsatron vertical field coils

    International Nuclear Information System (INIS)

    Aspinall, J.

    1982-01-01

    A computational method was developed which alleviates the need for lengthy parametric scans as part of a design process. The method makes use of a least squares algorithm to find the optimal value of a parameter vector. Optimal is defined in terms of a utility function prescribed by the user. The placement of the vertical field coils of a torsatron is one such nonlinear problem

  6. Optimal needle placement for the accurate magnetic material quantification based on uncertainty analysis in the inverse approach

    International Nuclear Information System (INIS)

    Abdallh, A; Crevecoeur, G; Dupré, L

    2010-01-01

    The measured voltage signals picked up by the needle probe method can be interpreted by a numerical method so as to identify the magnetic material properties of the magnetic circuit of an electromagnetic device. However, when solving this electromagnetic inverse problem, the uncertainties in the numerical method give rise to recovery errors since the calculated needle signals in the forward problem are sensitive to these uncertainties. This paper proposes a stochastic Cramér–Rao bound method for determining the optimal sensor placement in the experimental setup. The numerical method is computationally time efficient where the geometrical parameters need to be provided. We apply the method for the non-destructive magnetic material characterization of an EI inductor where we ascertain the optimal experiment design. This design corresponds to the highest possible resolution that can be obtained when solving the inverse problem. Moreover, the presented results are validated by comparison with the exact material characteristics. The results show that the proposed methodology is independent of the values of the material parameter so that it can be applied before solving the inverse problem, i.e. as a priori estimation stage

  7. Optimal training for emergency needle thoracostomy placement by prehospital personnel: didactic teaching versus a cadaver-based training program.

    Science.gov (United States)

    Grabo, Daniel; Inaba, Kenji; Hammer, Peter; Karamanos, Efstathios; Skiada, Dimitra; Martin, Matthew; Sullivan, Maura; Demetriades, Demetrios

    2014-09-01

    Tension pneumothorax can rapidly progress to cardiac arrest and death if not promptly recognized and appropriately treated. We sought to evaluate the effectiveness of traditional didactic slide-based lectures (SBLs) as compared with fresh tissue cadaver-based training (CBT) for placement of needle thoracostomy (NT). Forty randomly selected US Navy corpsmen were recruited to participate from incoming classes of the Navy Trauma Training Center at the LAC + USC Medical Center and were then randomized to one of two NT teaching methods. The following outcomes were compared between the two study arms: (1) time required to perform the procedure, (2) correct placement of the needle, and (3) magnitude of deviation from the correct position. During the study period, a total of 40 corpsmen were enrolled, 20 randomized to SBL and 20 to CBT arms. When outcomes were analyzed, the time required for NT placement did not differ between the two arms. Examination of the location of needle placement revealed marked differences between the two study groups. Only a minority of the SBL group (35%) placed the NT correctly in the second intercostal space. In comparison, the majority of corpsmen assigned to the CBT group demonstrated accurate placement in the second intercostal space (75%). In a CBT module, US Navy corpsmen were better trained to place NT accurately than their traditional didactic SBL counterparts. Further studies are indicated to identify the optimal components of effective simulation training for NT and other emergent interventions.

  8. Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction

    Science.gov (United States)

    Mons, Vincent; Wang, Qi; Zaki, Tamer

    2017-11-01

    Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and is challenging due to several aspects. Firstly, the numerical estimation of the scalar dispersion in a turbulent flow requires significant computational resources. Secondly, in actual practice, only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180. This approach combines the components of variational data assimilation and ensemble Kalman filtering, and inherits the robustness from the former and the ease of implementation from the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the condition of the inverse problem, which enhances the performances of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).
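
    A minimal sketch of one ensemble analysis step (an EnKF-style perturbed-observation update for source parameters) is given below; the turbulent-channel scalar dispersion model and the hybrid ensemble-variational scheme of the work are replaced by a made-up linear observation operator.

        # Illustrative sketch only: one stochastic ensemble Kalman analysis step that
        # updates an ensemble of source-parameter guesses from a few observations.
        import numpy as np

        rng = np.random.default_rng(5)
        n_params, n_obs, n_members = 3, 5, 50

        truth = np.array([1.0, -2.0, 0.5])            # hypothetical source parameters
        H = rng.standard_normal((n_obs, n_params))    # stand-in observation operator
        obs_err = 0.05
        y = H @ truth + obs_err * rng.standard_normal(n_obs)

        X = rng.standard_normal((n_params, n_members)) * 2.0      # prior ensemble
        A = X - X.mean(axis=1, keepdims=True)                     # ensemble anomalies
        P = A @ A.T / (n_members - 1)                             # sample covariance
        R = obs_err ** 2 * np.eye(n_obs)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)              # Kalman gain

        # Perturbed-observation update, member by member.
        Y_pert = y[:, None] + obs_err * rng.standard_normal((n_obs, n_members))
        X_analysis = X + K @ (Y_pert - H @ X)

        print("truth          :", truth)
        print("posterior mean :", X_analysis.mean(axis=1).round(3))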

  9. Optimal placement and decentralized robust vibration control for spacecraft smart solar panel structures

    International Nuclear Information System (INIS)

    Jiang, Jian-ping; Li, Dong-xu

    2010-01-01

    The decentralized robust vibration control with collocated piezoelectric actuator and strain sensor pairs is considered in this paper for spacecraft solar panel structures. Each actuator is driven individually by the output of the corresponding sensor so that only local feedback control is implemented, with each actuator, sensor and controller operating independently. Firstly, an optimal placement method for the location of the collocated piezoelectric actuator and strain gauge sensor pairs is developed based on the degree of observability and controllability indices for solar panel structures. Secondly, a decentralized robust H∞ controller is designed to suppress the vibration induced by external disturbance. Finally, a numerical comparison between centralized and decentralized control systems is performed in order to investigate their effectiveness in suppressing vibration of the smart solar panel. The simulation results show that the vibration can be significantly suppressed with permitted actuator voltages by the controllers. The decentralized control system achieves almost the same disturbance attenuation level as the centralized control system, with slightly higher control voltages. More importantly, the decentralized controller, composed of four third-order systems, is a more practical implementation than a high-order centralized controller.

  10. Enabling High-performance Interactive Geoscience Data Analysis Through Data Placement and Movement Optimization

    Science.gov (United States)

    Zhu, F.; Yu, H.; Rilee, M. L.; Kuo, K. S.; Yu, L.; Pan, Y.; Jiang, H.

    2017-12-01

    Since the establishment of data archive centers and the standardization of file formats, scientists have been required to search metadata catalogs for the data they need and download the data files to their local machines to carry out data analysis. This approach has facilitated data discovery and access for decades, but it inevitably leads to data transfer from data archive centers to scientists' computers through low-bandwidth Internet connections. Data transfer becomes a major performance bottleneck in such an approach. Combined with generally constrained local compute/storage resources, these factors limit the extent of scientists' studies and deprive them of timely outcomes. Thus, this conventional approach is not scalable with respect to both the volume and variety of geoscience data. A much more viable solution is to couple analysis and storage systems to minimize data transfer. In our study, we compare loosely coupled approaches (exemplified by Spark and Hadoop) and tightly coupled approaches (exemplified by parallel distributed database management systems, e.g., SciDB). In particular, we investigate the optimization of data placement and movement to effectively tackle the variety challenge, and boost the popularization of parallelization to address the volume challenge. Our goal is to enable high-performance interactive analysis for a good portion of geoscience data analysis exercises. We show that tightly coupled approaches can concentrate data traffic between local storage systems and compute units, and thereby optimize bandwidth utilization to achieve better throughput. Based on our observations, we develop a geoscience data analysis system that tightly couples analysis engines with storage, and which has direct access to the detailed map of data partition locations. Through an innovative data partitioning and distribution scheme, our system has demonstrated scalable and interactive performance in real-world geoscience data analysis applications.

  11. Optimal Capacitor Bank Capacity and Placement in Distribution Systems with High Distributed Solar Power Penetration

    Energy Technology Data Exchange (ETDEWEB)

    Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mather, Barry A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cho, Gyu-Jung [Sungkyunkwan University, Korea; Oh, Yun-Sik [Sungkyunkwan University, Korea; Kim, Min-Sung [Sungkyunkwan University, Korea; Kim, Ji-Soo [Sungkyunkwan University, Korea; Kim, Chul-Hwan [Sungkyunkwan University, Korea

    2018-02-01

    Capacitor banks have generally been installed and utilized to support distribution voltage during periods of higher load or on longer, higher-impedance feeders. Installations of distributed energy resources in distribution systems are rapidly increasing, and many of these generation resources have variable and uncertain power output. These generators can significantly change the voltage profile across a feeder; therefore, when a new capacitor bank is needed, analysis of its optimal capacity and location is required. In this paper, we model a particular distribution system including essential equipment. An optimization method is adopted to determine the best capacity and location sets of the newly installed capacitor banks, in the presence of distributed solar power generation. Finally, we analyze the optimal capacitor bank configuration through the optimization and simulation results.

  12. Multiscale Collaborative Optimization of Processing Parameters for Carbon Fiber/Epoxy Laminates Fabricated by High-Speed Automated Fiber Placement

    Directory of Open Access Journals (Sweden)

    Zhenyu Han

    2016-01-01

    Full Text Available Processing optimization is an important means to inhibit manufacturing defects efficiently. However, processing optimization based on experiments or macroscopic theories in high-speed automated fiber placement (AFP) suffers from some restrictions, because the multiscale effects of laid tows and their manufacturing defects cannot be considered. In this paper, processing parameters, including compaction force, laying speed, and preheating temperature, are optimized by multiscale collaborative optimization in the AFP process. Firstly, a rational model relating cracks and strain energy is established so that the possibility of crack formation can be assessed using strain energy or its density. Following that, an antisequential hierarchical multiscale collaborative optimization method is presented to resolve the multiscale effect of structure and mechanical properties for laid tows or cracks in the high-speed automated fiber placement process. According to the above method, and taking carbon fiber/epoxy tow as an example, the multiscale mechanical properties of the laid tow under different processing parameters are investigated through simulation, including the recoverable strain energy (ALLSE) at the macroscale, the strain energy density (SED) at the mesoscale, and the interface absorbability and matrix fluidity at the microscale. Finally, the response surface method (RSM) is used to optimize the processing parameters. Two groups of processing parameters with higher desirability are obtained, achieving the purpose of multiscale collaborative optimization.

  13. Method for Vibration Response Simulation and Sensor Placement Optimization of a Machine Tool Spindle System with a Bearing Defect

    Science.gov (United States)

    Cao, Hongrui; Niu, Linkai; He, Zhengjia

    2012-01-01

    Bearing defects are one of the most important mechanical sources for vibration and noise generation in machine tool spindles. In this study, an integrated finite element (FE) model is proposed to predict the vibration responses of a spindle bearing system with localized bearing defects, and the sensor placement for better detection of bearing faults is then optimized. A nonlinear bearing model is developed based on Jones' bearing theory, while the drawbar, shaft and housing are modeled as Timoshenko beams. The bearing model is then integrated into the FE model of the drawbar/shaft/housing by assembling the equations of motion. The Newmark time integration method is used to solve the vibration responses numerically. The FE model of the spindle-bearing system was verified by conducting dynamic tests. Then, the localized bearing defects were modeled and vibration responses generated by the outer ring defect were simulated as an illustration. The optimization scheme of the sensor placement was carried out on the test spindle. The results proved that the optimal sensor placement depends on the vibration modes under different boundary conditions and the transfer path between the excitation and the response. PMID:23012514
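
    The Newmark scheme referenced above can be summarized in a few lines; the sketch below applies the average-acceleration variant to a toy two-degree-of-freedom system standing in for the spindle/bearing model.

        # Illustrative sketch only: the (average-acceleration) Newmark time-integration
        # scheme on a toy 2-DOF linear system, standing in for the paper's integrated
        # spindle/bearing FE model.
        import numpy as np

        M = np.diag([1.0, 0.5])                               # mass matrix
        K = np.array([[400.0, -200.0], [-200.0, 200.0]])      # stiffness matrix
        C = 0.01 * K                                          # light proportional damping

        dt, n_steps = 1e-3, 2000
        beta, gamma = 0.25, 0.5                               # average acceleration rule

        u = np.zeros(2)
        v = np.zeros(2)
        f = np.array([0.0, 1.0])                              # constant external force
        a = np.linalg.solve(M, f - C @ v - K @ u)             # initial acceleration

        K_eff = K + gamma / (beta * dt) * C + M / (beta * dt ** 2)
        for _ in range(n_steps):
            rhs = (f
                   + M @ (u / (beta * dt ** 2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
                   + C @ (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                          + dt * (gamma / (2 * beta) - 1) * a))
            u_new = np.linalg.solve(K_eff, rhs)
            a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
            v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
            u, v, a = u_new, v_new, a_new

        print("displacement after", n_steps, "steps:", u.round(4))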

  14. Optimal Placement of Actors in WSANs Based on Imposed Delay Constraints

    Directory of Open Access Journals (Sweden)

    Chunxi Yang

    2014-01-01

    Full Text Available Wireless Sensor and Actor Networks (WSANs) refer to a group of sensors and actors linked by a wireless medium to probe the environment and perform specific actions. Such actions should always be taken before a deadline when an event of interest is detected. In order to provide such services, the whole monitored area is divided into several virtual areas, and nodes in the same area form a cluster. Clustering of WSANs is often pursued so that each actor acts as a cluster-head. The number of actors is related to the size and deployment of the WSAN clusters. In this paper, we present a method to determine the required number of actors which enables them to receive data and take actions within an imposed time delay. The k-MinTE and k-MaxTE clustering algorithms are proposed to form the minimum and maximum cluster sizes, respectively. In those clustering algorithms, actors are deployed in such a way that sensors can route data to actors within k hops. Clusters are then arranged in a regular hexagonal layout. Finally, we evaluate the placement of actors, and the results show that our approach is effective.

  15. Optimal Design for Placements of Tsunami Observing Systems to Accurately Characterize the Inducing Earthquake

    Science.gov (United States)

    Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji

    2017-12-01

    Recently, numerous tsunami observation networks have been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on the extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with considerably fewer observations compared to the existing tsunami observation networks.

  16. Multi-Objective Distribution Network Operation Based on Distributed Generation Optimal Placement Using New Antlion Optimizer Considering Reliability

    Directory of Open Access Journals (Sweden)

    KHANBABAZADEH Javad

    2016-10-01

    Full Text Available Distribution network designers and operators are trying to deliver electrical energy with high reliability and quality to their subscribers. Due to the high losses in distribution systems, using distributed generation can improve reliability, reduce losses, and improve the voltage profile of the distribution network. Therefore, choosing the location of these resources and determining the amount of their generated power so as to maximize their benefits is an important issue that is discussed from different points of view today. In this paper, a new multi-objective optimal location and sizing of distributed generation resources is performed on the 33-bus distribution test network, considering reliability and using a new Antlion Optimizer (ALO), to maximize the benefits of distributed generation. The considered DG benefits are system loss reduction, system reliability improvement, revenue from the sale of electricity, and voltage profile improvement. For each of the mentioned benefits, the ALO algorithm is used to optimize the location and sizing of the distributed generation resources. In order to verify the proposed approach, the obtained results have been analyzed and compared with the results of the particle swarm optimization (PSO) algorithm. The results show that ALO outperforms PSO in solving the optimization problem.

  17. Implementation of strength pareto evolutionary algorithm II in the multiobjective burnable poison placement optimization of KWU pressurized water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Gharari, Rahman [Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of); Poursalehi, Navid; Abbasi, Mohmmadreza; Aghale, Mahdi [Nuclear Engineering Dept, Shahid Beheshti University, Tehran (Iran, Islamic Republic of)

    2016-10-15

    In this research, for the first time, a new optimization method, i.e., strength Pareto evolutionary algorithm II (SPEA-II), is developed for the burnable poison placement (BPP) optimization of a nuclear reactor core. In the BPP problem, an optimized placement map of fuel assemblies with burnable poison is searched for a given core loading pattern according to defined objectives. In this work, SPEA-II coupled with a nodal expansion code is used to solve the BPP problem of a Kraftwerk Union AG (KWU) pressurized water reactor. Our optimization goal for the BPP is to achieve a greater multiplication factor (K-eff) for gaining possibly longer operation cycles, along with more flattening of the fuel assembly relative power distribution, considering a safety constraint on the radial power peaking factor. For appraising the proposed methodology, the basic approach, i.e., SPEA, is also developed in order to compare the obtained results. In general, the results reveal the acceptable performance and high strength of SPEA, particularly its new version, SPEA-II, in achieving a semioptimized loading pattern for the BPP optimization of the KWU pressurized water reactor.

  18. Implementation of strength pareto evolutionary algorithm II in the multiobjective burnable poison placement optimization of KWU pressurized water reactor

    International Nuclear Information System (INIS)

    Gharari, Rahman; Poursalehi, Navid; Abbasi, Mohmmadreza; Aghale, Mahdi

    2016-01-01

    In this research, for the first time, a new optimization method, i.e., strength Pareto evolutionary algorithm II (SPEA-II), is developed for the burnable poison placement (BPP) optimization of a nuclear reactor core. In the BPP problem, an optimized placement map of fuel assemblies with burnable poison is searched for a given core loading pattern according to defined objectives. In this work, SPEA-II coupled with a nodal expansion code is used to solve the BPP problem of a Kraftwerk Union AG (KWU) pressurized water reactor. Our optimization goal for the BPP is to achieve a greater multiplication factor (K-eff) for gaining possibly longer operation cycles, along with more flattening of the fuel assembly relative power distribution, considering a safety constraint on the radial power peaking factor. For appraising the proposed methodology, the basic approach, i.e., SPEA, is also developed in order to compare the obtained results. In general, the results reveal the acceptable performance and high strength of SPEA, particularly its new version, SPEA-II, in achieving a semioptimized loading pattern for the BPP optimization of the KWU pressurized water reactor.

  19. Comparing the selection and placement of best management practices in improving water quality using a multiobjective optimization and targeting method.

    Science.gov (United States)

    Chiang, Li-Chi; Chaubey, Indrajeet; Maringanti, Chetan; Huang, Tao

    2014-03-11

    Suites of Best Management Practices (BMPs) are usually selected to be economically and environmentally efficient in reducing nonpoint source (NPS) pollutants from agricultural areas in a watershed. The objective of this research was to compare the selection and placement of BMPs in a pasture-dominated watershed using multiobjective optimization and targeting methods. Two objective functions were used in the optimization process, which minimize pollutant losses and the BMP placement areas. The optimization tool was an integration of a multi-objective genetic algorithm (GA) and a watershed model (Soil and Water Assessment Tool-SWAT). For the targeting method, an optimum BMP option was implemented in critical areas in the watershed that contribute the greatest pollutant losses. A total of 171 BMP combinations, which consist of grazing management, vegetated filter strips (VFS), and poultry litter applications were considered. The results showed that the optimization is less effective when vegetated filter strips (VFS) are not considered, and it requires much longer computation times than the targeting method to search for optimum BMPs. Although the targeting method is effective in selecting and placing an optimum BMP, larger areas are needed for BMP implementation to achieve the same pollutant reductions as the optimization method.
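
    The targeting strategy summarized above is essentially a greedy ranking of spatial units by their baseline pollutant loss. The sketch below illustrates that selection logic under stated assumptions; the subbasin names, loss values, BMP effectiveness factor, and reduction goal are illustrative placeholders, not values from the SWAT study.

```python
# Hypothetical sketch of the "targeting" strategy described above:
# rank spatial units by their baseline pollutant loss and apply the
# chosen BMP to the worst contributors until a reduction goal is met.
# Names, losses, and the BMP effectiveness factor are illustrative.

def target_bmp(subbasins, bmp_efficiency, reduction_goal):
    """subbasins: list of (name, area_ha, annual_loss_kg); returns chosen units."""
    baseline = sum(loss for _, _, loss in subbasins)
    selected, achieved = [], 0.0
    # Greedily treat the subbasins that contribute the most pollutant first.
    for name, area, loss in sorted(subbasins, key=lambda s: s[2], reverse=True):
        if achieved >= reduction_goal * baseline:
            break
        selected.append((name, area))
        achieved += loss * bmp_efficiency
    return selected, achieved / baseline

units = [("SB1", 120, 950.0), ("SB2", 80, 400.0), ("SB3", 200, 1500.0)]
chosen, ratio = target_bmp(units, bmp_efficiency=0.45, reduction_goal=0.30)
print(chosen, round(ratio, 2))
```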

  20. Comparing the Selection and Placement of Best Management Practices in Improving Water Quality Using a Multiobjective Optimization and Targeting Method

    Directory of Open Access Journals (Sweden)

    Li-Chi Chiang

    2014-03-01

    Full Text Available Suites of Best Management Practices (BMPs) are usually selected to be economically and environmentally efficient in reducing nonpoint source (NPS) pollutants from agricultural areas in a watershed. The objective of this research was to compare the selection and placement of BMPs in a pasture-dominated watershed using multiobjective optimization and targeting methods. Two objective functions were used in the optimization process, which minimize pollutant losses and the BMP placement areas. The optimization tool was an integration of a multi-objective genetic algorithm (GA) and a watershed model (Soil and Water Assessment Tool—SWAT). For the targeting method, an optimum BMP option was implemented in critical areas in the watershed that contribute the greatest pollutant losses. A total of 171 BMP combinations, which consist of grazing management, vegetated filter strips (VFS), and poultry litter applications were considered. The results showed that the optimization is less effective when vegetated filter strips (VFS) are not considered, and it requires much longer computation times than the targeting method to search for optimum BMPs. Although the targeting method is effective in selecting and placing an optimum BMP, larger areas are needed for BMP implementation to achieve the same pollutant reductions as the optimization method.

  1. Implementation of Strength Pareto Evolutionary Algorithm II in the Multiobjective Burnable Poison Placement Optimization of KWU Pressurized Water Reactor

    Directory of Open Access Journals (Sweden)

    Rahman Gharari

    2016-10-01

    Full Text Available In this research, for the first time, a new optimization method, i.e., strength Pareto evolutionary algorithm II (SPEA-II), is developed for the burnable poison placement (BPP) optimization of a nuclear reactor core. In the BPP problem, an optimized placement map of fuel assemblies with burnable poison is searched for a given core loading pattern according to defined objectives. In this work, SPEA-II coupled with a nodal expansion code is used to solve the BPP problem of a Kraftwerk Union AG (KWU) pressurized water reactor. Our optimization goal for the BPP is to achieve a greater multiplication factor (Keff) for gaining possibly longer operation cycles, along with more flattening of the fuel assembly relative power distribution, considering a safety constraint on the radial power peaking factor. For appraising the proposed methodology, the basic approach, i.e., SPEA, is also developed in order to compare the obtained results. In general, the results reveal the acceptable performance and high strength of SPEA, particularly its new version, SPEA-II, in achieving a semioptimized loading pattern for the BPP optimization of the KWU pressurized water reactor.

  2. Artificial Intelligence based technique for BTS placement

    International Nuclear Information System (INIS)

    Alenoghena, C O; Emagbetere, J O; Aibinu, A M (Department of Telecommunications Engineering, Federal University of Technology, Minna (Nigeria))

    2013-01-01

    The increase in base transceiver stations (BTS) in most urban areas can be traced to the drive by network providers to meet demand for coverage and capacity. In traditional network planning, the final decision on BTS placement is taken by a team of radio planners; this decision is not foolproof against regulatory requirements. In this paper, an intelligence-based algorithm for optimal BTS site placement is proposed. The proposed technique objectively takes neighbour and regulatory considerations into account while determining cell sites. Its application will lead to a quantitatively unbiased decision-making process in BTS placement. Experimental data for a 2 km by 3 km territory were simulated to test the new algorithm; the results show a 100% performance of the neighbour-constrained algorithm in BTS placement optimization. Results on the application of the GA with the neighbourhood constraint indicate that the choice of location can be unbiased and that optimization of facility placement for network design can be carried out.

  3. Optimal Sizing and Placement of Power-to-Gas Systems in Future Active Distribution Networks

    DEFF Research Database (Denmark)

    Diaz de Cerio Mendaza, Iker; Bhattarai, Bishnu Prasad; Kouzelis, Konstantinos

    2015-01-01

    of medium voltage distribution networks does not normally follow a common pattern, and a singular, very particular layout is found in each case. This fact makes the placement and dimensioning of such flexible loads a complicated task for the distribution system operator in the future. This paper describes...

  4. Photovoltaic and Wind Turbine Integration Applying Cuckoo Search for Probabilistic Reliable Optimal Placement

    Directory of Open Access Journals (Sweden)

    R. A. Swief

    2018-01-01

    Full Text Available This paper presents an efficient Cuckoo Search Optimization technique to improve the reliability of electrical power systems. Various reliability indices, such as Energy Not Supplied, the System Average Interruption Frequency Index, and the System Average Interruption Duration Index, are the main indices indicating reliability. The Cuckoo Search Optimization (CSO) technique is applied to optimally place protection devices, install distributed generators, and determine the size of the distributed generators in radial feeders for reliability improvement. Distributed generation affects reliability, system power losses, and the voltage profile. The volatile behaviour of both photovoltaic cells and wind turbine farms affects the values and the selection of protection devices and the allocation of distributed generators. To improve reliability, reconfiguration takes place before installing both the protection devices and the distributed generators. Assessment of consumer power system reliability is a vital part of distribution system operation and development. The distribution system reliability calculation relies on probabilistic reliability indices, which can predict the interruption profile of a distribution system based on the volatile behaviour of the added generators and the load behaviour. The validity of the proposed algorithm has been tested using a standard IEEE 69-bus system.

  5. Optimizing Placement of Weather Stations: Exploring Objective Functions of Meaningful Combinations of Multiple Weather Variables

    Science.gov (United States)

    Snyder, A.; Dietterich, T.; Selker, J. S.

    2017-12-01

    Many regions of the world lack ground-based weather data due to inadequate or unreliable weather station networks. For example, most countries in Sub-Saharan Africa have unreliable, sparse networks of weather stations. The absence of these data can have consequences for weather forecasting, prediction of severe weather events, agricultural planning, and climate change monitoring. The Trans-African Hydro-Meteorological Observatory (TAHMO.org) project seeks to address these problems by deploying and operating a large network of weather stations throughout Sub-Saharan Africa. To design the TAHMO network, we must determine where to place weather stations within each country. We should consider how we can create accurate spatio-temporal maps of weather data and how to balance the desired accuracy of each weather variable of interest (precipitation, temperature, relative humidity, etc.). We can express this problem as a joint optimization of multiple weather variables, given a fixed number of weather stations. We use reanalysis data as the best representation of the "true" weather patterns that occur in the region of interest. For each possible combination of sites, we interpolate the reanalysis data between the selected locations and calculate the mean average error between the reanalysis ("true") data and the interpolated data. In order to formulate our multi-variate optimization problem, we explore different methods of weighting each weather variable in our objective function. These methods include systematic variation of the weights to determine which weather variables have the strongest influence on the network design, as well as combinations targeted for specific purposes. For example, we can use computed evapotranspiration as a metric that combines many weather variables in a way that is meaningful for agricultural and hydrological applications. We compare the errors of the weather station networks produced by each optimization problem formulation. We also compare these
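
    As a rough illustration of the evaluation step described above, the sketch below scores a candidate station network by interpolating a gridded "true" field from the station sites and computing a weighted mean error against that field. The synthetic fields, the inverse-distance interpolation, and the per-variable weights are assumptions for illustration only, not the TAHMO design procedure.

```python
import numpy as np

# Sketch of the network-evaluation step: interpolate a gridded "true"
# (reanalysis-like) field from candidate station sites and score the
# network by a weighted mean absolute error.  Fields, interpolation
# scheme and weights are illustrative assumptions.

rng = np.random.default_rng(0)
xy = np.linspace(0.0, 10.0, 25)
grid = np.stack(np.meshgrid(xy, xy), axis=-1).reshape(-1, 2)   # 625 grid cells
truth = {"precip": np.sin(grid[:, 0]) + 0.1 * rng.standard_normal(len(grid)),
         "temp": 20.0 + 0.5 * grid[:, 1]}

def interpolate(stations, values, power=2.0):
    """Inverse-distance weighting from station values back onto the full grid."""
    d = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2) + 1e-9
    w = d ** -power
    return (w * values).sum(axis=1) / w.sum(axis=1)

def network_error(station_idx, weights):
    """Weighted mean absolute error of the interpolated fields vs. the 'truth'."""
    stations = grid[station_idx]
    return sum(wgt * np.abs(interpolate(stations, truth[var][station_idx]) - truth[var]).mean()
               for var, wgt in weights.items())

candidate = rng.choice(len(grid), size=8, replace=False)
print(round(network_error(candidate, {"precip": 0.7, "temp": 0.3}), 4))
```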

  6. Determining an optimal supply chain strategy

    Directory of Open Access Journals (Sweden)

    Intaher M. Ambe

    2012-11-01

    Full Text Available In today’s business environment, many companies want to become efficient and flexible, but have struggled, in part, because they have not been able to formulate optimal supply chain strategies. Often this is as a result of insufficient knowledge about the costs involved in maintaining supply chains and the impact of the supply chain on their operations. Hence, these companies find it difficult to manufacture at a competitive cost and respond quickly and reliably to market demand. Mismatched strategies are the root cause of the problems that plague supply chains, and supply-chain strategies based on a one-size-fits-all strategy often fail. The purpose of this article is to suggest instruments to determine an optimal supply chain strategy. This article, which is conceptual in nature, provides a review of current supply chain strategies and suggests a framework for determining an optimal strategy.

  7. Jacobian approach to optimal determination of perturbation ...

    African Journals Online (AJOL)

    In this work, the optimal determination of the perturbation factor (λ) or perturbation parameter for gradient method is considered. The spectrum analysis of the associated Jacobian of the associated matrix has laid the basis for the judicious selection of the perturbation factor. Numerical work is carried out to prove our ...

  8. An Optimal Design for Placements of Tsunami Observing Systems Around the Nankai Trough, Japan

    Science.gov (United States)

    Mulia, I. E.; Gusman, A. R.; Satake, K.

    2017-12-01

    Presently, there are numerous tsunami observing systems deployed in several major tsunamigenic regions throughout the world. However, documentation on how and where to optimally place such measurement devices is limited. This study presents a methodological approach to select the best and fewest observation points for the purpose of tsunami source characterization, particularly in the form of fault slip distributions. We apply the method to design a new tsunami observation network around the Nankai Trough, Japan. In brief, our method can be divided into two stages: initialization and optimization. The initialization stage aims to identify favorable locations of observation points, as well as to determine the initial number of observations. These points are generated based on the extrema of empirical orthogonal function (EOF) spatial modes derived from 11 hypothetical tsunami events in the region. In order to further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search (MADS) to remove redundant measurements from the points initially generated by the first stage. A combinatorial search by MADS improves the accuracy and reduces the number of observations simultaneously. The EOF analysis of the hypothetical tsunamis, using the first 2 leading modes with 4 extrema on each mode, results in 30 observation points spread along the trench. This is obtained after replacing some clustered points within a radius of 30 km with only one representative. Furthermore, the MADS optimization can improve the accuracy of the EOF-generated points by approximately 10-20% with fewer observations (23 points). Finally, we compare our result with the existing observation points (68 stations) in the region. The result shows that the optimized design, with a smaller number of observations, can produce better source characterizations, with approximately 20-60% improvement in accuracy for all 11 hypothetical cases. It should be noted, however, that our
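
    A minimal sketch of the two-stage idea described above follows: candidate points are taken from the extrema of EOF spatial modes, and redundant points are then pruned. The synthetic waveform matrix and the greedy backward elimination stand in for the 11 scenario waveforms and the mesh adaptive direct search (MADS) used in the study, so the code illustrates the workflow rather than reproducing it.

```python
import numpy as np

# Stage 1: extrema of EOF spatial modes give candidate observation points.
# Stage 2: redundant points are pruned (greedy elimination stands in for MADS).
# The synthetic waveform matrix is an illustrative placeholder.

rng = np.random.default_rng(1)
waveforms = rng.standard_normal((200, 11))   # rows: candidate sites, cols: scenarios

# Stage 1: EOF (via SVD) spatial modes; keep extrema of the 2 leading modes.
U, _, _ = np.linalg.svd(waveforms, full_matrices=False)
candidates = set()
for mode in U.T[:2]:
    candidates.update(np.argsort(mode)[-4:].tolist())   # 4 largest values
    candidates.update(np.argsort(mode)[:4].tolist())    # 4 smallest values

def reconstruction_error(points, n_modes=4):
    """Misfit when the full field is recovered from the chosen observation rows."""
    basis = U[:, :n_modes]
    amps, *_ = np.linalg.lstsq(basis[points], waveforms[points], rcond=None)
    return float(np.linalg.norm(basis @ amps - waveforms))

# Stage 2: greedily drop the point whose removal degrades the fit the least.
points = sorted(candidates)
while len(points) > 5:
    errors = [reconstruction_error([p for p in points if p != q]) for q in points]
    points.pop(int(np.argmin(errors)))

print(sorted(points), round(reconstruction_error(points), 3))
```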

  9. Determining Window Placement and Configuration for the Small Pressurized Rover (SPR)

    Science.gov (United States)

    Thompson, Shelby; Litaker, Harry; Howard, Robert

    2009-01-01

    This slide presentation reviews the process of the evaluation of window placement and configuration for the cockpit of the Lunar Electric Rover (LER). The purpose of the evaluation was to obtain human-in-the-loop data on window placement and configuration for the cockpit of the LER.

  10. Optimal base station placement for wireless sensor networks with successive interference cancellation.

    Science.gov (United States)

    Shi, Lei; Zhang, Jianjun; Shi, Yi; Ding, Xu; Wei, Zhenchun

    2015-01-14

    We consider the base station placement problem for wireless sensor networks with successive interference cancellation (SIC) to improve throughput. We build a mathematical model for SIC. Although this model cannot be solved directly, it enables us to identify a necessary condition for SIC on the distances from sensor nodes to the base station. Based on this relationship, we propose to divide the feasible region for the base station into small pieces and choose a point within each piece for base station placement. The point with the largest throughput is identified as the solution. The complexity of this algorithm is polynomial. Simulation results show that this algorithm can achieve about 25% improvement compared with the case in which the base station is placed at the center of the network coverage area when using SIC.
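
    The placement idea described above can be sketched as a simple search over one representative point per piece of the feasible region. In the hedged example below, the log-distance "throughput" surrogate and the grid parameters are placeholders; the paper's SIC model and necessary-condition test are not reproduced.

```python
import itertools
import math
import random

# Divide the feasible region into small pieces, evaluate a throughput
# surrogate at one point per piece, and keep the best point.  The
# log-distance "throughput" below is a placeholder, not the SIC model.

random.seed(7)
sensors = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]

def throughput(base, nodes):
    # Crude surrogate: closer nodes can sustain higher rates (log-distance law).
    return sum(math.log2(1.0 + 1e4 / (1.0 + math.dist(base, n) ** 2)) for n in nodes)

best_point, best_rate = None, -1.0
step = 10.0  # side length of each piece of the feasible region
for x, y in itertools.product([i * step + step / 2 for i in range(10)], repeat=2):
    rate = throughput((x, y), sensors)
    if rate > best_rate:
        best_point, best_rate = (x, y), rate

print(best_point, round(best_rate, 2))
```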

  11. An Analysis of the Optimal Placement of Beacon in Bluetooth-INS Indoor Localization

    OpenAIRE

    Zhao, Xinyu; Ruan, Ling; Zhang, Ling; Long, Yi; Cheng, Fei

    2018-01-01

    The placement of Bluetooth beacons has an immediate impact on the accuracy and stability of indoor positioning. Affected by shielding from buildings and people, Bluetooth shows uncertain spatial transmission characteristics. Therefore, the scientific deployment of the beacon nodes is closely related to the indoor space environment. In studies of positioning technology using Bluetooth, some scholars have discussed the deployment of Bluetooth beacons in different scenarios. In the principle of avoid...

  12. Optical network unit placement in Fiber-Wireless (FiWi) access network by Moth-Flame optimization algorithm

    Science.gov (United States)

    Singh, Puja; Prakash, Shashi

    2017-07-01

    Hybrid wireless-optical broadband access network (WOBAN) or Fiber-Wireless (FiWi) is the integration of a wireless access network and an optical network. This hybrid multi-domain network adopts the advantages of the wireless and optical domains and serves the demands of technology-savvy users. FiWi exhibits cost effectiveness, robustness, flexibility, high capacity, and reliability, and is self-organized. The Optical Network Unit (ONU) placement problem in FiWi contributes to simplifying the network design and enhances performance in terms of cost efficiency and increased throughput. Several individual-based algorithms, such as Simulated Annealing (SA) and Tabu Search, have been suggested for ONU placement, but these algorithms suffer from premature convergence (trapping in local optima). The present research work undertakes the deployment of FiWi and proposes a novel nature-inspired heuristic paradigm called the Moth-Flame Optimization (MFO) algorithm for the placement of multiple optical network units. MFO is a population-based algorithm, and population-based algorithms are better at avoiding local optima. The simulation results are compared with the existing Greedy and Simulated Annealing algorithms for optimizing the position of ONUs. To the best of our knowledge, the MFO algorithm has been used for the first time in this domain; moreover, it has been able to provide very promising and competitive results. The performance of the MFO algorithm has been analyzed by varying the 'b' parameter. The MFO algorithm converges faster than the existing Greedy and SA strategies and returns a lower value of the overall cost function. The results also exhibit the dependence of the objective function on the distribution of wireless users.

  13. Optimization of Lightweight Axles for an Innovative Carving Skateboard Based on Carbon Fiber Placement

    Directory of Open Access Journals (Sweden)

    Marc Fleischmann

    2018-02-01

    Full Text Available In 2003, the BMW Group developed a longboard called the "StreetCarver". The idea behind this product was to bring the perfect carving feeling of surf- and snowboarding to the streets by increasing the maneuverability of classical skateboard trucks. The outcome was a chassis based on complex kinematics. The negative side effect was the StreetCarver's exceptionally high weight of almost 8 kg. The main reason for this heaviness was the choice of traditional metallic engineering materials. In this research, modern fiber-reinforced composites were used to lower the chassis' mass by up to 50% to reach the weight of a common longboard. To accomplish that goal, carbon fibers were placed along pre-simulated load paths of the structural components in a so-called Tailored Fiber Placement process. This technology allows angle-independent single-roving placement and not only reduces weight but also helps to save valuable fiber material by avoiding cutting waste.

  14. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Directory of Open Access Journals (Sweden)

    Vasanthan Maruthapillai

    Full Text Available In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker and the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to an optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into the corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
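
    A minimal sketch of the feature-and-classifier pipeline summarized above follows: each marker-distance trace is reduced to its mean, variance, and root mean square, and a k-nearest-neighbour vote assigns the emotion. The synthetic marker tracks, the labels, and the choice of k are illustrative assumptions standing in for the webcam and optical-flow measurements.

```python
import numpy as np

# Reduce each marker-distance trace to mean, variance and RMS, then classify
# with a k-nearest-neighbour vote.  Synthetic tracks stand in for webcam data.

rng = np.random.default_rng(3)

def features(marker_distances):
    """marker_distances: (frames, 8) distances from 8 virtual markers to the face centre."""
    return np.concatenate([marker_distances.mean(0),
                           marker_distances.var(0),
                           np.sqrt((marker_distances ** 2).mean(0))])

def knn_predict(x, train_X, train_y, k=3):
    idx = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
    return np.bincount(train_y[idx]).argmax()

emotions = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]
train_X = np.vstack([features(rng.normal(loc=i, scale=0.5, size=(40, 8)))
                     for i in range(6) for _ in range(10)])
train_y = np.repeat(np.arange(6), 10)
sample = features(rng.normal(loc=2, scale=0.5, size=(40, 8)))
print(emotions[knn_predict(sample, train_X, train_y)])
```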

  15. Swarm intelligence algorithms for integrated optimization of piezoelectric actuator and sensor placement and feedback gains

    International Nuclear Information System (INIS)

    Dutta, Rajdeep; Ganguli, Ranjan; Mani, V

    2011-01-01

    Swarm intelligence algorithms are applied for the optimal control of flexible smart structures bonded with piezoelectric actuators and sensors. The optimal locations of the actuators/sensors and the feedback gain are obtained by maximizing the energy dissipated by the feedback control system. We provide a mathematical proof that this system is uncontrollable if the actuators and sensors are placed at the nodal points of the mode shapes. Finding the optimal locations of actuators/sensors and the feedback gain is a constrained non-linear optimization problem, which is converted to an unconstrained optimization problem by using penalty functions. Two swarm intelligence algorithms, namely the artificial bee colony (ABC) and glowworm swarm optimization (GSO) algorithms, are considered to obtain the optimal solution. In earlier published research, a cantilever beam with one and two collocated actuator(s)/sensor(s) was considered, and the numerical results were obtained using genetic algorithms and gradient-based optimization methods. We consider the same problem and present the results obtained using the swarm intelligence algorithms ABC and GSO. An extension of this cantilever beam problem with five collocated actuators/sensors is considered, and the numerical results obtained using the ABC and GSO algorithms are presented. The effect of increasing the number of design variables (locations of actuators and sensors, and gain) on the optimization process is investigated. It is shown that the ABC and GSO algorithms are robust and are good choices for the optimization of smart structures.
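
    The penalty-function step mentioned above can be sketched as follows: the constrained placement objective is wrapped so that constraint violations are subtracted from the dissipated-energy measure, giving an unconstrained function that any swarm algorithm can maximize. The energy surrogate, the nodal-point constraint, and the penalty weight in this sketch are illustrative assumptions.

```python
# Penalty-function conversion: a constrained placement objective becomes an
# unconstrained one that any swarm algorithm (ABC, GSO, PSO, ...) can maximize.
# The dissipated-energy surrogate, the "no actuator at a modal node" constraint
# and the penalty weight are illustrative assumptions.

NODE_POINTS = {0.25, 0.50, 0.75}   # normalized nodal points of the controlled modes

def dissipated_energy(locations, gain):
    # Placeholder surrogate: energy grows with gain, vanishes near nodal points.
    return gain * sum(min(abs(x - n) for n in NODE_POINTS) for x in locations)

def penalized_objective(locations, gain, weight=1e3):
    penalty = 0.0
    if not 0.0 < gain <= 10.0:                              # gain bound
        penalty += abs(gain - 5.0)
    for x in locations:
        if min(abs(x - n) for n in NODE_POINTS) < 0.02:     # too close to a node
            penalty += 1.0
    return dissipated_energy(locations, gain) - weight * penalty

print(penalized_objective([0.10, 0.60], gain=4.0))
print(penalized_objective([0.26, 0.60], gain=4.0))          # violates the node constraint
```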

  16. Optimal sensor placement for large structures using the nearest neighbour index and a hybrid swarm intelligence algorithm

    International Nuclear Information System (INIS)

    Lian, Jijian; He, Longjun; Ma, Bin; Peng, Wenxiang; Li, Huokun

    2013-01-01

    Research on optimal sensor placement (OSP) has become very important due to the need to obtain effective testing results with limited testing resources in health monitoring. In this study, a new methodology is proposed to select the best sensor locations for large structures. First, a novel fitness function derived from the nearest neighbour index is proposed to overcome the drawbacks of the effective independence method for OSP for large structures. This method maximizes the contribution of each sensor to modal observability and simultaneously avoids the redundancy of information between the selected degrees of freedom. A hybrid algorithm combining the improved discrete particle swarm optimization (DPSO) with the clonal selection algorithm is then implemented to optimize the proposed fitness function effectively. Finally, the proposed method is applied to an arch dam for performance verification. The results show that the proposed hybrid swarm intelligence algorithm outperforms a genetic algorithm with decimal two-dimension array encoding and DPSO in the capability of global optimization. The new fitness function is advantageous in terms of sensor distribution and ensuring a well-conditioned information matrix and orthogonality of modes, indicating that this method may be used to provide guidance for OSP in various large structures. (paper)

  17. A Cross-Entropy-Based Admission Control Optimization Approach for Heterogeneous Virtual Machine Placement in Public Clouds

    Directory of Open Access Journals (Sweden)

    Li Pan

    2016-03-01

    Full Text Available Virtualization technologies make it possible for cloud providers to consolidate multiple IaaS provisions into a single server in the form of virtual machines (VMs. Additionally, in order to fulfill the divergent service requirements from multiple users, a cloud provider needs to offer several types of VM instances, which are associated with varying configurations and performance, as well as different prices. In such a heterogeneous virtual machine placement process, one significant problem faced by a cloud provider is how to optimally accept and place multiple VM service requests into its cloud data centers to achieve revenue maximization. To address this issue, in this paper, we first formulate such a revenue maximization problem during VM admission control as a multiple-dimensional knapsack problem, which is known to be NP-hard to solve. Then, we propose to use a cross-entropy-based optimization approach to address this revenue maximization problem, by obtaining a near-optimal eligible set for the provider to accept into its data centers, from the waiting VM service requests in the system. Finally, through extensive experiments and measurements in a simulated environment with the settings of VM instance classes derived from real-world cloud systems, we show that our proposed cross-entropy-based admission control optimization algorithm is efficient and effective in maximizing cloud providers’ revenue in a public cloud computing environment.
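
    A hedged sketch of the cross-entropy loop for the multidimensional-knapsack formulation described above is shown below: candidate accept/reject vectors are sampled from Bernoulli probabilities, the elite samples are kept, and the probabilities are re-fitted. The revenues, resource demands, capacities, and sample sizes are made-up numbers, not the paper's experimental settings.

```python
import numpy as np

# Cross-entropy loop for a knapsack-style admission problem: sample
# accept/reject vectors, keep the elite, re-fit the sampling probabilities.
# Revenues, demands, capacities and sample sizes are made-up numbers.

rng = np.random.default_rng(5)
revenue = rng.uniform(1, 10, size=40)        # revenue of each waiting VM request
demand = rng.uniform(1, 4, size=(40, 2))     # (CPU, RAM) demand of each request
capacity = np.array([45.0, 50.0])            # remaining data-center capacity

def value(x):
    """Total revenue of an accept-vector x, or -1 if it exceeds capacity."""
    if np.any(demand.T @ x > capacity):
        return -1.0
    return float(revenue @ x)

p = np.full(40, 0.5)                         # Bernoulli acceptance probabilities
best, best_val = None, -1.0
for _ in range(60):
    samples = (rng.random((200, 40)) < p).astype(float)
    scores = np.array([value(s) for s in samples])
    top = int(np.argmax(scores))
    if scores[top] > best_val:
        best, best_val = samples[top], scores[top]
    elite = samples[np.argsort(scores)[-20:]]     # top 10% as the elite set
    p = 0.7 * elite.mean(axis=0) + 0.3 * p        # smoothed probability update

print(round(best_val, 2), int(best.sum()), "requests accepted")
```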

  18. Mechanical Elongation of the Small Intestine: Evaluation of Techniques for Optimal Screw Placement in a Rodent Model

    Directory of Open Access Journals (Sweden)

    P. A. Hausbrandt

    2013-01-01

    Full Text Available Introduction. The aim of this study was to evaluate techniques and establish an optimal method for mechanical elongation of the small intestine (MESI) using screws in a rodent model, in order to develop a potential therapy for short bowel syndrome (SBS). Material and Methods. Adult female Sprague Dawley rats (n=24) with body weights from 250 to 300 g (Σ=283) were evaluated in 5 different groups, in which the common denominator of the technique was the fixation of a blind loop of the intestine on the abdominal wall with the placement of a screw in the lumen secured to the abdominal wall. Results. In all groups with accessible screws, the rodents removed the implants despite the use of washers or suits to prevent removal. Subcutaneous placement of the screw combined with antibiotic treatment and dietary modifications was finally successful. In two animals, autologous transplantation of the lengthened intestinal segment was successful. Discussion. While the rodent model may provide useful basic information on mechanical intestinal lengthening, further investigations should be performed in larger animals to make use of the translational nature of MESI in human SBS treatment.

  19. Optimized Placement of Wind Turbines in Large-Scale Offshore Wind Farm using Particle Swarm Optimization Algorithm

    DEFF Research Database (Denmark)

    Hou, Peng; Hu, Weihao; Soltani, Mohsen

    2015-01-01

    Levelized Production Cost (LPC) as the objective function. The optimization procedure is performed by Particle Swarm Optimization (PSO) algorithm with the purpose of maximizing the energy yields while minimizing the total investment. The simulation results indicate that the proposed method is effective...

  20. Voltage stability index based optimal placement of static VAR compensator and sizing using Cuckoo search algorithm

    Science.gov (United States)

    Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee

    2017-07-01

    This paper furnishes a new metaheuristic algorithm, called the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be the most efficient algorithm for solving single-objective optimal power flow problems. The CSA's performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family. It is capable of controlling the voltage magnitudes of buses by injecting reactive power into the system. In this paper, the SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost, and it is used to improve the voltage profile of the system. CSA gives better results than the genetic algorithm (GA) both without and with the SVC.
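
    For readers unfamiliar with the metaheuristic itself, the sketch below shows a minimal cuckoo search loop with Lévy-flight steps and abandonment of the worst nests on a toy objective. The sphere function stands in for the generation-cost objective; the paper's power-flow equations and SVC model are not reproduced.

```python
import numpy as np

# Minimal cuckoo-search sketch: Lévy-flight steps around good nests plus
# random abandonment of a fraction of the worst nests.  The sphere function
# is a toy stand-in for the generation-cost objective.

rng = np.random.default_rng(11)

def levy(size, beta=1.5):
    # Mantegna's algorithm for Lévy-distributed step lengths.
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cost(x):                      # toy objective to minimize
    return float(np.sum(x ** 2))

nests = rng.uniform(-5, 5, size=(15, 4))
for _ in range(300):
    best = nests[np.argmin([cost(n) for n in nests])].copy()
    for i in range(len(nests)):
        trial = nests[i] + 0.01 * levy(4) * (nests[i] - best)
        if cost(trial) < cost(nests[i]):
            nests[i] = trial
    # Abandon a fraction of the worst nests and rebuild them at random.
    worst = np.argsort([cost(n) for n in nests])[-4:]
    nests[worst] = rng.uniform(-5, 5, size=(4, 4))

print(round(min(cost(n) for n in nests), 6))
```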

  1. Genetic Algorithm Approaches for Actuator Placement

    Science.gov (United States)

    Crossley, William A.

    2000-01-01

    This research investigated genetic algorithm approaches for smart actuator placement to provide aircraft maneuverability without requiring hinged flaps or other control surfaces. The effort supported goals of the Multidisciplinary Design Optimization focus efforts in NASA's Aircraft au program. This work helped to properly identify various aspects of the genetic algorithm operators and parameters that allow for placement of discrete control actuators/effectors. An improved problem definition, including better definition of the objective function and constraints, resulted from this research effort. The work conducted for this research used a geometrically simple wing model; however, an increasing number of potential actuator placement locations were incorporated to illustrate the ability of the GA to determine promising actuator placement arrangements. This effort's major result is a useful genetic algorithm-based approach to assist in the discrete actuator/effector placement problem.

  2. Determining the Most Appropriate Physical Education Placement for Students with Disabilities

    Science.gov (United States)

    Columna, Luis; Davis, Timothy; Lieberman, Lauren; Lytle, Rebecca

    2010-01-01

    Adapted physical education (APE) is designed to meet the unique needs of children with disabilities within the least restrictive environment. Placement in the right environment can help the child succeed, but the wrong environment can create a very negative experience. This article presents a systematic approach to making decisions when…

  3. Accounting for connectivity and spatial correlation in the optimal placement of wildlife habitat

    Science.gov (United States)

    John Hof; Curtis H. Flather

    1996-01-01

    This paper investigates optimization approaches to simultaneously modelling habitat fragmentation and spatial correlation between patch populations. The problem is formulated with habitat connectivity affecting population means and variances, with spatial correlations accounted for in covariance calculations. Population with a pre-specified confidence level is then...

  4. A new fuzzy framework for the optimal placement of phasor measurement units under normal and abnormal conditions

    Directory of Open Access Journals (Sweden)

    Ragab A. El-Sehiemy

    2017-12-01

    Full Text Available This paper presents a new procedure for finding the optimal placement of phasor measurement units (PMUs) in modern power grids to achieve full network observability under normal operating conditions, as well as under abnormal operating conditions such as a single line or PMU outage, while considering the availability of PMU measuring channels. The proposed modeling framework is implemented using the fuzzy binary linear programming (FBLP) technique. Linear fuzzy models are proposed for the objective function and constraints alike. The proposed procedure is applied to five benchmark systems: the IEEE 14-bus, 30-bus, 39-bus, 57-bus, and 118-bus systems. The numerical results demonstrate that the proposed framework is capable of finding a fine-tuned optimal solution with a simple model and acceptable solution characteristics compared with earlier works in the literature. In addition, four evaluation indices are introduced to assess the various criteria under study, such as observability depth, measurement redundancy, and robustness of the method under contingencies. The results show that full network observability can be met under normal conditions using 20% PMU penetration; however, under contingencies, approximately 50% PMU penetration is required. The proposed linear fuzzy models are shown to find a better optimal number of PMUs with a lower number of channels compared to other algorithms under various operating conditions. Hence, the proposed work represents a potential tool for monitoring power systems and will help operators in a smart grid environment. Keywords: Binary linear programming, Fuzzy models, Observability, Optimization, Phasor measurement unit, Smart grids
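
    The crisp core of the PMU placement model described above is the requirement that every bus be observed by a PMU at itself or at an adjacent bus while the number of PMUs is minimized. The sketch below solves that condition by brute force on a small hypothetical 7-bus network; the fuzzy objective, channel limits, and contingency constraints of the paper are not modeled.

```python
from itertools import combinations

# Minimize the number of PMUs so that every bus is observed by a PMU on
# itself or on an adjacent bus.  A tiny hypothetical 7-bus network is solved
# by brute force; the fuzzy model and contingencies are not reproduced here.

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6)]
n_bus = 7
neighbours = {b: {b} for b in range(n_bus)}
for i, j in edges:
    neighbours[i].add(j)
    neighbours[j].add(i)

def observable(pmu_buses):
    covered = set()
    for b in pmu_buses:
        covered |= neighbours[b]
    return len(covered) == n_bus

for k in range(1, n_bus + 1):
    solutions = [c for c in combinations(range(n_bus), k) if observable(c)]
    if solutions:
        print(f"minimum PMUs: {k}, e.g. at buses {solutions[0]}")
        break
```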

  5. Determination of the Prosumer's Optimal Bids

    Science.gov (United States)

    Ferruzzi, Gabriella; Rossi, Federico; Russo, Angela

    2015-12-01

    This paper considers a microgrid connected to a medium-voltage (MV) distribution network. It is assumed that the microgrid, which is managed by a prosumer, operates in a competitive environment and participates in the day-ahead market. Then, as the first step of the short-term management problem, the prosumer must determine the bids to be submitted to the market. The offer strategy is based on the application of an optimization model, which is solved for different hourly price profiles of the energy exchanged with the main grid. The proposed procedure is applied to a microgrid, and four of its configurations were analyzed. The configurations consider the presence of thermoelectric units that only produce electricity, a boiler and/or cogeneration power plants for the thermal loads, and an electric storage system. The numerical results confirmed the numerous theoretical considerations that have been made.

  6. Optimal Placement of Piezoelectric Macro Fiber Composite Patches on Composite Plates for Vibration Suppression

    OpenAIRE

    Padoin, Eduardo; Fonseca, Jun Sergio Ono; Perondi, Eduardo André; Menuzzi, Odair

    2015-01-01

    This work presents a new methodology for the parametric optimization of piezoelectric actuators installed in laminated composite structures, with the objective of controlling structural vibrations. The problem formulation is the optimum location of a Macro Fiber Composite (MFC) actuator patch by means of the maximization of a controllability index. The control strategy is based on a Linear Quadratic Regulator (LQR) approach. For the structural analysis, the modeling of the interaction betw...

  7. Optimal Sensor placement for acoustic range-based underwater robotic positioning

    Digital Repository Service at National Institute of Oceanography (India)

    Glotzbach, T.; Moreno-Salinas, D.; Aranda, J.; Pascoal, A.M.

    of transponders. In what follows, we give a very brief overview of range-based positioning systems. To estimate the position of an underwater agent by means of acoustic range measurements, one needs several objects (reference objects or ROs henceforward... between target position and optimal acoustic sensor positions. For real sea operations, where the accuracy of range measuring devices is plagued by intermittent failures, outliers, and multipath propagation effects, it is important to have...

  8. Web thickness determines the therapeutic effect of endoscopic keel placement on anterior glottic web.

    Science.gov (United States)

    Chen, Jian; Shi, Fang; Chen, Min; Yang, Yue; Cheng, Lei; Wu, Haitao

    2017-10-01

    This work is a retrospective analysis to investigate the critical risk factor for the therapeutic effect of endoscopic keel placement on anterior glottic web. Altogether, 36 patients with anterior glottic web undergoing endoscopic lysis and silicone keel placement were enrolled. Their voice qualities were evaluated using the voice handicap index-10 (VHI-10) questionnaire and improved significantly 3 months after surgery (21.53 ± 3.89 vs 9.81 ± 6.68). Some patients suffered web recurrence during the at least 1-year follow-up. Therefore, patients were classified according to the Cohen classification or web thickness, and the recurrence rates were compared. The distribution of recurrence rates for Cohen types 1-4 was 28.6%, 16.7%, 33.3%, and 40%, respectively; the difference was not statistically significant (P = 0.461). When classified by web thickness, only 2 of 27 (7.41%) thin-type cases relapsed, whereas 8 of 9 (88.9%) cases in the thick group re-formed webs, indicating that recurrence depended on web thickness rather than the Cohen grades. Endoscopic lysis and keel placement is only effective for cases with thin glottic webs. Patients with thick webs should be treated by other means.

  9. Determining optimal population monitoring for rare butterflies.

    Science.gov (United States)

    Haddad, Nick M; Hudgens, Brian; Damiani, Chris; Gross, Kevin; Kuefler, Daniel; Pollock, Ken

    2008-08-01

    Determining population viability of rare insects depends on precise, unbiased estimates of population size and other demographic parameters. We used data on the endangered St. Francis' satyr butterfly (Neonympha mitchellii francisci) to evaluate 2 approaches (mark-recapture and transect counts) for population analysis of rare butterflies. Mark-recapture analysis provided by far the greatest amount of demographic information, including estimates (and standard errors) of population size, detection, survival, and recruitment probabilities. Mark-recapture analysis can also be used to estimate dispersal and temporal variation in rates, although we did not do this here. Models of seasonal flight phenologies derived from transect counts (Insect Count Analyzer) provided an index of population size and estimates of survival and statistical uncertainty. Pollard-Yates population indices derived from transect counts did not provide estimates of demographic parameters. This index may be highly biased if detection and survival probabilities vary spatially and temporally. In terms of statistical performance, mark-recapture and Pollard-Yates indices were least variable. Mark-recapture estimates were less likely to fail than Insect Count Analyzer, but mark-recapture estimates became less precise as sampling intensity decreased. In general, count-based approaches are less costly and less likely to cause harm to rare insects than mark-recapture. The optimal monitoring approach must reconcile these trade-offs. Thus, mark-recapture should be favored when demographic estimates are needed, when financial resources enable frequent sampling, and when marking does not harm the insect populations. The optimal sampling strategy may use 2 sampling methods together in 1 overall sampling plan: limited mark-recapture sampling to estimate survival and detection probabilities and frequent but less expensive transect counts.

  10. Optimal Placement of Piezoelectric Macro Fiber Composite Patches on Composite Plates for Vibration Suppression

    Directory of Open Access Journals (Sweden)

    Eduardo Padoin

    Full Text Available This work presents a new methodology for the parametric optimization of piezoelectric actuators installed in laminated composite structures, with the objective of controlling structural vibrations. The problem formulation is the optimum location of a Macro Fiber Composite (MFC) actuator patch by means of the maximization of a controllability index. The control strategy is based on a Linear Quadratic Regulator (LQR) approach. For the structural analysis, the interaction between the MFC and the structure is modeled by taking the active material into account as one of the orthotropic laminate shell layers. The actuation itself is modeled as an initial strain arising from the application of an electric potential, which deforms the rest of the structure. Thereby, modeling the electric field and the electromechanical coupling within the actuator is avoided because these effects are considered analytically. Numerical simulations show that the structural model presents good agreement with numerical and experimental results. Furthermore, the results show that optimizing the location of the actuator in the structure helps the control algorithm to reduce the induced structural vibration.

  11. Design of a correlated validated CFD and genetic algorithm model for optimized sensors placement for indoor air quality monitoring

    Science.gov (United States)

    Mousavi, Monireh Sadat; Ashrafi, Khosro; Motlagh, Majid Shafie Pour; Niksokhan, Mohhamad Hosein; Vosoughifar, HamidReza

    2018-02-01

    In this study, a coupled method for simulating flow patterns, based on computational fluid dynamics (CFD) combined with an optimization technique using genetic algorithms, is presented to determine the optimal location and number of sensors in an enclosed residential complex parking garage in Tehran. The main objective of this research is cost reduction and maximum coverage with regard to the distribution of existing concentrations in different scenarios. In this study, considering all the different scenarios for simulating the pollution distribution using CFD has been challenging due to the extent of the parking garage and the number of cars present. To solve this problem, some scenarios were selected at random, and the maximum concentrations of these scenarios were chosen for the optimization. The CFD simulation outputs are inserted as input into the optimization model using a genetic algorithm. The obtained results give the optimal number and locations of the sensors.

  12. Optimal Placement Method of RFID Readers in Industrial Rail Transport for Uneven Rail Traffic Volume Management

    Science.gov (United States)

    Rakhmangulov, Aleksandr; Muravev, Dmitri; Mishkurov, Pavel

    2016-11-01

    The issue of operative data reception on the location and movement of railcars is significant given the constantly growing requirements for timely and safe transportation. A technical solution for improving the efficiency of data collection on rail rolling stock is the implementation of an identification system. Nowadays, there are several such systems, distinguished by working principle. In the authors' opinion, the most promising for rail transportation is RFID technology, which proposes equipping the railway tracks with stationary data-reading points (RFID readers) that read the onboard sensors on the railcars. However, regardless of the specific type and manufacturer of these systems, their implementation involves significant financing costs for large industrial rail transport systems that own extensive networks of special railway tracks with many stations and loading areas. To reduce the investment costs of creating an identification system for rolling stock on the special railway tracks of industrial enterprises, a method has been developed based on the idea of priority installation of RFID readers on railway hauls where rail traffic volumes are uneven in structure and power and whose parameters are difficult or impossible to predict on the basis of existing data in an information system. To select the optimal locations of RFID readers, a mathematical model of the staged installation of such readers has been developed, depending on the non-uniformity of the rail traffic volumes passing through specific railway hauls. As a result of this approach, installation of numerous RFID readers at all station tracks and loading areas of industrial railway stations might not be necessary, which reduces the total cost of rolling stock identification and implements the method for optimal management of the transportation process.
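
    A minimal sketch of the staged-installation idea described above: each railway haul is scored by how uneven its traffic volume is, and readers are installed on the most uneven hauls first, until the budget for the stage is spent. The haul names, hourly volumes, coefficient-of-variation score, and budget are illustrative assumptions, not the paper's mathematical model.

```python
from statistics import mean, pstdev

# Score each haul by the non-uniformity of its traffic volume and equip RFID
# readers on the most uneven hauls first, within a per-stage budget.
# Haul names, volumes, score and budget are illustrative assumptions.

hourly_volumes = {
    "haul A": [12, 30, 5, 44, 2, 28],
    "haul B": [20, 21, 19, 22, 20, 21],
    "haul C": [3, 40, 1, 35, 6, 50],
    "haul D": [15, 14, 16, 15, 13, 16],
}

def unevenness(volumes):
    # Coefficient of variation as a simple non-uniformity measure.
    return pstdev(volumes) / mean(volumes)

budget = 2  # number of readers that can be installed in this stage
ranked = sorted(hourly_volumes, key=lambda h: unevenness(hourly_volumes[h]), reverse=True)
print("equip readers on:", ranked[:budget])
```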

  13. Improvement of in-hospital telemetry monitoring in coronary care units: an intervention study for achieving optimal electrode placement and attachment, hygiene and delivery of critical information to patients.

    Science.gov (United States)

    Pettersen, Trond R; Fålun, Nina; Norekvål, Tone M

    2014-12-01

    In-hospital telemetry monitoring is important for the diagnosis and treatment of patients at risk of developing life-threatening arrhythmias. It is widely used in critical and non-critical care wards. Nurses are responsible for correct electrode placement, thus ensuring optimal quality of the monitoring. The aims of this study were to determine whether a complex educational intervention improves (a) optimal electrode placement, (b) hygiene, and (c) delivery of critical information to patients (reason for monitoring, limitations in cellular phone use, and not to leave the ward without informing a member of staff). A prospective interventional study design was used, with data collection occurring over two six-week periods: before implementation of the intervention (n=201) and after the intervention (n=165). Standard abstraction forms were used to obtain data on patients' clinical characteristics and 10 variables related to electrode placement and attachment, hygiene, and delivery of critical information. At pre-intervention registration, 26% of the electrodes were misplaced. Twelve per cent of the patients received information about limiting their cellular phone use while monitored, 70% were informed of the purpose of monitoring, and 71% used a protective cover for their unit. Post-intervention, outcome measures for the three variables improved significantly, including the use of a protective cover. The findings support improved telemetry monitoring in coronary care units and other units that monitor patients with telemetry. © The European Society of Cardiology 2013.

  14. Techniques and Results for Determining Window Placement and Configuration for the Small Pressurized Rover (SPR)

    Science.gov (United States)

    Thompson, Shelby; Litaker, Harry; Howard, Robert

    2009-01-01

    A natural component of driving any type of vehicle, be it Earth-based or space-based, is visibility. In its simplest form, visibility is a measure of the distance at which an object can be seen. For the National Aeronautics and Space Administration's (NASA) Space Shuttle and the International Space Station (ISS), there are human factors design guidelines for windows. However, for planetary exploration vehicles, especially land-based vehicles, relatively little has been written on the importance of windows. The goal of the current study was to devise a proper methodology and to obtain preliminary human-in-the-loop data on window placement and location for the small pressurized rover (SPR). Nine participants evaluated multiple areas along the vehicle's front "nose" while actively maneuvering through several lunar driving simulations. Subjective data were collected on seven different aspects measuring areas of necessity, frequency of views, and placement/configuration of windows, using questionnaires and composite drawings. Results indicated a desire for a large horizontal field-of-view window spanning the front of the vehicle for most driving situations, with slightly reduced window areas for the lower front, lower corner, and side views.

  15. The social nestwork: tree structure determines nest placement in Kenyan weaverbird colonies.

    Directory of Open Access Journals (Sweden)

    Maria Angela Echeverry-Galvis

    Full Text Available Group living is a life history strategy employed by many organisms. This strategy is often difficult to study because the exact boundaries of a group can be unclear. Weaverbirds present an ideal model for the study of group living, because their colonies occupy a space with discrete boundaries: a single tree. We examined one aspect of group living, nest placement, in three Kenyan weaverbird species: the Black-capped Weaver (Pseudonigrita cabanisi), Grey-capped Weaver (P. arnaudi) and White-browed Sparrow Weaver (Ploceropasser mahali). We asked which environmental, biological, and/or abiotic factors influenced their nest arrangement and location in a given tree. We used machine learning to analyze measurements taken from 16 trees and 516 nests outside the breeding season at the Mpala Research Station in Laikipia, Kenya, along with climate data for the area. We found that tree architecture, the number of nests per tree, and nest-specific characteristics were the main variables driving nest placement. Our results suggest that different Kenyan weaverbird species have similar priorities driving the selection of where a nest is placed within a given tree. Our work illustrates the advantage of using machine learning techniques to investigate biological questions.

  16. Detector placement optimization for cargo containers using deterministic adjoint transport examination for SNM detection

    International Nuclear Information System (INIS)

    McLaughlin, Trevor D.; Sjoden, Glenn E.; Manalo, Kevin L.

    2011-01-01

    With growing concerns over port security and the potential for illicit trafficking of SNM through portable cargo shipping containers, efforts are ongoing to reduce the threat via container monitoring. This paper focuses on answering the important question of how many detectors are necessary for adequate coverage of a cargo container, considering the detection of neutrons and gamma rays. Deterministic adjoint transport calculations are performed with compressed helium-3, polyethylene-moderated neutron detectors and sodium-activated cesium-iodide gamma-ray scintillation detectors on partial and full container models. Results indicate that the detector capability depends on source strength and potential shielding. Using a surrogate weapons-grade plutonium leakage source, it was determined that for a 20-foot ISO container, five neutron detectors and three gamma detectors are necessary for adequate coverage. While a large CsI(Na) gamma detector has the potential to monitor the entire height of the container for SNM, the He-3 neutron detector is limited to roughly 1.25 m in depth. Detector blind spots are unavoidable inside the container volume unless additional measures are taken for adequate coverage. (author)

  17. Optimal Path Determination for Flying Vehicle to Search an Object

    Science.gov (United States)

    Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.

    2018-01-01

    In this paper, a method to determine the optimal path for a flying vehicle to search for an object is proposed. The background of the paper is the control of an air vehicle searching for an object. Optimal path determination is one of the most popular problems in optimization. The paper describes a control design model for a flying vehicle searching for an object and focuses on the optimal path used in the search. An optimal control model is used to make the vehicle move along an optimal path; if the vehicle moves along the optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design; here the cost functional makes the air vehicle reach the object as quickly as possible. The reference frame of the flying vehicle uses the N-E-D (North-East-Down) coordinate system. The results of this paper are theorems, proved analytically, stating that the cost functional makes the control optimal and makes the vehicle move along an optimal path. The paper also shows that the cost functional used is convex; this convexity guarantees the existence of an optimal control. Several simulations illustrate an optimal path for a flying vehicle searching for an object. The optimization method used to find the optimal control and the optimal vehicle path is the Pontryagin Minimum Principle.

  18. Optimization of bioenergy crop selection and placement based on a stream health indicator using an evolutionary algorithm.

    Science.gov (United States)

    Herman, Matthew R; Nejadhashemi, A Pouyan; Daneshvar, Fariborz; Abouali, Mohammad; Ross, Dennis M; Woznicki, Sean A; Zhang, Zhen

    2016-10-01

    The emission of greenhouse gases continues to amplify the impacts of global climate change. This has led to an increased focus on using renewable energy sources, such as biofuels, due to their lower impact on the environment. However, the production of biofuels can still have negative impacts on water resources. This study introduces a new strategy to optimize bioenergy landscapes while improving stream health for the region. To accomplish this, several hydrological models including the Soil and Water Assessment Tool, Hydrologic Integrity Tool, and Adaptive Neuro-Fuzzy Inference System were linked to develop stream health predictor models. These models are capable of estimating stream health scores based on the Index of Biological Integrity. The coupling of the aforementioned models was used to guide a genetic algorithm to design watershed-scale bioenergy landscapes. Thirteen bioenergy managements were considered based on the high probability of adaptation by farmers in the study area. Results from two thousand runs identified an optimum bioenergy crop placement that maximized the stream health for the Flint River Watershed in Michigan. The final overall stream health score was 50.93, which was improved from the current stream health score of 48.19. This was shown to be a significant improvement at the 1% significance level. For this final bioenergy landscape, the most frequently used management was miscanthus (27.07%), followed by corn-soybean-rye (19.00%), corn stover-soybean (18.09%), and corn-soybean (16.43%). The technique introduced in this study can be successfully modified for use in different regions and can be used by stakeholders and decision makers to develop bioenergy landscapes that maximize stream health in the area of interest. Copyright © 2016 Elsevier Ltd. All rights reserved.
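
    At its core, the stream-health-guided search described above is a genetic algorithm over a discrete crop-to-field assignment. The sketch below illustrates that loop only; the stream_health scoring function, the per-management weights, and all sizes are placeholder assumptions standing in for the paper's SWAT/ANFIS predictor chain.

        import random

        # Hypothetical stand-in for the SWAT/ANFIS stream-health predictor:
        # it scores a landscape (one management index per field) on a 0-100 scale.
        def stream_health(landscape, weights):
            return sum(weights[m] for m in landscape) / len(landscape)

        def evolve(n_fields=50, n_managements=13, pop_size=60, generations=200,
                   cx_prob=0.8, mut_prob=0.02, seed=1):
            random.seed(seed)
            # Random per-management "benefit" weights stand in for the real predictor.
            weights = [random.uniform(40, 60) for _ in range(n_managements)]
            pop = [[random.randrange(n_managements) for _ in range(n_fields)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                scored = sorted(pop, key=lambda ind: stream_health(ind, weights),
                                reverse=True)
                parents = scored[:pop_size // 2]              # truncation selection
                children = []
                while len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    if random.random() < cx_prob:              # one-point crossover
                        cut = random.randrange(1, n_fields)
                        child = a[:cut] + b[cut:]
                    else:
                        child = a[:]
                    child = [random.randrange(n_managements)   # per-gene mutation
                             if random.random() < mut_prob else g for g in child]
                    children.append(child)
                pop = children
            best = max(pop, key=lambda ind: stream_health(ind, weights))
            return best, stream_health(best, weights)

        if __name__ == "__main__":
            best, score = evolve()
            print("best stream-health score:", round(score, 2))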

  19. Optimal damper placement research

    Science.gov (United States)

    Smirnov, Vladimir; Kuzhin, Bulat

    2017-10-01

    Noise and vibration pollution has increased on the territories of technoparks and research laboratories, negatively affecting the production of high-precision measuring instruments. The problem is also relevant for transport hubs, which experience the influence of machines, vehicles, trains and planes. Energy efficiency is one of the major functions in modern road transport development. The problems of environmental pollution, lack of energy resources and energy efficiency require the research, production and implementation of energy-efficient materials that would form the foundation of an environmentally sustainable transport infrastructure. Improving the efficiency of energy use is a leading option to gain better energy security, improve industry profitability and competitiveness, and reduce the overall energy-sector impacts on climate change. This paper has the following indirect goals: to research the impact of vibration on structures such as bus and train stations and terminals, which are most exposed to oscillation; to extend building operation by decreasing this negative influence; and to reduce expenses on maintenance and repair works. Seismic protection is also important today, when safety comes first; analysis of devastating earthquakes over the last few years proves the reasonableness of applying such systems. The article is dedicated to studying the dependence of natural frequency on damper location. A concrete structure with a variable profile was simulated as the model and analyzed using the Patran program package.

  20. A Method for Determining Optimal Residential Energy Efficiency Packages

    Energy Technology Data Exchange (ETDEWEB)

    Polly, B. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gestwick, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bianchi, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Anderson, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Horowitz, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Judkoff, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2011-04-01

    This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location.

  1. Optimal Breeding Time Determination in Bitch Using Vaginal Cytology

    African Journals Online (AJOL)

    Optimal Breeding Time Determination in Bitch Using Vaginal Cytology: Case Report. ... This result once again emphasized the accuracy of vaginal cytology as a useful tool to determine the optimal breeding time in the bitch. Hence, vaginal cytology, though it cannot detect the ovulation day, will continue to be patronized by small ...

  2. Modeling of delamination in carbon/epoxy composite laminates under four point bending for damage detection and sensor placement optimization

    Science.gov (United States)

    Adu, Stephen Aboagye

    composite coupon under simply supported boundary conditions. Theoretically calculated bending stiffnesses and maximum deflections were compared with those of the experimental case and the numerical model. After the FEA model was properly benchmarked against test data and the exact solution, data obtained from the FEM model were used for sensor placement optimization.

  3. Non-Invasive Fetal Monitoring: A Maternal Surface ECG Electrode Placement-Based Novel Approach for Optimization of Adaptive Filter Control Parameters Using the LMS and RLS Algorithms.

    Science.gov (United States)

    Martinek, Radek; Kahankova, Radana; Nazeran, Homer; Konecny, Jaromir; Jezewski, Janusz; Janku, Petr; Bilik, Petr; Zidek, Jan; Nedoma, Jan; Fajkus, Marcel

    2017-05-19

    This paper is focused on the design, implementation and verification of a novel method for the optimization of the control parameters (such as step size μ and filter order N ) of LMS and RLS adaptive filters used for noninvasive fetal monitoring. The optimization algorithm is driven by considering the ECG electrode positions on the maternal body surface in improving the performance of these adaptive filters. The main criterion for optimal parameter selection was the Signal-to-Noise Ratio (SNR). We conducted experiments using signals supplied by the latest version of our LabVIEW-Based Multi-Channel Non-Invasive Abdominal Maternal-Fetal Electrocardiogram Signal Generator, which provides the flexibility and capability of modeling the principal distribution of maternal/fetal ECGs in the human body. Our novel algorithm enabled us to find the optimal settings of the adaptive filters based on maternal surface ECG electrode placements. The experimental results further confirmed the theoretical assumption that the optimal settings of these adaptive filters are dependent on the ECG electrode positions on the maternal body, and therefore, we were able to achieve far better results than without the use of optimization. These improvements in turn could lead to a more accurate detection of fetal hypoxia. Consequently, our approach could offer the potential to be used in clinical practice to establish recommendations for standard electrode placement and find the optimal adaptive filter settings for extracting high quality fetal ECG signals for further processing. Ultimately, diagnostic-grade fetal ECG signals would ensure the reliable detection of fetal hypoxia.
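
    As a rough illustration of the adaptive-filter side of this approach, the sketch below runs an LMS filter with a thoracic (maternal-only) reference against a synthetic abdominal mixture and sweeps the step size mu and filter order N, keeping the pair with the best SNR. The synthetic signals, the mu/N grid, and the SNR definition are illustrative assumptions, not the paper's signal generator or optimization procedure.

        import numpy as np

        def lms(reference, primary, n_taps, mu):
            """Adapt an FIR filter so the reference tracks the maternal component of
            the primary channel; the residual error approximates the fetal signal."""
            w = np.zeros(n_taps)
            out = np.zeros_like(primary)
            for k in range(n_taps, len(primary)):
                x = reference[k - n_taps:k][::-1]
                y = w @ x
                e = primary[k] - y          # residual = estimated fetal contribution
                w += 2 * mu * e * x         # LMS weight update
                out[k] = e
            return out

        # Synthetic stand-in signals (the paper uses a multichannel ECG generator).
        t = np.arange(0, 10, 1 / 500)
        maternal = np.sin(2 * np.pi * 1.2 * t)                 # ~72 bpm
        fetal = 0.2 * np.sin(2 * np.pi * 2.3 * t)              # ~138 bpm
        abdominal = 0.8 * maternal + fetal + 0.01 * np.random.randn(t.size)
        thoracic = maternal + 0.01 * np.random.randn(t.size)

        best = None
        for n_taps in (8, 16, 32):                              # filter order N
            for mu in (1e-3, 5e-3, 1e-2):                       # step size mu
                est = lms(thoracic, abdominal, n_taps, mu)
                snr = 10 * np.log10(np.sum(fetal**2) / np.sum((est - fetal)**2))
                if best is None or snr > best[0]:
                    best = (snr, n_taps, mu)
        print("best SNR %.1f dB with N=%d, mu=%g" % best)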

  4. Delayed ischemic cecal perforation despite optimal decompression after placement of a self-expanding metal stent: report of a case

    DEFF Research Database (Denmark)

    Knop, Filip Krag; Pilsgaard, Bo; Meisner, Søren

    2004-01-01

    Endoscopic deployment of self-expanding metal stents offers an alternative to surgical intervention in rectocolonic obstructions. Reported clinical failures in the literature are all related to the site of stent placement. We report a case of serious intra-abdominal disease after technically and ...

  5. Use of sulfur hexafluoride airflow studies to determine the appropriate number and placement of air monitors in an alpha inhalation exposure laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Newton, G.J.; Hoover, M.D.

    1995-12-01

    Determination of the appropriate number and placement of air monitors in the workplace is quite subjective and is generally one of the more difficult tasks in radiation protection. General guidance for determining the number and placement of air sampling and monitoring instruments has been provided by technical reports such as Mishima, J. These and other published guidelines suggest that some insight into sampler placement can be obtained by conducting airflow studies involving the dilution and clearance of the relatively inert tracer gas sulfur hexafluoride (SF6). This paper describes the use of SF6 in sampler placement studies and presents the results of a study done within the ITRI alpha inhalation exposure laboratories. The objectives of the study were to document an appropriate method for conducting SF6 dispersion studies, and to confirm the appropriate number and placement of air monitors and air samplers within a typical ITRI inhalation exposure laboratory. The results of this study have become part of the technical bases for air sampling and monitoring in the test room.

  6. Optimizing UV Index determination from broadband irradiances

    Science.gov (United States)

    Tereszchuk, Keith A.; Rochon, Yves J.; McLinden, Chris A.; Vaillancourt, Paul A.

    2018-03-01

    A study was undertaken to improve upon the prognosticative capability of Environment and Climate Change Canada's (ECCC) UV Index forecast model. An aspect of that work, and the topic of this communication, was to investigate the use of the four UV broadband surface irradiance fields generated by ECCC's Global Environmental Multiscale (GEM) numerical prediction model to determine the UV Index. The basis of the investigation involves the creation of a suite of routines which employ high-spectral-resolution radiative transfer code developed to calculate UV Index fields from GEM forecasts. These routines employ a modified version of the Cloud-J v7.4 radiative transfer model, which integrates GEM output to produce high-spectral-resolution surface irradiance fields. The output generated using the high-resolution radiative transfer code served to verify and calibrate GEM broadband surface irradiances under clear-sky conditions and their use in providing the UV Index. A subsequent comparison of irradiances and UV Index under cloudy conditions was also performed. Linear correlation agreement of surface irradiances from the two models for each of the two higher UV bands covering 310.70-330.0 and 330.03-400.00 nm is typically greater than 95 % for clear-sky conditions with associated root-mean-square relative errors of 6.4 and 4.0 %. However, underestimations of clear-sky GEM irradiances were found on the order of ˜ 30-50 % for the 294.12-310.70 nm band and by a factor of ˜ 30 for the 280.11-294.12 nm band. This underestimation can be significant for UV Index determination but would not impact weather forecasting. Corresponding empirical adjustments were applied to the broadband irradiances now giving a correlation coefficient of unity. From these, a least-squares fitting was derived for the calculation of the UV Index. The resultant differences in UV indices from the high-spectral-resolution irradiances and the resultant GEM broadband irradiances are typically within 0

  7. PI controller design of a wind turbine: evaluation of the pole-placement method and tuning using constrained optimization

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Tibaldi, Carlo; Hansen, Morten Hartvig

    2016-01-01

    PI/PID controllers are the most common wind turbine controllers. Normally a first tuning is obtained using methods such as pole placement or Ziegler-Nichols, and then extensive aeroelastic simulations are used to obtain the best tuning in terms of regulation of the outputs and reduction of the loads. In this work, constrained optimization is used to automatically tune the wind turbine controller subject to robustness constraints. The properties of the system, such as the maximum sensitivity and complementary sensitivity functions (Ms and Mt), along with some of the responses of the system, are used to investigate the controller performance and formulate ...
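
    For a sense of what a pole-placement first tuning looks like, the sketch below places the closed-loop poles of a PI loop around a first-order plant G(s) = K/(tau*s + 1); matching the characteristic polynomial to s^2 + 2*zeta*omega_n*s + omega_n^2 gives Kp = (2*zeta*omega_n*tau - 1)/K and Ki = omega_n^2*tau/K. The plant and the target pole locations are illustrative assumptions, not the aeroelastic wind turbine model used in the paper.

        # Minimal pole-placement PI tuning sketch for a first-order plant
        # G(s) = K / (tau*s + 1); the numbers below are purely illustrative.
        def pi_pole_placement(K, tau, omega_n, zeta):
            """Place the closed-loop poles of the PI loop at the roots of
            s^2 + 2*zeta*omega_n*s + omega_n^2."""
            kp = (2.0 * zeta * omega_n * tau - 1.0) / K
            ki = (omega_n ** 2) * tau / K
            return kp, ki

        if __name__ == "__main__":
            kp, ki = pi_pole_placement(K=50.0, tau=5.0, omega_n=0.6, zeta=0.7)
            print(f"Kp = {kp:.3f}, Ki = {ki:.3f}")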

  8. Method for Determining Optimal Residential Energy Efficiency Retrofit Packages

    Energy Technology Data Exchange (ETDEWEB)

    Polly, B.; Gestwick, M.; Bianchi, M.; Anderson, R.; Horowitz, S.; Christensen, C.; Judkoff, R.

    2011-04-01

    Businesses, government agencies, consumers, policy makers, and utilities currently have limited access to occupant-, building-, and location-specific recommendations for optimal energy retrofit packages, as defined by estimated costs and energy savings. This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location. Energy savings and incremental costs are calculated relative to a minimum upgrade reference scenario, which accounts for efficiency upgrades that would occur in the absence of a retrofit because of equipment wear-out and replacement with current minimum standards.
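
    The core of such a method is a search over candidate retrofit packages scored by simulated energy use and equivalent annual cost. The sketch below simply enumerates packages over a small measure list and picks the one with the lowest equivalent annual cost relative to a reference; the measure costs, savings, energy price, and capital recovery factor are all made-up assumptions, and a real analysis would take the savings from building energy simulations as the report describes.

        from itertools import combinations

        # Hypothetical measure list: (name, incremental cost [$], annual savings [kWh]).
        MEASURES = [
            ("attic insulation", 1800, 1500),
            ("air sealing",       900, 1100),
            ("window upgrade",   6500, 1900),
            ("heat pump",        7200, 4200),
        ]
        ENERGY_PRICE = 0.13   # $/kWh, assumed
        CRF = 0.08            # capital recovery factor for annualizing first cost, assumed
        BASE_USE = 18000      # kWh/yr for the minimum-upgrade reference scenario, assumed

        def equivalent_annual_cost(package):
            first_cost = sum(c for _, c, _ in package)
            use = BASE_USE - sum(s for _, _, s in package)
            return CRF * first_cost + ENERGY_PRICE * use, use

        best = None
        for r in range(len(MEASURES) + 1):          # enumerate every retrofit package
            for pkg in combinations(MEASURES, r):
                cost, use = equivalent_annual_cost(pkg)
                if best is None or cost < best[0]:
                    best = (cost, use, [m[0] for m in pkg])
        print("optimal package:", best[2],
              "annual cost $%.0f, energy %d kWh" % (best[0], best[1]))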

  9. Optimization of Phasor Measurement Unit (PMU) Placement in Supervisory Control and Data Acquisition (SCADA)-Based Power System for Better State-Estimation Performance

    Directory of Open Access Journals (Sweden)

    Mohammad Shoaib Shahriar

    2018-03-01

    Full Text Available Present-day power systems are mostly equipped with conventional meters and intended for the installation of highly accurate phasor measurement units (PMUs) to ensure better protection, monitoring and control of the network. The PMU is a deliberate choice due to its unique capacity to provide accurate phasor readings of bus voltages and currents. However, due to the high expense and the requirement for communication facilities, the installation of a limited number of PMUs in a network is common practice. This paper presents an optimal approach to selecting the locations of PMUs to be installed with the objective of ensuring maximum accuracy of the state estimation (SE). The optimization technique ensures that the critical locations of the system will be covered by PMU meters, which lowers the negative impact of bad data on state-estimation performance. One of the well-known intelligent optimization techniques, the genetic algorithm (GA), is used to search for the optimal set of PMUs. The proposed technique is compared with a heuristic approach to PMU placement. The weighted least squares (WLS) estimator, with a modified Jacobian to deal with the phasor quantities, is used to compute the estimation accuracy. IEEE 30-bus and 118-bus systems are used to demonstrate the suggested technique.
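
    The state-estimation step behind this kind of study is a weighted least squares fit in which PMU channels receive much larger weights than conventional measurements. A minimal linearized sketch, with a made-up measurement Jacobian and error variances, is shown below; it is not the modified-Jacobian phasor formulation of the paper.

        import numpy as np

        # Minimal weighted least squares sketch on a linearized measurement model
        # z = H x + e; rows with PMU-grade accuracy get much larger weights.
        rng = np.random.default_rng(0)
        x_true = np.array([1.0, 0.98, -0.05])           # illustrative state vector
        H = rng.normal(size=(8, 3))                     # stand-in measurement Jacobian
        sigma = np.array([0.02] * 6 + [0.001] * 2)      # last two rows: PMU measurements
        z = H @ x_true + rng.normal(scale=sigma)

        W = np.diag(1.0 / sigma**2)                     # weight = inverse error variance
        x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
        print("estimation error:", np.linalg.norm(x_hat - x_true))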

  10. Intraoperative MRI for optimizing electrode placement for deep brain stimulation of the subthalamic nucleus in Parkinson disease.

    Science.gov (United States)

    Cui, Zhiqiang; Pan, Longsheng; Song, Huifang; Xu, Xin; Xu, Bainan; Yu, Xinguang; Ling, Zhipei

    2016-01-01

    OBJECT The degree of clinical improvement achieved by deep brain stimulation (DBS) is largely dependent on the accuracy of lead placement. This study reports on the evaluation of intraoperative MRI (iMRI) for adjusting deviated electrodes to the correct anatomical position during DBS surgery and for detecting acute intracranial changes. METHODS Two hundred and six DBS electrodes were implanted in the subthalamic nucleus (STN) in 110 patients with Parkinson disease. All patients underwent iMRI after implantation to define the accuracy of lead placement. Fifty-six DBS electrode positions in 35 patients deviated from the center of the STN, according to the result of the initial postplacement iMRI scans. Thus, we adjusted the electrode positions for placement in the center of the STN and verified this by means of second or third iMRI scans. Adjusted parameters were recorded along the x-, y-, and z-axes. RESULTS Fifty-six (27%) of 206 DBS electrodes were adjusted as guided by iMRI. Electrode position was adjusted on the basis of iMRI 62 times. The sum of target coordinate adjustment was -0.5 mm in the x-axis, -4 mm in the y-axis, and 15.5 mm in the z-axis; the total of distance adjustment was 74.5 mm in the x-axis, 88 mm in the y-axis, and 42.5 mm in the z-axis. After adjustment with the help of iMRI, all electrodes were located in the center of the STN. Intraoperative MRI revealed 2 intraparenchymal hemorrhages in 2 patients, brain shift in all patients, and leads penetrating the lateral ventricle in 3 patients. CONCLUSIONS The iMRI technique can guide surgeons as they adjust deviated electrodes to improve the accuracy of implanting the electrodes into the correct anatomical position. The iMRI technique can also immediately demonstrate acute changes such as hemorrhage and brain shift during DBS surgery.

  11. Optimization of burnishing parameters and determination of select ...

    Indian Academy of Sciences (India)

    Optimization of burnishing parameters and determination of select surface characteristics in engineering materials. P Ravindra Babu, K Ankamma, T Siva Prasad, A V S Raju and N Eswara Prasad. Mechanical Engineering Department, Gudlavalleru Engineering College, Gudlavalleru 521 356, India.

  12. The Determination of Optimal Parameters of Fuzzy PI Sugeno Controller

    Science.gov (United States)

    Kudinov, Y. I.; Kudinov, I. Yu; Volkova, A. A.; Durgarjan, I. S.; Pashchenko, F. F.

    2017-11-01

    The paper describes a procedure for determining, by means of Matlab and Simulink, the optimal parameters of a Sugeno fuzzy PI controller such that selected indicators of the quality of the transient process in the closed-loop control system satisfy the specified conditions.

  13. Determining the optimal spacing of deepening of vertical mine

    Energy Technology Data Exchange (ETDEWEB)

    Durov, Ye.M.

    1983-01-01

    A technique is presented for determining the optimal deepening interval of shafts for the examined parameters of operational and deepening work. The presented results may be used in designing new shafts, in preparing levels, and in the reconstruction of existing shafts with inclined and steep stratum bedding.

  14. Problems in determining the optimal use of road safety measures

    DEFF Research Database (Denmark)

    Elvik, Rune

    2014-01-01

    This paper discusses some problems in determining the optimal use of road safety measures. The first of these problems is how best to define the baseline option, i.e. what will happen if no new safety measures are introduced. The second problem concerns the choice of a method for selecting targets for intervention that ensures maximum safety benefits. The third problem is how to develop policy options to minimise the risk of indivisibilities and irreversible choices. The fourth problem is how to account for interaction effects between road safety measures when determining their optimal use. The fifth problem is how to obtain the best mix of short-term and long-term measures in a safety programme. The sixth problem is how fixed parameters for analysis, including the monetary valuation of road safety, influence the results of analyses. It is concluded that it is at present not possible to determine ...

  15. Optimizing foster family placement for infants and toddlers: A randomized controlled trial on the effect of the foster family intervention.

    Science.gov (United States)

    Van Andel, Hans; Post, Wendy; Jansen, Lucres; Van der Gaag, Rutger Jan; Knorth, Erik; Grietens, Hans

    2016-01-01

    The relationship between foster children and their foster carers comes with many risks and may be very stressful both for parents and children. We developed an intervention (foster family intervention [FFI]) to tackle these risks. The intervention focuses on foster children below the age of 5 years. The objective was to investigate the effects of FFI on the interactions between foster parents and foster children. A randomized controlled trial was carried out with a sample of 123 preschool-aged children (mean age 18.8 months; 51% boys) and their foster carers. A pretest was carried out 6 to 8 weeks after placement and a posttest half a year later. Interactions were videotaped and coded using the Emotional Availability Scales (EAS). Foster carers were asked to fill in the Dutch version of the Parenting Stress Index. Morning and evening samples of children's salivary cortisol were taken. In the posttest, significant positive effects were found on the following EAS subscales: Sensitivity, Structuring, Nonintrusiveness, and Responsiveness. We found no significant differences in stress levels of foster carers and children (Nijmeegse Ouderlijke Stress Index domains and salivary cortisol). This study shows that the FFI has a significant positive effect on parenting skills as measured with the EAS and on Responsiveness of the foster child. Findings are discussed in terms of impact and significance relating to the methodology and design of the study and to clinical relevance. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Energy group structure determination using particle swarm optimization

    International Nuclear Information System (INIS)

    Yi, Ce; Sjoden, Glenn

    2013-01-01

    Highlights: ► Particle swarm optimization is applied to determine broad group structure. ► A graph representation of the broad group structure problem is introduced. ► The approach is tested on a fuel-pin model. - Abstract: Multi-group theory is widely applied for the energy domain discretization when solving the Linear Boltzmann Equation. To reduce the computational cost, fine-group cross section libraries are often down-sampled into broad-group cross section libraries. Cross section data collapsing generally involves two steps: firstly, the broad group structure has to be determined; secondly, a weighting scheme is used to evaluate the broad-group cross section library based on the fine-group cross section data and the broad group structure. A common scheme is to average the fine-group cross sections weighted by the fine-group flux. Cross section collapsing techniques have been intensively researched. However, most studies use a pre-determined group structure, often based on experience, to divide the neutron energy spectrum into thermal, epithermal, fast, etc. energy ranges. In this paper, a swarm intelligence algorithm, particle swarm optimization (PSO), is applied to optimize the broad group structure. A graph representation of the broad group structure determination problem is introduced, and the swarm intelligence algorithm is used to solve the graph model. The effectiveness of the approach is demonstrated using a fuel-pin model

  17. STUDENT PLACEMENT

    African Journals Online (AJOL)

    User

    students express lack of interest in the field they are placed, it ... be highly motivated to learn than students placed in a department ... the following research questions. Research Questions: • Did the criteria used by Mekelle University for placement of students into different departments affect the academic performance of ...

  18. Load Concentration Factor Based Analytical Method for Optimal Placement of Multiple Distribution Generators for Loss Minimization and Voltage Profile Improvement

    NARCIS (Netherlands)

    Shahzad, Mohsin; Ahmad, Ishtiaq; Gawlik, Wolfgang; Palensky, P.

    2016-01-01

    This paper presents novel separate methods for finding optimal locations, sizes of multiple distributed generators (DGs) simultaneously and operational power factor in order to minimize power loss and improve the voltage profile in the distribution system. A load concentration factor (LCF) is

  19. Particle swarm optimization for determining shortest distance to voltage collapse

    Energy Technology Data Exchange (ETDEWEB)

    Arya, L.D.; Choube, S.C. [Electrical Engineering Department, S.G.S.I.T.S. Indore, MP 452 003 (India); Shrivastava, M. [Electrical Engineering Department, Government Engineering College Ujjain, MP 456 010 (India); Kothari, D.P. [Centre for Energy Studies, Indian Institute of Technology, Delhi (India)

    2007-12-15

    This paper describes an algorithm for computing the shortest distance to voltage collapse, i.e. determination of the closest saddle node bifurcation point (CSNBP), using the PSO technique. A direction along the CSNBP gives conservative results from the voltage security viewpoint. This information is useful to the operator to steer the system away from this point by taking corrective actions. The distance to a closest bifurcation is a minimum of the loadability given a slack bus or participation factors for increasing generation as the load increases. CSNBP determination has been formulated as an optimization problem to be solved with the PSO technique. PSO is a new evolutionary algorithm (EA) which is population based, inspired by the social behavior of animals such as fish schooling and birds flocking. It can handle optimization problems of any complexity since its mechanism is simple, with few parameters to be tuned. The developed algorithm has been implemented on two standard test systems. (author)
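
    A generic PSO skeleton of the kind used here is sketched below: particles track personal and global bests and update velocities with an inertia term and two attraction terms. The objective function is a simple quadratic stand-in; the actual CSNBP formulation requires power-flow and bifurcation computations that are outside this sketch, and all parameter values are assumptions.

        import random

        def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            lo, hi = bounds
            pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
            vel = [[0.0] * dim for _ in range(n_particles)]
            pbest = [p[:] for p in pos]
            pbest_val = [f(p) for p in pos]
            g = min(range(n_particles), key=lambda i: pbest_val[i])
            gbest, gbest_val = pbest[g][:], pbest_val[g]
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                                     + c2 * r2 * (gbest[d] - pos[i][d]))
                        pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
                    val = f(pos[i])
                    if val < pbest_val[i]:
                        pbest[i], pbest_val[i] = pos[i][:], val
                        if val < gbest_val:
                            gbest, gbest_val = pos[i][:], val
            return gbest, gbest_val

        # Stand-in objective: a simple quadratic; the real CSNBP objective needs
        # power-flow / bifurcation analysis, which is outside this sketch.
        best, val = pso(lambda x: sum((xi - 0.3) ** 2 for xi in x), dim=4, bounds=(-1, 1))
        print(best, val)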

  20. On the application of the artificial bee colony (ABC) algorithm for optimization of well placements in fractured reservoirs; efficiency comparison with the particle swarm optimization (PSO) methodology

    Directory of Open Access Journals (Sweden)

    Behzad Nozohour-leilabady

    2016-03-01

    Full Text Available The application of a recent optimization technique, the artificial bee colony (ABC), was investigated in the context of finding the optimal well locations. The ABC performance was compared with the corresponding results from the particle swarm optimization (PSO) algorithm, under essentially similar conditions. Treatment of out-of-boundary solution vectors was accomplished via the periodic boundary condition (PBC), which presumably accelerates convergence towards the global optimum. Stochastic searches were initiated from several random starting points, to minimize starting-point dependency in the established results. The optimizations were aimed at maximizing the Net Present Value (NPV) objective function over the considered oilfield production durations. To deal with the issue of reservoir heterogeneity, random permeability was applied via normal/uniform distribution functions. In addition, the issue of an increased number of optimization parameters was addressed by considering scenarios with multiple injector and producer wells, and cases with deviated wells in a real reservoir model. The typical results show ABC to outperform PSO (in the cases studied) after relatively short optimization cycles, indicating the great promise of the ABC methodology for well-optimization purposes.
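
    For reference, a minimal ABC loop with employed, onlooker, and scout phases is sketched below. The objective is a smooth stand-in for the NPV surface over well coordinates, and the colony size, trial limit, and bounds are illustrative assumptions rather than the settings used in the study.

        import random

        def abc_minimize(f, dim, lo, hi, n_sources=20, limit=30, cycles=200, seed=0):
            random.seed(seed)
            X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
            fit = [f(x) for x in X]
            trials = [0] * n_sources

            def try_neighbour(i):
                k = random.choice([s for s in range(n_sources) if s != i])
                j = random.randrange(dim)
                v = X[i][:]
                v[j] += random.uniform(-1, 1) * (X[i][j] - X[k][j])
                v[j] = min(max(v[j], lo), hi)
                fv = f(v)
                if fv < fit[i]:
                    X[i], fit[i], trials[i] = v, fv, 0
                else:
                    trials[i] += 1

            for _ in range(cycles):
                for i in range(n_sources):               # employed bee phase
                    try_neighbour(i)
                inv = [1.0 / (1.0 + fi) for fi in fit]   # fitness for roulette selection
                total = sum(inv)
                for _ in range(n_sources):               # onlooker bee phase
                    r, acc, i = random.uniform(0, total), 0.0, 0
                    for s, wgt in enumerate(inv):
                        acc += wgt
                        if acc >= r:
                            i = s
                            break
                    try_neighbour(i)
                worst = max(range(n_sources), key=lambda s: trials[s])
                if trials[worst] > limit:                # scout bee phase
                    X[worst] = [random.uniform(lo, hi) for _ in range(dim)]
                    fit[worst], trials[worst] = f(X[worst]), 0
            best = min(range(n_sources), key=lambda s: fit[s])
            return X[best], fit[best]

        # Stand-in objective: a smooth surrogate of negative NPV over 2D well coordinates.
        best, val = abc_minimize(lambda p: (p[0] - 2.0)**2 + (p[1] + 1.0)**2,
                                 dim=2, lo=-5, hi=5)
        print(best, val)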

  1. A rapid application of GA-MODFLOW combined approach to optimization of well placement and operation for drought-ready groundwater reservoir design

    Science.gov (United States)

    Park, C.; Kim, Y.; Jang, H.

    2016-12-01

    Poor temporal distribution of precipitation increases winter drought risks in mountain valley areas of Korea. Since perennial streams or reservoirs for water use are rare in these areas, groundwater is usually a major water resource. A significant amount of the precipitation contributing to groundwater recharge occurs mostly during the summer season. However, the volume of groundwater recharge is limited by rapid runoff because of topographic characteristics such as steep hills and slopes. A groundwater reservoir using an artificial recharge method with rain water reuse can be a suitable solution to secure water resources for the mountain valley areas. Successful groundwater reservoir design depends on optimization of well placement and operation. This study introduces a combined approach using GA (Genetic Algorithm) and MODFLOW and its rapid application. The methodology is based on the RAD (Rapid Application Development) concept in order to minimize the cost of implementation. DEAP (Distributed Evolutionary Algorithms in Python), a framework for prototyping and testing evolutionary algorithms, is applied for quick code development, and CUDA (Compute Unified Device Architecture), a parallel computing platform using the GPU (Graphics Processing Unit), is introduced to reduce runtime. The approach was successfully applied to Samdeok-ri, Gosung, Korea. The site is located in a mountain valley area and unconfined aquifers are the major source of water use. The results of the application produced the best locations and an optimized operation schedule of wells, including pumping and injection.
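
    Since DEAP is named as the GA framework, a minimal DEAP setup is sketched below for a binary well on/off decision vector. The surrogate fitness function simply rewards wells at assumed "good" cells; in the actual workflow each evaluation would call MODFLOW instead. The DEAP calls follow the library's standard toolbox pattern, but treat the specific operators and parameters here as assumptions.

        import random
        from deap import base, creator, tools, algorithms

        # Hypothetical surrogate fitness: reward wells switched on at assumed "good"
        # candidate cells, with a small penalty per well; a real study would run
        # MODFLOW here to score each candidate layout.
        GOOD = set(range(0, 20, 3))

        def evaluate(individual):
            score = sum(bit for i, bit in enumerate(individual) if i in GOOD)
            return (score - 0.2 * sum(individual),)   # DEAP expects a fitness tuple

        creator.create("FitnessMax", base.Fitness, weights=(1.0,))
        creator.create("Individual", list, fitness=creator.FitnessMax)

        toolbox = base.Toolbox()
        toolbox.register("attr_bool", random.randint, 0, 1)
        toolbox.register("individual", tools.initRepeat, creator.Individual,
                         toolbox.attr_bool, n=20)
        toolbox.register("population", tools.initRepeat, list, toolbox.individual)
        toolbox.register("evaluate", evaluate)
        toolbox.register("mate", tools.cxTwoPoint)
        toolbox.register("mutate", tools.mutFlipBit, indpb=0.05)
        toolbox.register("select", tools.selTournament, tournsize=3)

        pop = toolbox.population(n=50)
        pop, _ = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=40,
                                     verbose=False)
        best = tools.selBest(pop, k=1)[0]
        print(best, best.fitness.values)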

  2. Product Placement in Cartoons

    Directory of Open Access Journals (Sweden)

    Irena Oroz Štancl

    2014-06-01

    Full Text Available Product placement is a marketing approach for integrating products or services into selected media content. Studies have shown that the impact of advertising on children and youth is large and that it can affect their preferences and attitudes. The aim of this article is to determine the existing level of product placement in cartoons broadcast on Croatian television stations. Content analysis of cartoons over a period of one month gave the following results: product placement was found in 30% of cartoons; most product placements (89%) were visual, but auditory product placement and plot connection were also found. Most ads were related to toys, and it is significant that as many as 65% of cartoons are accompanied by a large range of products available on the Croatian market. This is the result of two sales strategies: brand licensing (selling popular cartoon characters to toy, food or clothing companies) and cartoon production based on an existing line of toys with the sole aim of making their sales more effective.

  3. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed that automation promises greater efficiency, lower workloads, and fewer operator errors by enhancing operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to the side effects of automation, referred to as Out-of-the-Loop (OOTL), and this is a critical issue that must be resolved. Thus, in order to determine the level of automation that assures the best human operator performance, a quantitative method of optimizing the automation rate is proposed in this paper. To propose an optimization method for determining automation levels that enable the best human performance, the automation rate and the ostracism rate, which are estimation methods that quantitatively analyze the positive and negative effects of automation, respectively, are integrated. The integration was conducted in order to derive the shortest working time through the concept of situation awareness recovery (SAR), which states that the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed through redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the ...

  4. Optimal Sizing and Placement of Battery Energy Storage in Distribution System Based on Solar Size for Voltage Regulation

    Energy Technology Data Exchange (ETDEWEB)

    Nazaripouya, Hamidreza [Univ. of California, Los Angeles, CA (United States); Wang, Yubo [Univ. of California, Los Angeles, CA (United States); Chu, Peter [Univ. of California, Los Angeles, CA (United States); Pota, Hemanshu R. [Univ. of California, Los Angeles, CA (United States); Gadh, Rajit [Univ. of California, Los Angeles, CA (United States)

    2016-07-26

    This paper proposes a new strategy to achieve voltage regulation in distributed power systems in the presence of solar energy sources and battery storage systems. The goal is to find the minimum size of battery storage and its corresponding location in the network based on the size and place of the integrated solar generation. The proposed method formulates the problem by employing the network impedance matrix to obtain an analytical solution instead of using a recursive algorithm such as power flow. The required modifications for modeling the slack and PV buses (generator buses) are utilized to increase the accuracy of the approach. Using reactive power control alone to regulate the voltage is not always an optimal solution, since R/X ratios are large in distribution systems. In this paper the minimum size and the best place of battery storage are achieved by optimizing the amount of both active and reactive power exchanged by the battery storage and its grid-tie inverter (GTI) based on the network topology and the R/X ratios in the distribution system. Simulation results for the IEEE 14-bus system verify the effectiveness of the proposed approach.
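
    The screening idea behind an impedance-matrix formulation can be illustrated with the linearized relation delta_V ≈ Z_bus * delta_I: for a required voltage correction at one bus, each candidate battery location implies a different injection magnitude. The Z_bus values and the required correction in the sketch below are made-up illustrative numbers, not the IEEE 14-bus data or the paper's exact formulation.

        import numpy as np

        # Linearized screening sketch: delta_V ≈ Z_bus @ delta_I. All numbers are
        # illustrative assumptions, not a real feeder model.
        Z_bus = np.array([[0.08, 0.05, 0.04],
                          [0.05, 0.10, 0.06],
                          [0.04, 0.06, 0.12]])   # per-unit bus impedance matrix
        over_bus = 2                              # bus with a solar-driven overvoltage
        dv_needed = -0.03                         # correction required at that bus (pu)

        for bus in range(Z_bus.shape[0]):
            inj = dv_needed / Z_bus[over_bus, bus]   # injection (negative = absorb)
            dv_all = Z_bus[:, bus] * inj             # effect on every bus voltage
            print(f"battery at bus {bus + 1}: injection {inj:.3f} pu, "
                  f"dV = {np.round(dv_all, 3)}")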

  5. Optimal distributed generation placement in distribution system to improve reliability and critical loads pick up after natural disasters

    Directory of Open Access Journals (Sweden)

    Galiveeti Hemakumar Reddy

    2017-06-01

    Full Text Available The increase in the frequency of natural disasters has created the need for resilient distribution systems. Natural disasters lead to severe damage of power system infrastructure, and the main grid may not be available to serve the loads. The integration of distributed generation (DG) into the distribution system partially restores the loads after natural disasters and improves reliability during normal operating conditions. After a natural disaster, the objective of the system operators is to restore the critical loads as a priority, which makes critical load pick-up a natural objective function when placing the DGs. A location-based constraint is thus required to make sure the DGs are available to pick up the loads after natural disasters. The fuzzy multi criteria decision making (FMCDM) approach is used in this work to rank the load points and locations/feeder sections. This paper uses particle swarm optimization (PSO) to evaluate the optimal size and location of DGs using the proposed objective function. The obtained results are compared with the results of reliability as an objective function.

  6. Perturbing engine performance measurements to determine optimal engine control settings

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-12-30

    Methods and systems for optimizing a performance of a vehicle engine are provided. The method includes determining an initial value for a first engine control parameter based on one or more detected operating conditions of the vehicle engine, determining a value of an engine performance variable, and artificially perturbing the determined value of the engine performance variable. The initial value for the first engine control parameter is then adjusted based on the perturbed engine performance variable causing the engine performance variable to approach a target engine performance variable. Operation of the vehicle engine is controlled based on the adjusted initial value for the first engine control parameter. These acts are repeated until the engine performance variable approaches the target engine performance variable.

  7. Spectroscopic determination of optimal hydration time of zircon surface

    Energy Technology Data Exchange (ETDEWEB)

    Ordonez R, E. [ININ, Departamento de Quimica, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Garcia R, G. [Instituto Tecnologico de Toluca, Division de Estudios del Posgrado, Av. Tecnologico s/n, Ex-Rancho La Virgen, 52140 Metepec, Estado de Mexico (Mexico); Garcia G, N., E-mail: eduardo.ordonez@inin.gob.m [Universidad Autonoma del Estado de Mexico, Facultad de Quimica, Av. Colon y Av. Tollocan, 50180 Toluca, Estado de Mexico (Mexico)

    2010-07-01

    When a mineral surface is immersed in an aqueous solution, it develops an electric charge produced by the amphoteric dissociation of hydroxyl groups created by the hydration of the solid surface. This is one influential surface property. The complete hydration process takes a time which is specific for each mineral species. Knowledge of the aqueous solution contact time for complete surface hydration is mandatory for further surface phenomena studies. This study deals with the optimal hydration time of the raw zircon (ZrSiO4) surface, comparing classical potentiometric titrations with a fluorescence spectroscopy technique. The latter is easy and reliable as it demands only one sample batch to determine the optimal time to ensure total hydration of the zircon surface. The analytical results of neutron activation analysis showed the presence of trace quantities of Dy3+, Eu3+ and Er3+ in the bulk of the zircon. The Dy3+ is structured in the zircon crystalline lattice and undergoes the same chemical reactions as zircon. Furthermore, Dy3+ has a good fluorescent response whose intensity is enhanced by hydration molecules. The results show that, according to the potentiometric analysis, the hydration process for each batch (at least 8 sample batches) takes around 2 h, while the spectrometric method takes only 5 minutes using only one batch. Both methods showed that the zircon surface has a 16 h optimal hydration time. (Author)

  8. Spectroscopic determination of optimal hydration time of zircon surface

    International Nuclear Information System (INIS)

    Ordonez R, E.; Garcia R, G.; Garcia G, N.

    2010-01-01

    When a mineral surface is immersed in an aqueous solution, it develops an electric charge produced by the amphoteric dissociation of hydroxyl groups created by the hydration of the solid surface. This is one influential surface property. The complete hydration process takes a time which is specific for each mineral species. Knowledge of the aqueous solution contact time for complete surface hydration is mandatory for further surface phenomena studies. This study deals with the optimal hydration time of the raw zircon (ZrSiO4) surface, comparing classical potentiometric titrations with a fluorescence spectroscopy technique. The latter is easy and reliable as it demands only one sample batch to determine the optimal time to ensure total hydration of the zircon surface. The analytical results of neutron activation analysis showed the presence of trace quantities of Dy3+, Eu3+ and Er3+ in the bulk of the zircon. The Dy3+ is structured in the zircon crystalline lattice and undergoes the same chemical reactions as zircon. Furthermore, Dy3+ has a good fluorescent response whose intensity is enhanced by hydration molecules. The results show that, according to the potentiometric analysis, the hydration process for each batch (at least 8 sample batches) takes around 2 h, while the spectrometric method takes only 5 minutes using only one batch. Both methods showed that the zircon surface has a 16 h optimal hydration time. (Author)

  9. Optimal placement of unified power flow controllers to improve dynamic voltage stability using power system variable based voltage stability indices.

    Science.gov (United States)

    Albatsh, Fadi M; Ahmad, Shameem; Mekhilef, Saad; Mokhlis, Hazlie; Hassan, M A

    2015-01-01

    This study examines a new approach to selecting the locations of unified power flow controllers (UPFCs) in power system networks based on a dynamic analysis of voltage stability. Power system voltage stability indices (VSIs) including the line stability index (LQP), the voltage collapse proximity indicator (VCPI), and the line stability index (Lmn) are employed to identify the most suitable locations in the system for UPFCs. In this study, the locations of the UPFCs are identified by dynamically varying the loads across all of the load buses to represent actual power system conditions. Simulations were conducted in the power system computer-aided design (PSCAD) software using the IEEE 14-bus and 39-bus benchmark power system models. The simulation results demonstrate the effectiveness of the proposed method. When the UPFCs are placed in the locations obtained with the new approach, the voltage stability improves. A comparison of the steady-state VSIs resulting from the UPFCs placed in the locations obtained with the new approach and with particle swarm optimization (PSO) and differential evolution (DE), which are static methods, is presented. In all cases, the UPFC locations given by the proposed approach result in better voltage stability than those obtained with the other approaches.

  10. Determination of optimal conditions of oxytetracyclin production from streptomyces rimosus

    International Nuclear Information System (INIS)

    Zouaghi, Atef

    2007-01-01

    Streptomyces rimosus is an oxytetracycline (OTC)-producing bacterium that exhibits activity against gram-positive and gram-negative bacteria. OTC is widely used not only in medicine but also in industry. Antibiotic production by Streptomyces covers a very wide range of conditions. However, antibiotic producers are particularly fastidious and must be cultivated with proper selection of media, such as the carbon source. In the present study we optimised the conditions of OTC production (composition of the production medium, pH, shaking and temperature). The results showed that barley bran is the optimal medium for OTC production at 28 °C and pH 5.8, with shaking at 150 rpm for 5 days. For antibiotic determination, OTC was extracted with different organic solvents. A thin-layer chromatography system was used for separation and identification of the OTC antibiotic. A high-performance liquid chromatography (HPLC) method with ultraviolet detection was applied to determine the purity of OTC. (Author). 24 refs

  11. User Manual and Supporting Information for Library of Codes for Centroidal Voronoi Point Placement and Associated Zeroth, First, and Second Moment Determination; TOPICAL

    International Nuclear Information System (INIS)

    BURKARDT, JOHN; GUNZBURGER, MAX; PETERSON, JANET; BRANNON, REBECCA M.

    2002-01-01

    The theory, numerical algorithm, and user documentation are provided for a new "Centroidal Voronoi Tessellation (CVT)" method of filling a region of space (2D or 3D) with particles at any desired particle density. "Clumping" is entirely avoided and the boundary is optimally resolved. This particle placement capability is needed for any so-called "mesh-free" method in which physical fields are discretized via arbitrary-connectivity discrete points. CVT exploits efficient statistical methods to avoid expensive generation of Voronoi diagrams. Nevertheless, if a CVT particle's Voronoi cell were to be explicitly computed, then it would have a centroid that coincides with the particle itself and a minimized rotational moment. The CVT code provides each particle's volume and centroid, and also the rotational moment matrix needed to approximate a particle by an ellipsoid (instead of a simple sphere). DIATOM region specification is supported
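
    A common way to compute such a CVT without building explicit Voronoi diagrams is a sampling-based (probabilistic) Lloyd iteration, in which each generator is repeatedly moved to the centroid of the random samples nearest to it. The sketch below is that generic iteration on the unit square; it is not the library documented in this report, and the sample and iteration counts are arbitrary assumptions.

        import numpy as np

        def cvt(n_points, n_samples=100_000, iters=50, seed=0):
            """Probabilistic Lloyd iteration for a CVT in the unit square: each
            generator moves to the centroid of the samples in its (approximate)
            Voronoi cell."""
            rng = np.random.default_rng(seed)
            gen = rng.random((n_points, 2))
            for _ in range(iters):
                samples = rng.random((n_samples, 2))
                d = ((samples[:, None, :] - gen[None, :, :]) ** 2).sum(axis=2)
                owner = d.argmin(axis=1)              # nearest generator per sample
                for i in range(n_points):
                    cell = samples[owner == i]
                    if len(cell):
                        gen[i] = cell.mean(axis=0)    # move generator to cell centroid
            return gen

        points = cvt(32)
        print(points[:5])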

  12. New Algorithms for Global Optimization and Reaction Path Determination.

    Science.gov (United States)

    Weber, D; Bellinger, D; Engels, B

    2016-01-01

    We present new schemes to improve the convergence of an important global optimization problem and to determine reaction pathways (RPs) between identified minima. These methods have been implemented in the CAST program (Conformational Analysis and Search Tool). The first part of this chapter shows how to improve the convergence of the Monte Carlo with minimization (MCM, also known as Basin Hopping) method when applied to optimize water clusters or aqueous solvation shells using a simple model. Since the random movement on the potential energy surface (PES) is an integral part of MCM, we propose to employ a hydrogen-bonding-based algorithm for its improvement. We compare the results obtained for random dihedral moves and for the proposed random, rigid-body water molecule movement, giving evidence that a specific adaptation of the distortion process greatly improves the convergence of the method. The second part concerns the determination of RPs in clusters between conformational arrangements and for reactions. Besides standard approaches like the nudged elastic band method, we focus on a new algorithm developed especially for global reaction path search, called Pathopt. We started with argon clusters, a typical benchmark system, which possess a flat PES, and then stepwise increased the magnitude and directionality of the interactions. We then calculated pathways for a water cluster and characterized them by frequency calculations. Within our calculations, we were able to show that besides local pathways, additional pathways can be found which possess additional features. © 2016 Elsevier Inc. All rights reserved.
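
    The MCM/Basin Hopping idea itself is compact: perturb the current minimum, locally minimize, and accept or reject the new minimum with a Metropolis criterion. The sketch below shows that generic loop with a crude coordinate-descent local minimizer and a toy multi-minimum function; it does not include the hydrogen-bonding-based move set proposed in the chapter, and all step sizes and temperatures are assumptions.

        import math
        import random

        def local_minimize(f, x, step=0.05, iters=200):
            """Crude coordinate-descent local minimizer (stand-in for a real optimizer)."""
            for _ in range(iters):
                for d in range(len(x)):
                    for delta in (step, -step):
                        y = x[:]
                        y[d] += delta
                        if f(y) < f(x):
                            x = y
            return x

        def basin_hopping(f, x0, hops=100, jump=0.8, T=1.0, seed=0):
            random.seed(seed)
            x = local_minimize(f, x0)
            best, best_val, cur_val = x[:], f(x), f(x)
            for _ in range(hops):
                trial = [xi + random.uniform(-jump, jump) for xi in x]   # random move
                trial = local_minimize(f, trial)                          # then minimize
                val = f(trial)
                if val < cur_val or random.random() < math.exp((cur_val - val) / T):
                    x, cur_val = trial, val                               # Metropolis accept
                    if val < best_val:
                        best, best_val = trial[:], val
            return best, best_val

        # Toy multi-minimum landscape standing in for a cluster potential energy surface.
        f = lambda p: sum(x**2 - 3.0 * math.cos(3.0 * x) for x in p)
        print(basin_hopping(f, [2.0, -2.5]))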

  13. A literature review on optimum meter placement algorithms for distribution state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Ramesh, L. [Jadavpur Univ., Kolkotta (India); Chowdhury, S.P.; Chowdhury, S.; Gaunt, C.T. [Cape Town Univ., (South Africa)

    2009-07-01

    A literature review of meter placement for the monitoring of power distribution systems was presented. The aim of the study was to compare different algorithms used for solving optimum meter placement. The percentage of algorithms used and the number of studies conducted to determine optimal placement were plotted on graphs in order to determine the performance accuracy for different meter placement algorithms. Measurements used for state estimation were collected through SCADA systems. The data requirements for real time monitoring and the control of distribution systems were identified using a rule-based meter placement method. Rules included placing meters at all switch and fuse locations that require monitoring; placing additional meters along feeder line sections; placing meters on open tie switches that are used for feeder switching. The genetic algorithm technique was used to consider both the investment costs and real-time monitoring capability of the meters. It was concluded that the branch-current-based 3-phase state estimation algorithm can be used to determine optimal meter placements for distribution systems. The method allowed for the placement of fewer meters. 24 refs., 1 tab., 3 figs.

  14. The State Fiscal Policy: Determinants and Optimization of Financial Flows

    Directory of Open Access Journals (Sweden)

    Sitash Tetiana D.

    2017-03-01

    Full Text Available The article outlines the determinants of state fiscal policy at the present stage of global transformations. Using the principles of financial science, it is determined that the regulation of financial flows within the fiscal sphere, namely the centralization and redistribution of GDP, which results in the regulation of the financial capacity of economic agents, is of key importance. It is emphasized that an urgent measure for improving the tax model is reconsidering the provision of fiscal incentives, which are used to stimulate the accumulation of capital, investment activity, innovation, the competitiveness of national products, the expansion of exports, and the level of employment of the population. The necessity of applying instruments of fiscal regulation of financial flows is substantiated; such regulation should be based on institutional economics, which emphasizes the analysis of institutional changes, the evolution of institutions and their impact on the behavior of participants in economic relations. At the same time, it is determined that the maximum effect of fiscal regulation of financial flows is ensured when the application of fiscal instruments is aimed not only at achieving the target values of financial flow parameters but also at overcoming institutional deformations. It is determined that the optimal movement of financial flows enables creating favorable conditions for development and maintenance of financial balance in society and achievement of the necessary level of competitiveness of the national economy.

  15. Optimal task mapping in safety-critical real-time parallel systems; Placement optimal de taches pour les systemes paralleles temps-reel critiques

    Energy Technology Data Exchange (ETDEWEB)

    Aussagues, Ch

    1998-12-11

    This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance command and control systems that can be found in the nuclear domain or, more generally, in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution consists mainly of the following three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new operator, the synchronized product of state-machine task graphs; and the validation of the approach by its implementation and evaluation. The work particularly addresses the main problem of optimal task mapping on a parallel architecture such that the temporal constraints are globally guaranteed, i.e. the timeliness property holds. The results also incorporate optimality criteria for the sizing and correct dimensioning of a parallel system, for instance in the number of processing elements. These criteria are connected with operational constraints of the application domain. Our approach is based on the off-line analysis of the feasibility of the deadline-driven dynamic scheduling used to schedule tasks inside one processor. This leads us to define the synchronized product, from which a system of linear constraints is automatically generated; this allows the maximum load of a group of tasks to be calculated and their timeliness constraints to be verified. The communications, their timeliness verification and their incorporation into the mapping problem are the second main contribution of this thesis. Finally, the global solving technique dealing with both task and communication aspects has been implemented and evaluated in the framework of the OASIS project at the LETI research center at CEA/Saclay. (author) 96 refs.

  16. Determining of the Optimal Device Lifetime using Mathematical Renewal Models

    Directory of Open Access Journals (Sweden)

    Knežo Dušan

    2016-05-01

    Full Text Available The paper deals with the operation and renewal of machines and equipment in the process of organizing production. During operation, machines require maintenance and repairs, while in case of failure or wear it is necessary to replace them with new ones. The term renewal is used for the process of replacing old machines with new ones. The qualitative aspects of the renewal process are studied by renewal theory, which is mainly based on probability theory and mathematical statistics. Device lifetimes are closely related to the renewal of the devices. The presented article focuses on the mathematical derivation of renewal models and on determining the optimal lifetime of devices from the viewpoint of expenditures on the renewal process.
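
    A standard renewal-theory calculation of this kind is the age-replacement policy: choose the replacement age T that minimizes the long-run cost rate C(T) = [c_p*R(T) + c_f*(1 - R(T))] / integral_0^T R(t) dt, where R is the survival function and c_p, c_f are the preventive and failure replacement costs. The sketch below evaluates this for a Weibull lifetime with made-up parameters; it illustrates the general approach, not the specific models derived in the article.

        import math

        # Age-replacement sketch: minimize the long-run cost rate for a Weibull
        # lifetime with survival R(t) = exp(-(t/eta)^beta). Costs and parameters
        # below are illustrative assumptions.
        beta, eta = 2.5, 10.0          # Weibull shape and scale (years)
        c_p, c_f = 1.0, 5.0            # preventive vs. failure replacement cost

        def survival(t):
            return math.exp(-((t / eta) ** beta))

        def cost_rate(T, steps=2000):
            dt = T / steps
            expected_cycle = sum(survival(i * dt) * dt for i in range(steps))  # Riemann sum
            expected_cost = c_p * survival(T) + c_f * (1.0 - survival(T))
            return expected_cost / expected_cycle

        best_T = min((0.1 * k for k in range(1, 300)), key=cost_rate)
        print(f"optimal replacement age ~ {best_T:.1f} years, "
              f"cost rate {cost_rate(best_T):.4f}")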

  17. Determining the Bayesian optimal sampling strategy in a hierarchical system.

    Energy Technology Data Exchange (ETDEWEB)

    Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre

    2010-09-01

    Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.

  18. Private placements

    International Nuclear Information System (INIS)

    Bugeaud, G. J. R.

    1998-01-01

    The principles underlying private placements in Alberta, and the nature of the processes employed by the Alberta Securities Commission in handling such transactions were discussed. The Alberta Securities Commission's mode of operation was demonstrated by the inclusion of various documents issued by the Commission concerning (1) special warrant transactions prior to listing, (2) a decision by the Executive Director refusing to issue a receipt for the final prospectus for a distribution of securities of a company and the reasons for the refusal, (3) the Commission's decision to interfere with the Executive Director's decision not to issue a receipt for the final prospectus, with full citation of the Commission's reasons for its decision, (4) and a series of proposed rules and companion policy statements regarding trades and distributions outside and in Alberta. Text of a sample 'short form prospectus' was also included

  19. Optimum distributed generation placement with voltage sag effect minimization

    International Nuclear Information System (INIS)

    Biswas, Soma; Goswami, Swapan Kumar; Chatterjee, Amitava

    2012-01-01

    Highlights: ► A new optimal distributed generation placement algorithm is proposed. ► Optimal number, sizes and locations of the DGs are determined. ► Technical factors like line loss and the voltage sag problem are minimized. ► The percentage savings are optimized. - Abstract: The present paper proposes a new formulation for the optimum distributed generator (DG) placement problem which considers a hybrid combination of technical factors, like minimization of the line loss and reduction in the voltage sag problem, and economical factors, like installation and maintenance cost of the DGs. The new formulation is inspired by the idea that the optimum placement of the DGs can help in reducing and mitigating voltage dips in low voltage distribution networks. The problem is configured as a multi-objective, constrained optimization problem, where the optimal number of DGs, along with their sizes and bus locations, are simultaneously obtained. This problem has been solved using a genetic algorithm, a traditionally popular stochastic optimization algorithm. A few benchmark systems, radial and networked (such as the 34-bus radial distribution system, the 30-bus loop distribution system and the IEEE 14-bus system), are considered as case studies, where the effectiveness of the proposed algorithm is aptly demonstrated.
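
    The sketch below shows the overall shape of such a genetic-algorithm search for the placement decision alone (which buses receive a DG). The fitness is a deliberately simple stand-in that favours spreading the units along the feeder; the paper's actual objective combines line loss, voltage-sag mitigation and DG costs and would require a power-flow solver.

    import random

    random.seed(1)
    N_BUS, N_DG, POP, GENS = 34, 3, 40, 60

    def fitness(buses):
        # Placeholder objective (lower is better): reward well-separated DG buses.
        return -min(abs(a - b) for a in buses for b in buses if a != b)

    def random_individual():
        return tuple(sorted(random.sample(range(1, N_BUS + 1), N_DG)))

    def crossover(p1, p2):
        return tuple(sorted(random.sample(list(set(p1) | set(p2)), N_DG)))

    def mutate(ind):
        buses = list(ind)
        buses[random.randrange(N_DG)] = random.randint(1, N_BUS)
        return tuple(sorted(set(buses))) if len(set(buses)) == N_DG else ind

    pop = [random_individual() for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness)                 # best (lowest fitness) first
        elite = pop[: POP // 2]
        pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in elite]
    print("best placement (bus numbers):", min(pop, key=fitness))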

  20. A Monte Carlo simulation technique to determine the optimal portfolio

    Directory of Open Access Journals (Sweden)

    Hassan Ghodrati

    2014-03-01

    Full Text Available During the past few years, there have been several studies on portfolio management. One of the primary concerns in any stock market is to detect the risk associated with various assets. One of the recognized methods to measure, forecast, and manage the existing risk is Value at Risk (VaR), which has drawn much attention from financial institutions in recent years. VaR is a method for identifying and evaluating risk that uses standard statistical techniques, and it has increasingly been used in other fields as well. The present study measured the value at risk of 26 companies from the chemical industry on the Tehran Stock Exchange over the period 2009-2011 using Monte Carlo simulation at the 95% confidence level. The variable used in the study was the daily return resulting from daily changes in stock prices. Moreover, the optimal investment weight for each of the selected stocks was determined using a hybrid Markowitz and Winker model. The results showed that, at the 95% confidence level, the maximum loss would not exceed 1,259,432 Rials on the following day.
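
    A minimal Monte Carlo VaR sketch in the same spirit: simulate daily portfolio returns and read the 95% loss quantile off the simulated distribution. The return parameters, portfolio weights and position size are invented for illustration, not the study's estimates.

    import numpy as np

    rng = np.random.default_rng(42)
    mu = np.array([0.0004, 0.0002, 0.0003])    # assumed mean daily returns
    sigma = np.array([0.012, 0.018, 0.015])    # assumed daily volatilities
    weights = np.array([0.5, 0.3, 0.2])        # assumed portfolio weights
    position = 1_000_000                       # assumed portfolio value

    daily_returns = rng.normal(mu, sigma, size=(100_000, 3)) @ weights
    var_95 = -np.percentile(daily_returns, 5) * position
    print(f"1-day 95% VaR ≈ {var_95:,.0f}")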

  1. Electronic Attack Platform Placement Optimization

    Science.gov (United States)

    2014-09-01

  2. Sediment Placement Areas 2012

    Data.gov (United States)

    California Department of Resources — Dredge material placement sites (DMPS), including active, inactive, proposed and historical placement sites. Dataset covers US Army Corps of Engineers San Francisco...

  3. Climate, duration, and N placement determine N2 O emissions in reduced tillage systems: a meta-analysis.

    Science.gov (United States)

    van Kessel, Chris; Venterea, Rodney; Six, Johan; Adviento-Borbe, Maria Arlene; Linquist, Bruce; van Groenigen, Kees Jan

    2013-01-01

    No-tillage and reduced tillage (NT/RT) management practices are being promoted in agroecosystems to reduce erosion, sequester additional soil C and reduce production costs. The impact of NT/RT on N2 O emissions, however, has been variable with both increases and decreases in emissions reported. Herein, we quantitatively synthesize studies on the short- and long-term impact of NT/RT on N2 O emissions in humid and dry climatic zones with emissions expressed on both an area- and crop yield-scaled basis. A meta-analysis was conducted on 239 direct comparisons between conventional tillage (CT) and NT/RT. In contrast to earlier studies, averaged across all comparisons, NT/RT did not alter N2 O emissions compared with CT. However, NT/RT significantly reduced N2 O emissions in experiments >10 years, especially in dry climates. No significant correlation was found between soil texture and the effect of NT/RT on N2 O emissions. When fertilizer-N was placed at ≥5 cm depth, NT/RT significantly reduced area-scaled N2 O emissions, in particular under humid climatic conditions. Compared to CT under dry climatic conditions, yield-scaled N2 O increased significantly (57%) when NT/RT was implemented <10 years, but decreased significantly (27%) after ≥10 years of NT/RT. There was a significant decrease in yield-scaled N2 O emissions in humid climates when fertilizer-N was placed at ≥5 cm depth. Therefore, in humid climates, deep placement of fertilizer-N is recommended when implementing NT/RT. In addition, NT/RT practices need to be sustained for a prolonged time, particularly in dry climates, to become an effective mitigation strategy for reducing N2 O emissions. © 2012 Blackwell Publishing Ltd.

  4. Decision Models for Determining the Optimal Life Test Sampling Plans

    Science.gov (United States)

    Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.

    2010-11-01

    Life test sampling plan is a technique, which consists of sampling, inspection, and decision making in determining the acceptance or rejection of a batch of products by experiments for examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution, or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only can help producers save testing time, and reduce testing cost; but it also can positively affect the image of the product, and thus attract more consumers to buy it. This paper develops the frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans with an aim of cost minimization by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions with two unknown parameters are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.

  5. Determination of optimal levels of Resource Use in Clonal Robusta ...

    African Journals Online (AJOL)

    In spite of the current decline in World Market Prices, coffee remains Uganda's major export crop. However, coffee yield in Uganda remains sub optimal. One of the contributing factors could be an imbalance in the use of the factors of production. Optimal combinations of factors of production for clonal coffee production in ...

  6. Reliability of panoramic radiography in determination of neurosensory disturbances related to dental implant placement in posterior mandible.

    Science.gov (United States)

    Kütük, Nükhet; Gönen, Zeynep Burçin; Yaşar, M Taha; Demirbaş, Ahmet Emin; Alkan, Alper

    2014-12-01

    During implantology procedures, one of the most serious complications is damage to the inferior alveolar nerve, which may result in neurosensory disturbances (NSD). Panoramic radiographs have been considered a primary evaluation tool to determine the bone height and the implant-mandibular canal distance. One thousand five hundred ninety-seven panoramic radiographs of patients, who were treated with 3608 dental implants at Erciyes University Oral and Maxillofacial Hospital between 2007 and 2012, were examined. Using a 2-dimensional software program, 48 implants were determined to be closer than 2 mm to the mandibular canal, at distances ranging from 0 to 1.9 mm. Fourteen implants (29.16%) were placed at a distance of less than 1 mm from the mandibular canal, and 34 (70.83%) between 1 and 2 mm. One patient had NSD. Determination of the dental implant length using panoramic radiography is a reliable technique to prevent neurosensory complications. However, computed tomography or cone-beam computed tomography based planning of dental implants may be required for borderline cases.

  7. Heuristic Optimization Techniques for Determining Optimal Reserve Structure of Power Generating Systems

    DEFF Research Database (Denmark)

    Ding, Yi; Goel, Lalit; Wang, Peng

    2012-01-01

    the required level of supply reliability to its customers. In previous research, Genetic Algorithm (GA) has been used to solve most reliability optimization problems. However, the GA is not very computationally efficient in some cases. In this chapter a new heuristic optimization technique—the particle swarm...

  8. Using orthogonal design to determine optimal conditions for ...

    African Journals Online (AJOL)

    This study is important for the optimization of protoplast fusogen and washing solution system suitable for protoplast fusion between the Triticum aestivum and Aegilops. By enzymolysis, the result shows that more than 90% viable protoplasts of Mingxian169 (common wheat) and Y2155a (Aegilops) were efficiently obtained ...

  9. Topologically determined optimal stochastic resonance responses of spatially embedded networks

    International Nuclear Information System (INIS)

    Gosak, Marko; Marhl, Marko; Korosak, Dean

    2011-01-01

    We have analyzed the stochastic resonance phenomenon on spatial networks of bistable and excitable oscillators, which are connected according to their location and the amplitude of external forcing. By smoothly altering the network topology from a scale-free (SF) network with dominating long-range connections to a network where principally only adjacent oscillators are connected, we reveal that besides an optimal noise intensity, there is also a most favorable interaction topology at which the best correlation between the response of the network and the imposed weak external forcing is achieved. For various distributions of the amplitudes of external forcing, the optimal topology is always found in the intermediate regime between the highly heterogeneous SF network and the strong geometric regime. Our findings thus indicate that a suitable number of hubs and with that an optimal ratio between short- and long-range connections is necessary in order to obtain the best global response of a spatial network. Furthermore, we link the existence of the optimal interaction topology to a critical point indicating the transition from a long-range interactions-dominated network to a more lattice-like network structure.

  10. Workload Indicators Of Staffing Need Method in determining optimal ...

    African Journals Online (AJOL)

    ... available working hours, category and individual allowances, annual workloads from the previous year\\'s statistics and optimal departmental establishment of workers. Results: There was initial resentment to the exercise because of the notion that it was aimed at retrenching workers. The team was given autonomy by the ...

  11. Using orthogonal design to determine optimal conditions for ...

    African Journals Online (AJOL)

    AJB_YOMI

    2011-10-12

    Oct 12, 2011 ... This study is important for the optimization of protoplast fusogen and washing solution system suitable for protoplast fusion between the Triticum aestivum and Aegilops. By enzymolysis, the result shows that more than 90% viable protoplasts of Mingxian169 (common wheat) and Y2155a (Aegilops) were.

  12. Use of Simplex Method in Determination of Optimal Rational ...

    African Journals Online (AJOL)

    The optimal rational composition was found to be: Nsu Clay = 47.8%, quartz = 33.7% and CaCO3 = 18.5%. The other clay from Ukpor was found unsuitable at the firing temperature (1000°C) used. It showed bending strength lower than the standard requirement for all compositions studied. To improve the strength an ...

  13. A SVR Learning Based Sensor Placement Approach for Nonlinear Spatially Distributed Systems

    Directory of Open Access Journals (Sweden)

    Xian-xia Zhang

    2016-01-01

    Full Text Available Many industrial processes are inherently distributed in space and time and are called spatially distributed dynamical systems (SDDSs). Sensor placement affects how well the spatial distribution is captured and is therefore a crucial issue in modeling or controlling an SDDS. In this study, a new data-driven sensor placement method is developed. An SVR algorithm is used in a novel way to extract the characteristics of the spatial distribution from a spatiotemporal data set. The support vectors learned by SVR represent the crucial spatial data structure in the spatiotemporal data set, which can be employed to determine the optimal sensor locations and the number of sensors. A systematic sensor placement design scheme in three steps (data collection, SVR learning, and sensor locating) is developed for easy implementation. Finally, the effectiveness of the proposed sensor placement scheme is validated on two spatiotemporal 3D fuzzy controlled spatially distributed systems.
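
    A small one-dimensional sketch of the idea: fit an SVR to a spatial snapshot and treat the support-vector locations as candidate sensor sites, since these are the points the regression needs in order to reproduce the field. The synthetic field and kernel settings are assumptions for illustration.

    import numpy as np
    from sklearn.svm import SVR

    x = np.linspace(0, 1, 200).reshape(-1, 1)      # spatial coordinate
    y = (np.sin(2 * np.pi * x).ravel()
         + 0.3 * np.exp(-((x.ravel() - 0.7) / 0.05) ** 2))  # field with a local feature

    model = SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma=50.0)
    model.fit(x, y)

    sensor_sites = x[model.support_].ravel()       # candidate sensor locations
    print(f"{len(sensor_sites)} candidate sites, e.g. {np.round(sensor_sites[:5], 3)}")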

  14. Determination of Pareto frontier in multi-objective maintenance optimization

    International Nuclear Information System (INIS)

    Certa, Antonella; Galante, Giacomo; Lupo, Toni; Passannanti, Gianfranco

    2011-01-01

    The objective of a maintenance policy generally is to minimize the global maintenance cost, which involves not only the direct costs of maintenance actions and spare parts, but also the costs due to system stops for preventive maintenance and the downtime caused by failures. For some operating systems, the failure event can be dangerous, so they are required to operate with a very high reliability level between two consecutive fixed stops. The present paper attempts to identify the set of elements on which to perform maintenance actions so that the system can assure the required reliability level until the next fixed maintenance stop, minimizing both the global maintenance cost and the total maintenance time. In order to solve this constrained multi-objective optimization problem, an effective approach is proposed to obtain the best solutions (that is, the Pareto optimal frontier) among which the decision maker will choose the most suitable one. As is well known, describing the whole Pareto optimal frontier is generally a troublesome task. The paper proposes an algorithm able to rapidly overcome this problem, and its effectiveness is shown by an application to a case study regarding a complex series-parallel system.
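
    The selection step can be illustrated with a few lines of code: among candidate maintenance plans that satisfy the reliability requirement, keep only the non-dominated ones with respect to cost and duration. The candidate values below are randomly generated placeholders, not the paper's case-study data.

    import random

    random.seed(3)
    candidates = [{"cost": random.uniform(10, 100),
                   "time": random.uniform(5, 50),
                   "reliability": random.uniform(0.85, 0.999)} for _ in range(200)]
    feasible = [c for c in candidates if c["reliability"] >= 0.95]

    def dominates(a, b):
        # a dominates b if it is no worse in both objectives and better in one.
        return (a["cost"] <= b["cost"] and a["time"] <= b["time"]
                and (a["cost"] < b["cost"] or a["time"] < b["time"]))

    pareto = [c for c in feasible if not any(dominates(o, c) for o in feasible)]
    print(f"{len(pareto)} Pareto-optimal plans out of {len(feasible)} feasible")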

  15. Boat boarding ladder placement

    Science.gov (United States)

    1998-04-01

    Presented in three volumes; 'Boat Boarding Ladder Placement,' which explores safety considerations including potential for human contact with a rotating propeller; 'Boat Handhold Placement,' which explores essential principles and methods of fall con...

  16. Clinical practice placements in the community: a survey to determine if they reflect the shift in healthcare delivery from secondary to primary care settings.

    Science.gov (United States)

    Betony, Karen

    2012-01-01

    With the worldwide strategic shift of health care delivery from secondary to primary care settings, more newly qualified nurses are working in primary care, making exposure to the variety of roles available to nurses essential for future workforce development. The aim of this small research project was to explore whether English universities' programmes are providing clinical practice placement experiences which reflect the breadth and complexity of nursing roles available in primary care. A survey of academic staff highlighted that universities designed curricula based on local placement and mentor availability and while a variety of primary care teams are being used, district nursing teams continue to be used the most, particularly for substantive placements. The need for specified staff to work across university and placement settings was deemed essential for identifying and supporting community based clinical placements. Recommendations from the project include: an increasingly collaborative approach amongst clinical, academic and managerial staff to create a learning culture for all health professional students' practice experience; robust strategic systems to ensure clinical placements are offered by services on the periphery of a national health service; and focussing of resources on students with a desire to pursue a primary care career. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. A Placement Advisory Test

    Science.gov (United States)

    Hughes, Chris

    2010-01-01

    The primary method of placement at Portland CC (PCC) is the Compass Placement test. For the most part, students are placed correctly, but there are cases when students feel that they have been placed too low. In such cases we use our newly created Placement Advisory Test (PAT) to help us place them appropriately. (Contains 2 figures.)

  18. Model for determining and optimizing delivery performance in industrial systems

    Directory of Open Access Journals (Sweden)

    Fechete Flavia

    2017-01-01

    Full Text Available Performance means achieving organizational objectives, regardless of their nature and variety, and even exceeding them. Improving performance is one of the major goals of any company. Achieving global performance means not only obtaining economic performance; other functions must also be taken into account, such as quality, delivery, costs and even employee satisfaction. This paper aims to improve the delivery performance of an industrial system, which had been producing very poor results. The delivery performance took into account all categories of performance indicators, such as on-time delivery, backlog efficiency and transport efficiency. The research focused on optimizing the delivery performance of the industrial system using linear programming. Modeling the delivery function with linear programming yielded the precise quantities to be produced and delivered each month by the industrial system in order to minimize transport cost, satisfy customer orders and control stock. The optimization led to a substantial improvement in all four performance indicators that concern deliveries.
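
    A minimal linear-programming sketch of this kind of delivery model: choose the quantities shipped to each customer so that transport cost is minimized while orders are met and capacity is respected. Costs, demands and capacity are invented for illustration.

    import numpy as np
    from scipy.optimize import linprog

    cost = np.array([4.0, 6.0, 9.0])        # transport cost per unit, 3 customers
    demand = np.array([120.0, 80.0, 60.0])  # monthly customer orders
    capacity = 300.0                        # monthly production capacity

    # Minimize cost @ x  subject to  x >= demand  and  sum(x) <= capacity.
    result = linprog(c=cost,
                     A_ub=np.vstack([-np.eye(3), np.ones((1, 3))]),
                     b_ub=np.concatenate([-demand, [capacity]]),
                     bounds=[(0, None)] * 3)
    print("quantities:", result.x, "transport cost:", result.fun)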

  19. Optimizing direct amplification of forensic commercial kits for STR determination.

    Science.gov (United States)

    Caputo, M; Bobillo, M C; Sala, A; Corach, D

    2017-04-01

    Direct DNA amplification in forensic genotyping reduces analytical time when large sample sets are being analyzed. The amplification success depends mainly upon two factors: on one hand, the PCR chemistry and, on the other, the type of solid substrate where the samples are deposited. We developed a workflow strategy aiming to optimize times and cost when starting from blood samples spotted onto diverse absorbent substrates. A set of 770 blood samples spotted onto Blood cards, Whatman ® 3 MM paper, FTA™ Classic cards, and Whatman ® Grade 1 was analyzed by a unified working strategy including a low-cost pre-treatment, a PCR amplification volume scale-down, and the use of the 3500 Genetic Analyzer as the analytical platform. Samples were analyzed using three different commercial multiplex STR direct amplification kits. The efficiency of the strategy was evidenced by a higher percentage of high-quality profiles obtained (over 94%), a reduced number of re-injections (average 3.2%), and a reduced amplification failure rate (lower than 5%). Average peak height ratio among different commercial kits was 0.91, and the intra-locus balance showed values ranging from 0.92 to 0.94. A comparison with previously reported results was performed demonstrating the efficiency of the proposed modifications. The protocol described herein showed high performance, producing optimal quality profiles, and being both time and cost effective. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  20. Determination and optimization of spatial samples for distributed measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Huo, Xiaoming (Georgia Institute of Technology, Atlanta, GA); Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong (Georgia Institute of Technology, Atlanta, GA)

    2010-10-01

    There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.

  1. Optimization of operating conditions in a high-shear mixer using dem model: determination of optimal fill level.

    Science.gov (United States)

    Terashita, Keijiro; Nishimura, Takehiko; Natsuyama, Susumu

    2002-12-01

    For the purpose of evaluating optimal fill level of starting materials in a high-shear mixer, discrete element method (DEM) simulation was conducted to visualize kinetic status between particles. The simulation results obtained by changing fill levels were used to determine solid fraction of particles, particle velocity, particle velocity vector, and kinetic energy and discuss the flow pattern. Optimal fill level was obtained from the information on these matters. It was pointed out that understanding the kinetic energy between particles in an agitating vessel was effective in determining the optimal fill level. Granulation experiment was conducted to validate the optimal fill level obtained by the simulation, confirming the good agreement between these two results. It was pointed out that determination of kinetic energy between particles through the simulation was effective in obtaining an index of the kinetic status of particles. Further, it was confirmed that the simulation could provide more information than conventional granulation experiments could provide and also helpful in optimizing the operating conditions.

  2. Determination of the geomagnetic external contribution by nonlinear optimization methods

    International Nuclear Information System (INIS)

    Comisel, H.; Popa, L.

    1993-07-01

    The fluctuations of the geomagnetic field have been determined from magnetometric data in the framework of the AKTIVE experiment. Using an approximate model which describes the oscillating motion of the satellite, the parameters of motion have also been calculated. (author). 7 refs, 7 figs, 1 tab

  3. A projection method for under determined optimal experimental designs

    KAUST Repository

    Long, Quan

    2014-01-09

    A new implementation, based on the Laplace approximation, was developed in (Long, Scavino, Tempone, & Wang 2013) to accelerate the estimation of the post–experimental expected information gains in the model parameters and predictive quantities of interest. A closed–form approximation of the inner integral and the order of the corresponding dominant error term were obtained in the cases where the parameters are determined by the experiment. In this work, we extend that method to the general cases where the model parameters could not be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the corresponding Jacobian matrix, so that the information gain (Kullback–Leibler divergence) can be reduced to an integration against the marginal density of the transformed parameters which are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the projected posterior covariance matrix. To deal with the issue of dimensionality in a complex problem, we use Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand function. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear under determined numerical examples.

  4. Determinants of optimal adherence to antiretroviral therapy among ...

    African Journals Online (AJOL)

    Background: Successful Antiretroviral therapy (ART) was shown to rely on high levels of medication adherence to enable maximum and durable viral suppression for the prolongation of life among people living with HIV/AIDS. Objective: The study sought to determine individual and environmental factors that influence ...

  5. Determining optimal pinger spacing for harbour porpoise bycatch mitigation

    DEFF Research Database (Denmark)

    Larsen, Finn; Krog, Carsten; Eigaard, Ole Ritzau

    2013-01-01

    A trial was conducted in the Danish North Sea hake gillnet fishery in July to September 2006 to determine whether the spacing of the Aquatec AQUAmark100 pinger could be increased without reducing the effectiveness of the pinger in mitigating harbour porpoise bycatch. The trial was designed as a c...

  6. Method for determining optimal supercell representation of interfaces.

    Science.gov (United States)

    Stradi, Daniele; Jelver, Line; Smidstrup, Søren; Stokbro, Kurt

    2017-05-10

    The geometry and structure of an interface ultimately determines the behavior of devices at the nanoscale. We present a generic method to determine the possible lattice matches between two arbitrary surfaces and to calculate the strain of the corresponding matched interface. We apply this method to explore two relevant classes of interfaces for which accurate structural measurements of the interface are available: (i) the interface between pentacene crystals and the (1 1 1) surface of gold, and (ii) the interface between the semiconductor indium-arsenide and aluminum. For both systems, we demonstrate that the presented method predicts interface geometries in good agreement with those measured experimentally, which present nontrivial matching characteristics and would be difficult to guess without relying on automated structure-searching methods.

  7. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    Energy Technology Data Exchange (ETDEWEB)

    Bonney, Matthew S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Brake, Matthew R.W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation, where the first four statistical moments are used for comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared on the time required to evaluate each model, where the Meta-Model requires the least computation time by a significant margin. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte-Carlo simulation along with a reduced simulation using Latin Hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to exhaustive sampling for the majority of methods.

  8. Paving the Way for Implant Placement for an Auricular Prosthesis

    Directory of Open Access Journals (Sweden)

    Dipti S Shah

    2013-01-01

    Full Text Available Background: Ideal placement of bone-integrated implants to retain a prosthesis is critical for a successful final prosthetic restoration. Several sources have described the importance and use of surgical templates for the optimal placement of extraoral implants. The literature is replete with information explaining the use of surgical templates for intraoral implant placement. Indeed, correct placement of implants facilitates creating a prosthesis that functions well and looks natural. To ensure proper implant placement, considerable effort should go into pre-surgical planning. It is clear that extraoral surgical templates aid in proper implant placement, yet the literature describing their fabrication is limited. This article describes different methods for fabrication of a surgical template for the placement of implants for an auricular prosthesis.

  9. Determination of Optimal Double Sampling Plan using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Sampath Sundaram

    2012-03-01

    Full Text Available Designing a double sampling plan requires identification of sample sizes and acceptance numbers. In this paper a genetic algorithm has been designed for the selection of optimal acceptance numbers and sample sizes for the specified producer's risk and consumer's risk. Implementation of the algorithm has been illustrated numerically for different choices of quantities involved in a double sampling plan.

  1. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Shengliang Zong

    2017-01-01

    Full Text Available We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval between two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and that the repair time is exponentially distributed with a geometrically increasing mean. Our objective is to minimize the expected average cost under an availability requirement. Based on this average cost function, we propose a genetic algorithm to locate the optimal replacement policy N that minimizes the average cost rate. The results show that the GA is effective and efficient in finding optimal solutions. The availability of the equipment has a significant effect on the optimal replacement policy. Many practical systems fit the model developed in this paper.

  2. A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters.

    Science.gov (United States)

    Ren, Min; Liu, Peiyu; Wang, Zhihao; Yi, Jing

    2016-01-01

    For the shortcoming of fuzzy c -means algorithm (FCM) needing to know the number of clusters in advance, this paper proposed a new self-adaptive method to determine the optimal number of clusters. Firstly, a density-based algorithm was put forward. The algorithm, according to the characteristics of the dataset, automatically determined the possible maximum number of clusters instead of using the empirical rule [Formula: see text] and obtained the optimal initial cluster centroids, improving the limitation of FCM that randomly selected cluster centroids lead the convergence result to the local minimum. Secondly, this paper, by introducing a penalty function, proposed a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensured that when the number of clusters verged on that of objects in the dataset, the value of clustering validity index did not monotonically decrease and was close to zero, so that the optimal number of clusters lost robustness and decision function. Then, based on these studies, a self-adaptive FCM algorithm was put forward to estimate the optimal number of clusters by the iterative trial-and-error process. At last, experiments were done on the UCI, KDD Cup 1999, and synthetic datasets, which showed that the method not only effectively determined the optimal number of clusters, but also reduced the iteration of FCM with the stable clustering result.
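
    The sketch below conveys the same workflow with classical ingredients: run plain fuzzy c-means for several candidate cluster counts and keep the count with the best Xie-Beni validity index. The Xie-Beni index is used here as a stand-in for the paper's new validity index, and the synthetic 2-D data are an assumption.

    import numpy as np

    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(center, 0.3, size=(60, 2))
                      for center in ([0, 0], [3, 0], [0, 3])])

    def fcm(x, c, m=2.0, iters=100):
        u = rng.dirichlet(np.ones(c), size=len(x))              # fuzzy memberships
        for _ in range(iters):
            um = u ** m
            centers = um.T @ x / um.sum(axis=0)[:, None]
            dist = np.linalg.norm(x[:, None, :] - centers[None], axis=2) + 1e-12
            inv = dist ** (-2.0 / (m - 1.0))
            u = inv / inv.sum(axis=1, keepdims=True)
        return u, centers

    def xie_beni(x, u, centers, m=2.0):
        dist2 = np.linalg.norm(x[:, None, :] - centers[None], axis=2) ** 2
        separation = min(np.sum((a - b) ** 2)
                         for i, a in enumerate(centers) for b in centers[i + 1:])
        return (u ** m * dist2).sum() / (len(x) * separation)

    best_c = min(range(2, 7), key=lambda c: xie_beni(data, *fcm(data, c)))
    print("estimated number of clusters:", best_c)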

  3. A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters

    Science.gov (United States)

    Wang, Zhihao; Yi, Jing

    2016-01-01

    For the shortcoming of fuzzy c-means algorithm (FCM) needing to know the number of clusters in advance, this paper proposed a new self-adaptive method to determine the optimal number of clusters. Firstly, a density-based algorithm was put forward. The algorithm, according to the characteristics of the dataset, automatically determined the possible maximum number of clusters instead of using the empirical rule n and obtained the optimal initial cluster centroids, improving the limitation of FCM that randomly selected cluster centroids lead the convergence result to the local minimum. Secondly, this paper, by introducing a penalty function, proposed a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensured that when the number of clusters verged on that of objects in the dataset, the value of clustering validity index did not monotonically decrease and was close to zero, so that the optimal number of clusters lost robustness and decision function. Then, based on these studies, a self-adaptive FCM algorithm was put forward to estimate the optimal number of clusters by the iterative trial-and-error process. At last, experiments were done on the UCI, KDD Cup 1999, and synthetic datasets, which showed that the method not only effectively determined the optimal number of clusters, but also reduced the iteration of FCM with the stable clustering result. PMID:28042291

  4. HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.

    Science.gov (United States)

    Juusola, Jessie L; Brandeau, Margaret L

    2016-04-01

    To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
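
    Under linearity assumptions of this kind, the allocation logic reduces to funding programs in order of health benefit per dollar until the budget is exhausted, as in the sketch below. The budget, program costs and QALY figures are illustrative assumptions, not the paper's estimates.

    budget = 50_000_000.0
    programs = {                 # full-scale cost ($), QALYs gained at full scale
        "CBE":  (5_000_000.0, 4_000.0),
        "ART":  (60_000_000.0, 30_000.0),
        "PrEP": (80_000_000.0, 10_000.0),
    }

    allocation, total_qalys = {}, 0.0
    for name, (cost, qalys) in sorted(programs.items(),
                                      key=lambda kv: kv[1][1] / kv[1][0],
                                      reverse=True):          # best value first
        spend = min(cost, budget)
        allocation[name] = spend
        total_qalys += qalys * spend / cost   # linear scale-up assumption
        budget -= spend

    print(allocation, round(total_qalys), "QALYs")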

  5. Node Voltage Improvement by Capacitor Placement in Distribution Network : A Soft Computing Approach

    OpenAIRE

    SHWETA SARKAR,; SANDEEP CHAKRAVORTY

    2010-01-01

    This paper deals with a genetic algorithm based approach for determining the optimum placement location of capacitors in a radial distribution system, which is obtained after optimum reconfiguration. Reduction of total losses in the distribution system is essential to improve the overall efficiency of power delivery. This can be achieved by placing the optimal value of capacitors at proper locations in radial distribution systems. The proposed methodology is a genetic approach based algorithm. Th...

  6. Take me where I want to go: Institutional prestige, advisor sponsorship, and academic career placement preferences.

    Science.gov (United States)

    Pinheiro, Diogo L; Melkers, Julia; Newton, Sunni

    2017-01-01

    Placement in prestigious research institutions for STEM (science, technology, engineering, and mathematics) PhD recipients is generally considered to be optimal. Yet some doctoral recipients are not interested in intensive research careers and instead seek alternative careers, outside but also within academe (for example teaching positions in Liberal Arts Schools). Recent attention to non-academic pathways has expanded our understanding of alternative PhD careers. However, career preferences and placements are also nuanced along the academic pathway. Existing research on academic careers (mostly research-centric) has found that certain factors have a significant impact on the prestige of both the institutional placement and the salary of PhD recipients. We understand less, however, about the functioning of career preferences and related placements outside of the top academic research institutions. Our work builds on prior studies of academic career placement to explore the impact that prestige of PhD-granting institution, advisor involvement, and cultural capital have on the extent to which STEM PhDs are placed in their preferred academic institution types. What determines whether an individual with a preference for research oriented institutions works at a Research Extensive university? Or whether an individual with a preference for teaching works at a Liberal Arts college? Using survey data from a nationally representative sample of faculty in biology, biochemistry, civil engineering and mathematics at four different Carnegie Classified institution types (Research Extensive, Research Intensive, Master's I & II, and Liberal Arts Colleges), we examine the relative weight of different individual and institutional characteristics on institutional type placement. We find that doctoral institutional prestige plays a significant role in matching individuals with their preferred institutional type, but that advisor involvement only has an impact on those with a

  7. Determination and optimization of the ζ potential in boron electrophoretic deposition on aluminium substrates

    International Nuclear Information System (INIS)

    Oliveira Sampa, M.H. de; Vinhas, L.A.; Pino, E.S.

    1991-05-01

    In this work we present an introduction to the electrophoretic process, followed by a detailed experimental treatment of the technique used in the determination and optimization of the ζ-potential, mainly as a function of the electrolyte concentration, in high-purity boron electrophoretic deposition on aluminium substrates used as electrodes in neutron detectors. (author)

  8. Optimal fluorescence waveband determination for detecting defect cherry tomatoes using fluorescence excitation-emission matrix

    Science.gov (United States)

    A multi-spectral fluorescence imaging technique was used to detect defect cherry tomatoes. The fluorescence excitation and emission matrix was used to measure for defects, sound surface, and stem areas to determine the optimal fluorescence excitation and emission wavelengths for discrimination. Two-...

  9. Optimal siting and sizing of wind farms

    NARCIS (Netherlands)

    Cetinay-Iyicil, H.; Kuipers, F.A.; Guven, A. Nezih

    2017-01-01

    In this paper, we propose a novel technique to determine the optimal placement of wind farms, thereby taking into account wind characteristics and electrical grid constraints. We model the long-term variability of wind speed using a Weibull distribution according to wind direction intervals, and
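
    A small sketch of the wind-modelling ingredient mentioned above: draw wind speeds from a Weibull distribution and estimate a turbine's capacity factor with a simplified piecewise power curve. The Weibull parameters and turbine characteristics are assumptions, and the directional binning used in the paper is omitted.

    import numpy as np

    rng = np.random.default_rng(7)
    k, c = 2.0, 8.0                              # assumed Weibull shape, scale (m/s)
    v = c * rng.weibull(k, size=100_000)         # sampled wind speeds

    def power_mw(v, rated=3.0, cut_in=3.5, rated_speed=12.0, cut_out=25.0):
        """Simplified turbine power curve: cubic ramp between cut-in and rated."""
        ramp = rated * ((v - cut_in) / (rated_speed - cut_in)) ** 3
        p = np.where((v >= cut_in) & (v < rated_speed), ramp, 0.0)
        return np.where((v >= rated_speed) & (v < cut_out), rated, p)

    print(f"capacity factor ≈ {power_mw(v).mean() / 3.0:.2f}")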

  10. Comparison of different techniques of laryngeal mask placement in children.

    Science.gov (United States)

    Ghai, Babita; Wig, Jyotsna

    2009-06-01

    The insertion of laryngeal mask airway is not always easy in children, and many techniques are described to improve success rate of placement. It is very important to determine the optimal insertion technique as unsuccessful prolonged insertion and multiple attempts are associated with adverse respiratory events and trauma in children. This article will review different techniques studied recently for the placement of classical laryngeal mask airway in children as well as recent findings of cuff pressure and depth of anesthesia for laryngeal mask airway placement. Laryngeal mask airway in children has undergone many modifications such as ProSeal laryngeal mask airway to improve its functioning. This article will also review different insertion techniques for ProSeal laryngeal mask airway. Rotational technique with partially inflated cuff is reported to have the highest success rate of insertion and lowest incidence of complications for classical laryngeal mask airway in children. Clinical endpoints for cuff inflation are associated with significant hyperinflation and increased leakage around the laryngeal mask airway cuff. The inferences regarding the dosage of intravenous anesthetic agents and end-tidal concentration of volatile anesthetics in children to achieve adequate depth for laryngeal mask airway placement are very difficult to draw. ProSeal laryngeal mask airway is associated with a very high first attempt success and overall success of insertion in children. Rotational technique may be considered as the first technique of choice for classical laryngeal mask airway insertion in children. The routine use of cuff pressure monitoring is mandatory during the use of laryngeal mask airway in children. Modification of laryngeal mask airway in children, that is ProSeal laryngeal mask airway, is promising and improves the success rate of insertion.

  11. Experimental determination of optimal clamping torque for AB-PEM Fuel cell

    Directory of Open Access Journals (Sweden)

    Noor Ul Hassan

    2016-04-01

    Full Text Available A polymer electrolyte membrane (PEM) fuel cell is an electrochemical device producing electricity by the reaction of hydrogen and oxygen without combustion. A PEM fuel cell stack is provided with an appropriate clamping torque to prevent leakage of reactant gases and to minimize the contact resistance between the gas diffusion media (GDL) and the bipolar plates. The GDL porous structure and gas permeability are directly affected by the compaction pressure, which consequently drastically changes the fuel cell performance. Various efforts have been made to determine the optimal compaction pressure and pressure distributions through simulation and experimentation. Lower compaction pressure results in increased contact resistance and a higher chance of leakage. On the other hand, higher compaction pressure decreases the contact resistance but also narrows the diffusion path for mass transfer from the gas channels to the catalyst layers, consequently lowering cell performance. The optimal cell performance is related to the gasket thickness and the compression pressure on the GDL. Every stack has a unique assembly pressure due to differences in fuel cell component materials and stack design. Therefore, there is still a need to determine the optimal torque value for obtaining the optimal cell performance. This study was carried out in continuation of the development of an air-breathing PEM fuel cell for small Unmanned Aerial Vehicle (UAV) applications. The compaction pressure at minimum contact resistance was determined and the clamping torque value was calculated accordingly. Single cell performance tests were performed at five different clamping torque values, i.e. 0.5, 1.0, 1.5, 2.0 and 2.5 N m, to achieve optimal cell performance. Clamping pressure distribution tests were also performed at these torque values to verify uniform pressure distribution at the optimal torque value. Experimental and theoretical results were compared for making inferences about optimal cell performance. A

  12. A New Method for Determining Optimal Regularization Parameter in Near-Field Acoustic Holography

    Directory of Open Access Journals (Sweden)

    Yue Xiao

    2018-01-01

    Full Text Available The Tikhonov regularization method is effective in stabilizing the reconstruction process of near-field acoustic holography (NAH) based on the equivalent source method (ESM), and the selection of the optimal regularization parameter is a key problem that determines the regularization effect. In this work, a new method for determining the optimal regularization parameter is proposed. The transfer matrix relating the source strengths of the equivalent sources to the measured pressures on the hologram surface is augmented by adding a fictitious point source with zero strength. The minimization of the norm of this fictitious point source strength is used as the criterion for choosing the optimal regularization parameter, since the reconstructed value should tend to zero. The original inverse problem of calculating the source strengths is converted into a univariate optimization problem, which is solved by a one-dimensional search technique. Two numerical simulations, with a point-driven simply supported plate and a pulsating sphere, are investigated to validate the performance of the proposed method by comparison with the L-curve method. The results demonstrate that the proposed method can determine the regularization parameter correctly and effectively for the reconstruction in NAH.
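
    A generic sketch of the inverse step this method stabilizes: reconstruct equivalent-source strengths q from hologram pressures p = Gq + noise with Tikhonov regularization, sweeping the parameter and selecting it by generalized cross-validation. GCV is used here only as a stand-in for the paper's fictitious-source criterion; the transfer matrix and noise level are synthetic.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 64, 32
    G = rng.normal(size=(m, n)) @ np.diag(1.0 / np.arange(1, n + 1))  # ill-conditioned
    q_true = rng.normal(size=n)
    p = G @ q_true + 0.01 * rng.normal(size=m)

    def gcv(lam):
        influence = G @ np.linalg.solve(G.T @ G + lam**2 * np.eye(n), G.T)
        residual = p - influence @ p
        return (residual @ residual) / np.trace(np.eye(m) - influence) ** 2

    lam_opt = min(np.logspace(-6, 1, 60), key=gcv)
    q_rec = np.linalg.solve(G.T @ G + lam_opt**2 * np.eye(n), G.T @ p)
    rel_err = np.linalg.norm(q_rec - q_true) / np.linalg.norm(q_true)
    print(f"lambda ≈ {lam_opt:.2e}, relative reconstruction error {rel_err:.3f}")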

  13. Optimal Fluorescence Waveband Determination for Detecting Defective Cherry Tomatoes Using a Fluorescence Excitation-Emission Matrix

    Directory of Open Access Journals (Sweden)

    In-Suck Baek

    2014-11-01

    Full Text Available A multi-spectral fluorescence imaging technique was used to detect defective cherry tomatoes. The fluorescence excitation and emission matrix was used to measure defect, sound surface and stem areas to determine the optimal fluorescence excitation and emission wavelengths for discrimination. Two-way ANOVA revealed that the optimal excitation wavelength for detecting defect areas was 410 nm. Principal component analysis (PCA) was applied to the fluorescence emission spectra of all regions at 410 nm excitation to determine the emission wavelengths for defect detection. The major emission wavelengths for the detection were 688 nm and 506 nm. Fluorescence images combined with the determined emission wavebands demonstrated the feasibility of detecting defective cherry tomatoes with >98% accuracy. Multi-spectral fluorescence imaging has potential utility in non-destructive quality sorting of cherry tomatoes.

  14. Determination of the optimized single-layer ionospheric height for electron content measurements over China

    Science.gov (United States)

    Li, Min; Yuan, Yunbin; Zhang, Baocheng; Wang, Ningbo; Li, Zishen; Liu, Xifeng; Zhang, Xiao

    2018-02-01

    The ionosphere effective height (IEH) is a very important parameter in total electron content (TEC) measurements under the widely used single-layer model assumption. To overcome the requirement of a large amount of simultaneous vertical and slant ionospheric observations or dense "coinciding" pierce points data, a new approach comparing the converted vertical TEC (VTEC) value using mapping function based on a given IEH with the "ground truth" VTEC value provided by the combined International GNSS Service Global Ionospheric Maps is proposed for the determination of the optimal IEH. The optimal IEH in the Chinese region is determined using three different methods based on GNSS data. Based on the ionosonde data from three different locations in China, the altitude variation of the peak electron density (hmF2) is found to have clear diurnal, seasonal and latitudinal dependences, and the diurnal variation of hmF2 varies from approximately 210 to 520 km in Hainan. The determination of the optimal IEH employing the inverse method suggested by Birch et al. (Radio Sci 37, 2002. doi: 10.1029/2000rs002601) did not yield a consistent altitude in the Chinese region. Tests of the method minimizing the mapping function errors suggested by Nava et al. (Adv Space Res 39:1292-1297, 2007) indicate that the optimal IEH ranges from 400 to 600 km, and the height of 450 km is the most frequent IEH at both high and low solar activities. It is also confirmed that the IEH of 450-550 km is preferred for the Chinese region instead of the commonly adopted 350-450 km using the determination method of the optimal IEH proposed in this paper.

  15. Automated Fiber Placement of PEEK/IM7 Composites with Film Interleaf Layers

    Science.gov (United States)

    Hulcher, A. Bruce; Banks, William I., III; Pipes, R. Byron; Tiwari, Surendra N.; Cano, Roberto J.; Johnston, Norman J.; Clinton, R. G., Jr. (Technical Monitor)

    2001-01-01

    The incorporation of thin discrete layers of resin between plies (interleafing) has been shown to improve fatigue and impact properties of structural composite materials. Furthermore, interleafing could be used to increase the barrier properties of composites used as structural materials for cryogenic propellant storage. In this work, robotic heated-head tape placement of PEEK/IM7 composites containing a PEEK polymer film interleaf was investigated. These experiments were carried out at the NASA Langley Research Center automated fiber placement facility. Using the robotic equipment, an optimal fabrication process was developed for the composite without the interleaf. Preliminary interleaf processing trials indicated that a two-stage process was necessary; the film had to be tacked to the partially placed laminate and then fully melted in a separate operation. Screening experiments determined the relative influence of the various robotic process variables on the peel strength of the film-composite interface. Optimization studies were performed in which peel specimens were fabricated at various compaction loads and roller temperatures at each of three film melt processing rates. The resulting data were fitted with quadratic response surfaces. Additional specimens were fabricated at placement parameters predicted by the response surface models to yield high peel strength, in an attempt to gauge the accuracy of the predicted response and assess the repeatability of the process. The overall results indicate that quality PEEK/IM7 laminates having film interleaves can be successfully and repeatably fabricated by heated-head automated fiber placement.
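
    The response-surface step can be pictured with a short fit-and-search sketch: fit a quadratic surface peel_strength ≈ f(load, temperature) to a handful of trials and locate its maximum on a grid. The trial data below are invented for illustration, not the measured NASA Langley results.

    import numpy as np

    # (compaction load [N], roller temperature [°C], peel strength) -- assumed data
    trials = np.array([
        [300, 350, 1.1], [300, 400, 1.6], [300, 450, 1.5],
        [500, 350, 1.4], [500, 400, 2.1], [500, 450, 1.9],
        [700, 350, 1.2], [700, 400, 1.8], [700, 450, 1.6],
    ])
    L, T, y = trials[:, 0], trials[:, 1], trials[:, 2]
    X = np.column_stack([np.ones_like(L), L, T, L * T, L**2, T**2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # quadratic response surface

    Lg, Tg = np.meshgrid(np.linspace(300, 700, 81), np.linspace(350, 450, 81))
    Z = (coef[0] + coef[1] * Lg + coef[2] * Tg
         + coef[3] * Lg * Tg + coef[4] * Lg**2 + coef[5] * Tg**2)
    i = np.unravel_index(np.argmax(Z), Z.shape)
    print(f"predicted best: load ≈ {Lg[i]:.0f} N, temperature ≈ {Tg[i]:.0f} °C")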

  16. Fine Control of Local Whitespace in Placement

    Directory of Open Access Journals (Sweden)

    Jarrod A. Roy

    2008-01-01

    Full Text Available In modern design methodologies, a large fraction of chip area during placement is left unused by standard cells and allocated as “whitespace.” This is done for a variety of reasons including the need for subsequent buffer insertion, as a means to ensure routability, signal integrity, and low coupling capacitance between wires, and to improve yield through DFM optimizations. To this end, layout constraints often require a certain minimum fraction of whitespace in each region of the chip. Our work introduces several techniques for allocation of whitespace in global, detail, and incremental placement. Our experiments show how to efficiently improve wirelength by reallocating whitespace in legal placements at the large scale. Additionally, for the first time in the literature, we empirically demonstrate high-precision control of whitespace in designs with macros and obstacles. Our techniques consistently improve the quality of whitespace allocation of top-down as well as analytical placement methods and achieve low penalties on designs from the ISPD 2006 placement contest with minimal interconnect increase.

  17. Critical Path-Based Thread Placement for NUMA Systems

    Energy Technology Data Exchange (ETDEWEB)

    Su, C Y; Li, D; Nikolopoulos, D S; Grove, M; Cameron, K; de Supinski, B R

    2011-11-01

    Multicore multiprocessors use a Non Uniform Memory Architecture (NUMA) to improve their scalability. However, NUMA introduces performance penalties due to remote memory accesses. Without efficiently managing data layout and thread mapping to cores, scientific applications, even if they are optimized for NUMA, may suffer performance loss. In this paper, we present algorithms and a runtime system that optimize the execution of OpenMP applications on NUMA architectures. By collecting information from hardware counters, the runtime system directs thread placement and reduces performance penalties by minimizing the critical path of OpenMP parallel regions. The runtime system uses a scalable algorithm that derives placement decisions with negligible overhead. We evaluate our algorithms and runtime system with four NPB applications implemented in OpenMP. On average the algorithms achieve between 8.13% and 25.68% performance improvement compared to the default Linux thread placement scheme. The algorithms miss the optimal thread placement in only 8.9% of the cases.

  18. Multi-type sensor placement and response reconstruction for building structures: Experimental investigations

    Science.gov (United States)

    Hu, Rong-Pan; Xu, You-Lin; Zhan, Sheng

    2018-01-01

    Estimation of lateral displacement and acceleration responses is essential to assess safety and serviceability of high-rise buildings under dynamic loadings including earthquake excitations. However, the measurement information from the limited number of sensors installed in a building structure is often insufficient for the complete structural performance assessment. An integrated multi-type sensor placement and response reconstruction method has thus been proposed by the authors to tackle this problem. To validate the feasibility and effectiveness of the proposed method, an experimental investigation using a cantilever beam with multi-type sensors is performed and reported in this paper. The experimental setup is first introduced. The finite element modelling and model updating of the cantilever beam are then performed. The optimal sensor placement for the best response reconstruction is determined by the proposed method based on the updated FE model of the beam. After the sensors are installed on the physical cantilever beam, a number of experiments are carried out. The responses at key locations are reconstructed and compared with the measured ones. The reconstructed responses achieve a good match with the measured ones, manifesting the feasibility and effectiveness of the proposed method. Besides, the proposed method is also examined for the cases of different excitations and unknown excitation, and the results prove the proposed method to be robust and effective. The superiority of the optimized sensor placement scheme is finally demonstrated through comparison with two other different sensor placement schemes: the accelerometer-only scheme and non-optimal sensor placement scheme. The proposed method can be applied to high-rise buildings for seismic performance assessment.

  19. Placement Design of Changeable Message Signs on Curved Roadways

    Directory of Open Access Journals (Sweden)

    Zhongren Wang, Ph.D. P.E. T.E.

    2015-01-01

    Full Text Available This paper presents a fundamental framework for Changeable Message Sign (CMS) placement design along roadways with horizontal curves. This analytical framework determines the available distance for motorists to read and react to CMS messages based on CMS character height, driver's cone of vision, CMS pixel's cone of legibility, roadway horizontal curve radius, and CMS lateral and vertical placement. Sample design charts were developed to illustrate how the analytical framework may facilitate CMS placement design.

  20. A parameter optimization method to determine ski stiffness properties from ski deformation data.

    Science.gov (United States)

    Heinrich, Dieter; Mössner, Martin; Kaps, Peter; Nachbauer, Werner

    2011-02-01

    The deformation of skis and the contact pressure between skis and snow are crucial factors for carved turns in alpine skiing. The purpose of the current study was to develop and to evaluate an optimization method to determine the bending and torsional stiffness that lead to a given bending and torsional deflection of the ski. Euler-Bernoulli beam theory and classical torsion theory were applied to model the deformation of the ski. Bending and torsional stiffness were approximated as linear combinations of B-splines. To compute the unknown coefficients, a parameter optimization problem was formulated and successfully solved by multiple shooting and least squares data fitting. The proposed optimization method was evaluated based on ski stiffness data and ski deformation data taken from a recently published simulation study. The ski deformation data were used as input data to the optimization method. The optimization method was capable of successfully reproducing the shape of the original bending and torsional stiffness data of the ski with a root mean square error below 1 N m2. In conclusion, the proposed computational method offers the possibility to calculate ski stiffness properties with respect to a given ski deformation.
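
    A simplified sketch of the fitting idea (B-spline stiffness coefficients adjusted by least squares so that the modelled deflection matches the measured one) is given below. It replaces the paper's multiple-shooting formulation with a plain clamped beam under a tip load and uses synthetic deflection data; all parameter values are illustrative.

      import numpy as np
      from scipy.interpolate import BSpline
      from scipy.optimize import least_squares
      from scipy.integrate import cumulative_trapezoid

      L, P = 1.6, 200.0                       # ski length [m] and tip load [N] (illustrative)
      x = np.linspace(0.0, L, 200)
      M = P * (L - x)                         # bending moment of a clamped beam with a tip load

      k = 3                                   # cubic B-splines for the stiffness profile EI(x)
      knots = np.concatenate([[0.0] * k, np.linspace(0.0, L, 6), [L] * k])
      n_coef = len(knots) - k - 1

      def deflection(log_c):
          """Euler-Bernoulli deflection for EI(x) given by B-spline coefficients exp(log_c)."""
          EI = BSpline(knots, np.exp(log_c), k)(x)         # exp() keeps the stiffness positive
          slope = cumulative_trapezoid(M / EI, x, initial=0.0)
          return cumulative_trapezoid(slope, x, initial=0.0)

      # Synthetic "measured" deflection generated from a known stiffness profile
      true_log_c = np.log(np.linspace(900.0, 300.0, n_coef))    # stiffer near the binding area
      w_meas = deflection(true_log_c)

      # Least-squares fit of the stiffness coefficients to the deflection data
      fit = least_squares(lambda c: deflection(c) - w_meas, x0=np.full(n_coef, np.log(600.0)))
      rms = np.sqrt(np.mean((deflection(fit.x) - w_meas) ** 2))
      print(f"RMS deflection misfit after fitting: {rms:.2e} m")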

  1. Determining the optimal number of Kanban in multi-products supply chain system

    Science.gov (United States)

    Widyadana, G. A.; Wee, H. M.; Chang, Jer-Yuan

    2010-02-01

    Kanban, a key element of the just-in-time system, is a re-order card or signboard giving instruction or triggering the pull system to manufacture or supply a component based on actual usage of material. There are two types of Kanban: production Kanban and withdrawal Kanban. This study uses optimal and meta-heuristic methods to determine the Kanban quantity and withdrawal lot sizes in a supply chain system. Although the mixed integer programming (MIP) method gives an optimal solution, it is not time efficient. For this reason, the meta-heuristic methods are suggested. In this study, a genetic algorithm (GA) and a hybrid of genetic algorithm and simulated annealing (GASA) are used. The study compares the performance of GA and GASA with that of the optimal method using MIP. The given problems show that both GA and GASA result in a near optimal solution, and they outdo the optimal method in terms of run time. In addition, the GASA heuristic method gives a better performance than the GA heuristic method.
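
    As a toy illustration of the meta-heuristic side of this record, the sketch below runs a small genetic algorithm over integer Kanban counts for three products against a made-up holding-versus-shortage cost; the cost model and all parameters are invented stand-ins, not the paper's supply chain formulation.

      import random
      random.seed(1)

      DEMAND = [40, 25, 60]          # demand per period for three products (illustrative)
      CONTAINER = 10                 # parts per Kanban container
      HOLD_COST, SHORT_COST = 2.0, 15.0

      def cost(kanbans):
          """Toy objective: holding cost of containers plus a penalty for unmet demand."""
          total = 0.0
          for k, d in zip(kanbans, DEMAND):
              total += HOLD_COST * k + SHORT_COST * max(0, d - k * CONTAINER)
          return total

      def mutate(ind):
          return [max(1, k + random.choice((-1, 0, 1))) for k in ind]

      def crossover(a, b):
          return [random.choice(pair) for pair in zip(a, b)]

      # Plain generational GA over integer Kanban vectors
      pop = [[random.randint(1, 10) for _ in DEMAND] for _ in range(30)]
      for _ in range(100):
          pop.sort(key=cost)
          parents = pop[:10]
          children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(20)]
          pop = parents + children

      best = min(pop, key=cost)
      print("best Kanban counts per product:", best, "| cost:", cost(best))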

  2. Determining optimal interconnection capacity on the basis of hourly demand and supply functions of electricity

    International Nuclear Information System (INIS)

    Keppler, Jan Horst; Meunier, William; Coquentin, Alexandre

    2017-01-01

    Interconnections for cross-border electricity flows are at the heart of the project to create a common European electricity market. At the same time, increases in production from variable renewables, clustered during a limited number of hours, reduce the availability of existing transport infrastructures. This calls for higher levels of optimal interconnection capacity than in the past. In complement to existing scenario-building exercises such as the TYNDP that respond to the challenge of determining optimal levels of infrastructure provision, the present paper proposes a new empirically-based methodology to perform Cost-Benefit analysis for the determination of optimal interconnection capacity, using as an example the French-German cross-border trade. Using a very fine dataset of hourly supply and demand curves (aggregated auction curves) for the year 2014 from the EPEX Spot market, it constructs linearized net export (NEC) and net import demand curves (NIDC) for both countries. This allows assessing hour by hour the welfare impacts of incremental increases in interconnection capacity. Summing these welfare increases over the 8,760 hours of the year provides the annual total for each step increase of interconnection capacity. Confronting welfare benefits with the annual cost of augmenting interconnection capacity indicates the socially optimal increase in interconnection capacity between France and Germany on the basis of empirical market micro-data. (authors)
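
    The hour-by-hour welfare accounting can be sketched schematically as follows. The hourly zero-exchange price gaps and the slope of the linearized curves are random stand-ins (the EPEX auction data are not reproduced here), only one flow direction is treated, and the annualized cost per MW is an assumed figure.

      import numpy as np
      rng = np.random.default_rng(0)

      HOURS = 8760
      gap0 = rng.normal(5.0, 12.0, HOURS)   # importer-minus-exporter price gap at zero exchange [EUR/MWh]
      slope = 0.02                          # combined slope of the linearized NEC and NIDC [EUR/MWh per MW]

      def annual_welfare(capacity_mw):
          """Welfare captured over the year when cross-border exchange is capped at capacity_mw."""
          q = np.clip(gap0 / slope, 0.0, capacity_mw)       # traded quantity per hour
          return np.sum(gap0 * q - 0.5 * slope * q ** 2)    # area between the two linearized curves

      ANNUAL_COST_PER_MW = 60.0             # assumed annualized cost of extra capacity [EUR/MW/yr]
      for extra in range(0, 3001, 500):
          net = annual_welfare(extra) - annual_welfare(0) - ANNUAL_COST_PER_MW * extra
          print(f"+{extra:4d} MW interconnection: net annual benefit {net:12.0f} EUR")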

  3. Method to determine the optimal constitutive model from spherical indentation tests

    Directory of Open Access Journals (Sweden)

    Tairui Zhang

    2018-03-01

    Full Text Available The limitation of current indentation theories was investigated and a method to determine the optimal constitutive model through spherical indentation tests was proposed. Two constitutive models, the Power-law and the Linear-law, were used in Finite Element (FE) calculations, and then a set of indentation governing equations was established for each model. The load-depth data from the normal indentation depth was used to fit the best parameters in each constitutive model while the data from the further loading part was compared with those from FE calculations, and the model that better predicted the further deformation was considered the optimal one. Moreover, a Young’s modulus calculation model which took the previous plastic deformation and the phenomenon of pile-up (or sink-in) into consideration was also proposed to revise the original Sneddon-Pharr-Oliver model. The indentation results on six materials, 304, 321, SA508, SA533, 15CrMoR, and Fv520B, were compared with tensile ones, which validated the reliability of the revised E calculation model and the optimal constitutive model determination method in this study. Keywords: Optimal constitutive model, Spherical indentation test, Finite Element calculations, Young’s modulus
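
    The selection principle — fit each candidate law on the early part of the curve and judge it by how well it predicts the later part — can be sketched with synthetic data. The points below are an invented stress-strain response, not indentation-derived load-depth data, and the two laws simply mirror the power-law and linear-law forms named above.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(3)
      strain = np.linspace(0.002, 0.10, 50)
      stress = 520.0 * strain ** 0.22 + rng.normal(0.0, 2.0, strain.size)   # synthetic response [MPa]

      power_law = lambda e, K, n: K * e ** n        # sigma = K * eps^n
      linear_law = lambda e, a, b: a + b * e        # sigma = a + b * eps

      split = 30                                    # fit on the first part, test on the rest
      errors = {}
      for name, model, p0 in [("Power-law", power_law, (500.0, 0.2)),
                              ("Linear-law", linear_law, (100.0, 1000.0))]:
          params, _ = curve_fit(model, strain[:split], stress[:split], p0=p0)
          resid = model(strain[split:], *params) - stress[split:]
          errors[name] = np.sqrt(np.mean(resid ** 2))
          print(f"{name}: RMS prediction error on the further-loading segment = {errors[name]:.1f} MPa")

      print("optimal constitutive model:", min(errors, key=errors.get))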

  4. DETERMINATION OF THE OPTIMAL CAPITAL INVESTMENTS TO ENSURE THE SUSTAINABLE DEVELOPMENT OF THE RAILWAY

    Directory of Open Access Journals (Sweden)

    O. I. Kharchenko

    2015-04-01

    Full Text Available Purpose. Every year more attention is paid to the theoretical and practical issues of sustainable development of railway transport, but the mechanisms of financial support for this development are still poorly understood. Therefore, the aim of this article is to determine the optimal investment allocation to ensure sustainable development of railway transport, using State Enterprise «Prydniprovsk Railway» as an example, and to create the preconditions for developing a mathematical model. Methodology. The task of ensuring sustainable development of railway transport is solved on the basis of an integral indicator of sustainable development effectiveness and is formulated as the maximization of this criterion. Optimization measures of a technological and technical character are proposed in order to increase the values of the components of the integral performance measure. The optimization activities of a technological nature that enhance the performance criteria include: optimization of the number of train and shunting locomotives, optimization of power handling mechanisms at the stations, and optimization of the routes of train flows. The activities of a technical nature include: modernization of railways towards their electrification, modernization of the running gear and coupler drawbars of rolling stock, and mechanization of separator facilities at stations to reduce noise impacts on the environment. Findings. The work resulted in the optimal allocation of investments to ensure the sustainable development of railway transportation at State Enterprise «Prydniprovsk Railway». This allows a mode of railway development in which the operation of State Enterprise «Prydniprovsk Railway» is characterized by the maximum value of the integral indicator of efficiency. Originality. The work was reviewed and a new approach was proposed to determine the optimal allocation of capital investments to ensure sustainable

  5. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    OpenAIRE

    Zong, Shengliang; Chai, Guorong; Su, Yana

    2017-01-01

    We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval of two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and the repair time is exponentially distributed with a geometric increasing mean. Our objective is to minimize the expected average cost under an availability requi...

  6. Multivariate Optimization in Preconcentration Procedure for Manganese Determination in Seawater Samples by FAAS

    OpenAIRE

    Ferreira, Adriana C.; Korn, Maria das Graças Andrade; Ferreira, Sergio Luis Costa

    2004-01-01

    Full text: restricted access. p. 271-278. In the present paper, a preconcentration procedure for manganese determination in seawater samples by flame atomic absorption spectrometry (FAAS) is proposed. It is based on the solid phase extraction of manganese(II) ions as a 4-(2-pyridylazo)resorcinol (PAR) chelate using activated carbon as sorbent. Optimization of the experimental parameters (pH, activated carbon mass, PAR mass and shaking time) was carried out using a two-level full factor...

  7. An Integer Programming Model to Determine Land Use Trajectories for Optimizing Regionally Integrated Ecosystem Services Delivery

    Directory of Open Access Journals (Sweden)

    René Estrella

    2016-01-01

    Full Text Available BIOLP is an Integer Programming model based on the Balanced Compromise Programming multi-criteria decision method. The aim of BIOLP is to determine how a set of land use types should be distributed over space and time in order to optimize the multi-dimensional land performance of a region. Trajectories were defined as the succession of specific land use types over 30 years, assuming that land use changes can only occur at fixed intervals of 10 years. A database that represents the Tabacay catchment (Ecuador) as a set of land units with associated performance values was used as the input for BIOLP, which was then executed to determine the trajectories distribution that optimizes regional performance. The sensitivity of BIOLP to uncertainty in the input data, simulated through random variations on the performance values, was also tested. BIOLP showed a relative stability on its results under these conditions of stochastic, restricted changes. Additionally, the behaviour of BIOLP under different settings of its balancing and relative importance parameters was studied. Stronger variations on the outcomes were observed in this case, which indicate the influential role that such parameters play. Finally, the inclusion of performance thresholds in BIOLP was tested through the addition of sample constraints that required some of the criteria at stake to exceed predefined values. The outcome of the optimization exercises makes clear that the phenomenon of trade off between the provisioning service of the land (income) and the regulation and maintenance services (runoff, sediment, SOC) is crucial. BIOLP succeeds in accounting for this complex multi-dimensional phenomenon when determining the optimal spatio-temporal distributions of land use types. Despite this complexity, it is confirmed that the weights attributed to the provisioning or to the regulation and maintenance services are the main determinants for having the land use distributions dominated by

  8. Optimization of experimental conditions in uranium trace determination using laser time-resolved fluorimetry

    International Nuclear Information System (INIS)

    Baly, L.; Garcia, M.A.

    1996-01-01

    In the present paper a new sample excitation geometry is presented for uranium trace determination in aqueous solutions by Time-Resolved Laser-Induced Fluorescence. This new design introduces the laser radiation through the top side of the cell, allowing the use of cells with two quartz sides, which are less expensive than those commonly used in this experimental setup. Optimization of the excitation conditions, temporal discrimination and spectral selection is presented

  9. Load Determination and Selection of Transformer Substations’ Optimal Power for Tasks of Urban Networks’ Development

    OpenAIRE

    Guseva, S; Borščevskis, O; Skobeļeva, N; Kozireva, Ļ

    2010-01-01

    In this paper an approach to solving some problems of urban 110/10-20 kV network development in Riga until 2020 under conditions of information uncertainty is considered. The following steps are considered in the paper: forecasting the total load of Riga until 2020, defining the loads of existing and new substations until 2020, choosing the optimal power of 110/10-20 kV substations, and determining the locations of new substations.

  10. Optimal selection and placement of green infrastructure to reduce impacts of land use change and climate change on hydrology and water quality: An application to the Trail Creek Watershed, Indiana.

    Science.gov (United States)

    Liu, Yaoze; Theller, Lawrence O; Pijanowski, Bryan C; Engel, Bernard A

    2016-05-15

    The adverse impacts of urbanization and climate change on hydrology and water quality can be mitigated by applying green infrastructure practices. In this study, the impacts of land use change and climate change on hydrology and water quality in the 153.2 km2 Trail Creek watershed located in northwest Indiana were estimated using the Long-Term Hydrologic Impact Assessment-Low Impact Development 2.1 (L-THIA-LID 2.1) model for the following environmental concerns: runoff volume, Total Suspended Solids (TSS), Total Phosphorous (TP), Total Kjeldahl Nitrogen (TKN), and Nitrate+Nitrite (NOx). Using a recent 2001 land use map and 2050 land use forecasts, we found that land use change resulted in increased runoff volume and pollutant loads (8.0% to 17.9% increase). Climate change reduced runoff and nonpoint source pollutant loads (5.6% to 10.2% reduction). The 2050 forecasted land use with current rainfall resulted in the largest runoff volume and pollutant loads. The optimal selection and placement of green infrastructure practices using the L-THIA-LID 2.1 model was conducted. Costs of applying green infrastructure were estimated using the L-THIA-LID 2.1 model considering construction, maintenance, and opportunity costs. To attain the same runoff volume and pollutant loads as in 2001 land uses for 2050 land uses, the runoff volume, TSS, TP, TKN, and NOx for 2050 needed to be reduced by 10.8%, 14.4%, 13.1%, 15.2%, and 9.0%, respectively. The corresponding annual costs of implementing green infrastructure to achieve the goals were $2.1, $0.8, $1.6, $1.9, and $0.8 million, respectively. Annual costs of reducing 2050 runoff volume/pollutant loads were estimated, and results show green infrastructure annual cost greatly increased for larger reductions in runoff volume and pollutant loads. During optimization, the most cost-efficient green infrastructure practices were selected and implementation levels increased for greater reductions of runoff and nonpoint source pollutants

  11. Ubicación óptima de generación distribuida en sistemas de energía eléctrica Optimal placement of distributed generation in electric power system

    Directory of Open Access Journals (Sweden)

    Jesús María López–Lezama

    2009-06-01

    Full Text Available This paper presents a methodology for optimal placement of distributed generation (DG) in electric power systems. The candidate buses for DG placement are identified on the basis of locational marginal prices. These prices are obtained by solving an optimal power flow (OPF) and correspond to the Lagrange multipliers of the active power balance equations at every bus of the system. In order to include distributed generation in the OPF model, the DG was modeled as a negative injection of active power. The methodology consists of a nonlinear iterative process in which DG is allocated at the bus with the highest locational marginal price. Three types of DG were considered in the model: 1) internal combustion engines, 2) gas turbines and 3) microturbines. The proposed methodology is tested on the IEEE 30-bus test system. The results obtained show that distributed generation contributes to the reduction of nodal prices and can help to solve congestion problems in the transmission network.
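
    The iterative loop can be sketched as below. The solve_opf function is a hypothetical toy stand-in that returns made-up locational marginal prices and lowers the price at a bus once DG is added there; a real OPF solver and network model would replace it.

      # Toy sketch of LMP-guided placement of distributed generation (DG).
      BASE_LMP = {1: 32.0, 5: 41.5, 12: 44.2, 21: 39.8, 30: 46.7}   # $/MWh, illustrative buses
      DG_UNIT_MW = 2.0
      PRICE_RELIEF_PER_MW = 1.5      # assumed LMP drop per MW of DG installed at the host bus

      def solve_opf(dg_mw_by_bus):
          """Hypothetical stand-in for an OPF: base prices reduced where DG has been placed."""
          return {bus: lmp - PRICE_RELIEF_PER_MW * dg_mw_by_bus.get(bus, 0.0)
                  for bus, lmp in BASE_LMP.items()}

      dg = {}
      for step in range(6):                      # place six DG units, one per iteration
          lmp = solve_opf(dg)
          bus = max(lmp, key=lmp.get)            # candidate bus = highest locational marginal price
          dg[bus] = dg.get(bus, 0.0) + DG_UNIT_MW
          print(f"step {step}: {DG_UNIT_MW} MW of DG placed at bus {bus} (LMP was {lmp[bus]:.1f} $/MWh)")

      print("final DG allocation [MW]:", dg)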

  12. VLSI Cells Placement Using the Neural Networks

    International Nuclear Information System (INIS)

    Azizi, Hacene; Zouaoui, Lamri; Mokhnache, Salah

    2008-01-01

    Artificial neural networks have been studied for several years. Their effectiveness makes it possible to expect high performance. The privileged fields of these techniques remain recognition and classification. Various optimization applications have also been studied from the perspective of artificial neural networks, which make it possible to apply distributed heuristic algorithms. In this article, a solution to the problem of placing the various cells during the realization of an integrated circuit is proposed by using the Kohonen network

  13. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica

    2013-01-15

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for determination of RA in sage, using factorial design of experiments for optimizing sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated, and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected into a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with flow rate of 1.0 mL min-1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher when compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  14. Method to determine the optimal constitutive model from spherical indentation tests

    Science.gov (United States)

    Zhang, Tairui; Wang, Shang; Wang, Weiqiang

    2018-03-01

    The limitation of current indentation theories was investigated and a method to determine the optimal constitutive model through spherical indentation tests was proposed. Two constitutive models, the Power-law and the Linear-law, were used in Finite Element (FE) calculations, and then a set of indentation governing equations was established for each model. The load-depth data from the normal indentation depth was used to fit the best parameters in each constitutive model while the data from the further loading part was compared with those from FE calculations, and the model that better predicted the further deformation was considered the optimal one. Moreover, a Young's modulus calculation model which took the previous plastic deformation and the phenomenon of pile-up (or sink-in) into consideration was also proposed to revise the original Sneddon-Pharr-Oliver model. The indentation results on six materials, 304, 321, SA508, SA533, 15CrMoR, and Fv520B, were compared with tensile ones, which validated the reliability of the revised E calculation model and the optimal constitutive model determination method in this study.

  15. Optimization

    CERN Document Server

    Pearce, Charles

    2009-01-01

    Focuses on mathematical structure, and on real-world applications. This book includes developments in several optimization-related topics such as decision theory, linear programming, turnpike theory, duality theory, convex analysis, and queuing theory.

  16. Evaluation of sample preparation methods and optimization of nickel determination in vegetable tissues

    Directory of Open Access Journals (Sweden)

    Rodrigo Fernando dos Santos Salazar

    2011-02-01

    Full Text Available Nickel, although essential to plants, may be toxic to plants and animals. It is mainly assimilated by food ingestion. However, information about the average levels of elements (including Ni) in edible vegetables from different regions is still scarce in Brazil. The objectives of this study were to: (a) evaluate and optimize a method for preparation of vegetable tissue samples for Ni determination; (b) optimize the analytical procedures for determination by Flame Atomic Absorption Spectrometry (FAAS) and by Electrothermal Atomic Absorption (ETAAS) in vegetable samples; and (c) determine the Ni concentration in vegetables consumed in the cities of Lorena and Taubaté in the Vale do Paraíba, State of São Paulo, Brazil. By means of the analytical technique for determination by ETAAS or FAAS, the results were validated by the test of analyte addition and recovery. The most viable method tested for quantification of this element was HClO4-HNO3 wet digestion. All samples but carrot tissue collected in Lorena contained Ni levels above those permitted by the Brazilian Ministry of Health. The most disturbing results, requiring more detailed studies, were the Ni concentrations measured in carrot samples from Taubaté, where levels were five times higher than permitted by Brazilian regulations.

  17. A Risk-Based Sensor Placement Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ronald W [ORNL; Kulesz, James J [ORNL

    2006-08-01

    A sensor placement methodology is proposed to solve the problem of optimal location of sensors or detectors to protect population against the exposure to and effects of known and/or postulated chemical, biological, and/or radiological threats. Historical meteorological data are used to characterize weather conditions as wind speed and direction pairs with the percentage of occurrence of the pairs over the historical period. The meteorological data drive atmospheric transport and dispersion modeling of the threats, the results of which are used to calculate population at risk against standard exposure levels. Sensor locations are determined via a dynamic programming algorithm where threats captured or detected by sensors placed in prior stages are removed from consideration in subsequent stages. Moreover, the proposed methodology provides a quantification of the marginal utility of each additional sensor or detector. Thus, the criterion for halting the iterative process can be the number of detectors available, a threshold marginal utility value, or the cumulative detection of a minimum factor of the total risk value represented by all threats.
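
    A stripped-down sketch of the staged selection idea follows; it uses a plain greedy loop in place of the record's dynamic-programming formulation, and the threat risks, candidate sites and detection sets are invented for illustration.

      # Invented population-at-risk values for postulated threat scenarios
      threat_risk = {"T1": 120.0, "T2": 80.0, "T3": 260.0, "T4": 45.0, "T5": 150.0}

      # Which threats each candidate sensor location would detect (invented coverage sets)
      coverage = {
          "siteA": {"T1", "T3"},
          "siteB": {"T2", "T3", "T4"},
          "siteC": {"T4", "T5"},
          "siteD": {"T1", "T2", "T5"},
      }

      MIN_MARGINAL_UTILITY = 50.0       # stopping threshold on the marginal utility of one more sensor
      remaining = dict(threat_risk)
      placed = []

      while remaining:
          # marginal utility of each unused site = risk of the threats it would newly capture
          utility = {s: sum(remaining.get(t, 0.0) for t in cov)
                     for s, cov in coverage.items() if s not in placed}
          if not utility or max(utility.values()) < MIN_MARGINAL_UTILITY:
              break
          best = max(utility, key=utility.get)
          placed.append(best)
          for t in coverage[best]:      # threats captured at this stage drop out of later stages
              remaining.pop(t, None)
          print(f"place sensor at {best} (marginal utility {utility[best]:.0f})")

      print("selected sites:", placed, "| residual uncovered risk:", sum(remaining.values()))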

  18. 38 CFR 36.4706 - Forced placement of flood insurance.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Forced placement of flood... (CONTINUED) LOAN GUARANTY Sale of Loans, Guarantee of Payment, and Flood Insurance § 36.4706 Forced placement of flood insurance. If the Secretary, or a servicer acting on behalf of the Secretary, determines at...

  19. A New Method for Optimal Regularization Parameter Determination in the Inverse Problem of Load Identification

    Directory of Open Access Journals (Sweden)

    Wei Gao

    2016-01-01

    Full Text Available Within the framework of the regularization method for the inverse problem of load identification, a new method for determining the optimal regularization parameter is proposed. Firstly, a quotient function (QF) is defined by using the regularization parameter as a variable, based on the least squares solution of the minimization problem. Secondly, the quotient function method (QFM) is proposed to select the optimal regularization parameter based on quadratic programming theory. To employ the QFM, the behaviour of the QF values with respect to different regularization parameters is taken into consideration. Finally, numerical and experimental examples are utilized to validate the performance of the QFM. Furthermore, the Generalized Cross-Validation (GCV) method and the L-curve method are taken as the comparison methods. The results indicate that the proposed QFM is adaptive to different measuring points, noise levels, and types of dynamic load.
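
    The quotient function itself is not reproduced here; the sketch below only sets up the generic scenario the record addresses — a Tikhonov-regularized least-squares solve swept over candidate regularization parameters — and scores each candidate with the Generalized Cross-Validation criterion mentioned above as a comparison method. The ill-posed test problem is invented.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 60
      t = np.linspace(0.0, 1.0, n)
      A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.005)     # smoothing kernel: ill-conditioned operator
      b = A @ np.sin(2 * np.pi * t) + rng.normal(0.0, 0.05, n)  # noisy "measured" response

      def gcv(lam):
          """Generalized Cross-Validation score of the Tikhonov parameter lam."""
          H = A @ np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)   # influence matrix
          resid = (np.eye(n) - H) @ b
          return n * (resid @ resid) / np.trace(np.eye(n) - H) ** 2

      lambdas = np.logspace(-8, 2, 60)
      best = lambdas[int(np.argmin([gcv(l) for l in lambdas]))]
      print(f"GCV-optimal regularization parameter: {best:.2e}")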

  20. Determination of the Optimal Tilt Angle for Solar Photovoltaic Panel in Ilorin, Nigeria

    Directory of Open Access Journals (Sweden)

    K.R. Ajao

    2013-06-01

    Full Text Available The optimal tilt angle of a solar photovoltaic panel in Ilorin, Nigeria was determined. The solar panel was first mounted at 0° to the horizontal and, after ten minutes, the voltage and current generated together with the corresponding atmospheric temperature were recorded. The same procedure was repeated from 2° to 30° in steps of 2° at ten-minute intervals over the entire measurement period. The results obtained show that the average optimal tilt angle at which a solar panel should be mounted for maximum power performance at a fixed position in Ilorin is 22°. This optimum tilt angle of the solar panel and its orientation depend on the month of the year and the location of the site of study.
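
    The data reduction behind this record is simply power-versus-tilt followed by a maximum; a sketch with invented voltage and current readings (a coarse subset of angles, not the study's measurements) is shown below.

      # Invented (tilt [deg], voltage [V], current [A]) readings; not the measured Ilorin data.
      readings = [
          (0, 19.8, 0.52), (4, 20.0, 0.55), (8, 20.1, 0.58), (12, 20.3, 0.61),
          (16, 20.4, 0.64), (20, 20.5, 0.66), (22, 20.5, 0.67), (24, 20.4, 0.66),
          (28, 20.2, 0.62), (30, 20.1, 0.60),
      ]

      power = {tilt: volts * amps for tilt, volts, amps in readings}   # P = V * I at each tilt
      best_tilt = max(power, key=power.get)
      print(f"optimal tilt angle ~ {best_tilt} deg (peak power {power[best_tilt]:.2f} W)")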

  1. Parametrical Method for Determining Optimal Ship Carrying Capacity and Performance of Handling Equipment

    Directory of Open Access Journals (Sweden)

    Michalski Jan P.

    2016-04-01

    Full Text Available The paper presents a method of evaluating the optimal value of the cargo ship's deadweight and the coupled optimal value of cargo handling capacity. The method may be useful at the stage of establishing the main owner's requirements concerning the ship design parameters, as well as for choosing a proper second-hand ship for a given transportation task. The deadweight and the capacity are determined on the basis of a selected economic measure of the transport effectiveness of the ship – the Required Freight Rate. The mathematical model of the problem is of a deterministic character and the simplifying assumptions are justified for ships operating in the liner trade. The assumptions are selected so that the solution of the problem is obtained in closed analytical form. The presented method can be useful in preliminary ship design or in the simulation of pre-investment transportation task studies.

  2. A Novel Scheme for Optimal Control of a Nonlinear Delay Differential Equations Model to Determine Effective and Optimal Administrating Chemotherapy Agents in Breast Cancer.

    Science.gov (United States)

    Ramezanpour, H R; Setayeshi, S; Akbari, M E

    2011-01-01

    Determining the optimal and effective scheme for administering the chemotherapy agents in breast cancer is the main goal of this scientific research. The most important issue here is the amount of drug or radiation administered in chemotherapy and radiotherapy to increase the patient's survival. This is because in these cases, the therapy not only kills the tumor cells, but also kills some of the healthy tissues and causes serious damage. In this paper we investigate the effect of optimal drug scheduling for a breast cancer model which consists of nonlinear ordinary differential time-delay equations. In this paper, a mathematical model of breast cancer tumors is discussed and then optimal control theory is applied to find out the optimal drug adjustment as an input control of the system. Finally, we use the Sensitivity Approach (SA) to solve the optimal control problem. The goal of this paper is to determine an optimal and effective scheme for administering the chemotherapy agent, so that the tumor is eradicated, while the immune system remains above a suitable level. Simulation results confirm the effectiveness of our proposed procedure. In this paper a new scheme is proposed to design a therapy protocol for chemotherapy in breast cancer. In contrast to traditional pulse drug delivery, a continuous process is offered and optimized, according to the optimal control theory for time-delay systems.

  3. Combustion characteristics and optimal factors determination with Taguchi method for diesel engines port-injecting hydrogen

    International Nuclear Information System (INIS)

    Wu, Horng-Wen; Wu, Zhan-Yi

    2012-01-01

    This study applies the L9 orthogonal array of the Taguchi method to find out the best hydrogen injection timing, hydrogen-energy-share ratio, and percentage of exhaust gas recirculation (EGR) in a single DI diesel engine. The injection timing is controlled by an electronic control unit (ECU) and the quantity of hydrogen is controlled by a hydrogen flow controller. For various engine loads, the authors determine the optimal operating factors for low BSFC (brake specific fuel consumption), NOx, and smoke. Moreover, net heat-release rate involving variable specific heat ratio is computed from the experimental in-cylinder pressure. In-cylinder pressure, net heat-release rate, A/F ratios, COV (coefficient of variation) of IMEP (indicated mean effective pressure), NOx, and smoke using the optimum condition factors are compared with those of the original baseline diesel engine. The predictions made using Taguchi's parameter design technique agreed with the confirmation results at the 95% confidence interval. At 45% and 60% loads the optimum factor combination, compared with the original baseline diesel engine, reduces BSFC by 14.52%, NOx by 60.5%, and smoke by 42.28%, and improves combustion performance such as peak in-cylinder pressure and net heat-release rate. Adding hydrogen and EGR would not generate unstable combustion due to the lower COV of IMEP. -- Highlights: ► We use a hydrogen injector controlled by an ECU and a cooled EGR system in a diesel engine. ► Optimal factors by the Taguchi method are determined for low BSFC, NOx and smoke. ► The COV of IMEP is lower than 10%, so it will not cause unstable combustion. ► We improve A/F ratio, in-cylinder pressure, and heat release at the optimized engine. ► Decreases are 14.5% for BSFC, 60.5% for NOx, and 42.28% for smoke at the optimized engine.
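
    The Taguchi analysis behind this record can be sketched as follows: for each factor and level, average the smaller-the-better signal-to-noise ratios of the runs at that level and keep the level with the highest mean. The L9 columns are the standard array for three 3-level factors, but the BSFC responses are invented placeholders, not the engine measurements.

      import numpy as np

      # First three columns of the standard L9 orthogonal array (levels coded 0..2):
      # factor order: hydrogen injection timing, hydrogen energy-share ratio, EGR percentage
      L9 = np.array([
          [0, 0, 0], [0, 1, 1], [0, 2, 2],
          [1, 0, 1], [1, 1, 2], [1, 2, 0],
          [2, 0, 2], [2, 1, 0], [2, 2, 1],
      ])
      bsfc = np.array([262., 255., 259., 250., 244., 252., 257., 248., 253.])   # invented responses [g/kWh]

      sn = -10.0 * np.log10(bsfc ** 2)           # smaller-the-better signal-to-noise ratio per run

      best_levels = []
      for j, name in enumerate(["injection timing", "H2 share", "EGR %"]):
          mean_sn = [sn[L9[:, j] == level].mean() for level in range(3)]
          best_levels.append(int(np.argmax(mean_sn)))      # highest mean S/N = preferred level
          print(f"{name}: mean S/N by level = {np.round(mean_sn, 2)}, best level = {best_levels[-1]}")

      print("predicted optimum factor-level combination:", best_levels)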

  4. The optimal condition of performing MTT assay for the determination of radiation sensitivity

    International Nuclear Information System (INIS)

    Hong, Semie; Kim, Il Han

    2001-01-01

    The measurement of radiation survival using a clonogenic assay, the established standard, can be difficult and time consuming. In this study, we have used the MTT assay, based on the reduction of a tetrazolium salt to a purple formazan precipitate by living cells, as a substitute for the clonogenic assay and have examined the optimal conditions for performing this assay in the determination of radiation sensitivity. Four human cancer cell lines - PCI-1, SNU-1066, NCI-H630 and RKO cells - have been used. For each cell line, a clonogenic assay and an MTT assay using Premix WST-1 solution, which is one of the tetrazolium salts and does not require washing or solubilization of the precipitate, were carried out after irradiation with 0, 2, 4, 6, 8 and 10 Gy. For the clonogenic assay, cells in 25 cm2 flasks were irradiated after overnight incubation and the resultant colonies containing more than 50 cells were scored after culturing the cells for 10-14 days. For the MTT assay, the relationship between absorbance and cell number, the optimal seeding cell number, and the optimal timing of the assay were determined. Then, the MTT assay was performed when the irradiated cells had regained exponential growth or when the non-irradiated cells had undergone four or more doubling times. There was minimal variation in the values gained from these two methods, with the standard deviation generally less than 5%, and there were no statistically significant differences between the two methods according to the t-test at low radiation doses (below 6 Gy). The regression analyses showed high linear correlation, with R2 values of 0.975-0.992, between data from the two different methods. The optimal cell numbers for the MTT assay were found to be dependent on the plating efficiency of the cell line used. Less than 300 cells/well were appropriate for cells with high plating efficiency (more than 30%). For cells with low plating efficiency (less than 30%), 500 cells/well or more were appropriate for the assay. The optimal time for the MTT assay was after 6

  5. Determination of selenium in urine by inductively coupled plasma mass spectrometry: interferences and optimization

    DEFF Research Database (Denmark)

    Gammelgaard, Bente; Jons, O.

    1999-01-01

    The aim of this study was to develop a method for selenium determination in urine and examine the influence of sensitivity enhancement reagents, instrument parameters, internal standards and the salt content of the urine matrix on the determination. Several carbon-containing solutions (methanol...... and different salts on four selenium isotopes were examined. It was concluded that only Se-82 was usable for quantitative determination in urine as the blank at mass 82 was close to zero. The blank values at masses 76, 77 and 78 varied considerably and differently with different salts and salt concentrations...... of the other parameters. The sensitivities of different selenium species (selenite, selenate, selenomethionine and trimethylselenonium iodide) were equal during the experiments in different enhancement solutes and when analysed with the optimized parameter settings. The influence of the urine matrix...

  6. Optimized determination of uranium traces in iron and steel by absorption photometry

    International Nuclear Information System (INIS)

    Kosturiak, A.

    1986-01-01

    Optimal conditions were sought for determining uranium by absorption photometry in a complex with 1,8-dihydroxy-2,7-bis(2-arsonophenylazo)naphthalene-3,6-disulfonic acid (arsenazo III). A suitable medium for this determination is a glycine buffer solution with pH 1-2. A high excess of iron and of other interfering ions is removed with diethylether from a HCl medium with a concentration of 6.6 mol dm-3 or with n-amyl acetate from a HCl medium with a concentration of more than 9 mol dm-3. EDTA in combination with boric acid is used for shielding against a greater number of interfering ions. This procedure may be used to determine U(VI) in Fe-Si-U alloys from 1x10-3 wt.% and in pig iron or steels from 7x10-3 wt.%. The results of measurements were statistically verified. (Ha)

  7. Optimized goniometer for determination of the scattering phase function of suspended particles: simulations and measurements.

    Science.gov (United States)

    Foschum, Florian; Kienle, Alwin

    2013-08-01

    We present simulations and measurements with an optimized goniometer for determination of the scattering phase function of suspended particles. We applied the Monte Carlo method, using a radially layered cylindrical geometry and mismatched boundary conditions, in order to investigate the influence of reflections caused by the interfaces of the glass cuvette and the scatterer concentration on the accurate determination of the scattering phase function. Based on these simulations we built an apparatus which allows direct measurement of the phase function from ϑ=7  deg to ϑ=172  deg without any need for correction algorithms. Goniometric measurements on polystyrene and SiO2 spheres proved this concept. Using the validated goniometer, we measured the phase function of yeast cells, demonstrating the improvement of the new system compared to standard goniometers. Furthermore, the scattering phase function of different fat emulsions, like Intralipid, was determined precisely.

  8. Effects of a Foster Parent Training Intervention on Placement Changes of Children in Foster Care

    Science.gov (United States)

    Price, Joseph M.; Chamberlain, Patricia; Landsverk, John; Reid, John; Leve, Leslie; Laurent, Heidemarie

    2008-01-01

    Placement disruptions undermine efforts of child welfare agencies to promote safety, permanency, and child well-being. Child behavior problems significantly contribute to placement changes. The aims of this investigation were to examine the impact of a foster parent training and support intervention (KEEP) on placement changes and to determine whether the intervention mitigates placement disruption risks associated with children's placement histories. The sample consisted of 700 families with children between ages 5 and 12 years, from a variety of ethnic backgrounds. Families were randomly assigned to the intervention or control condition. The number of prior placements was predictive of negative exits from current foster placements. The intervention increased chances of positive exit (e.g., parent/child reunification) and mitigated the negative risk-enhancing effect of a history of multiple placements. Incorporating intervention approaches based on a parent management training model into child welfare services may improve placement outcomes for child in foster care. PMID:18174349

  9. Simplex optimization of the variables influencing the determination of pefloxacin by time-resolved chemiluminescence

    Science.gov (United States)

    Murillo Pulgarín, José A.; Alañón Molina, Aurelia; Jiménez García, Elisa

    2018-03-01

    A new chemiluminescence (CL) detection system combined with flow injection analysis (FIA) for the determination of Pefloxacin is proposed. The determination is based on an energy transfer from Pefloxacin to terbium (III). The metal ion enhances the weak CL signal produced by the KMnO4/H2SO3/Pefloxacin system. A modified simplex method was used to optimize chemical and instrumental variables. The influence of the interaction of the permanganate, Tb (III), sodium sulphite and sulphuric acid concentrations, flow rate and injected sample volume was thoroughly investigated by using a modified simplex optimization procedure. The results revealed a strong direct relationship between flow rate and CL intensity throughout the studied range that was confirmed by a gamma test. The response factor for the CL emission intensity was used to assess performance in order to identify the optimum conditions for maximization of the response. Under such conditions, the CL response was proportional to the Pefloxacin concentration over a wide range. The detection limit, calculated according to Clayton's criterion, was 13.7 μg L-1. The analyte was successfully determined in milk samples with an average recovery of 100.6 ± 9.8%.
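
    The simplex search itself can be illustrated with SciPy's Nelder-Mead implementation maximizing a made-up smooth response surface; the function, variable ranges and optimum below are placeholders, not the measured CL behaviour of the KMnO4/H2SO3/Pefloxacin system.

      import numpy as np
      from scipy.optimize import minimize

      def cl_response(v):
          """Toy chemiluminescence response over (flow rate, [KMnO4], [Tb(III)]); invented surface."""
          flow, kmno4, tb = v
          return (np.exp(-((flow - 4.2) / 2.0) ** 2)
                  * np.exp(-((kmno4 - 1.1e-3) / 8e-4) ** 2)
                  * np.exp(-((tb - 5e-3) / 4e-3) ** 2))

      # Nelder-Mead (simplex) search for the settings that maximize the response
      start = np.array([2.0, 5e-4, 2e-3])
      res = minimize(lambda v: -cl_response(v), start, method="Nelder-Mead",
                     options={"xatol": 1e-6, "fatol": 1e-9, "maxiter": 2000})
      flow, kmno4, tb = res.x
      print(f"optimum found: flow {flow:.2f} mL/min, [KMnO4] {kmno4:.2e} M, [Tb(III)] {tb:.2e} M")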

  10. Theoretical and experimental analyses of optimal experimental design for determination of hydraulic conductivity of cell membrane.

    Science.gov (United States)

    Zhou, Xiaoming; Gao, Frank; Shu, Zhiquan; Chung, Jae-Hyun; Heimfeld, Shelly; Gao, Dayong

    2010-09-01

    Determination of cell hydraulic conductivity (Lp) is required to predict the optimal conditions for cell cryopreservation. One of the critical procedures associated with the determination of Lp is to measure the kinetics of cell volume change in response to a sudden cell exposure to anisosmotic media until the cells achieve an osmotic equilibrium state. To achieve accurate measurement, it should be ensured that (1) the cell osmotic equilibration process is sufficiently slow, and (2) the total cell volume change (ΔV) is much larger than the resolution of the measuring device (δ). In this article, a cell's half volume excursion time (t*) was defined as the time in which osmotically active cell water volume increases or decreases by half of its maximum change. Based on the water transport equations, a series of analytical solutions were derived. The t* and ΔV were expressed as functions of 2 control variables: initial intracellular osmolality (Mo) and extracellular osmolality (Me), and the effects of Me and Mo on t* and ΔV were predicted theoretically. The predictions were confirmed by performing experiments using two different cell types. In the light of this study, a strategy to optimize the experiment design for the Lp determination is suggested.
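
    A normalized water-transport sketch illustrates how t* and ΔV depend on the extracellular osmolality: the cell water volume is integrated until it has covered half of its total excursion. The lumped permeability constant and the osmolalities are illustrative, not values from the experiments reported above.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Normalized water-transport model: dVw/dt = -k*(Me - Mo*Vw0/Vw); k lumps Lp, membrane area and RT.
      k, Vw0, Mo = 0.05, 1.0, 0.3              # illustrative constants (normalized volume, osmol/kg)

      def excursion(Me):
          """Half volume excursion time t* and total volume change for a step to osmolality Me."""
          dV = abs(Vw0 * Mo / Me - Vw0)                        # total excursion to osmotic equilibrium
          rhs = lambda t, v: [-k * (Me - Mo * Vw0 / v[0])]
          half = lambda t, v: abs(v[0] - Vw0) - 0.5 * dV       # event: half of the excursion reached
          half.terminal, half.direction = True, 1
          sol = solve_ivp(rhs, [0.0, 5000.0], [Vw0], events=half, max_step=1.0)
          return sol.t_events[0][0], dV

      for Me in (0.45, 0.6, 0.9):                              # hypertonic step experiments
          t_star, dV = excursion(Me)
          print(f"Me = {Me:.2f} osmol/kg: t* = {t_star:6.1f} s, total shrinkage = {dV:.3f} (normalized)")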

  11. Voltage regulator placement in radial distribution system using plant ...

    African Journals Online (AJOL)

    The VR problem consists of two sub problems, that of optimal placement and optimal choice of tap setting. The proposed method deals with initial selection of voltage regulator buses by using power loss indices (PLI). The candidate node identification technique and Plant Growth Simulation Algorithm (PGSA) are used for ...

  12. Optimized determination of the radiological inventory during different phases of decommissioning

    International Nuclear Information System (INIS)

    Hillberg, Matthias; Beltz, Detlef; Karschnick, Oliver

    2012-01-01

    The decommissioning of nuclear facilities comprises a lot of activities such as decontamination, dismantling and demolition of equipment and structures. For these activities the aspects of health and safety of the operational personnel and of the general public as well as the minimization of radioactive waste have to be taken into account. An optimized, comprehensible and verifiable determination of the radiological inventory is essential for the decommissioning management with respect to safety, time, and costs. For example: right from the start of the post operational phase, the radiological characterization has to enable the decision whether to perform a system decontamination or not. Furthermore it is necessary, e.g. to determine the relevant nuclides and their composition (nuclide vector) for the release of material and for sustaining the radiological health and safety at work (e. g. minimizing the risk of incorporation). Our contribution will focus on the optimization of the radiological characterization with respect to the requisite extent and the best instant of time during the decommissioning process. For example: which additional information, besides the history of operation, is essential for an adequate amount of sampling and measurements needed in order to determine the relevant nuclides and their compositions? Furthermore, the characterization of buildings requires a kind of a graded approach during the decommissioning process. At the beginning of decommissioning, only a rough estimate of the expected radioactive waste due to the necessary decontamination of the building structures is sufficient. With ongoing decommissioning, a more precise radiological characterization of buildings is needed in order to guarantee an optimized, comprehensible and verifiable decontamination, dismantling and trouble-free clearance. These and other examples will be discussed on the background of and with reference to different decommissioning projects involving direct

  13. Eddy currents - practical determination of optimal testing frequency for non ferromagnetic materials

    International Nuclear Information System (INIS)

    Soares, Adolpho; Messias, Jose Marcos

    1996-01-01

    This work presents an alternative practical option for an easier, lower-cost and reliable determination of the optimal testing frequency when using eddy current testing. This option uses a standard tube produced from material similar to the tubes to be inspected, in which only two discontinuities are machined: one through hole and one external cylindrical hole of equal diameter, with a depth equivalent to 50% of the tube thickness. Using this standard, the last step is to adjust the eddy current device frequency to a value which gives a 90 deg angle between the signals coming from the two holes

  14. Optimization of Solar Module Encapsulant Lamination by Optical Constant Determination of Ethylene-Vinyl Acetate

    Directory of Open Access Journals (Sweden)

    Bing-Mau Chen

    2015-01-01

    Full Text Available This investigation elucidates the physical properties of ethylene-vinyl acetate (EVA) used in the lamination process of module encapsulation and the module performance from the optical transmission to the photoelectric power. In module encapsulation, the effects of the lamination parameters on the module performance, transmittance, and stack adhesion have been considered as they were found to influence the reliability of the module. The determination of the optical constants of EVA may serve as a nondestructive analytical method for optimizing the module encapsulation, on the basis of its effects on the optical transmittance, gel content, peel strength, and performance power.

  15. Determination of stresses in RC eccentrically compressed members using optimization methods

    Science.gov (United States)

    Lechman, Marek; Stachurski, Andrzej

    2018-01-01

    The paper presents an optimization method for determining the strains and stresses in reinforced concrete (RC) members subjected to eccentric compression. The governing equations for strains in rectangular cross-sections are derived by integrating the equilibrium equations of the cross-sections, taking account of the effect of concrete softening in the plastic range and the mean compressive strength of concrete. The stress-strain relationship for concrete in compression under short-term uniaxial loading is assumed according to Eurocode 2 for nonlinear analysis. For the reinforcing steel, a linear-elastic model with hardening in the plastic range is applied. The task consists in solving the set of derived equations subject to box constraints. The resulting problem was solved by means of the fmincon function from Matlab's Optimization Toolbox. Numerical experiments have shown the existence of many points satisfying the equations with very good accuracy. Therefore, some global optimization techniques were included: starting fmincon from many points and clustering the solutions. The model is verified on a set of data encountered in engineering practice.
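
    The multi-start strategy (fmincon launched from many points under box constraints, with the resulting solutions clustered) can be mirrored in Python with scipy.optimize.minimize. The residual below is a generic two-equation placeholder, not the paper's cross-section equilibrium equations, and the bounds are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      def residual(z):
          """Placeholder squared residual of a set of two nonlinear equations."""
          x, y = z
          return (x ** 2 + y - 1.2) ** 2 + (x + y ** 2 - 1.2) ** 2

      bounds = [(-2.0, 2.0), (-2.0, 2.0)]          # box constraints on the unknowns
      rng = np.random.default_rng(42)

      # Multi-start local minimization (the analogue of launching fmincon from many points)
      solutions = []
      for start in rng.uniform(-2.0, 2.0, size=(25, 2)):
          res = minimize(residual, start, bounds=bounds, method="L-BFGS-B")
          if res.fun < 1e-8:                       # keep only points that actually satisfy the equations
              solutions.append(res.x)

      # Crude clustering: keep one representative per group of nearby solutions
      distinct = []
      for s in solutions:
          if all(np.linalg.norm(s - d) > 1e-3 for d in distinct):
              distinct.append(s)

      print(f"{len(distinct)} distinct solutions found:")
      for d in distinct:
          print(np.round(d, 4))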

  16. Determining optimal clothing ensembles based on weather forecasts, with particular reference to outdoor winter military activities.

    Science.gov (United States)

    Morabito, Marco; Pavlinic, Daniela Z; Crisci, Alfonso; Capecchi, Valerio; Orlandini, Simone; Mekjavic, Igor B

    2011-07-01

    Military and civil defense personnel are often involved in complex activities in a variety of outdoor environments. The choice of appropriate clothing ensembles represents an important strategy to establish the success of a military mission. The main aim of this study was to compare the known clothing insulation of the garment ensembles worn by soldiers during two winter outdoor field trials (hike and guard duty) with the estimated optimal clothing thermal insulations recommended to maintain thermoneutrality, assessed by using two different biometeorological procedures. The overall aim was to assess the applicability of such biometeorological procedures to weather forecast systems, thereby developing a comprehensive biometeorological tool for military operational forecast purposes. Military trials were carried out during winter 2006 in Pokljuka (Slovenia) by Slovene Armed Forces personnel. Gastrointestinal temperature, heart rate and environmental parameters were measured with portable data acquisition systems. The thermal characteristics of the clothing ensembles worn by the soldiers, namely thermal resistance, were determined with a sweating thermal manikin. Results showed that the clothing ensemble worn by the military was appropriate during guard duty but generally inappropriate during the hike. A general under-estimation of the biometeorological forecast model in predicting the optimal clothing insulation value was observed and an additional post-processing calibration might further improve forecast accuracy. This study represents the first step in the development of a comprehensive personalized biometeorological forecast system aimed at improving recommendations regarding the optimal thermal insulation of military garment ensembles for winter activities.

  17. Optimization of the n-type HPGe detector parameters to theoretical determination of efficiency curves

    International Nuclear Information System (INIS)

    Rodriguez-Rodriguez, A.; Correa-Alfonso, C.M.; Lopez-Pino, N.; Padilla-Cabal, F.; D'Alessandro, K.; Corrales, Y.; Garcia-Alvarez, J. A.; Perez-Mellor, A.; Baly-Gil, L.; Machado, A.

    2011-01-01

    A highly detailed characterization of a 130 cm3 n-type HPGe detector, employed in low-background gamma spectrometry measurements, was done. Precisely measured data and several Monte Carlo (MC) calculations have been combined to optimize the detector parameters. The HPGe crystal location inside the aluminum end-cap as well as its dimensions, including the borehole radius and height, were determined from frontal and lateral scans. Additionally, X-ray radiography and Computed Axial Tomography (CT) studies were carried out to complement the information about the detector features. Using seven calibrated point sources (241Am, 133Ba, 57,60Co, 137Cs, 22Na and 152Eu), photo-peak efficiency curves at three different source-detector distances (SDD) were obtained. Taking into account the experimental values, an optimization procedure by means of MC simulations (MCNPX 2.6 code) was performed. MC efficiency curves were calculated by specifying the optimized detector parameters in the MCNPX input files. Efficiency calculation results agree with empirical data, showing relative deviations of less than 10%. (Author)

  18. Determination of the optimal tolerance for MLC positioning in sliding window and VMAT techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez, V., E-mail: vhernandezmasgrau@gmail.com; Abella, R. [Department of Medical Physics, Hospital Sant Joan de Reus, IISPV, Tarragona 43204 (Spain); Calvo, J. F. [Department of Radiation Oncology, Hospital Quirón, Barcelona 08023 (Spain); Jurado-Bruggemann, D. [Department of Medical Physics, Institut Català d’Oncologia, Girona 17007 (Spain); Sancho, I. [Department of Medical Physics, Institut Català d’Oncologia, L’Hospitalet de Llobregat 08908 (Spain); Carrasco, P. [Department of Medical Physics, Hospital de la Santa Creu i Sant Pau, Barcelona 08041 (Spain)

    2015-04-15

    Purpose: Several authors have recommended a 2 mm tolerance for multileaf collimator (MLC) positioning in sliding window treatments. In volumetric modulated arc therapy (VMAT) treatments, however, the optimal tolerance for MLC positioning remains unknown. In this paper, the authors present the results of a multicenter study to determine the optimal tolerance for both techniques. Methods: The procedure used is based on dynalog file analysis. The study was carried out using seven Varian linear accelerators from five different centers. Dynalogs were collected from over 100 000 clinical treatments and in-house software was used to compute the number of tolerance faults as a function of the user-defined tolerance. Thus, the optimal value for this tolerance, defined as the lowest achievable value, was investigated. Results: Dynalog files accurately predict the number of tolerance faults as a function of the tolerance value, especially for low fault incidences. All MLCs behaved similarly and the Millennium120 and the HD120 models yielded comparable results. In sliding window techniques, the number of beams with an incidence of hold-offs >1% rapidly decreases for a tolerance of 1.5 mm. In VMAT techniques, the number of tolerance faults sharply drops for tolerances around 2 mm. For a tolerance of 2.5 mm, less than 0.1% of the VMAT arcs presented tolerance faults. Conclusions: Dynalog analysis provides a feasible method for investigating the optimal tolerance for MLC positioning in dynamic fields. In sliding window treatments, the tolerance of 2 mm was found to be adequate, although it can be reduced to 1.5 mm. In VMAT treatments, the typically used 5 mm tolerance is excessively high. Instead, a tolerance of 2.5 mm is recommended.

  19. Determining the optimal dose of 1940-nm thulium fiber laser for assisting the endodontic treatment.

    Science.gov (United States)

    Sarp, Ayse Sena Kabas; Gulsoy, Murat

    2017-09-01

    Insufficient cleaning, the complex anatomy of the root canal system, inaccessible accessory canals, and inadequate penetration of irrigants through dentinal tubules minimize the success of conventional endodontic treatment. Laser-assisted endodontic treatment enhances the quality of conventional treatment, but each laser wavelength has its own limitations. The optimal parameters for the antibacterial efficiency of a new wavelength, the 1940-nm Thulium Fiber Laser, were first investigated in this study. This paper comprises two preliminary analyses and one main experimental study, and presents data about the thermal effects of 1940-nm laser application on root canal tissue, effective sterilization parameters for the bacterium Enterococcus faecalis, and finally the antibacterial effectiveness of 1940-nm Thulium Fiber Laser irradiation in a single root canal. Based on these results, the optimal parameter range for safe laser-assisted root canal treatment was investigated in the main experiments. Comparing the antibacterial effects of four laser powers on an E. faecalis bacteria culture in vitro in 96-well plates showed that the most effective group was the one irradiated with 1 W of laser power (antibacterial effect corresponding to a log kill of 3). After the optimal laser power was determined, varying irradiation durations (15, 30, and 60 s) were compared in disinfecting E. faecalis. Laser application caused a significant reduction in colony-forming unit (CFU) values compared with control samples in the 17% ethylenediaminetetraacetic acid (EDTA) group. The results of the bacteria counts showed that 1 W with 30 s of irradiation with a 1940-nm thulium fiber laser was the optimal dose for safely achieving the maximal bactericidal effect.

  20. Determination of the optimal tolerance for MLC positioning in sliding window and VMAT techniques.

    Science.gov (United States)

    Hernandez, V; Abella, R; Calvo, J F; Jurado-Bruggemann, D; Sancho, I; Carrasco, P

    2015-04-01

    Several authors have recommended a 2 mm tolerance for multileaf collimator (MLC) positioning in sliding window treatments. In volumetric modulated arc therapy (VMAT) treatments, however, the optimal tolerance for MLC positioning remains unknown. In this paper, the authors present the results of a multicenter study to determine the optimal tolerance for both techniques. The procedure used is based on dynalog file analysis. The study was carried out using seven Varian linear accelerators from five different centers. Dynalogs were collected from over 100,000 clinical treatments and in-house software was used to compute the number of tolerance faults as a function of the user-defined tolerance. Thus, the optimal value for this tolerance, defined as the lowest achievable value, was investigated. Dynalog files accurately predict the number of tolerance faults as a function of the tolerance value, especially for low fault incidences. All MLCs behaved similarly and the Millennium120 and the HD120 models yielded comparable results. In sliding window techniques, the number of beams with an incidence of hold-offs >1% rapidly decreases for a tolerance of 1.5 mm. In VMAT techniques, the number of tolerance faults sharply drops for tolerances around 2 mm. For a tolerance of 2.5 mm, less than 0.1% of the VMAT arcs presented tolerance faults. Dynalog analysis provides a feasible method for investigating the optimal tolerance for MLC positioning in dynamic fields. In sliding window treatments, the tolerance of 2 mm was found to be adequate, although it can be reduced to 1.5 mm. In VMAT treatments, the typically used 5 mm tolerance is excessively high. Instead, a tolerance of 2.5 mm is recommended.

  1. Determination of the Spatial Distribution in Hydraulic Conductivity Using Genetic Algorithm Optimization

    Science.gov (United States)

    Aksoy, A.; Lee, J. H.; Kitanidis, P. K.

    2016-12-01

    Heterogeneity in hydraulic conductivity (K) impacts the transport and fate of contaminants in the subsurface as well as the design and operation of managed aquifer recharge (MAR) systems. Recently, improvements in computational resources and the availability of big data through electrical resistivity tomography (ERT) and remote sensing have provided opportunities to better characterize the subsurface. Yet, there is a need to improve prediction and evaluation methods in order to extract information from field measurements for better field characterization. In this study, genetic algorithm optimization, which has been widely used in optimal aquifer remediation designs, was used to determine the spatial distribution of K. A hypothetical 2 km by 2 km aquifer was considered. A genetic algorithm library, PGAPack, was linked with a fast Fourier transform based random field generator as well as a groundwater flow and contaminant transport simulation model (BIO2D-KE). The objective of the optimization model was to minimize the total squared error between measured and predicted field values. It was assumed that measured K values were available through ERT. The performance of the genetic algorithm in predicting the distribution of K was tested for different cases. In the first case, the observed K values were evaluated using the random field generator alone as the forward model. In the second case, in addition to the K values obtained through ERT, measured head values were incorporated into the evaluation, with BIO2D-KE and the random field generator used as the forward models. Lastly, tracer concentrations were used as additional information in the optimization model. Initial results indicated enhanced performance when the random field generator and BIO2D-KE are used in combination to predict the spatial distribution of K.
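
    A minimal sketch of the optimization idea is given below, assuming a purely illustrative setup in NumPy: a genetic algorithm evolves candidate log-conductivity fields to minimize the total squared error at assumed observation points. The variable names and the trivial forward model are hypothetical; the study itself couples PGAPack to a random field generator and the BIO2D-KE flow and transport model.

      # Illustrative genetic algorithm (hypothetical setup, NumPy only): minimize the
      # total squared error between "measured" and predicted K values. In the study the
      # forward model is a flow/transport simulation; here it is simply the candidate
      # field evaluated at the observation points.
      import numpy as np

      rng = np.random.default_rng(0)
      n_cells, n_obs, pop_size, n_gen = 50, 12, 40, 200
      obs_idx = rng.choice(n_cells, n_obs, replace=False)
      true_logK = rng.normal(-4.0, 0.5, n_cells)      # synthetic "truth"
      measured = true_logK[obs_idx]                   # stands in for ERT-derived data

      def fitness(candidate):
          return np.sum((candidate[obs_idx] - measured) ** 2)

      pop = rng.normal(-4.0, 1.0, (pop_size, n_cells))
      for _ in range(n_gen):
          scores = np.array([fitness(ind) for ind in pop])
          parents = pop[np.argsort(scores)[: pop_size // 2]]   # truncation selection
          children = []
          for _ in range(pop_size - len(parents)):
              a, b = parents[rng.integers(len(parents), size=2)]
              cut = rng.integers(1, n_cells)                   # one-point crossover
              child = np.concatenate([a[:cut], b[cut:]])
              child += rng.normal(0.0, 0.05, n_cells)          # Gaussian mutation
              children.append(child)
          pop = np.vstack([parents, children])

      best = min(pop, key=fitness)
      print("best total squared error:", fitness(best))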

  2. Prevalence, determinants and systems-thinking approaches to optimal hypertension control in West Africa.

    Science.gov (United States)

    Iwelunmor, Juliet; Airhihenbuwa, Collins O; Cooper, Richard; Tayo, Bamidele; Plange-Rhule, Jacob; Adanu, Richard; Ogedegbe, Gbenga

    2014-05-21

    In West Africa, hypertension, once rare, has now emerged as a critical health concern; the trajectory is upward and the contributing factors are complex. The true magnitude of hypertension in some West African countries, including in-depth knowledge of the underlying risk factors, is not completely understood. There is also a paucity of research on adequate systems-level approaches designed to mitigate the growing burden of hypertension in the region. In this review, we thematically synthesize available literature pertaining to the prevalence of hypertension in West Africa and discuss factors that influence its diagnosis, treatment and control. We aimed to address the social and structural determinants influencing hypertension in the sub-region, including the effects of urbanization, health infrastructure and the healthcare workforce. The prevalence of hypertension in West Africa has increased over the past decade and is rising rapidly, with an urban-rural gradient that places higher hypertension prevalence in urban settings compared to rural settings. Overall levels of awareness of one's hypertension status remain consistently low in West Africa. Structural and economic determinants related to conditions of poverty, such as insufficient finances, have a direct impact on adherence to prescribed antihypertensive medications. Urbanization contributes to the increasing incidence of hypertension in the sub-region, and available evidence indicates that inadequate health infrastructure may act as a barrier to optimal hypertension control in West Africa. Given that optimal hypertension control in West Africa depends on multiple factors that go beyond simply modifying the behaviors of individuals alone, we conclude by discussing the potential role systems-thinking approaches can play in achieving optimal control in the sub-region. In the context of recent advances in hypertension management including new therapeutic options and innovative solutions to expand health workforce so as to meet the high

  3. Optimization of digital image processing to determine quantum dots' height and density from atomic force microscopy.

    Science.gov (United States)

    Ruiz, J E; Paciornik, S; Pinto, L D; Ptak, F; Pires, M P; Souza, P L

    2018-01-01

    An optimized method of digital image processing to interpret quantum dots' height measurements obtained by atomic force microscopy is presented. The method was developed by combining well-known digital image processing techniques and particle recognition algorithms. The properties of quantum dot structures strongly depend on the dots' height, among other features. Determination of their height is sensitive to small variations in the digital image processing parameters, which can generate misleading results. Comparing the results obtained with two image processing techniques - a conventional method and the new method proposed herein - with the data obtained by determining the height of quantum dots one by one within a fixed area showed that the optimized method leads to more accurate results. Moreover, the log-normal distribution, which is often used to represent natural processes, shows a better fit to the quantum dots' height histogram obtained with the proposed method. Finally, the quantum dots' heights obtained were used to calculate the predicted photoluminescence peak energies, which were compared with the experimental data. Again, a better match was observed when using the proposed method to evaluate the quantum dots' height. Copyright © 2017 Elsevier B.V. All rights reserved.
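
    As a rough illustration of the kind of processing involved (not the authors' optimized pipeline), the sketch below builds a synthetic AFM-like height map, flattens the background, thresholds it, and labels connected regions to obtain per-dot heights and an areal density; the threshold, filter size and pixel scale are assumed values.

      # Hedged sketch of particle-recognition-style height extraction from an
      # AFM-like image; all parameters are illustrative assumptions.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(1)
      img = rng.normal(0.0, 0.1, (256, 256))                  # background roughness (nm)
      yy, xx = np.mgrid[0:256, 0:256]
      for _ in range(30):                                     # add synthetic "dots"
          y, x = rng.integers(10, 246, 2)
          h = rng.lognormal(mean=1.5, sigma=0.3)              # log-normal-like heights
          img += h * np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * 3.0 ** 2))

      flat = img - ndimage.median_filter(img, size=31)        # remove slow background
      labels, n_dots = ndimage.label(flat > 1.0)              # threshold in nm (tunable)
      heights = ndimage.maximum(flat, labels, index=np.arange(1, n_dots + 1))

      area_um2 = (256 * 0.01) ** 2                            # assumed 10 nm/pixel scale
      print(f"dots: {n_dots}, density: {n_dots / area_um2:.1f} per um^2, "
            f"mean height: {np.mean(heights):.2f} nm")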

  4. FPGA Congestion-Driven Placement Refinement

    Energy Technology Data Exchange (ETDEWEB)

    Vicente de, J.

    2005-07-01

    The routing congestion usually limits the complete exploitation of the FPGA logic resources. A key question can be formulated regarding the benefits of estimating the congestion at the placement stage. In recent years, the idea of a detailed placement that takes congestion into account has been gaining acceptance. In this paper, we resort to the Thermodynamic Simulated Annealing (TSA) algorithm to perform a congestion-driven placement refinement on top of the common Bounding-Box pre-optimized solution. The adaptive properties of TSA allow the search to preserve the quality of the pre-optimized solution while improving other fine-grain objectives. Regarding the cost function, two approaches have been considered. In the first, Expected Occupation (EO), a detailed probabilistic model to account for channel congestion, is evaluated. We show that in spite of the minute detail of EO, the inherent uncertainty of this probabilistic model prevents relieving congestion beyond what the sole application of the Bounding-Box cost function achieves. In the second approach we resort to the fast Rectilinear Steiner Regions algorithm to perform not an estimation but a measurement of the global routing congestion. This second strategy allows us to successfully reduce the requested channel width for a set of benchmark circuits with respect to the widespread Versatile Place and Route (VPR) tool. (Author) 31 refs.
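
    For readers unfamiliar with annealing-based placement, the toy sketch below shows the basic move-accept loop on a hypothetical netlist, minimizing bounding-box (half-perimeter) wirelength only. It is plain simulated annealing with a fixed schedule, not the adaptive TSA algorithm or the congestion cost functions described in the record.

      # Toy simulated-annealing placement refinement (hypothetical netlist): swap two
      # cells and accept moves that reduce total half-perimeter wirelength, or accept
      # worsening moves with a Boltzmann probability that shrinks as T cools.
      import math
      import random

      random.seed(0)
      n_cells, grid = 40, 8
      nets = [random.sample(range(n_cells), 4) for _ in range(30)]    # assumed netlist
      pos = {c: (c % grid, c // grid) for c in range(n_cells)}        # initial placement

      def hpwl(positions):
          total = 0
          for net in nets:
              xs = [positions[c][0] for c in net]
              ys = [positions[c][1] for c in net]
              total += (max(xs) - min(xs)) + (max(ys) - min(ys))
          return total

      cost, T = hpwl(pos), 5.0
      while T > 0.01:
          for _ in range(200):
              a, b = random.sample(range(n_cells), 2)
              pos[a], pos[b] = pos[b], pos[a]              # propose a swap
              new_cost = hpwl(pos)
              if new_cost <= cost or random.random() < math.exp((cost - new_cost) / T):
                  cost = new_cost                          # accept the move
              else:
                  pos[a], pos[b] = pos[b], pos[a]          # reject: undo the swap
          T *= 0.9                                         # geometric cooling
      print("final bounding-box wirelength:", cost)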

  5. Determination of optimal diagnostic criteria for purulent vaginal discharge and cytological endometritis in dairy cows.

    Science.gov (United States)

    Denis-Robichaud, J; Dubuc, J

    2015-10-01

    The objectives of this observational study were to identify the optimal diagnostic criteria for purulent vaginal discharge (PVD) and cytological endometritis (ENDO) using vaginal discharge, endometrial cytology, and leukocyte esterase (LE) tests, and to quantify their effect on subsequent reproductive performance. Data generated from 1,099 untreated Holstein cows (28 herds) enrolled in a randomized clinical trial were used in this study. Cows were examined at 35 (± 7) d in milk for PVD using vaginal discharge scoring and for ENDO using endometrial cytology and LE testing. Optimal combinations of diagnostic criteria were determined based on the lowest Akaike information criterion (AIC) to predict pregnancy status at first service. Once identified, these criteria were used to quantify the effect of PVD and ENDO on pregnancy risk at first service and on pregnancy hazard until 200 d in milk (survival analysis). Predicting ability of these diagnostic criteria was determined using area under the curve (AUC) values. The prevalence of PVD and ENDO was calculated as well as the agreement between endometrial cytology and LE. The optimal diagnostic criteria (lowest AIC) identified in this study were purulent vaginal discharge or worse (≥ 4), ≥ 6% polymorphonuclear leukocytes (PMNL) by endometrial cytology, and small amounts of leukocytes or worse (≥ 1) by LE testing. When using the combination of vaginal discharge and PMNL percentage as diagnostic tools (n = 1,099), the prevalences of PVD and ENDO were 17.1 and 36.2%, respectively. When using the combination of vaginal discharge and LE (n = 915), the prevalences of PVD and ENDO were 17.1 and 48.4%. The optimal strategies for predicting pregnancy status at first service were the use of LE only (AUC = 0.578) and PMNL percentage only (AUC = 0.575). Cows affected by PVD and ENDO had 0.36 and 0.32 times the odds, respectively, of being pregnant at first service when using PMNL percentage compared with that of unaffected

  6. Determination of optimal reformer temperature in a reformed methanol fuel cell system using ANFIS models and numerical optimization methods

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl

    2015-01-01

    In this work a method for choosing the optimal reformer temperature for a reformed methanol fuel cell system is presented based on a case study of a H3 350 module produced by Serenergy A/S. The method is based on ANFIS models of the dependence of the reformer output gas composition on the reforme...

  7. Genotype 1 hepatitis C virus envelope features that determine antiviral response assessed through optimal covariance networks.

    Directory of Open Access Journals (Sweden)

    John M Murray

    Full Text Available The poor response to the combined antiviral therapy of pegylated alfa-interferon and ribavirin for hepatitis C virus (HCV) infection may be linked to mutations in the viral envelope gene E1E2 (env), which can result in escape from the immune response and higher efficacy of viral entry. Mutations that result in failure of therapy most likely require compensatory mutations to achieve sufficient change in envelope structure and function. Compensatory mutations were investigated by determining positions in the E1E2 gene where amino acids (aa) covaried across groups of individuals. We assessed networks of covarying positions in E1E2 sequences that differentiated sustained virological response (SVR) from non-response (NR) in 43 genotype 1a (17 SVR) and 49 genotype 1b (25 SVR) chronically HCV-infected individuals. Binary integer programming over covariance networks was used to extract aa combinations that differed between response groups. Genotype 1a E1E2 sequences exhibited higher degrees of covariance and clustered into 3 main groups, while 1b sequences exhibited no clustering. Between 5 and 9 aa pairs were required to separate SVR from NR in each genotype. aa in hypervariable region 1 were 6 times more likely than chance to occur in the optimal networks. The pair 531-626 (EI) appeared frequently in the optimal networks and was present in 6 of 9 NR in one of the 1a clusters. The most frequent pairs representing SVR were 431-481 (EE) and 500-522 (QA) in 1a, and 407-434 (AQ) in 1b. Optimal networks based on covarying aa pairs in the HCV envelope can indicate features that are associated with failure or success of antiviral therapy.

  8. Virtual haptic system for intuitive planning of bone fixation plate placement

    Directory of Open Access Journals (Sweden)

    Kup-Sze Choi

    2017-01-01

    Full Text Available Placement of a pre-contoured fixation plate is a common treatment for bone fracture. Fitting of fixation plates on fractured bone can be preoperatively planned and evaluated in a 3D virtual environment using virtual reality technology. However, conventional systems usually employ a 2D mouse and a virtual trackball as the user interface, which makes the process inconvenient and inefficient. In this paper, a preoperative planning system equipped with a 3D haptic user interface is proposed to allow users to manipulate the virtual fixation plate intuitively to determine the optimal position for placement on the distal medial tibia. The system provides interactive feedback forces and visual guidance based on the geometric requirements. Creation of 3D models from medical imaging data, collision detection, dynamics simulation and haptic rendering are discussed. The system was evaluated by 22 subjects. Results show that the time to achieve optimal placement using the proposed system was shorter than when using a 2D mouse and virtual trackball, and the satisfaction rating was also higher. The system shows potential to facilitate the process of fitting fixation plates on fractured bones as well as interactive fixation plate design.

  9. Determining optimal preventive maintenance interval for component of Well Barrier Element in an Oil & Gas Company

    Science.gov (United States)

    Siswanto, A.; Kurniati, N.

    2018-04-01

    An oil and gas company has 2,268 oil and gas wells. A Well Barrier Element (WBE) is installed in a well to protect people, prevent asset damage and minimize harm to the environment. The primary WBE component is the Surface Controlled Subsurface Safety Valve (SCSSV). The secondary WBE component is the set of Christmas Tree Valves, which consists of four valves: the Lower Master Valve (LMV), Upper Master Valve (UMV), Swab Valve (SV) and Wing Valve (WV). Current practice for the WBE Preventive Maintenance (PM) program follows the schedule suggested in the manual. A Corrective Maintenance (CM) program is carried out when a component fails unexpectedly. Both PM and CM incur cost and may cause production loss. This paper analyzes the failure and reliability behavior of these components based on historical data. The optimal PM interval is determined in order to minimize the total cost of maintenance per unit time. The optimal PM interval for the SCSSV is 730 days, for the LMV 985 days, for the UMV 910 days, for the SV 900 days and for the WV 780 days. Averaged over all components, the cost reduction from implementing the suggested intervals is 52%, while reliability is improved by 4% and availability is increased by 5%.
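
    The underlying idea, choosing the PM interval that minimizes expected maintenance cost per unit time, can be sketched with the standard age-replacement model. The Weibull parameters and cost ratio below are assumptions for illustration, not the valves' actual failure data.

      # Hedged sketch of the classic age-replacement trade-off: cost rate
      # C(T) = [c_pm*R(T) + c_cm*(1 - R(T))] / integral_0^T R(t) dt, with an assumed
      # Weibull reliability function R(t).
      import numpy as np

      beta, eta = 2.5, 1500.0        # assumed Weibull shape and scale (days)
      c_pm, c_cm = 1.0, 8.0          # assumed relative costs of PM and CM

      def reliability(t):
          return np.exp(-(t / eta) ** beta)

      def cost_rate(T, n=2000):
          t = np.linspace(0.0, T, n)
          expected_cycle_length = np.sum(reliability(t)) * (T / (n - 1))  # ~ integral of R
          expected_cycle_cost = c_pm * reliability(T) + c_cm * (1.0 - reliability(T))
          return expected_cycle_cost / expected_cycle_length

      candidates = np.arange(50, 3000, 10)
      rates = [cost_rate(T) for T in candidates]
      print("optimal PM interval ~", candidates[int(np.argmin(rates))], "days")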

  10. Direct generation of stibine from slurries and its determination by ETAAS using multivariate optimization

    Energy Technology Data Exchange (ETDEWEB)

    Cal-Prieto, M.J.; Felipe-Sotelo, M.; Carlosena, A.; Andrade, J.M. [University of La Coruna, La Coruna (Spain). Dept. of Analytical Chemistry

    2005-06-01

    A simple and fast analytical method combining slurry preparation, hydride generation, and trapping on iridium-treated graphite tubes is presented to determine Sb in soil samples by electrothermal atomic absorption spectrometry (HG-ETAAS). Chemometric optimization of the slurry preparation and generation of antimony was carried out (Plackett-Burman designs and Simplex optimizations). Slurries were prepared in 9 mol L⁻¹ HCl, and stibine was generated using 0.4 mol L⁻¹ HCl and 2.5% (m/v) NaBH₄ (150 mL min⁻¹ Ar flow). Further, a heated quartz tube atomization system was used (HG-AAS). The slurry HG-ETAAS method was validated with nine certified reference materials: three soils, two sediments, two coal fly ashes, and two coals. The lowest LOD obtained was 0.08 μg g⁻¹ and good overall precision was achieved (RSD < 79%). The coal samples required a previous ashing step (450 °C, up to constant weight). The analyte extraction to the liquid phase was also studied.

  11. Determining the optimal product-mix using integer programming: An application in audio speaker production

    Science.gov (United States)

    Khan, Sahubar Ali Bin Mohamed Nadhar; Ahmarofi, Ahmad Afif Bin

    2014-12-01

    In the manufacturing sector, production planning or scheduling is the most important managerial task for achieving profit maximization and cost minimization. With limited resources, management has to satisfy customer demand and at the same time fulfill the company's objective, which is to maximize profit or minimize cost. Hence, planning becomes a significant task for a production site in order to determine the optimal number of units of each product to be produced. In this study, an integer programming technique is used to develop an appropriate product-mix plan to obtain the optimal number of audio speaker products that should be produced in order to maximize profit. The branch-and-bound method is applied to obtain exact integer solutions when non-integer solutions occur. Three major resource constraints are considered in this problem: a raw materials constraint, a demand constraint and a standard production time constraint. It is found that the developed integer programming model gives a significant increase in profit compared to the existing method used by the company. At the end of the study, a sensitivity analysis was performed to evaluate the effects of changes in the objective function coefficients and available resources on the developed model. This will enable management to foresee the effects on the results when changes occur in the profits of its products or in the available resources.
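
    A toy version of such a product-mix model is sketched below using the PuLP library, with invented profit and resource coefficients for two hypothetical speaker models; the company's actual data are not reproduced here. PuLP's default solver applies branch-and-bound to enforce the integer restrictions.

      # Hypothetical product-mix integer program (all coefficients invented):
      # maximize profit subject to raw material, production time and demand limits.
      from pulp import LpProblem, LpVariable, LpMaximize, value

      x1 = LpVariable("model_A_units", lowBound=0, cat="Integer")
      x2 = LpVariable("model_B_units", lowBound=0, cat="Integer")

      prob = LpProblem("speaker_product_mix", LpMaximize)
      prob += 45 * x1 + 60 * x2               # profit per unit (assumed)
      prob += 3 * x1 + 5 * x2 <= 600          # raw material units available
      prob += 0.5 * x1 + 0.8 * x2 <= 90       # standard production hours
      prob += x1 <= 150                       # demand limits (assumed)
      prob += x2 <= 80

      prob.solve()
      print("model A:", value(x1), "model B:", value(x2), "profit:", value(prob.objective))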

  12. Is patient size important in dose determination and optimization in cardiology?

    International Nuclear Information System (INIS)

    Reay, J; Chapple, C L; Kotre, C J

    2003-01-01

    Patient dose determination and optimization have become more topical in recent years with the implementation of the Medical Exposures Directive into national legislation, the Ionising Radiation (Medical Exposure) Regulations. This legislation incorporates a requirement for new equipment to provide a means of displaying a measure of patient exposure and introduces the concept of diagnostic reference levels. It is normally assumed that patient dose is governed largely by patient size; however, in cardiology, where procedures are often very complex, the significance of patient size is less well understood. This study considers over 9000 cardiology procedures, undertaken throughout the north of England, and investigates the relationship between patient size and dose. It uses simple linear regression to calculate both correlation coefficients and significance levels for data sorted by both room and individual clinician for the four most common examinations: left ventricular and/or coronary angiography, single vessel stent insertion and single vessel angioplasty. This paper concludes that the correlation between patient size and dose is weak for the procedures considered. It also illustrates the use of an existing method for removing the effect of patient size from dose survey data. This allows typical doses and, therefore, reference levels to be defined for the purposes of dose optimization.

  13. Optimization of the indirect at neutron activation technique for the determination of boron in aqueous solutions

    International Nuclear Information System (INIS)

    Luz, L.C.Q.P. da.

    1984-01-01

    The purpose of this work was the development of an instrumental method for the optimization of the indirect neutron activation analysis of boron in aqueous solutions. The optimization took into account the analytical parameters under laboratory conditions: activation carried out with a 241Am/Be neutron source and detection of the activity induced in vanadium with two NaI(Tl) gamma spectrometers. A calibration curve was thus obtained for a concentration range of 0 to 5000 ppm B. Later on, experimental models were built in order to study the feasibility of automation. The analysis of boron was finally performed, under the previously established conditions, with an automated system comprising the operations of transport, irradiation and counting. An improvement in the quality of the analysis was observed, with boron concentrations as low as 5 ppm being determined with a precision better than 0.4%. The experimental model features all the basic design elements for an automated device for the analysis of boron in aqueous solutions wherever this is required, as in the operation of nuclear reactors. (Author) [pt

  14. Regional gray matter abnormalities in patients with schizophrenia determined with optimized voxel-based morphometry

    Science.gov (United States)

    Guo, XiaoJuan; Yao, Li; Jin, Zhen; Chen, Kewei

    2006-03-01

    This study examined regional gray matter abnormalities across the whole brain in 19 patients with schizophrenia (12 males and 7 females), compared with 11 normal volunteers (7 males and 4 females). Customized brain templates were created in order to improve spatial normalization and segmentation. Automated preprocessing of the magnetic resonance imaging (MRI) data was then conducted using optimized voxel-based morphometry (VBM). The statistical voxel-based analysis was implemented as a two-sample t-test model. Compared with normal controls, regional gray matter concentration in patients with schizophrenia was significantly reduced in the bilateral superior temporal gyrus, bilateral middle frontal and inferior frontal gyrus, right insula, precentral and parahippocampal areas, and left thalamus and hypothalamus; however, significant increases in gray matter concentration were not observed anywhere across the whole brain in the patients. This study confirms and extends some earlier findings on gray matter abnormalities in schizophrenic patients. Previous behavioral and fMRI research on schizophrenia has suggested that cognitive capacity is decreased and self-awareness weakened in schizophrenic patients. The regional gray matter abnormalities determined through structural MRI with optimized VBM may be potential anatomic underpinnings of schizophrenia.

  15. Rapid Determination of Optimal Conditions in a Continuous Flow Reactor Using Process Analytical Technology

    Directory of Open Access Journals (Sweden)

    Michael F. Roberto

    2013-12-01

    Full Text Available Continuous flow reactors (CFRs) are an emerging technology that offer several advantages over traditional batch synthesis methods, including more efficient mixing schemes, rapid heat transfer, and increased user safety. Of particular interest to the specialty chemical and pharmaceutical manufacturing industries is the significantly improved reliability and product reproducibility over time. CFR reproducibility can be attributed to the reactors achieving and maintaining a steady state once all physical and chemical conditions have stabilized. This work describes the implementation of a smart CFR with univariate physical and multivariate chemical monitoring that allows for rapid determination of steady state, requiring less than one minute. Additionally, the use of process analytical technology further enabled a significant reduction in the time and cost associated with offline validation methods. The technology implemented for this study is chemistry and hardware agnostic, making this approach a viable means of optimizing the conditions of any CFR.

  16. A Mathematical Method for Determining Optimal Quantity of Backfill Materials Used for Grounding Resistance Reduction

    Directory of Open Access Journals (Sweden)

    Jovan Trifunovic

    2018-01-01

    Full Text Available During installation of a grounding system, which represents a significant part of any electrical power system, various backfill materials are used for grounding resistance reduction. A general mathematical method for determining the optimal quantity of backfill material used for grounding resistance reduction, based on mathematical tools, 3D FEM modeling, numerical analysis of the obtained results, and the “knee” of the curve concept, as well as on engineering analysis drawing on the designer's experience, is developed and offered in this paper. The proposed method has been tested by applying it to a square loop enveloped by a backfill material and buried in a 2-layer soil. The results obtained by the presented method showed a good correlation with experimental data from the literature. The proposed method can help designers avoid the saturation region in order to maximize the efficiency of backfill material usage.
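
    One common numerical way to locate the “knee” of such a saturating curve is to take the point farthest from the chord joining the curve's endpoints; the short sketch below applies this criterion to an assumed resistance-versus-quantity curve. The paper's complete method additionally relies on 3D FEM modelling and the designer's engineering judgment.

      # Knee-of-the-curve sketch on synthetic data: the knee is taken as the point
      # of maximum perpendicular distance from the chord between the endpoints.
      import numpy as np

      quantity = np.linspace(0.1, 10.0, 100)          # backfill quantity (arbitrary units)
      resistance = 5.0 + 20.0 / quantity              # assumed saturating resistance curve

      p1 = np.array([quantity[0], resistance[0]])
      p2 = np.array([quantity[-1], resistance[-1]])
      num = np.abs((p2[0] - p1[0]) * (resistance - p1[1])
                   - (p2[1] - p1[1]) * (quantity - p1[0]))
      dist = num / np.linalg.norm(p2 - p1)
      print("knee at roughly", round(float(quantity[np.argmax(dist)]), 2), "quantity units")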

  17. Determination of the dose rate from external irradiation. Geological considerations in sampling optimization

    International Nuclear Information System (INIS)

    Baeza, A.; Paniagua, J.M.; Fernandez, J.A.

    1997-01-01

    Dose rates received from natural external irradiation were evaluated with two techniques: using radiation measurements made in radiometric flights, and using gamma-emitter radioactivity levels in the soil. The zone of the study was the province of Caceres (Spain), of some 20 000 km² area. Because of its complicated geology, this zone has a great spatial variability in the concentrations of radionuclides present in the soil. The results allowed ratification of the two dose rate measurement techniques, and the establishment of criteria with which, using geology as a parameter, future sampling campaigns could be optimized through the determination of the minimum number of points to sample and their most suitable locations. (Author)

  18. Morphology Analysis and Optimization: Crucial Factor Determining the Performance of Perovskite Solar Cells.

    Science.gov (United States)

    Zeng, Wenjin; Liu, Xingming; Guo, Xiangru; Niu, Qiaoli; Yi, Jianpeng; Xia, Ruidong; Min, Yong

    2017-03-24

    This review presents an overall discussion of morphology analysis and optimization for perovskite (PVSK) solar cells. Surface morphology and energy alignment have been proven to play a dominant role in determining device performance. The effects of key parameters such as solution conditions and preparation atmosphere on the crystallization of PVSK, and the characterization of surface morphology and interface distribution in the perovskite layer, are discussed in detail. Furthermore, the analysis of interface energy level alignment using X-ray photoelectron spectroscopy and ultraviolet photoelectron spectroscopy is presented to reveal the correlation between morphology and charge generation and collection within the perovskite layer, and its influence on device performance. Techniques including architecture modification and solvent annealing were reviewed as efficient approaches to improve the morphology of PVSK. It is expected that further progress will be achieved with more effort devoted to understanding the mechanisms of surface engineering in the field of PVSK solar cells.

  19. Determining Optimal Hourly and Annual Coefficient District Cooling - One of the Aspects use of Green Technology

    Directory of Open Access Journals (Sweden)

    Sefik M.Bajmak

    2013-11-01

    Full Text Available Operating several cooling sources (refrigeration machines) together in a system of centralized supply of cooling energy (SCSCE) is a way to achieve cost-effective operation and a safe and rational supply of the consumption area with cold water for central cooling and air conditioning. The maximum demand for cold water occurs rarely, because extremely high temperatures occur rarely. Therefore, the total cooling load is divided into a base part and a peak part. One of the main characteristics defining the justification for the use of coupled processes and their sizes is the hourly coefficient of the centralized supply of cold water, the temperature regime, or the hourly coefficient of district cooling. Determination of the optimal hourly coefficient of district cooling is one of the main techno-economic tasks in the design of systems for the centralized supply of cold water for air conditioning in industrial buildings, social housing and business districts.

  20. Predictors of professional placement outcome: cultural background, English speaking and international student status.

    Science.gov (United States)

    Attrill, Stacie; McAllister, Sue; Lincoln, Michelle

    2016-08-01

    Placements provide opportunities for students to develop practice skills in professional settings. Learning in placements may be challenging for culturally and linguistically diverse (CALD) students, international students, or those without sufficient English proficiency for professional practice. This study investigated whether these factors, which are hypothesized to influence acculturation, predict poor placement outcomes. Placement outcome data were collected for 854 students who completed 2747 placements. Placement outcome was categorized into 'Pass' or 'At risk' categories. Multilevel binomial regression analysis was used to determine whether being CALD, being an international student, speaking 'English as an additional language', or speaking a 'Language other than English at home' predicted placement outcome. In the multiple multilevel analysis, speaking English as an additional language and being an international student were significant predictors of 'at risk' placements, but the other variables tested were not. Effect sizes were small, indicating that untested factors also influenced placement outcome. These results suggest that students' English-as-an-additional-language or international student status influences success in placements. The extent of acculturation may explain the differences in placement outcome for the groups tested. This suggests that learning needs for placement may differ for students undertaking more acculturative adjustments. Further research is needed to understand this and to identify placement support strategies.

  1. Fundamentals of Cluster-Centric Content Placement in Cache-Enabled Device-to-Device Networks

    OpenAIRE

    Afshang, Mehrnaz; Dhillon, Harpreet S.; Chong, Peter Han Joo

    2015-01-01

    This paper develops a comprehensive analytical framework with foundations in stochastic geometry to characterize the performance of cluster-centric content placement in a cache-enabled device-to-device (D2D) network. Different from device-centric content placement, cluster-centric placement focuses on placing content in each cluster such that the collective performance of all the devices in each cluster is optimized. Modeling the locations of the devices by a Poisson cluster process, we defin...

  2. Determining risk for out-of-hospital cardiac arrest by location type in a Canadian urban setting to guide future public access defibrillator placement.

    Science.gov (United States)

    Brooks, Steven C; Hsu, Jonathan H; Tang, Sabrina K; Jeyakumar, Roshan; Chan, Timothy C Y

    2013-05-01

    Automated external defibrillator use by lay bystanders during out-of-hospital cardiac arrest rarely occurs but can improve survival. We seek to estimate risk for out-of-hospital cardiac arrest by location type and evaluate current automated external defibrillator deployment in a Canadian urban setting to guide future automated external defibrillator deployment. This was a retrospective analysis of a population-based out-of-hospital cardiac arrest database. We included consecutive public location, nontraumatic, out-of-hospital cardiac arrests occurring in Toronto from January 1, 2006, to June 30, 2010, captured in the Resuscitation Outcomes Consortium Epistry database. Two investigators independently categorized each out-of-hospital cardiac arrest and automated external defibrillator location into one of 38 categories. Total site counts in each location category were used to estimate average annual per-site cardiac arrest incidence and determine the relative automated external defibrillator coverage for each location type. There were 608 eligible out-of-hospital cardiac arrest cases. The top 5 location categories by average annual out-of-hospital cardiac arrests per site were race track/casino (0.67; 95% confidence interval [CI] 0 to 1.63), jail (0.62; 95% CI 0.3 to 1.06), hotel/motel (0.15; 95% CI 0.12 to 0.18), hostel/shelter (0.14; 95% CI 0.067 to 0.19), and convention center (0.11; 95% CI 0 to 0.43). Although schools were relatively lower risk for cardiac arrest, they represented 72.5% of automated external defibrillator-covered locations in the study region. Some higher-risk location types such as hotel/motel, hostel/shelter, and rail station were severely underrepresented with respect to automated external defibrillator coverage. We have identified types of locations with higher per-site risk for cardiac arrest relative to others. We have also identified potential mismatches between cardiac arrest risk by location type and registered automated external

  3. Determination of artificial sweeteners by capillary electrophoresis with contactless conductivity detection optimized by hydrodynamic pumping.

    Science.gov (United States)

    Stojkovic, Marko; Mai, Thanh Duc; Hauser, Peter C

    2013-07-17

    The common sweeteners aspartame, cyclamate, saccharin and acesulfame K were determined by capillary electrophoresis with contactless conductivity detection. In order to obtain the best compromise between separation efficiency and analysis time, hydrodynamic pumping was imposed during the electrophoresis run employing a sequential injection manifold based on a syringe pump. Band broadening was avoided by using capillaries of a narrow 10 μm internal diameter. The analyses were carried out in an aqueous running buffer consisting of 150 mM 2-(cyclohexylamino)ethanesulfonic acid and 400 mM tris(hydroxymethyl)aminomethane at pH 9.1 in order to render all analytes in the fully deprotonated anionic form. The use of surface modification to eliminate or reverse the electroosmotic flow was not necessary due to the superimposed bulk flow. The use of hydrodynamic pumping allowed easy optimization, either for fast separations (80 s) or low detection limits (6.5 μmol L⁻¹, 5.0 μmol L⁻¹, 4.0 μmol L⁻¹ and 3.8 μmol L⁻¹ for aspartame, cyclamate, saccharin and acesulfame K, respectively, at a separation time of 190 s). The conditions for fast separations not only led to higher limits of detection but also to a narrower dynamic range. However, the settings can be changed readily between separations if needed. The four compounds were determined successfully in food samples. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Optimized and Validated Spectrophotometric Methods for the Determination of Enalapril Maleate in Commercial Dosage Forms

    Directory of Open Access Journals (Sweden)

    Sk Manirul Haque

    2008-01-01

    Full Text Available Four simple, rapid and sensitive spectrophotometric methods have been proposed for the determination of enalapril maleate in pharmaceutical formulations. The first method is based on the reaction of carboxylic acid group of enalapril maleate with a mixture of potassium iodate (KIO3) and iodide (KI) to form yellow colored product in aqueous medium at 25 ± 1°C. The reaction is followed spectrophotometrically by measuring the absorbance at 352 nm. The second, third and fourth methods are based on the charge transfer complexation reaction of the drug with p-chloranilic acid (pCA) in 1, 4-dioxan-methanol medium, 2, 3-dichloro 5, 6-dicyano 1, 4-benzoquinone (DDQ) in acetonitrile-1,4 dioxane medium and iodine in acetonitrile-dichloromethane medium. Under optimized experimental conditions, Beer’s law is obeyed in the concentration ranges of 2.5–50, 20–560, 5–75 and 10–200 μg mL−1, respectively. All the methods have been applied to the determination of enalapril maleate in pharmaceutical dosage forms. Results of analysis are validated statistically.

  5. Optimized and validated spectrophotometric methods for the determination of enalapril maleate in commercial dosage forms.

    Science.gov (United States)

    Rahman, Nafisur; Haque, Sk Manirul

    2008-03-01

    Four simple, rapid and sensitive spectrophotometric methods have been proposed for the determination of enalapril maleate in pharmaceutical formulations. The first method is based on the reaction of carboxylic acid group of enalapril maleate with a mixture of potassium iodate (KIO3) and iodide (KI) to form yellow colored product in aqueous medium at 25 ± 1 °C. The reaction is followed spectrophotometrically by measuring the absorbance at 352 nm. The second, third and fourth methods are based on the charge transfer complexation reaction of the drug with p-chloranilic acid (pCA) in 1, 4-dioxan-methanol medium, 2, 3-dichloro 5, 6-dicyano 1, 4-benzoquinone (DDQ) in acetonitrile-1,4 dioxane medium and iodine in acetonitrile-dichloromethane medium. Under optimized experimental conditions, Beer's law is obeyed in the concentration ranges of 2.5-50, 20-560, 5-75 and 10-200 μg mL⁻¹, respectively. All the methods have been applied to the determination of enalapril maleate in pharmaceutical dosage forms. Results of analysis are validated statistically.

  6. In situ moisture determination of a cytotoxic compound during process optimization.

    Science.gov (United States)

    Hicks, Michael B; Zhou, George X; Lieberman, David R; Antonucci, Vincent; Ge, Zhihong; Shi, Yao-Jun; Cameron, Mark; Lynch, Joseph E

    2003-03-01

    A simple and safe prototype apparatus was designed and adapted for the in situ determination of the moisture content of a cytotoxic compound (9-fluorenylmethyl-protected doxorubicin-peptide conjugate, or Fm-DPC) by near-infrared absorbance spectroscopy during optimization of the chemical isolation procedure. The cytotoxic nature of the compound restricts one's ability to safely sample such drying processes for more traditional means of moisture determination, for fear of hazardous dusting of solids; hence, in situ sampling approaches are of great importance. These concerns also exist for the process development laboratory, where despite the smaller scale of operations, the volume of experiments (hence cytotoxic samples) required to define a chemical process is often more significant. In this application, partial least squares regression was used with Karl Fischer volumetric titration analysis to generate a calibration model. Although pronounced differences in cake density were observed as a function of the buffer selected for the isolation process, the model still achieved a standard error of calibration of 0.63% w/w and a standard error of prediction of 0.99% (w/w). These results demonstrated the versatility of the prototype apparatus and data processing approach for modeling Fm-DPC drying under extremely variable conditions, as inherently expected during the investigational laboratory development of a chemical process. Copyright 2003 Wiley-Liss Inc. and the American Pharmaceutical Association
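
    The calibration step described here, regressing NIR spectra against Karl Fischer reference values by partial least squares, can be sketched on synthetic data as below; scikit-learn's PLSRegression is assumed as the modelling tool and all spectra are simulated rather than taken from the prototype apparatus.

      # Hedged PLS calibration sketch on synthetic NIR-like spectra: predict
      # moisture (% w/w) from spectra and report a standard error of prediction.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      n_samples, n_wavelengths = 60, 200
      moisture = rng.uniform(0.5, 10.0, n_samples)            # Karl Fischer stand-in
      band = np.exp(-0.5 * ((np.arange(n_wavelengths) - 120) / 8.0) ** 2)
      spectra = np.outer(moisture, band) + rng.normal(0, 0.05, (n_samples, n_wavelengths))

      X_tr, X_te, y_tr, y_te = train_test_split(spectra, moisture, random_state=0)
      pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
      sep = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
      print(f"standard error of prediction: {sep:.2f} % w/w")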

  7. Determination of the optimal area of waste incineration in a rotary kiln using a simulation model.

    Science.gov (United States)

    Bujak, J

    2015-08-01

    The article presents a mathematical model to determine the flux of incinerated waste in terms of its calorific values. The model is applicable in waste incineration systems equipped with rotary kilns. It is based on the known and proven energy flux balances and equations that describe the specific losses of energy flux while considering the specificity of waste incineration systems. The model is universal as it can be used both for the analysis and testing of systems burning different types of waste (municipal, medical, animal, etc.) and for allowing the use of any kind of additional fuel. Types of waste incinerated and additional fuel are identified by a determination of their elemental composition. The computational model has been verified in three existing industrial-scale plants. Each system incinerated a different type of waste. Each waste type was selected in terms of a different calorific value. This allowed the full verification of the model. Therefore the model can be used to optimize the operation of waste incineration system both at the design stage and during its lifetime. Copyright © 2015 Elsevier Ltd. All rights reserved.
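
    The flavor of such an energy-flux balance can be conveyed with a heavily simplified sketch: for a required thermal duty, an assumed lumped loss fraction and an auxiliary fuel contribution, the admissible waste mass flux follows from the waste's calorific value. The published model is considerably more detailed and treats the individual loss terms separately.

      # Heavily simplified energy-flux balance (illustrative only, assumed numbers):
      # waste mass flux needed to meet a thermal duty, given its calorific value,
      # a support-fuel contribution and a lumped loss fraction.
      def waste_flux_kg_per_h(duty_kw, lhv_waste_mj_per_kg, aux_fuel_kw=0.0, loss_fraction=0.15):
          net_from_waste_kw = duty_kw / (1.0 - loss_fraction) - aux_fuel_kw
          return 3600.0 * net_from_waste_kw / (lhv_waste_mj_per_kg * 1000.0)

      # Example: 2 MW duty, municipal waste at ~10 MJ/kg, 200 kW of support fuel
      print(round(waste_flux_kg_per_h(2000.0, 10.0, aux_fuel_kw=200.0)), "kg/h")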

  8. Optimization and Validation of an ETAAS Method for the Determination of Nickel in Postmortem Material.

    Science.gov (United States)

    Dudek-Adamska, Danuta; Lech, Teresa; Kościelniak, Paweł

    2015-01-01

    In this article, optimization and validation of a procedure for the determination of total nickel in wet digested samples of human body tissues (internal organs) for forensic toxicological purposes are presented. Four experimental setups of the electrothermal atomic absorption spectrometry (ETAAS) using a Solaar MQZe (Thermo Electron Co.) were compared, using the following (i) no modifier, (ii) magnesium nitrate, (iii) palladium nitrate and (iv) magnesium nitrate and ammonium dihydrogen phosphate mixture as chemical modifiers. It was ascertained that the ETAAS without any modifier with 1,300/2,400°C as the pyrolysis and atomization temperatures, respectively, can be used to determine total nickel at reference levels in biological materials as well as its levels found in chronic or acute poisonings. The method developed was validated, obtaining a linear range of calibration from 0.76 to 15.0 μg/L, limit of detection at 0.23 µg/L, limit of quantification at 0.76 µg/L, precision (as relative standard deviation) up to 10% and accuracy of 97.1% for the analysis of certified material (SRM 1577c Bovine Liver) and within a range from 99.2 to 109.9% for the recovery of fortified liver samples. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Multivariate optimization for molybdenum determination in environmental solid samples by slurry extraction-ETAAS

    Energy Technology Data Exchange (ETDEWEB)

    Felipe-Sotelo, M. [Department of Analytical Chemistry, University of A Coruna, Campus de A Zapateira s/n, E-15071 A Coruna (Spain); Cal-Prieto, M.J. [Department of Analytical Chemistry, University of A Coruna, Campus de A Zapateira s/n, E-15071 A Coruna (Spain); Carlosena, A. [Department of Analytical Chemistry, University of A Coruna, Campus de A Zapateira s/n, E-15071 A Coruna (Spain)]. E-mail: alatzne@udc.es; Andrade, J.M. [Department of Analytical Chemistry, University of A Coruna, Campus de A Zapateira s/n, E-15071 A Coruna (Spain); Fernandez, E. [Department of Analytical Chemistry, University of A Coruna, Campus de A Zapateira s/n, E-15071 A Coruna (Spain); Prada, D. [Department of Analytical Chemistry, University of A Coruna, Campus de A Zapateira s/n, E-15071 A Coruna (Spain)

    2005-11-30

    A direct procedure to determine molybdenum in environmental solid samples (coal fly ash, sediment, soil and urban dust) by slurry extraction-electrothermal atomic absorption spectroscopy (SE-ETAAS) employing multivariate optimization is presented. Sample mass, ultrasonic power, HNO₃ concentration and HCl concentration were the most important variables affecting the extraction of Mo, as Plackett-Burman designs revealed. The agitation time proved not significant (within the 10-100 s range), thanks to the focused ultrasonic agitation program employed throughout (using a USS-100 probe). Two Simplex optimizations, a regular and a modified one, were carried out to simultaneously optimize the HNO₃ and HCl concentrations. Both Simplex runs converged to 14% (v/v) HCl and 10% (v/v) HNO₃. Sample mass was studied by means of a univariate procedure (optimum at 50 mg). Furnace programs were studied using wall atomization with and without BaF₂ as modifier, yielding similar results. Quantitative extraction was obtained, with good accuracy (ca. 90% recovery) and precision (R.S.D. < 9%), evaluated using five CRMs (coal fly ash SRM1633a, urban dust SRM1649a, marine sediments BCSS1 and PACS1, soil GBW07401). The limit of detection of the method was 0.1 μg g⁻¹ (20 mg/1 mL slurry) and the characteristic mass was 9.5 ± 1.7 pg. The main advantages of the slurry extraction procedure are that it can be implemented directly in the autosampler cups, and that it is inexpensive and fast.

  10. Optimization of a radiochemistry method for plutonium determination in biological samples

    International Nuclear Information System (INIS)

    Cerchetti, Maria L.; Arguelles, Maria G.

    2005-01-01

    Plutonium has been widely used for civilian and military activities. Nevertheless, the methods to control occupational exposure have not evolved in the same way, remaining one of the major challenges for radiological protection practice. Due to the low acceptable incorporation limit, the usual determination is based on indirect methods in urine samples. Our main objective was to optimize a technique used to monitor internal contamination of workers exposed to plutonium isotopes. Different parameters were modified and their influence on the three steps of the method was evaluated. Those which gave the highest yield and feasibility were selected. The method involves: 1) sample concentration (coprecipitation); 2) plutonium purification; and 3) source preparation by electrodeposition. In the coprecipitation phase, changes in temperature and carrier concentration were evaluated. In the ion-exchange separation, changes in the type of resin, the hydroxylamine elution solution (concentration and volume), the column length and column recycling were evaluated. Finally, in the electrodeposition phase, we modified the electrolytic solution, pH and time. Measurements were made by liquid scintillation counting and alpha spectrometry (PIPS). We obtained the following yields: 88% for coprecipitation (at 60 °C with 2 mL of CaHPO₄), 71% for ion exchange (AG 1x8 Cl⁻ 100-200 mesh resin, hydroxylamine 0.1 N in HCl 0.2 N as eluent, column between 4.5 and 8 cm), and 93% for electrodeposition (H₂SO₄-NH₄OH, 100 minutes and pH from 2 to 2.8). The expanded uncertainty was 30% (95% confidence level), the decision threshold (Lc) was 0.102 Bq/L and the minimum detectable activity was 0.218 Bq/L of urine. We obtained an optimized method to screen workers exposed to plutonium. (author)

  11. Rapid Titration of Measles and Other Viruses: Optimization with Determination of Replication Cycle Length

    Science.gov (United States)

    Grigorov, Boyan; Rabilloud, Jessica; Lawrence, Philip; Gerlier, Denis

    2011-01-01

    Background Measles virus (MV) is a member of the Paramyxoviridae family and an important human pathogen causing strong immunosuppression in affected individuals and a considerable number of deaths worldwide. Currently, measles is a re-emerging disease in developed countries. MV is usually quantified in infectious units as determined by limiting dilution and counting of plaque forming units either directly (PFU method) or indirectly from random distribution in microwells (TCID50 method). Both methods are time-consuming (up to several days), cumbersome and, in the case of the PFU assay, possibly operator dependent. Methods/Findings A rapid, optimized, accurate, and reliable technique for titration of measles virus was developed based on the detection of virus-infected cells by flow cytometry, a single round of infection and titer calculation according to Poisson's law. The kinetic follow-up of the number of infected cells after infection with serial dilutions of a virus allowed estimation of the duration of the replication cycle, and consequently, the optimal infection time. The assay was set up to quantify measles virus, vesicular stomatitis virus (VSV), and human immunodeficiency virus type 1 (HIV-1) using antibody labeling of viral glycoprotein, a virus-encoded fluorescent reporter protein and an inducible fluorescent-reporter cell line, respectively. Conclusion Overall, performing the assay takes only 24–30 hours for MV strains, 12 hours for VSV, and 52 hours for HIV-1. The step-by-step procedure we have set up can be, in principle, applicable to accurately quantify any virus, including lentiviral vectors, provided that a virus-encoded gene product can be detected by flow cytometry. PMID:21915289
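
    The Poisson step mentioned above has a simple closed form: after a single round of infection, a measured fraction f of infected cells corresponds to m = -ln(1 - f) infectious units per cell. The sketch below turns that into a stock titer; the cell number, inoculum volume and dilution are placeholder values, not the paper's data.

      # Single-hit Poisson titration sketch (placeholder numbers).
      import math

      def titer_iu_per_ml(fraction_infected, n_cells, inoculum_volume_ml, dilution=1.0):
          m = -math.log(1.0 - fraction_infected)   # infectious units per cell
          return m * n_cells * dilution / inoculum_volume_ml

      # Example: 12% infected cells, 2e5 cells/well, 0.1 mL of a 1:100 dilution
      print(f"{titer_iu_per_ml(0.12, 2e5, 0.1, dilution=100):.2e} IU/mL")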

  12. Optimizing Waveform Maximum Determination for Specular Point Tracking in Airborne GNSS-R.

    Science.gov (United States)

    Motte, Erwan; Zribi, Mehrez

    2017-08-16

    Airborne GNSS-R campaigns are crucial to the understanding of signal interactions with the Earth's surface. As a consequence of the specific geometric configurations arising during measurements from aircraft, the reflected signals can be difficult to interpret under certain conditions, for example over strongly attenuating media such as forests, or when the reflected signal is contaminated by the direct signal. For these reasons, there are many cases where the reflectivity is overestimated, or a portion of the dataset has to be flagged as unusable. In this study we present techniques that have been developed to optimize the processing of airborne GNSS-R data, with the goal of improving its accuracy and robustness under non-optimal conditions. This approach is based on the detailed analysis of data produced by the GLORI instrument, recorded during an airborne campaign in the south west of France in June 2015. Our technique relies on the improved determination of reflected waveform peaks in the delay dimension, which is related to the loci of the signals contributed by the zone surrounding the specular point. It is shown that, by correctly localizing the waveform maxima under conditions of low surface reflectivity and/or contamination from the direct signal, it is possible to correct and extract values corresponding to the real reflectivity of the zone in the neighborhood of the specular point. This algorithm was applied to a reanalysis of the complete campaign dataset, following which the accuracy and sensitivity improved, and the usability of the dataset was improved by 30%.

  13. Optimizing aspects of pedestrian traffic in building designs

    KAUST Repository

    Rodriguez, Samuel

    2013-11-01

    In this work, we investigate aspects of building design that can be optimized. Architectural features that we explore include pillar placement in simple corridors, doorway placement in buildings, and agent placement for information dispersal in an evacuation. The metrics utilized are tuned to the specific scenarios we study, which include continuous-flow pedestrian movement and building evacuation. We use Multidimensional Direct Search (MDS) optimization with an extreme barrier criterion to find optimal placements while enforcing building constraints. © 2013 IEEE.

  14. Directly patching high-level exchange-correlation potential based on fully determined optimized effective potentials

    Science.gov (United States)

    Huang, Chen; Chi, Yu-Chieh

    2017-12-01

    The key element in Kohn-Sham (KS) density functional theory is the exchange-correlation (XC) potential. We recently proposed the exchange-correlation potential patching (XCPP) method with the aim of directly constructing high-level XC potential in a large system by patching the locally computed, high-level XC potentials throughout the system. In this work, we investigate the patching of the exact exchange (EXX) and the random phase approximation (RPA) correlation potentials. A major challenge of XCPP is that a cluster's XC potential, obtained by solving the optimized effective potential equation, is only determined up to an unknown constant. Without fully determining the clusters' XC potentials, the patched system's XC potential is "uneven" in the real space and may cause non-physical results. Here, we developed a simple method to determine this unknown constant. The performance of XCPP-RPA is investigated on three one-dimensional systems: H20, H10Li8, and the stretching of the H19-H bond. We investigated two definitions of EXX: (i) the definition based on the adiabatic connection and fluctuation dissipation theorem (ACFDT) and (ii) the Hartree-Fock (HF) definition. With ACFDT-type EXX, effective error cancellations were observed between the patched EXX and the patched RPA correlation potentials. Such error cancellations were absent for the HF-type EXX, which was attributed to the fact that for systems with fractional occupation numbers, the integral of the HF-type EXX hole is not -1. The KS spectra and band gaps from XCPP agree reasonably well with the benchmarks as we make the clusters large.

  15. Optimization and validation of Folin-Ciocalteu method for the determination of total polyphenol content of Pu-erh tea.

    Science.gov (United States)

    Musci, Marilena; Yao, Shicong

    2017-12-01

    Pu-erh tea is a post-fermented tea that has recently gained popularity worldwide, due to potential health benefits related to the antioxidant activity resulting from its high polyphenolic content. The Folin-Ciocalteu method is a simple, rapid, and inexpensive assay widely applied for the determination of total polyphenol content. Over the past years, it has been subjected to many modifications, often without any systematic optimization or validation. In our study, we sought to optimize the Folin-Ciocalteu method, evaluate quality parameters including linearity, precision and stability, and then apply the optimized model to determine the total polyphenol content of 57 Chinese teas, including green tea and aged and ripened Pu-erh tea. Our optimized Folin-Ciocalteu method reduced analysis time and allowed the analysis of a large number of samples, discrimination among the different teas, and assessment of the effect of the post-fermentation process on polyphenol content.
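
    Results from a Folin-Ciocalteu assay are conventionally expressed against a gallic acid calibration curve; the sketch below shows that conversion on invented absorbance readings and is not the authors' optimized protocol.

      # Generic gallic-acid calibration sketch (invented readings): report total
      # polyphenols as gallic acid equivalents (GAE).
      import numpy as np

      standards_mg_l = np.array([0, 25, 50, 100, 200, 300])           # gallic acid
      absorbance_std = np.array([0.02, 0.11, 0.21, 0.40, 0.79, 1.17]) # assumed readings

      slope, intercept = np.polyfit(standards_mg_l, absorbance_std, 1)
      sample_abs, dilution = 0.62, 10                                  # assumed extract
      gae_mg_l = (sample_abs - intercept) / slope * dilution
      print(f"total polyphenols ~ {gae_mg_l:.0f} mg GAE/L of extract")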

  16. Optimization of the simultaneous determination of imatinib and its major metabolite, CGP74588, in human plasma by a rapid HPLC method using D-optimal experimental design.

    Science.gov (United States)

    Golabchifar, Ali-Akbar; Rouini, Mohammad-Reza; Shafaghi, Bijan; Rezaee, Saeed; Foroumadi, Alireza; Khoshayand, Mohammad-Reza

    2011-10-15

    A simple, rapid and specific HPLC method has been developed and validated for the simultaneous determination of imatinib, a tyrosine kinase inhibitor, and its major metabolite, CGP74588, in human plasma. The optimization of the HPLC procedure involved several variables, the influence of each of which was studied. After a series of preliminary screening experiments, the composition of the mobile phase and the pH of the added buffer solution were set as the investigated variables, while the resolution between the imatinib and CGP74588 peaks, the retention time and the imatinib peak width were chosen as the dependent variables. Applying D-optimal design, the optimal chromatographic conditions for the separation were defined. The method proved to show good agreement between the experimental data and predicted values throughout the studied parameter range. The optimum assay conditions were achieved with a Chromolith™ Performance RP-8e 100 mm × 4.6 mm column and a mixture of methanol/acetonitrile/triethylamine/diammonium hydrogen phosphate (pH 6.25, 0.048 mol L⁻¹) (20:20:0.1:59.9, v/v/v/v) as the mobile phase at a flow rate of 2 mL min⁻¹ and a detection wavelength of 261 nm. The run time was less than 5 min, which is much shorter than that of previously optimized methods. The optimized method was validated according to FDA guidelines to confirm specificity, linearity, accuracy and precision. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Relay Placement for FSO Multihop DF Systems With Link Obstacles and Infeasible Regions

    KAUST Repository

    Zhu, Bingcheng

    2015-05-19

    Optimal relay placement is studied for free-space optical multihop communication with link obstacles and infeasible regions. An optimal relay placement scheme is proposed to achieve the lowest outage probability, enable the links to bypass obstacles of various geometric shapes, and place the relay nodes in specified available regions. When the number of relay nodes is large, the searching space can grow exponentially, and thus a grouping optimization technique is proposed to reduce the searching time. We numerically demonstrate that the grouping optimization can provide suboptimal solutions close to the optimal solutions, but the average searching time grows linearly with the number of relay nodes. Two useful theorems are presented to reveal insights into the optimal relay locations. Simulation results show that our proposed optimization framework can effectively provide a desirable solution to the problem of optimal relay node placement. © 2015 IEEE.

  18. Optimal Parameters to Determine the Apparent Diffusion Coefficient in Diffusion Weighted Imaging via Simulation

    Science.gov (United States)

    Perera, Dimuthu

    Diffusion weighted (DW) Imaging is a non-invasive MR technique that provides information about the tissue microstructure using the diffusion of water molecules. The diffusion is generally characterized by the apparent diffusion coefficient (ADC) parametric map. The purpose of this study is to investigate in silico how the calculation of ADC is affected by image SNR, b-values, and the true tissue ADC. Also, to provide optimal parameter combination depending on the percentage accuracy and precision for prostate peripheral region cancer application. Moreover, to suggest parameter choices for any type of tissue, while providing the expected accuracy and precision. In this research DW images were generated assuming a mono-exponential signal model at two different b-values and for known true ADC values. Rician noise of different levels was added to the DWI images to adjust the image SNR. Using the two DWI images, ADC was calculated using a mono-exponential model for each set of b-values, SNR, and true ADC. 40,000 ADC data were collected for each parameter setting to determine the mean and the standard-deviation of the calculated ADC, as well as the percentage accuracy and precision with respect to the true ADC. The accuracy was calculated using the difference between known and calculated ADC. The precision was calculated using the standard-deviation of calculated ADC. The optimal parameters for a specific study was determined when both the percentage accuracy and precision were minimized. In our study, we simulated two true ADCs (ADC 0.00102 for tumor and 0.00180 mm2/s for normal prostate peripheral region tissue). Image SNR was varied from 2 to 100 and b-values were varied from 0 to 2000s/mm2. The results show that the percentage accuracy and percentage precision were minimized with image SNR. To increase SNR, 10 signal-averagings (NEX) were used considering the limitation in total scan time. The optimal NEX combination for tumor and normal tissue for prostate
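
    The abstract does not spell out the estimator it simulates; the minimal Python sketch below (with hypothetical b-values, SNR and trial counts) illustrates the two-point mono-exponential ADC estimate, ADC = ln(S1/S2)/(b2 - b1), with Rician noise added to the simulated signals, and reports percentage accuracy (bias) and precision (spread) with respect to the true ADC, as described above.

      import numpy as np

      rng = np.random.default_rng(0)

      def rician(mean_signal, sigma, n):
          # Rician noise: magnitude of the true signal plus complex Gaussian noise.
          re = mean_signal + rng.normal(0.0, sigma, n)
          im = rng.normal(0.0, sigma, n)
          return np.hypot(re, im)

      def simulate_adc(true_adc, b_low, b_high, snr, n_trials=40_000, s0=1.0):
          # Mono-exponential DWI signal S(b) = S0 * exp(-b * ADC) at two b-values,
          # with the noise level set by the requested image SNR.
          sigma = s0 / snr
          s1 = rician(s0 * np.exp(-b_low * true_adc), sigma, n_trials)
          s2 = rician(s0 * np.exp(-b_high * true_adc), sigma, n_trials)
          adc = np.log(s1 / s2) / (b_high - b_low)
          accuracy = 100.0 * abs(adc.mean() - true_adc) / true_adc   # % bias
          precision = 100.0 * adc.std() / true_adc                   # % spread
          return accuracy, precision

      # Example: tumour-like tissue, ADC = 0.00102 mm2/s, b = 0 and 1000 s/mm2, SNR = 50.
      print(simulate_adc(0.00102, 0.0, 1000.0, snr=50))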

  19. On the determination of optimal costly measurement strategies for linear stochastic systems.

    Science.gov (United States)

    Athans, M.

    1972-01-01

    This paper presents the formulation of a class of optimization problems dealing with selecting, at each instant of time, one measurement provided by one out of many sensors. Each measurement has an associated measurement cost. The basic problem is then to select an optimal measurement policy, during a specified observation time interval, so that a weighted combination of prediction accuracy and accumulated observation cost is optimized. The current analysis is limited to the class of linear stochastic dynamic systems and measurement subsystems. The problem of selecting the optimal measurement strategy can be transformed into a deterministic optimal control problem. It is shown that the optimal measurement policy and the associated matched Kalman-type filter can be precomputed.

  20. Cognitive Development Optimization Algorithm Based Support Vector Machines for Determining Diabetes

    Directory of Open Access Journals (Sweden)

    Utku Kose

    2016-03-01

    Full Text Available The definition, diagnosis and classification of Diabetes Mellitus and its complications are very important. First of all, the World Health Organization (WHO) and other societies, as well as scientists, have carried out many studies regarding this subject. One of the most important research interests in this subject is computer-supported decision systems for diagnosing diabetes. In such systems, Artificial Intelligence techniques are often used for several disease diagnostics to streamline the diagnostic process in daily routine and avoid misdiagnosis. In this study, a diabetes diagnosis system formed from both Support Vector Machines (SVM) and the Cognitive Development Optimization Algorithm (CoDOA) is proposed. During the training of the SVM, CoDOA was used to determine the sigma parameter of the Gaussian (RBF) kernel function, and eventually a classification process was carried out on the Pima Indians diabetes data set. The proposed approach offers an alternative solution to the field of Artificial Intelligence-based diabetes diagnosis, and contributes to the related literature on diagnosis processes.
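
    CoDOA is not available as a standard library routine, so the minimal Python sketch below substitutes a plain random search over the RBF kernel width sigma and uses synthetic data as a stand-in for the Pima Indians set; it only illustrates how a metaheuristic can tune the kernel parameter through cross-validated accuracy, not the CoDOA update rules themselves.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Synthetic stand-in for the 8-feature Pima Indians diabetes data.
      X, y = make_classification(n_samples=768, n_features=8, n_informative=5,
                                 random_state=0)
      rng = np.random.default_rng(0)

      def fitness(sigma):
          """Cross-validated accuracy of an RBF-SVM with kernel width sigma."""
          gamma = 1.0 / (2.0 * sigma ** 2)          # sklearn's gamma parameterization
          clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=gamma, C=1.0))
          return cross_val_score(clf, X, y, cv=5).mean()

      # Plain random search over sigma, standing in for the CoDOA metaheuristic.
      best_sigma, best_acc = None, -np.inf
      for sigma in rng.uniform(0.1, 20.0, size=30):
          acc = fitness(sigma)
          if acc > best_acc:
              best_sigma, best_acc = sigma, acc

      print(f"best sigma ~ {best_sigma:.2f}, cross-validated accuracy ~ {best_acc:.3f}")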

  1. [Optimization of determination of aflatoxins in foods with bromine postcolumn derivatization].

    Science.gov (United States)

    Czerwiecki, Ludwik; Wilczyńska, Grazyna

    2007-01-01

    The method for determination of aflatoxins B1, B2, G1 and G2 in nuts, culinary spices, cereals and cereal products is described. To optimize the analytical procedure for these products, conditions for proper extraction, clean-up, HPLC separation and detection were selected. After extraction with methanol and water (80+20 v/v or 70+30 v/v) and clean-up on IAC columns, HPLC on C18 columns (Nucleosil and Nova Pak) with a mobile phase of methanol, acetonitrile and water (20+20+60 v/v) was performed. For fluorometric detection at 362/430 nm, post-column derivatization of aflatoxins B1 and G1 with bromine was carried out. The mean recovery of the method, depending on matrix and aflatoxin, was 52-102% with RSD of 0.2-8.3%. LOD and LOQ were, respectively, 0.01 and 0.02 microg/kg for nuts and 0.05 and 0.1 microg/kg for culinary spices and cereal products. The concentrations of aflatoxins in 86 food samples from the market were below the legally binding permissible maximum levels.

  2. Optimization of sampling for the determination of the mean Radium-226 concentration in surface soil

    International Nuclear Information System (INIS)

    Williams, L.R.; Leggett, R.W.; Espegren, M.L.; Little, C.A.

    1987-08-01

    This report describes a field experiment that identifies an optimal method for determination of compliance with the US Environmental Protection Agency's Ra-226 guidelines for soil. The primary goals were to establish practical levels of accuracy and precision in estimating the mean Ra-226 concentration of surface soil in a small contaminated region; to obtain empirical information on composite vs. individual soil sampling and on random vs. uniformly spaced sampling; and to examine the practicality of using gamma measurements in predicting the average surface radium concentration and in estimating the number of soil samples required to obtain a given level of accuracy and precision. Numerous soil samples were collected on each of six sites known to be contaminated with uranium mill tailings. Three types of samples were collected on each site: 10-composite samples, 20-composite samples, and individual or post hole samples; 10-composite sampling is the method of choice because it yields a given level of accuracy and precision for the least cost. Gamma measurements can be used to reduce surface soil sampling on some sites. 2 refs., 5 figs., 7 tabs

  3. Electrodialytic desalination of brackish water: determination of optimal experimental parameters using full factorial design

    Science.gov (United States)

    Gmar, Soumaya; Helali, Nawel; Boubakri, Ali; Sayadi, Ilhem Ben Salah; Tlili, Mohamed; Amor, Mohamed Ben

    2017-12-01

    The aim of this work is to study the desalination of brackish water by electrodialysis (ED). A two-level, three-factor (2³) full factorial design methodology was used to investigate the influence of different physicochemical parameters on the demineralization rate (DR) and the specific power consumption (SPC). The statistical design identifies the factors which have important effects on ED performance and studies all interactions between the considered parameters. Three significant factors were studied: applied potential, salt concentration and flow rate. The experimental results and statistical analysis show that applied potential and salt concentration are the main effects for DR as well as for SPC. An interaction effect between applied potential and salt concentration was observed for SPC. A maximum value of 82.24% was obtained for DR under optimum conditions and the best value of SPC obtained was 5.64 Wh L-1. Empirical regression models were also obtained and used to predict the DR and the SPC profiles with satisfactory results. The process was applied for the treatment of real brackish water using the optimal parameters.
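
    As an illustration of the 2³ design and the first-order model behind it, the Python sketch below builds the eight coded runs for applied potential (V), salt concentration (C) and flow rate (Q) and estimates main effects and two-factor interactions for DR by least squares; the response values are placeholders, not the data measured in the study.

      import numpy as np
      from itertools import product

      # Coded levels (-1, +1) for the three factors: applied potential (V),
      # salt concentration (C) and flow rate (Q); the eight runs of a 2^3 design.
      runs = np.array(list(product([-1, 1], repeat=3)), dtype=float)

      # Placeholder demineralization rates (%) for the eight runs; these are
      # illustrative numbers only, not the values measured in the study.
      dr = np.array([61.0, 70.5, 66.2, 78.9, 63.1, 73.0, 68.4, 82.2])

      # Model matrix: intercept, main effects and two-factor interactions.
      V, C, Q = runs.T
      X = np.column_stack([np.ones(8), V, C, Q, V * C, V * Q, C * Q])
      coef, *_ = np.linalg.lstsq(X, dr, rcond=None)

      for name, b in zip(["intercept", "V", "C", "Q", "V*C", "V*Q", "C*Q"], coef):
          print(f"{name:>9s}: {b:+.2f}")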

  4. Morphology Analysis and Optimization: Crucial Factor Determining the Performance of Perovskite Solar Cells

    Directory of Open Access Journals (Sweden)

    Wenjin Zeng

    2017-03-01

    Full Text Available This review presents an overall discussion of morphology analysis and optimization for perovskite (PVSK) solar cells. Surface morphology and energy alignment have been proven to play a dominant role in determining the device performance. The effect of key parameters such as solution condition and preparation atmosphere on the crystallization of PVSK, and the characterization of surface morphology and interface distribution in the perovskite layer, are discussed in detail. Furthermore, the analysis of interface energy level alignment by X-ray photoelectron spectroscopy and ultraviolet photoelectron spectroscopy is presented to reveal the correlation between morphology and charge generation and collection within the perovskite layer, and its influence on the device performance. Techniques including architecture modification and solvent annealing are reviewed as efficient approaches to improve the morphology of PVSK. It is expected that further progress will be achieved with more effort devoted to insight into the mechanism of surface engineering in the field of PVSK solar cells.

  5. Determining the optimal spectral sampling frequency and uncertainty thresholds for hyperspectral remote sensing of ocean color.

    Science.gov (United States)

    Vandermeulen, Ryan A; Mannino, Antonio; Neeley, Aimee; Werdell, Jeremy; Arnone, Robert

    2017-08-07

    Using a modified geostatistical technique, empirical variograms were constructed from the first derivative of several diverse Remote Sensing Reflectance and Phytoplankton Absorbance spectra to describe how data points are correlated with "distance" across the spectra. The maximum rate of information gain is measured as a function of the kurtosis associated with the Gaussian structure of the output, and is determined for discrete segments of spectra obtained from a variety of water types (turbid river filaments, coastal waters, shelf waters, a dense Microcystis bloom, and oligotrophic waters), as well as individual and mixed phytoplankton functional types (PFTs; diatoms, eustigmatophytes, cyanobacteria, coccolithophores). Results show that a continuous spectrum of 5 to 7 nm spectral resolution is optimal to resolve the variability across mixed reflectance and absorbance spectra. In addition, the impact of uncertainty on subsequent derivative analysis is assessed, showing that a 3% Gaussian noise (SNR ~66) addition compromises data quality without smoothing the spectrum, and a 13% noise (SNR ~15) addition compromises data with smoothing.
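
    The exact geostatistical implementation is not given in the abstract; the Python sketch below uses a synthetic spectrum and shows only the basic ingredient: an empirical semivariogram of the first-derivative spectrum as a function of wavelength lag, whose shape indicates how coarsely the spectrum can be sampled before spectral variability is lost.

      import numpy as np

      # Synthetic 1 nm resolution reflectance spectrum (stand-in for Rrs data).
      wl = np.arange(400, 701, dtype=float)                  # wavelength, nm
      rrs = (np.exp(-((wl - 490.0) / 40.0) ** 2)
             + 0.6 * np.exp(-((wl - 570.0) / 25.0) ** 2))
      drrs = np.gradient(rrs, wl)                            # first derivative

      def semivariogram(values, max_lag):
          """Empirical semivariance gamma(h) for lags h = 1..max_lag samples."""
          lags = np.arange(1, max_lag + 1)
          gamma = np.array([0.5 * np.mean((values[h:] - values[:-h]) ** 2)
                            for h in lags])
          return lags, gamma

      lags, gamma = semivariogram(drrs, max_lag=30)
      # The lag (in nm, since the grid is 1 nm) beyond which gamma stops rising
      # sharply indicates the coarsest sampling that still resolves the spectrum.
      for h, g in zip(lags[:10], gamma[:10]):
          print(f"lag {h:2d} nm: gamma = {g:.3e}")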

  6. Determination of optimal formulation for extrusion granulation by compression test of wet kneaded mass.

    Science.gov (United States)

    Ohnishi, Yoshito; Okamoto, Takumi; Watano, Satoru

    2004-10-01

    The purpose of this study is to propose the application of a compression test to the determination of an optimal formulation for extrusion granulation. The electric current during extrusion was measured and the characteristics of the wet kneaded mass in the compression test were analyzed under various operating conditions, with different types of extruders and several formulations of kneaded mass. It was found that addition of a binder (HPC-L) to pharmaceutical powders lowered the load of a high-compressing type extruder, since the binder reduced the friction among the wet mass during extrusion. Also, the support stress was found to be proportional to the compression pressure without a binder, although an inflection point appeared on the support stress curve when a binder was present. This inflection point suggested large water retention of the wet kneaded mass, at which the medium of pressure was changed from a discontinuous solid powder to a continuous liquid, and large water retention contributed to the low friction of the wet mass. The friction of the wet kneaded mass and the aptitude of the formulation for extrusion were understood by using the compression test. The compression test is a very useful procedure at the first stage of a formulation study.

  7. An RTT-Aware Virtual Machine Placement Method

    Directory of Open Access Journals (Sweden)

    Li Quan

    2017-12-01

    Full Text Available Virtualization is a key technology for mobile cloud computing (MCC) and the virtual machine (VM) is a core component of virtualization. A VM provides a relatively independent running environment for different applications. Therefore, the VM placement problem focuses on how to place VMs on optimal physical machines, which ensures efficient use of resources and quality of service. Most previous work focuses on energy consumption, network traffic between VMs and so on, and rarely considers the delay for end users' requests. In contrast, the latency between requests and VMs is considered in this paper for the scenario of optimal VM placement in MCC. In order to minimize the average RTT for all requests, the round-trip time (RTT) is first used as the metric for the latency of requests. Based on our proposed RTT metric, an RTT-Aware VM placement algorithm is then proposed to minimize the average RTT. Furthermore, the case in which one of the core switches does not work is considered. A VM rescheduling algorithm is proposed to keep the average RTT lower and reduce the fluctuation of the average RTT. Finally, in the simulation study, our algorithm shows its advantage over existing methods, including random placement, the traffic-aware VM placement algorithm and the remaining utilization-aware algorithm.
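
    The paper's algorithm and data-center topology model are not reproduced here; the Python sketch below, with hypothetical hosts, regions and RTT values, only illustrates the basic idea of RTT-aware placement: assign each VM to a feasible physical machine that minimizes the RTT seen by the requests it serves.

      from dataclasses import dataclass, field

      @dataclass
      class Host:
          name: str
          capacity: int          # how many VMs this physical machine can host
          rtt_ms: dict           # RTT (ms) from each user region to this host
          vms: list = field(default_factory=list)

      def place_vms(requests, hosts):
          """Greedy RTT-aware placement: each VM goes to a feasible host with
          the lowest RTT for the region that generates its requests."""
          placement = {}
          for vm, region in requests:            # (VM name, dominant user region)
              candidates = [h for h in hosts if len(h.vms) < h.capacity]
              best = min(candidates, key=lambda h: h.rtt_ms[region])
              best.vms.append(vm)
              placement[vm] = best.name
          return placement

      hosts = [Host("pm-1", 2, {"east": 12.0, "west": 45.0}),
               Host("pm-2", 2, {"east": 40.0, "west": 10.0})]
      requests = [("vm-a", "east"), ("vm-b", "west"), ("vm-c", "east")]
      print(place_vms(requests, hosts))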

  8. Optimization of LC apparatus for determinations in neurochemistry with an emphasis on microdialysis samples.

    Science.gov (United States)

    Kissinger, P T; Shoup, R E

    1990-09-01

    The popularity of in vivo microdialysis sampling of low-molecular-weight substances has focused attention on improved liquid chromatography procedures for such studies. The complexity of the in vivo experiment, coupled with the complexity of LC, has discouraged some workers from developing this capability in their laboratory. Many small-volume dialysate samples are collected over hours of experiments. Proper handling of the animal (anaesthesia, temperature control, probe size and placement, choice of perfusion media) is critical. Simultaneously giving equal attention to sample collection and storage, derivatization, analysis precision, calibration for accuracy, and preventive maintenance of LC pumps, columns, and detectors is difficult for many laboratories. For reliable LC-based assays for microliter volumes of dialysates minimum human interaction with the samples is desirable. Automation strategies and selection of LC components which we have adopted are described for routine analytes such as biogenic amines, amino acids, choline/acetylcholine, and glucose.

  9. Determining the optimal approach to identifying individuals with chronic obstructive pulmonary disease: The DOC study.

    Science.gov (United States)

    Ronaldson, Sarah J; Dyson, Lisa; Clark, Laura; Hewitt, Catherine E; Torgerson, David J; Cooper, Brendan G; Kearney, Matt; Laughey, William; Raghunath, Raghu; Steele, Lisa; Rhodes, Rebecca; Adamson, Joy

    2018-03-13

    Early identification of chronic obstructive pulmonary disease (COPD) results in patients receiving appropriate management for their condition at an earlier stage in their disease. The determining the optimal approach to identifying individuals with chronic obstructive pulmonary disease (DOC) study was a case-finding study to enhance early identification of COPD in primary care, which evaluated the diagnostic accuracy of a series of simple lung function tests and symptom-based case-finding questionnaires. Current smokers aged 35 or more were invited to undertake a series of case-finding tools, which comprised lung function tests (specifically, spirometry, microspirometry, peak flow meter, and WheezoMeter) and several case-finding questionnaires. The effectiveness of these tests, individually or in combination, to identify small airways obstruction was evaluated against the gold standard of spirometry, with the quality of spirometry tests assessed by independent overreaders. The study was conducted with general practices in the Yorkshire and Humberside area, in the UK. Six hundred eighty-one individuals met the inclusion criteria, with 444 participants completing their study appointments. A total of 216 (49%) with good-quality spirometry readings were included in the analysis. The most effective case-finding tools were found to be the peak flow meter alone, the peak flow meter plus WheezoMeter, and microspirometry alone. In addition to the main analysis, where the severity of airflow obstruction was based on fixed ratios and percent of predicted values, sensitivity analyses were conducted by using lower limit of normal values. This research informs the choice of test for COPD identification; case-finding by use of the peak flow meter or microspirometer could be used routinely in primary care for suspected COPD patients. Only those testing positive to these tests would move on to full spirometry, thereby reducing unnecessary spirometric testing. © 2018 John Wiley

  10. Determination of the optimal stylet strategy for the C-MAC videolaryngoscope.

    LENUS (Irish Health Repository)

    McElwain, J

    2010-04-01

    The C-MAC videolaryngoscope is a novel intubation device that incorporates a camera system at the end of its blade, thereby facilitating obtaining a view of the glottis without alignment of the oral, pharyngeal and tracheal axes. It retains the traditional Macintosh blade shape and can be used as a direct or indirect laryngoscope. We wished to determine the optimal stylet strategy for use with the C-MAC. Ten anaesthetists were allowed up to three attempts to intubate the trachea in one easy and three progressively more difficult laryngoscopy scenarios in a SimMan manikin with four tracheal tube stylet strategies: no stylet; stylet; directional stylet (Parker Flex-It); and hockey-stick stylet. The use of a stylet conferred no advantage in the easy laryngoscopy scenario. In the difficult scenarios, the directional and hockey-stick stylets performed best. In the most difficult scenario, the median (IQR [range]) duration of the successful intubation attempt was lowest with the hockey-stick stylet; 18 s (15-22 [12-43]) s, highest with the unstyletted tracheal tube; 60 s (60-60 [60, 60]) s and styletted tracheal tube 60 s (29-60 [18-60]) s, and intermediate with the directional stylet 21 s (15-60 [8-60]) s. The use of a stylet alone does not confer benefit in the setting of easy laryngoscopy. However, in more difficult laryngoscopy scenarios, the C-MAC videolaryngoscope performs best when used with a stylet that angulates the distal tracheal tube. The hockey-stick stylet configuration performed best in the scenarios tested.

  11. METHODOLOGY FOR DETERMINING OPTIMAL EXPOSURE PARAMETERS OF A HYPERSPECTRAL SCANNING SENSOR

    Directory of Open Access Journals (Sweden)

    P. Walczykowski

    2016-06-01

    Full Text Available The purpose of the presented research was to establish a methodology that would allow the registration of hyperspectral images with a defined spatial resolution on a horizontal plane. The results obtained within this research could then be used to establish the optimum sensor and flight parameters for collecting aerial imagery data using a UAV or other aerial system. The methodology is based on user-selected optimal camera exposure parameters (i.e. time, gain value) and flight parameters (i.e. altitude, velocity). A push-broom hyperspectral imager, the Headwall MicroHyperspec A-series VNIR, was used to conduct this research. The measurement station consisted of the following equipment: a hyperspectral camera MicroHyperspec A-series VNIR, a personal computer with HyperSpec III software, a slider system which guaranteed the stable motion of the sensor system, a white reference panel and a Siemens star, which was used to evaluate the spatial resolution. Hyperspectral images were recorded at different distances between the sensor and the target, from 5 m to 100 m. During the registration of each image, many exposure parameters were changed, such as the aperture value, exposure time and speed of the camera's movement on the slider. Based on all of the registered hyperspectral images, dependencies between chosen parameters were established: the Ground Sampling Distance (GSD) and the distance between the sensor and the target; the speed of the camera and the distance between the sensor and the target; the exposure time and the gain value; and the Density Number and the gain value. The developed methodology allowed us to determine the speed and the altitude of an unmanned aerial vehicle on which the sensor would be mounted, ensuring that the registered hyperspectral images have the required spatial resolution.
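
    The reported GSD-versus-distance dependency follows the usual pinhole-camera relation; the short Python sketch below evaluates GSD = pixel pitch × distance / focal length over the 5-100 m range used in the experiment. The pixel pitch and focal length are illustrative assumptions, not the MicroHyperspec specification.

      def gsd_cm(distance_m, pixel_pitch_um=7.4, focal_length_mm=12.0):
          """Ground sampling distance (cm) from GSD = pixel pitch * distance / focal
          length; the pixel pitch and focal length here are illustrative values."""
          return (pixel_pitch_um * 1e-6) * distance_m / (focal_length_mm * 1e-3) * 100.0

      for d in (5, 20, 50, 100):
          print(f"sensor-target distance {d:>3d} m -> GSD ~ {gsd_cm(d):.1f} cm")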

  12. The role of innovation in the placement of country productive forces

    OpenAIRE

    I.A. Franiv

    2012-01-01

    The article analyzes the impact of innovation processes on the search for the optimal placement of enterprises. It substantiates the impact of innovation on changing economic and business processes at existing plants.

  13. Determining the optimal operator allocation in SME's food manufacturing company using computer simulation and data envelopment analysis

    Science.gov (United States)

    Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Rahman, Asmahanim Ab

    2014-09-01

    In a labor-intensive manufacturing system, optimal operator allocation is one of the most important decisions in determining the efficiency of the system. In this paper, ten operator allocation alternatives are identified using the computer simulation software ARENA. Two inputs (average wait time and average cycle time) and two outputs (average operator utilization and total packet value) are generated for each alternative. Four Data Envelopment Analysis (DEA) models (CCR, BCC, MCDEA and AHP/DEA) are used to determine the optimal operator allocation at one of the SME food manufacturing companies in Selangor. The results of all four DEA models showed that the optimal operator allocation is six operators at the peeling process, three operators at the washing and slicing process, three operators at the frying process and two operators at the packaging process.
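
    The DEA formulations themselves are standard; the Python/SciPy sketch below solves the input-oriented CCR envelopment model by linear programming to score each operator-allocation alternative. The input and output data are random placeholders, not the study's simulation outputs.

      import numpy as np
      from scipy.optimize import linprog

      # Illustrative data for ten operator-allocation alternatives (DMUs):
      # inputs  = [average wait time, average cycle time]       (lower is better)
      # outputs = [average operator utilization, total packets] (higher is better)
      rng = np.random.default_rng(1)
      X = rng.uniform(5, 15, size=(10, 2))     # inputs,  shape (n_dmu, n_inputs)
      Y = rng.uniform(50, 100, size=(10, 2))   # outputs, shape (n_dmu, n_outputs)

      def ccr_efficiency(o, X, Y):
          """Input-oriented CCR efficiency of DMU o (envelopment form)."""
          n = X.shape[0]
          c = np.r_[1.0, np.zeros(n)]                    # minimize theta
          A_in = np.hstack([-X[o].reshape(-1, 1), X.T])  # X.T @ lam <= theta * x_o
          b_in = np.zeros(X.shape[1])
          A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])  # Y.T @ lam >= y_o
          b_out = -Y[o]
          res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                        bounds=[(0, None)] * (n + 1), method="highs")
          return res.x[0]

      for o in range(10):
          print(f"alternative {o + 1}: CCR efficiency = {ccr_efficiency(o, X, Y):.3f}")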

  14. Small-Bowel Feeding Tube Placement at Bedside: Electronic Medical Device Placement and X-Ray Agreement.

    Science.gov (United States)

    Carter, Michaelann; Roberts, Susan; Carson, Jo Ann

    2018-03-12

    The use of an electromagnetic placement device (EMPD) can allow trained clinicians to safely perform small-bowel feeding tube (SBFT) placement at the bedside. Before initiation of enteral nutrition, most facilities require a radiology confirmation of tube placement. Requirement of X-ray confirmation delays the start of nutrition and leads to increased costs and utilization of resources. The purpose of this study was to determine the rate of agreement between clinician interpretation of SBFT placement using the EMPD images and X-ray confirmation of the tip of SBFT placement. This single-center, retrospective, observational study used data completed by registered dietitians or registered nurses after SBFT placement and compared it with radiology reports in the electronic health record. All tube placements were performed using the EMPD and were determined to be in 1 of 4 locations: stomach, duodenum, at the ligament of Treitz, or not specified within the small bowel. A total of 280 tube placements were analyzed. When differentiating between stomach and small bowel, the κ statistic indicated substantial agreement (κ = 0.67), and when determining tip-of-tube location within the small bowel (excluding not-specified locations), there was almost perfect agreement (κ = 0.93, n = 84). These findings suggest that EMPD images provide substantial agreement with X-ray confirmation and almost perfect agreement when the tip of the tube is within the small bowel. This indicates that the EMPD could be used without X-ray confirmation. © 2018 American Society for Parenteral and Enteral Nutrition.

  15. Determining a Robust D-Optimal Design for Testing for Departure from Additivity in a Mixture of Four Perfluoroalkyl Acids.

    Science.gov (United States)

    Our objective is to determine an optimal experimental design for a mixture of perfluoroalkyl acids (PFAAs) that is robust to the assumption of additivity. PFAAs are widely used in consumer products and industrial applications. The presence and persistence of PFAAs, especially in ...

  16. DETERMINING A ROBUST D-OPTIMAL DESIGN FOR TESTING FOR DEPARTURE FROM ADDITIVITY IN A MIXTURE OF FOUR PFAAS

    Science.gov (United States)

    Our objective was to determine an optimal experimental design for a mixture of perfluoroalkyl acids (PFAAs) that is robust to the assumption of additivity. Of particular focus to this research project is whether an environmentally relevant mixture of four PFAAs with long half-liv...

  17. Determining the Optimal Protocol for Measuring an Albuminuria Class Transition in Clinical Trials in Diabetic Kidney Disease

    NARCIS (Netherlands)

    Kropelin, Tobias F.; de Zeeuw, Dick; Remuzzi, Giuseppe; Bilous, Rudy; Parving, Hans-Henrik; Heerspink, Hiddo J. L.

    2016-01-01

    Albuminuria class transition (normo-to micro-to macroalbuminuria) is used as an intermediate end point to assess renoprotective drug efficacy. However, definitions of such class transition vary between trials. To determine the most optimal protocol, we evaluated the approaches used in four clinical

  18. Optimized Analytical Method to Determine Gallic and Picric Acids in Pyrotechnic Samples by Using HPLC/UV (Reverse Phase)

    International Nuclear Information System (INIS)

    Garcia Alonso, S.; Perez Pastor, R. M.

    2013-01-01

    A study on the optimization and development of a chromatographic method for the determination of gallic and picric acids in pyrotechnic samples is presented. In order to achieve this, both analytical conditions by HPLC with diode detection and extraction step of a selected sample were studied. (Author)

  19. Determination of an Optimal Trunk Sewer-line Route for Kikuyu ...

    African Journals Online (AJOL)

    - latrines and septic tanks. With the rapid population growth, traditional methods of sewage collection are proving to be inefficient, hence the need for a sewerage system. The inefficient and traditional techniques of optimal sewer-line location ...

  20. Functional Fit Evaluation to Determine Optimal Ease Requirements in Canadian Forces Chemical Protective Gloves

    National Research Council Canada - National Science Library

    Tremblay-Lutter, Julie

    1995-01-01

    A functional fit evaluation of the Canadian Forces (CF) chemical protective lightweight glove was undertaken in order to quantify the amount of ease required within the glove for optimal functional fit...

  1. DEVELOPMENT OF THE METHOD OF DETERMINING THE TARGET FUNCTION OF OPTIMIZATION OF POWER PLANT

    Directory of Open Access Journals (Sweden)

    O. Maksymovа

    2017-08-01

    Full Text Available The application of an optimization criterion based on properties of target functions drawn from technical, economic and thermodynamic analyses has been proposed. Marginal cost indicators of energy for different energy products have also been identified. A target function for power plant optimization was proposed that considers energy expenditure in the plant under consideration and in the plants that close the balance of energy generation and consumption.

  2. On the Determinants of Optimal Border Taxes for a Small Open Economy

    DEFF Research Database (Denmark)

    Munk, Knud Jørgen; Rasmussen, Bo Sandemann

    For a small open economy where the government is restricted to raise revenue usingborder taxes only, the optimal structure of border taxes is considered. As a matter of normalization exports and the supply to the market of the primary factor may be assumed to be untaxed, but that the household use...... are interpreted in the spirit of the Corlett-Hague results for the optimal tax structure in a closed economy and compared with results from CGE models....

  3. D-Optimal mixture design optimization of an HPLC method for simultaneous determination of commonly used antihistaminic parent molecules and their active metabolites in human serum and urine.

    Science.gov (United States)

    Kanthiah, Selvakumar; Kannappan, Valliappan

    2017-08-01

    This study describes a specific, precise, sensitive and accurate method for simultaneous determination of hydroxyzine, loratadine, terfenadine, rupatadine and their main active metabolites cetirizine, desloratadine and fexofenadine in serum and urine, using meclizine as an internal standard. Solid-phase extraction for sample clean-up and preconcentration of analytes was carried out using Phenomenex Strata-X-C and Strata X polymeric cartridges. Chromatographic analysis was performed on a Phenomenex cyano (150 × 4.6 mm i.d., 5 μm) analytical column. A D-optimal mixture design methodology was used to evaluate the effect of changes in mobile phase composition on the dependent variables and to optimize the response of interest. The mixture design experiments were performed and the results were analyzed. The region of ideal mobile phase composition, consisting of acetonitrile-methanol-ammonium acetate buffer (40 mM; pH 3.8 adjusted with acetic acid) 18:36:46% v/v/v, was identified by a graphical optimization technique using an overlay plot. Under this optimized condition all analytes were baseline resolved and the analyte peaks were detected at 222 nm. The proposed bioanalytical method was validated according to US Food and Drug Administration guidelines. The proposed method was sensitive, with detection limits of 0.06-0.15 μg/mL in serum and urine samples. The relative standard deviation for inter- and intra-day precision data was found to be <7%. The proposed method may find application in the determination of selected antihistaminic drugs in biological fluids. Copyright © 2017 John Wiley & Sons, Ltd.

  4. Using a computational model to quantify the potential impact of changing the placement of healthy beverages in stores as an intervention to "Nudge" adolescent behavior choice.

    Science.gov (United States)

    Wong, Michelle S; Nau, Claudia; Kharmats, Anna Yevgenyevna; Vedovato, Gabriela Milhassi; Cheskin, Lawrence J; Gittelsohn, Joel; Lee, Bruce Y

    2015-12-23

    Product placement influences consumer choices in retail stores. While sugar-sweetened beverage (SSB) manufacturers expend considerable effort and resources to determine how product placement may increase SSB purchases, the information is proprietary and not available to the public health and research community. This study aims to quantify the effect of non-SSB product placement in corner stores on adolescent beverage purchasing behavior. Corner stores are small privately owned retail stores that are important beverage providers in low-income neighborhoods, where adolescents have higher rates of obesity. Using data from a community-based survey in Baltimore and parameters from the marketing literature, we developed a decision-analytic model to simulate and quantify how placement of healthy beverages (placement in the beverage cooler closest to the entrance, distance from the back of the store, and vertical placement within each cooler) affects the probability of adolescents purchasing non-SSBs. In our simulation, non-SSB purchases were 2.8 times higher when placed in the "optimal location", on the second or third shelves of the front cooler, compared to the worst location on the bottom shelf of the cooler farthest from the entrance. Based on our model results and survey data, we project that moving non-SSBs from the worst to the optimal location would result in approximately 5.2 million more non-SSBs purchased by Baltimore adolescents annually. Our study is the first to quantify the potential impact of changing placement of beverages in corner stores. Our findings suggest that this could be a low-cost, yet impactful strategy to nudge this population, which is highly susceptible to obesity, towards healthier beverage decisions.

  5. Placement suitability criteria of composite tape for mould surface in automated tape placement

    Directory of Open Access Journals (Sweden)

    Zhang Peng

    2015-10-01

    Full Text Available Automated tape placement is an important automated process used for fabrication of large composite structures in aeronautical industry. The carbon fiber composite parts realized with this process tend to replace the aluminum parts produced by high-speed machining. It is difficult to determine the appropriate width of the composite tape in automated tape placement. Wrinkling will appear in the tape if it does not suit for the mould surface. Thus, this paper deals with establishing placement suitability criteria of the composite tape for the mould surface. With the assumptions for ideal mapping and by applying some principles and theorems of differential geometry, the centerline trajectory of the composite tape is identified to follow the geodesic. The placement suitability of the composite tape is examined on three different types of non-developable mould surfaces and four criteria are derived. The developed criteria have been used to test the deposition process over several mould surfaces and the appropriate width for each mould surface is obtained by referring to these criteria.

  6. A divide and conquer approach to determine the Pareto frontier for optimization of protein engineering experiments

    Science.gov (United States)

    He, Lu; Friedman, Alan M.; Bailey-Kellogg, Chris

    2016-01-01

    In developing improved protein variants by site-directed mutagenesis or recombination, there are often competing objectives that must be considered in designing an experiment (selecting mutations or breakpoints): stability vs. novelty, affinity vs. specificity, activity vs. immunogenicity, and so forth. Pareto optimal experimental designs make the best trade-offs between competing objectives. Such designs are not “dominated”; i.e., no other design is better than a Pareto optimal design for one objective without being worse for another objective. Our goal is to produce all the Pareto optimal designs (the Pareto frontier), in order to characterize the trade-offs and suggest designs most worth considering, but to avoid explicitly considering the large number of dominated designs. To do so, we develop a divide-and-conquer algorithm, PEPFR (Protein Engineering Pareto FRontier), that hierarchically subdivides the objective space, employing appropriate dynamic programming or integer programming methods to optimize designs in different regions. This divide-and-conquer approach is efficient in that the number of divisions (and thus calls to the optimizer) is directly proportional to the number of Pareto optimal designs. We demonstrate PEPFR with three protein engineering case studies: site-directed recombination for stability and diversity via dynamic programming, site-directed mutagenesis of interacting proteins for affinity and specificity via integer programming, and site-directed mutagenesis of a therapeutic protein for activity and immunogenicity via integer programming. We show that PEPFR is able to effectively produce all the Pareto optimal designs, discovering many more designs than previous methods. The characterization of the Pareto frontier provides additional insights into the local stability of design choices as well as global trends leading to trade-offs between competing criteria. PMID:22180081
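
    PEPFR itself couples divide-and-conquer with dynamic or integer programming; the minimal Python sketch below, with hypothetical designs and objectives, only illustrates the dominance test that defines the Pareto frontier the algorithm enumerates.

      def dominates(a, b):
          """a dominates b if a is at least as good in every objective and strictly
          better in at least one (larger is better here)."""
          return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

      def pareto_frontier(designs):
          """Return the non-dominated subset of (name, objective-tuple) designs."""
          return [(name, score) for name, score in designs
                  if not any(dominates(other, score) for _, other in designs)]

      # Hypothetical candidate designs scored on (stability, diversity).
      designs = [("d1", (0.9, 0.2)), ("d2", (0.7, 0.7)),
                 ("d3", (0.4, 0.9)), ("d4", (0.6, 0.6))]
      print(pareto_frontier(designs))   # d4 is dominated by d2; the others remain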

  7. A Review of a Reading Class Placement for Children with Dyslexia, Focusing on Literacy Attainment and Pupil Perspectives

    Science.gov (United States)

    Casserly, Ann Marie; Gildea, Anne

    2015-01-01

    This research investigated a special reading class placement for children with dyslexia in the Republic of Ireland. The study compared the literacy attainments of children before and after their reading class placement, and determined in particular children's views regarding the placement. Participants included 16 children with dyslexia who had…

  8. Balance of Interactions Determines Optimal Survival in Multi-Species Communities.

    Directory of Open Access Journals (Sweden)

    Anshul Choudhary

    Full Text Available We consider a multi-species community modelled as a complex network of populations, where the links are given by a random asymmetric connectivity matrix J, with fraction 1 - C of zero entries, where C reflects the over-all connectivity of the system. The non-zero elements of J are drawn from a Gaussian distribution with mean μ and standard deviation σ. The signs of the elements Jij reflect the nature of density-dependent interactions, such as predatory-prey, mutualism or competition, and their magnitudes reflect the strength of the interaction. In this study we try to uncover the broad features of the inter-species interactions that determine the global robustness of this network, as indicated by the average number of active nodes (i.e. non-extinct species in the network, and the total population, reflecting the biomass yield. We find that the network transitions from a completely extinct system to one where all nodes are active, as the mean interaction strength goes from negative to positive, with the transition getting sharper for increasing C and decreasing σ. We also find that the total population, displays distinct non-monotonic scaling behaviour with respect to the product μC, implying that survival is dependent not merely on the number of links, but rather on the combination of the sparseness of the connectivity matrix and the net interaction strength. Interestingly, in an intermediate window of positive μC, the total population is maximal, indicating that too little or too much positive interactions is detrimental to survival. Rather, the total population levels are optimal when the network has intermediate net positive connection strengths. At the local level we observe marked qualitative changes in dynamical patterns, ranging from anti-phase clusters of period 2 cycles and chaotic bands, to fixed points, under the variation of mean μ of the interaction strengths. We also study the correlation between synchronization and survival
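
    The abstract specifies the random interaction matrix but not the exact population dynamics; the Python sketch below builds J as described (fraction 1 - C of zero entries, Gaussian non-zero entries with mean mu and standard deviation sigma) and uses a clipped Ricker-type update as a stand-in for the paper's dynamics, reporting the number of surviving species and the total biomass as mu is varied.

      import numpy as np

      rng = np.random.default_rng(2)

      def simulate(n=30, C=0.3, mu=0.02, sigma=0.1, steps=2000, ext=1e-4):
          # Random asymmetric interaction matrix J: fraction 1 - C of entries are
          # zero, the rest are Gaussian with mean mu and standard deviation sigma.
          J = rng.normal(mu, sigma, (n, n)) * (rng.random((n, n)) < C)
          np.fill_diagonal(J, 0.0)
          x = rng.random(n)
          for _ in range(steps):
              # Ricker-type update, used only as a stand-in for the paper's
              # dynamics; clipping keeps this toy model bounded.
              x = np.clip(x * np.exp(1.0 - x + J @ x), 0.0, 50.0)
              x[x < ext] = 0.0                 # species below threshold go extinct
          return int((x > 0).sum()), round(float(x.sum()), 2)

      for mu in (-0.10, -0.02, 0.0, 0.02, 0.05):
          print(f"mu = {mu:+.2f} -> (active species, total biomass) = {simulate(mu=mu)}")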

  9. Determination of optimal parameters for dual-layer cathode of polymer electrolyte fuel cell using computational intelligence-aided design.

    Science.gov (United States)

    Chen, Yi; Huang, Weina; Peng, Bei

    2014-01-01

    Because of the demands for sustainable and renewable energy, fuel cells have become increasingly popular, particularly the polymer electrolyte fuel cell (PEFC). Among the various components, the cathode plays a key role in the operation of a PEFC. In this study, a quantitative dual-layer cathode model was proposed for determining the optimal parameters that minimize the over-potential difference η and improve the efficiency using a newly developed bat swarm algorithm with a variable population embedded in the computational intelligence-aided design. The simulation results were in agreement with previously reported results, suggesting that the proposed technique has potential applications for automating and optimizing the design of PEFCs.

  10. Optimization of wet digestion procedure of blood and tissue for selenium determination by means of 75Se tracer

    International Nuclear Information System (INIS)

    Holynska, B.; Lipinska-Kalita, K.

    1977-01-01

    Selenium-75 tracer has been used for optimization of analytical procedure of selenium determination in blood and tissue. Wet digestion procedure and reduction of selenium to its elemental form with tellurium as coprecipitant have been tested. It is seen that the use of a mixture of perchloric and sulphuric acid with sodium molybdenate for the wet digestion of organic matter followed by the reduction of selenium to its elementary form by a mixture of stannous chloride and hydroxylamine hydrochloride results in very good recovery of selenium. Recovery of selenium obtained with the use of optimized analytical procedure amounts to 95% and precision is equal to 4.2%. (T.I.)

  11. Ultrasonography for confirmation of gastric tube placement.

    Science.gov (United States)

    Tsujimoto, Hiraku; Tsujimoto, Yasushi; Nakata, Yukihiko; Akazawa, Mai; Kataoka, Yuki

    2017-04-17

    %) studies to have low risk of bias in the participant selection domain because they performed ultrasound after they confirmed correct position by other methods.Few data (43 participants) were available for misplacement detection (specificity) due to the low incidence of misplacement. We did not perform a meta-analysis because of considerable heterogeneity of the index test such as the difference of echo window, the combination of ultrasound with other confirmation methods (e.g. saline flush visualization by ultrasound) and ultrasound during the insertion of the tube. For all settings, sensitivity estimates for individual studies ranged from 0.50 to 1.00 and specificity estimates from 0.17 to 1.00. For settings where X-ray was not readily available and participants underwent gastric tube insertion for drainage (four studies, 305 participants), sensitivity estimates of ultrasound in combination with other confirmatory tests ranged from 0.86 to 0.98 and specificity estimates of 1.00 with wide confidence intervals.For the studies using ultrasound alone (four studies, 314 participants), sensitivity estimates ranged from 0.91 to 0.98 and specificity estimates from 0.67 to 1.00. Of 10 studies that assessed the diagnostic accuracy of gastric tube placement, few studies had a low risk of bias. Based on limited evidence, ultrasound does not have sufficient accuracy as a single test to confirm gastric tube placement. However, in settings where X-ray is not readily available, ultrasound may be useful to detect misplaced gastric tubes. Larger studies are needed to determine the possibility of adverse events when ultrasound is used to confirm tube placement.

  12. Rotation effect and anatomic landmark accuracy for midline placement of lumbar artificial disc under fluoroscopy.

    Science.gov (United States)

    Mikhael, Mark; Brooks, Jaysson T; Akpolat, Yusuf T; Cheng, Wayne K

    2017-03-01

    Total disc arthroplasty can be a viable alternative to fusion for degenerative disc disease of the lumbar spine. The correct placement of the prosthesis within 3 mm from the midline is critical for optimal function. Intra-operative radiographic error could lead to malposition of the prosthesis. The objective of this study was first to measure the effect of fluoroscopy angle on the placement of the prosthesis under fluoroscopy, and secondly to determine the visual accuracy of the placement of artificial discs using different anatomical landmarks (pedicle, waist, endplate, spinous process) under fluoroscopy. Artificial discs were implanted into three cadaver specimens at L2-3, L3-4, and L4-L5. Fluoroscopic images were obtained at 0°, 2.5°, 5°, 7.5°, 10°, and 15° from the mid axis. Computerized tomography (CT) scans were obtained after the procedure. Distances were measured from each of the anatomic landmarks to the center of the implant on both fluoroscopy and CT. The difference between fluoroscopy and CT scans was compared to evaluate the position of the prosthesis relative to each anatomic landmark at different angles. The difference between the fluoroscopy and CT measurements from the implant to the pedicle was 1.31 mm. When the fluoroscopy angle was greater than 7.5°, the difference between fluoroscopy and CT measurements was greater than 3 mm for all landmarks. A fluoroscopy angle of 7.5° or more can lead to implant malposition greater than 3 mm. The pedicle is the most accurate of the anatomic landmarks studied for placement of total artificial discs in the lumbar spine.

  13. Comparison of defibrillation efficacy between two pads placements in a pediatric porcine model of cardiac arrest.

    Science.gov (United States)

    Ristagno, Giuseppe; Yu, Tao; Quan, Weilun; Freeman, Gary; Li, Yongqin

    2012-06-01

    The placement of defibrillation pads at ideal anatomical sites is one of the major determinants of transthoracic defibrillation success. However, the optimal pad position for ventricular defibrillation is still undetermined. In the present study, we compared the effects of two different pad positions on defibrillation success rate in a pediatric porcine model of cardiac arrest. Eight domestic male pigs weighing 12-15 kg were randomized to receive shocks using either the anterior-posterior (AP) or the anterior-lateral (AL) position with pediatric pads. Ventricular fibrillation (VF) was electrically induced and left untreated for 30 s. A sequence of randomized biphasic electrical shocks ranging from 10 to 100 J was attempted. If the defibrillation failed to terminate VF, a 100 J rescue shock was then delivered. After a recovery interval of 5 min, the sequence was repeated until a total of approximately 30 test shocks had been attempted for each animal. The dose-response curves were constructed and the defibrillation thresholds were compared between groups. The aggregated success rate was 65.6% for the AP placement and 43.0% for the AL one (p=0.0005) when shock energy was between 10 and 70 J. A significantly lower 50% defibrillation threshold was obtained for the AP pad placement compared with the traditional AL pad position (2.1±0.4 J/kg vs. 3.6±0.9 J/kg, p=0.041). In this pediatric porcine model of cardiac arrest, the anterior-posterior placement of pediatric pads yielded a higher success rate by lowering the defibrillation threshold compared to the anterior-lateral position. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  14. The determination of optimal cells disintegration method of Candida albicans and Candida tropicalis fungals

    Directory of Open Access Journals (Sweden)

    M. V. Rybalkyn

    2014-08-01

    Candida albicans and Candida tropicalis fungi were prepared separately on Sabouraud agar. Incubation was carried out at 25 ± 2 °C for 6 days, and the cultures were then washed with 25 ml of sterile isotonic 0.9% sodium chloride solution. The microbiological purity of the cell suspensions of Candida albicans and Candida tropicalis was checked visually and by microscopy. The washings were then centrifuged at 3000 r/min for 10 min. The resulting fungal precipitate was brought up with sterile isotonic 0.9% sodium chloride solution to a standardized suspension of (8.5-9) × 10(8) cells in 1 ml, with the cells counted in a Goryaev counting chamber. For cell disruption of the fungi, ultrasound, grinding with abrasive material and freeze-thawing were used. The key parameters of the ultrasonic disintegration were: frequency 22 kHz, intensity 5 W/cm2, temperature 25 ± 2 °C, time 15 minutes, in 10 ml of sterile isotonic 0.9% sodium chloride solution. For grinding, the fungal cells were processed with a mortar and pestle using quartz sand and biomaterial in a 1:1 ratio and 10 ml of sterile isotonic 0.9% sodium chloride solution. Freezing and thawing were performed in 10 ml of sterile isotonic 0.9% sodium chloride solution at temperatures of -25 ± 2 °C and 25 ± 2 °C. In each case the amount of protein and polysaccharides was determined, and for a more detailed analysis the monosaccharide composition was determined in each case. On this basis the optimal method of cell disruption of Candida albicans and Candida tropicalis fungi was established, namely ultrasonic disintegration. In the future we plan to study the immunological properties of the proteins and polysaccharides in animals.

  15. Automated beam placement for breast radiotherapy using a support vector machine based algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Xuan; Kong, Dewen; Jozsef, Gabor; Chang, Jenghwa; Wong, Edward K.; Formenti, Silvia C.; Wang Yao [Department of Electrical and Computer Engineering, Polytechnic Institute of New York University, Brooklyn, New York 11201 (United States); Department of Radiation Oncology, School of Medicine, Langone Medical Center, New York University, New York, New York 10016 (United States); Department of Computer Science and Engineering, Polytechnic Institute of New York University, Brooklyn, New York 11201 (United States); Department of Radiation Oncology, School of Medicine, Langone Medical Center, New York University, New York, New York 10016 (United States); Department of Electrical and Computer Engineering, Polytechnic Institute of New York University, Brooklyn, New York 11201 (United States)

    2012-05-15

    Purpose: To develop an automated beam placement technique for whole breast radiotherapy using tangential beams. We seek to find optimal parameters for tangential beams to cover the whole ipsilateral breast (WB) and minimize the dose to the organs at risk (OARs). Methods: A support vector machine (SVM) based method is proposed to determine the optimal posterior plane of the tangential beams. Relative significances of including/avoiding the volumes of interest are incorporated into the cost function of the SVM. After finding the optimal 3-D plane that separates the whole breast (WB) and the included clinical target volumes (CTVs) from the OARs, the gantry angle, collimator angle, and posterior jaw size of the tangential beams are derived from the separating plane equation. Dosimetric measures of the treatment plans determined by the automated method are compared with those obtained by applying manual beam placement by the physicians. The method can be further extended to use multileaf collimator (MLC) blocking by optimizing posterior MLC positions. Results: The plans for 36 patients (23 prone- and 13 supine-treated) with left breast cancer were analyzed. Our algorithm reduced the volume of the heart that receives >500 cGy dose (V5) from 2.7 to 1.7 cm³ (p = 0.058) on average and the volume of the ipsilateral lung that receives >1000 cGy dose (V10) from 55.2 to 40.7 cm³ (p = 0.0013). The dose coverage as measured by volume receiving >95% of the prescription dose (V95%) of the WB without a 5 mm superficial layer decreases by only 0.74% (p = 0.0002) and the V95% for the tumor bed with 1.5 cm margin remains unchanged. Conclusions: This study has demonstrated the feasibility of using an SVM-based algorithm to determine optimal beam placement without a physician's intervention. The proposed method reduced the dose to OARs, especially for supine-treated patients, without any relevant degradation of dose homogeneity and coverage in general.
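
    The full method also derives the collimator angle and jaw settings and supports MLC blocking; the Python/scikit-learn sketch below, with synthetic point clouds in place of real contours and made-up class weights, only illustrates the core step: fit a class-weighted linear SVM separating target voxels from OAR voxels and read an illustrative beam orientation off the separating plane.

      import numpy as np
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(3)

      # Hypothetical voxel point clouds (x, y, z in cm): "target" stands in for the
      # whole breast plus CTVs, "oar" for the heart and ipsilateral lung contours.
      target = rng.normal([0.0, 5.0, 0.0], [3.0, 2.0, 4.0], (500, 3))
      oar = rng.normal([0.0, -4.0, 0.0], [3.0, 2.0, 4.0], (500, 3))

      X = np.vstack([target, oar])
      y = np.r_[np.ones(len(target), dtype=int), np.zeros(len(oar), dtype=int)]

      # Class weights stand in for the relative significance of covering the
      # target versus sparing the OARs in the SVM cost function.
      svm = LinearSVC(C=1.0, class_weight={1: 5.0, 0: 1.0}, max_iter=20000)
      svm.fit(X, y)

      w, b = svm.coef_[0], svm.intercept_[0]           # separating plane: w.x + b = 0
      gantry_deg = np.degrees(np.arctan2(w[1], w[0]))  # orientation from the normal
      print(f"plane normal = {np.round(w, 3)}, illustrative gantry angle ~ {gantry_deg:.1f} deg")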

  16. Comparison of implant and provisional placement protocols in sinus-augmented bone: a preliminary report.

    Science.gov (United States)

    Lang, Lisa A; Edgin, Wendell A; Garcia, Lily T; Olvera, Norma; Verrett, Ronald; Bohnenkamp, David; Haney, Stephen J

    2015-01-01

    To evaluate preliminary data on clinical outcomes associated with timing of placement of single implant-supported provisional crowns and implants in augmented bone. Twenty patients underwent sinus elevation bone grafting followed by a 6-month healing period before implant placement and immediate placement of a provisional crown (group [G] 1); 20 patients received sinus elevation bone grafting at the time of implant placement and immediate placement of a provisional crown (G2); 20 patients required no bone augmentation before implant placement and immediate placement of a provisional crown (G3); and 20 patients received sinus elevation bone grafting followed by a 6-month healing period before implant placement followed by a 6-month healing period before restoration (G4). The height of the crestal bone was measured and recorded to determine mean bone changes, and success rates were determined. Mean bone level comparisons were made between G2 and G3, G2 and G4, and G3 and G4. No statistically significant differences were found between the groups (P crown placement. Implants that were restored immediately regardless of the timing of bone augmentation showed greater failure rates than implants in augmented bone with delayed restoration protocols or those that were restored immediately in sites without bone augmentation. Neither the timing of loading nor timing of implant placement in relation to bone augmentation surgery affected mean bone loss.

  17. [Clinical research of using optimal compliance to determine positive end-expiratory pressure].

    Science.gov (United States)

    Xu, Lei; Feng, Quan-sheng; Lian, Fu; Shao, Xin-hua; Li, Zhi-bo; Wang, Zhi-yong; Li, Jun

    2012-07-01

    To observe the availability and security of optimal compliance strategy to titrate the optimal positive end-expiratory pressure (PEEP), compared with quasi-static pressure-volume curve (P-V curve) traced by low-flow method. Fourteen patients received mechanical ventilation with acute respiratory distress syndrome (ARDS) admitted in intensive care unit (ICU) of Tianjin Third Central Hospital from November 2009 to December 2010 were divided into two groups(n = 7). The quasi-static P-V curve method and the optimal compliance titration were used to set the optimal PEEP respectively, repeated 3 times in a row. The optimal PEEP and the consistency of repeated experiments were compared between groups. The hemodynamic parameters, oxygenation index (OI), lung compliance (C), cytokines and pulmonary surfactant-associated protein D (SP-D) concentration in plasma before and 2, 4, and 6 hours after the experiment were observed in each group. (1) There were no significant differences in gender, age and severity of disease between two groups. (2)The optimal PEEP [cm H(2)O, 1 cm H(2)O=0.098 kPa] had no significant difference between quasi-static P-V curve method group and the optimal compliance titration group (11.53 ± 2.07 vs. 10.57 ± 0.87, P>0.05). The consistency of repeated experiments in quasi-static P-V curve method group was poor, the slope of the quasi-static P-V curve in repeated experiments showed downward tendency. The optimal PEEP was increasing in each measure. There was significant difference between the first and the third time (10.00 ± 1.58 vs. 12.80 ± 1.92, P vs. 93.71 ± 5.38, temperature: 38.05 ± 0.73 vs. 36.99 ± 1.02, IL-6: 144.84 ± 23.89 vs. 94.73 ± 5.91, TNF-α: 151.46 ± 46.00 vs. 89.86 ± 13.13, SP-D: 33.65 ± 8.66 vs. 16.63 ± 5.61, MAP: 85.47 ± 9.24 vs. 102.43 ± 8.38, CCI: 3.00 ± 0.48 vs. 3.81 ± 0.81, OI: 62.00 ± 21.45 vs. 103.40 ± 37.27, C: 32.10 ± 2.92 vs. 49.57 ± 7.18, all P safety and usability.

  18. The same number of optimized parameters scheme for determining intermolecular interaction energies

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Ettenhuber, Patrick; Eriksen, Janus Juul

    2015-01-01

    We propose the Same Number Of Optimized Parameters (SNOOP) scheme as an alternative to the counterpoise method for treating basis set superposition errors in calculations of intermolecular interaction energies. The key point of the SNOOP scheme is to enforce that the number of optimized wave...... as numerically. Numerical results for second-order Møller-Plesset perturbation theory (MP2) and coupled-cluster with single, double, and approximate triple excitations (CCSD(T)) show that the SNOOP scheme in general outperforms the uncorrected and counterpoise approaches. Furthermore, we show that SNOOP...

  19. Reactive Power and Voltage Optimization Control Strategy in Active Distribution Network Based on the Determination of the Key Nodes

    Science.gov (United States)

    Meng, Qingmeng; Che, Renfei; Gao, Shi

    2017-05-01

    The distributed generation integrated in the active distribution network changes the power flow, bringing new challenges to voltage control. When a voltage limit violation occurs, a novel voltage control strategy is proposed to return the voltage to the normal range and improve voltage quality. Considering voltage quality and node importance, the electrical closeness centrality and the key node contribution degree are defined, and the key nodes are determined by ranking the key node contribution degrees. The reactive power compensation devices installed at the key nodes are coordinated with the reactive power output of the distributed generation to realize voltage optimization control. The voltage optimization control model is established by taking the minimum power loss as the objective function, and the particle swarm optimization algorithm is used to solve the model. The simulation results of the improved IEEE-33 bus system verify the effectiveness of the proposed method.

  20. Educational Placement of Students with Autism: The Impact of State of Residence

    Science.gov (United States)

    Kurth, Jennifer A.

    2015-01-01

    Typically, child characteristics such as IQ and severity of autism symptoms are thought to determine educational placement. The present study examines external factors, including state of residence and state funding formulas, to determine their potential influence on placement outcomes. Findings reveal that considerable variations exist among…

  1. Factors influencing radiation therapy student clinical placement satisfaction

    International Nuclear Information System (INIS)

    Bridge, Pete; Carmichael, Mary-Ann

    2014-01-01

    Introduction: Radiation therapy students at Queensland University of Technology (QUT) attend clinical placements at five different clinical departments with varying resources and support strategies. This study aimed to determine the relative availability and perceived importance of different factors affecting student support while on clinical placement. The purpose of the research was to inform development of future support mechanisms to enhance radiation therapy students’ experience on clinical placement. Methods: This study used anonymous Likert-style surveys to gather data from years 1 and 2 radiation therapy students from QUT and clinical educators from Queensland relating to availability and importance of support mechanisms during clinical placements in a semester. Results: The study findings demonstrated student satisfaction with clinical support and suggested that level of support on placement influenced student employment choices. Staff support was perceived as more important than physical resources; particularly access to a named mentor, a clinical educator and weekly formative feedback. Both students and educators highlighted the impact of time pressures. Conclusions: The support offered to radiation therapy students by clinical staff is more highly valued than physical resources or models of placement support. Protected time and acknowledgement of the importance of clinical education roles are both invaluable. Joint investment in mentor support by both universities and clinical departments is crucial for facilitation of effective clinical learning

  2. THE DETERMINATION OF THE OPTIMAL PARAMETERS OF THE BEARING ALLOYS MICROSTRUCTURE IN CONTACT FRICTION AREA

    Directory of Open Access Journals (Sweden)

    M. O. Kuzin

    2009-03-01

    Full Text Available The possibility of using simulation-based structure models and variational models of mechanics is shown for finding the quantity and size of the antifriction alloy phase with increased wear resistance. Numerical realization of the models shows that the optimal structure of babbitt B16 contains 56% of the hardening SnSb phase with an average size of 47 µm.

  3. From Determinism and Probability to Chaos: Chaotic Evolution towards Philosophy and Methodology of Chaotic Optimization

    Science.gov (United States)

    2015-01-01

    We present and discuss philosophy and methodology of chaotic evolution that is theoretically supported by chaos theory. We introduce four chaotic systems, that is, logistic map, tent map, Gaussian map, and Hénon map, in a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previous proposed CE algorithm with logistic map and two canonical differential evolution (DE) algorithms, we analyse and discuss optimization performance of CE algorithm. An investigation on the relationship between optimization capability of CE algorithm and distribution characteristic of chaotic system is conducted and analysed. From evaluation result, we find that distribution of chaotic system is an essential factor to influence optimization performance of CE algorithm. We propose a new interactive EC (IEC) algorithm, interactive chaotic evolution (ICE) that replaces fitness function with a real human in CE algorithm framework. There is a paired comparison-based mechanism behind CE search scheme in nature. A simulation experimental evaluation is conducted with a pseudo-IEC user to evaluate our proposed ICE algorithm. The evaluation result indicates that ICE algorithm can obtain a significant better performance than or the same performance as interactive DE. Some open topics on CE, ICE, fusion of these optimization techniques, algorithmic notation, and others are presented and discussed. PMID:25879067

  4. From Determinism and Probability to Chaos: Chaotic Evolution towards Philosophy and Methodology of Chaotic Optimization

    Directory of Open Access Journals (Sweden)

    Yan Pei

    2015-01-01

    Full Text Available We present and discuss philosophy and methodology of chaotic evolution that is theoretically supported by chaos theory. We introduce four chaotic systems, that is, logistic map, tent map, Gaussian map, and Hénon map, in a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE algorithms. By comparing our previous proposed CE algorithm with logistic map and two canonical differential evolution (DE algorithms, we analyse and discuss optimization performance of CE algorithm. An investigation on the relationship between optimization capability of CE algorithm and distribution characteristic of chaotic system is conducted and analysed. From evaluation result, we find that distribution of chaotic system is an essential factor to influence optimization performance of CE algorithm. We propose a new interactive EC (IEC algorithm, interactive chaotic evolution (ICE that replaces fitness function with a real human in CE algorithm framework. There is a paired comparison-based mechanism behind CE search scheme in nature. A simulation experimental evaluation is conducted with a pseudo-IEC user to evaluate our proposed ICE algorithm. The evaluation result indicates that ICE algorithm can obtain a significant better performance than or the same performance as interactive DE. Some open topics on CE, ICE, fusion of these optimization techniques, algorithmic notation, and others are presented and discussed.

  5. From determinism and probability to chaos: chaotic evolution towards philosophy and methodology of chaotic optimization.

    Science.gov (United States)

    Pei, Yan

    2015-01-01

    We present and discuss philosophy and methodology of chaotic evolution that is theoretically supported by chaos theory. We introduce four chaotic systems, that is, logistic map, tent map, Gaussian map, and Hénon map, in a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previous proposed CE algorithm with logistic map and two canonical differential evolution (DE) algorithms, we analyse and discuss optimization performance of CE algorithm. An investigation on the relationship between optimization capability of CE algorithm and distribution characteristic of chaotic system is conducted and analysed. From evaluation result, we find that distribution of chaotic system is an essential factor to influence optimization performance of CE algorithm. We propose a new interactive EC (IEC) algorithm, interactive chaotic evolution (ICE) that replaces fitness function with a real human in CE algorithm framework. There is a paired comparison-based mechanism behind CE search scheme in nature. A simulation experimental evaluation is conducted with a pseudo-IEC user to evaluate our proposed ICE algorithm. The evaluation result indicates that ICE algorithm can obtain a significant better performance than or the same performance as interactive DE. Some open topics on CE, ICE, fusion of these optimization techniques, algorithmic notation, and others are presented and discussed.
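    Since the records above describe chaotic evolution only in prose, the following is a minimal, hedged sketch of the idea in Python: the logistic map drives the mutation steps of a simple paired-comparison search on a test function. The population size, step scale, and objective are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def logistic_map(x, mu=4.0):
    """One iteration of the logistic map; mu = 4 gives fully chaotic behaviour on (0, 1)."""
    return mu * x * (1.0 - x)

def sphere(v):
    """Simple test objective (minimum 0 at the origin)."""
    return float(np.sum(v ** 2))

def chaotic_evolution(objective, dim=5, pop_size=20, iters=200, scale=0.5, seed=1):
    """Minimal search whose mutation steps are driven by a chaotic sequence instead of a
    pseudo-random generator, with a paired comparison between parent and trial -- the core
    idea behind chaotic evolution (CE)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    chaos = rng.uniform(0.01, 0.99, size=(pop_size, dim))   # chaotic state per coordinate
    fitness = np.array([objective(ind) for ind in pop])
    for _ in range(iters):
        chaos = logistic_map(chaos)                          # advance the chaotic system
        # map chaotic values in (0, 1) to signed perturbations and build trial solutions
        trials = pop + scale * (2.0 * chaos - 1.0)
        trial_fitness = np.array([objective(t) for t in trials])
        improved = trial_fitness < fitness                   # keep the better of each pair
        pop[improved] = trials[improved]
        fitness[improved] = trial_fitness[improved]
    best = np.argmin(fitness)
    return pop[best], fitness[best]

if __name__ == "__main__":
    x_best, f_best = chaotic_evolution(sphere)
    print("best solution:", np.round(x_best, 4), "objective:", round(f_best, 6))
```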

  6. Determining the Optimal Number of Spinal Manipulation Sessions for Chronic Low-Back Pain

    Science.gov (United States)

    Findings from the largest and ... study of spinal manipulative therapy (SMT) for chronic low-back pain suggest that 12 sessions of SMT may be the ...

  7. A model based on stochastic dynamic programming for determining China's optimal strategic petroleum reserve policy

    International Nuclear Information System (INIS)

    Zhang Xiaobing; Fan Ying; Wei Yiming

    2009-01-01

    China's Strategic Petroleum Reserve (SPR) is currently being prepared. But how large the optimal stockpile size for China should be, what the best acquisition strategies are, how to release the reserve if a disruption occurs, and other related issues still need to be studied in detail. In this paper, we develop a stochastic dynamic programming model based on a total potential cost function of establishing SPRs to evaluate the optimal SPR policy for China. Using this model, empirical results are presented for the optimal size of China's SPR and the best acquisition and drawdown strategies for a few specific cases. The results show that with comprehensive consideration, the optimal SPR size for China is around 320 million barrels. This size is equivalent to about 90 days of net oil import amount in 2006 and should be reached in the year 2017, three years earlier than the national goal, which implies that the need for China to fill the SPR is probably more pressing; the best stockpile release action in a disruption is related to the disruption levels and expected continuation probabilities. The information provided by the results will be useful for decision makers.
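    The abstract reports the model's conclusions but not its formulation, so the following is a heavily simplified, illustrative sketch of a finite-horizon stochastic dynamic program for a stylized stockpile: states are reserve levels, actions are buy/release decisions, and disruptions occur with a fixed probability. Every number and the cost structure below are assumptions for illustration only, not values from the paper.

```python
import numpy as np

# Stylized strategic-reserve problem: each period we choose how many units to buy (or release),
# supply may be disrupted with some probability, and holding a reserve cushions the shortage cost.
# All parameters are illustrative assumptions.
T = 12                  # planning periods
MAX_STOCK = 10          # reserve capacity (units)
MAX_BUY = 2             # max units acquired per period
PRICE = 1.0             # acquisition cost per unit
HOLD = 0.05             # holding cost per unit per period
P_DISRUPT = 0.1         # probability of a supply disruption in a period
SHORTAGE_PENALTY = 5.0  # cost incurred in a disruption, reduced if a reserve can be drawn down

value = np.zeros(MAX_STOCK + 1)            # terminal value function V_T(s) = 0
policy = np.zeros((T, MAX_STOCK + 1), int)

for t in reversed(range(T)):               # backward induction over the horizon
    new_value = np.empty_like(value)
    for s in range(MAX_STOCK + 1):
        best_cost, best_a = np.inf, 0
        for a in range(-min(s, 1), min(MAX_BUY, MAX_STOCK - s) + 1):  # a < 0 means release
            s_next = s + a
            buy_cost = PRICE * max(a, 0) + HOLD * s_next
            # expected disruption cost: holding at least one unit offsets part of the penalty
            exp_disrupt = P_DISRUPT * max(SHORTAGE_PENALTY - (s_next > 0) * PRICE, 0.0)
            cost = buy_cost + exp_disrupt + value[s_next]
            if cost < best_cost:
                best_cost, best_a = cost, a
        new_value[s], policy[t, s] = best_cost, best_a
    value = new_value

print("optimal first-period action by stock level:", dict(enumerate(policy[0])))
```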

  8. Determination of optimal LWR containment design, excluding accidents more severe than Class 8

    International Nuclear Information System (INIS)

    Cave, L.; Min, T.K.

    1980-04-01

    Information is presented concerning the restrictive effect of existing NRC requirements; definition of possible targets for containment; possible containment systems for LWR; optimization of containment design for class 3 through class 8 accidents (PWR); estimated costs of some possible containment arrangements for PWR relative to the standard dry containment system; estimated costs of BWR containment

  9. Determination of the optimal values for the parameters of radio meteors

    International Nuclear Information System (INIS)

    Kostylev, K.K.; Alferova, T.G.

    1985-01-01

    The authors present previously published data from studies of the amplitude-time characteristics of the echo signals from underdense meteor trails analyzed by nonlinear optimization algorithms, and they use these results to confirm the hypothesis that small meteor particles experience significant braking in the earth's atmosphere

  10. Optimized and validated high-performance liquid chromatography method for the determination of deoxynivalenol and aflatoxins in cereals.

    Science.gov (United States)

    Skendi, Adriana; Irakli, Maria N; Papageorgiou, Maria D

    2016-04-01

    A simple, sensitive and accurate analytical method was optimized and developed for the determination of deoxynivalenol and aflatoxins in cereals intended for human consumption using high-performance liquid chromatography with diode array and fluorescence detection and a photochemical reactor for enhanced detection. A response surface methodology, using a fractional central composite design, was carried out for optimization of the water percentage at the beginning of the run (X1, 80-90%), the level of acetonitrile at the end of gradient system (X2, 10-20%) with the water percentage fixed at 60%, and the flow rate (X3, 0.8-1.2 mL/min). The studied responses were the chromatographic peak area, the resolution factor and the time of analysis. Optimal chromatographic conditions were: X1 = 80%, X2 = 10%, and X3 = 1 mL/min. Following a double sample extraction with water and a mixture of methanol/water, mycotoxins were rapidly purified by an optimized solid-phase extraction protocol. The optimized method was further validated with respect to linearity (R(2) >0.9991), sensitivity, precision, and recovery (90-112%). The application to 23 commercial cereal samples from Greece showed contamination levels below the legally set limits, except for one maize sample. The main advantages of the developed method are the simplicity of operation and the low cost. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Determination of the optimal time and cost of manufacturing flow of an assembly using the Taguchi method

    Science.gov (United States)

    Petrila, S.; Brabie, G.; Chirita, B.

    2016-08-01

    The optimization of the parts and assembly manufacturing operations was carried out in order to minimize both production time and cost. The optimization was performed using the Taguchi method, which is based on designed experiments that vary the input and output factors. The Taguchi method is applied here to optimize the production flow of the analyzed assembly: to find the optimal combination of manufacturing operations, to choose the variant that makes best use of equipment performance, and to base delivery operations on automation. The final aim of applying the Taguchi method is for the entire assembly to be produced at minimum cost and in a short time. The Taguchi philosophy of optimizing product quality is synthesized from three basic concepts: quality must be designed into the product, not inspected into it after manufacture; higher quality is obtained when the deviation from the target is low or when the action of uncontrollable factors has no influence on it, which translates into robustness; and quality costs are expressed as a function of deviation from the nominal value [1]. When determining the number of experiments needed to study a phenomenon with this method, more restrictive conditions must be followed [2].
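    As a hedged illustration of the kind of calculation behind the Taguchi method, the sketch below computes larger-the-better signal-to-noise ratios for a small, made-up orthogonal-array experiment and averages them per factor level; the factors, levels, and response values are invented and are not the assembly data from the study.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio for a 'larger is better' response (e.g. throughput):
    S/N = -10 * log10(mean(1 / y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical experiment: two factors (A = equipment variant, B = delivery automation),
# each at two levels, with two replicate measurements of a response per run.
runs = [
    {"A": 1, "B": 1, "y": [78.0, 80.0]},
    {"A": 1, "B": 2, "y": [85.0, 83.0]},
    {"A": 2, "B": 1, "y": [74.0, 76.0]},
    {"A": 2, "B": 2, "y": [88.0, 90.0]},
]

sn = np.array([sn_larger_is_better(r["y"]) for r in runs])

# Average S/N per level of each factor; the level with the highest mean S/N is preferred.
for factor in ("A", "B"):
    for level in (1, 2):
        mask = np.array([r[factor] == level for r in runs])
        print(f"factor {factor}, level {level}: mean S/N = {sn[mask].mean():.2f} dB")
```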

  12. DETERMINATION OF OPTIMAL CONTOURS OF OPEN PIT MINE DURING OIL SHALE EXPLOITATION, BY MINEX 5.2.3. PROGRAM

    Directory of Open Access Journals (Sweden)

    Miroslav Ignjatović

    2013-04-01

    Full Text Available Examination and determination of the optimal technological processes for the exploitation and processing of oil shale from the Aleksinac site, together with the adopted technical solution for its exploitation, led to a technical solution that optimizes the contour of the newly defined open pit mine. Worldwide, this problem is solved using computer programs that have become the established standard for its quick and efficient solution. One such program, which can be used to determine the optimal contours of open pit mines, is Minex 5.2.3, produced by the Surpac Minex Group Pty Ltd of Australia and applied at the Mining and Metallurgy Institute Bor (license nos. SSI-24765 and SSI-24766). In this study, the authors performed 11 optimizations of deposit geo-models in Minex 5.2.3, based on test results obtained in the soil mechanics laboratory of the Mining and Metallurgy Institute Bor on samples from the Aleksinac deposit.

  13. Use of Debye's series to determine the optimal edge-effect terms for computing the extinction efficiencies of spheroids.

    Science.gov (United States)

    Lin, Wushao; Bi, Lei; Liu, Dong; Zhang, Kejun

    2017-08-21

    The extinction efficiencies of atmospheric particles are essential to determining radiation attenuation and thus are fundamentally related to atmospheric radiative transfer. The extinction efficiencies can also be used to retrieve particle sizes or refractive indices through particle characterization techniques. This study first uses the Debye series to improve the accuracy of high-frequency extinction formulae for spheroids in the context of Complex angular momentum theory by determining an optimal number of edge-effect terms. We show that the optimal edge-effect terms can be accurately obtained by comparing the results from the approximate formula with their counterparts computed from the invariant imbedding Debye series and T-matrix methods. An invariant imbedding T-matrix method is employed for particles with strong absorption, in which case the extinction efficiency is equivalent to two plus the edge-effect efficiency. For weakly absorptive or non-absorptive particles, the T-matrix results contain the interference between the diffraction and higher-order transmitted rays. Therefore, the Debye series was used to compute the edge-effect efficiency by separating the interference from the transmission on the extinction efficiency. We found that the optimal number strongly depends on the refractive index and is relatively insensitive to the particle geometry and size parameter. By building a table of optimal numbers of edge-effect terms, we developed an efficient and accurate extinction simulator that has been fully tested for randomly oriented spheroids with various aspect ratios and a wide range of refractive indices.

  14. Determining the Optimal Protocol for Measuring an Albuminuria Class Transition in Clinical Trials in Diabetic Kidney Disease

    DEFF Research Database (Denmark)

    Kröpelin, Tobias F; de Zeeuw, Dick; Remuzzi, Giuseppe

    2016-01-01

    Albuminuria class transition (normo- to micro- to macroalbuminuria) is used as an intermediate end point to assess renoprotective drug efficacy. However, definitions of such class transition vary between trials. To determine the most optimal protocol, we evaluated the approaches used in four...... baseline in addition to the class transition. In Cox regression analysis, neither increasing the number of urine samples collected at a single study visit nor differences in the other variables used to define albuminuria class transition altered the average drug effect. However, the SEM of the treatment...... effect increased (decreased precision) with stricter end point definitions, resulting in a loss of statistical significance. In conclusion, the optimal albuminuria transition end point for use in drug intervention trials can be determined with a single urine collection for albuminuria assessment per...

  15. On the determination of optimized, fully quadratic, coupled state quasidiabatic Hamiltonians for determining bound state vibronic spectra.

    Science.gov (United States)

    Zhu, Xiaolei; Yarkony, David R

    2009-06-21

    The quasidiabatic, coupled electronic state, fully quadratic Hamiltonian (H(d)), suitable for the simulation of spectra exhibiting strong vibronic couplings and constructed using a recently introduced pseudonormal equations approach, is studied. The flexibility inherent in the normal equations approach is shown to provide a robust means for (i) improving the accuracy of H(d), (ii) extending its domain of utility, and (iii) determining the limits of the fully quadratic model. The two lowest electronic states of pyrrolyl which are coupled by conical intersections are used as a test case. The requisite ab initio data are obtained from large multireference configuration interaction expansions comprised of 108.5 × 10^6 configuration state functions and based on polarized triple zeta quality atomic orbital bases.

  16. The Determination of the Optimal Material Proportion in Natural Fiber-Cement Composites Using Design of Mixture Experiments

    OpenAIRE

    Aramphongphun Chuckaphun; Ungtawondee Kampanart; Chaysuwan Duangrudee

    2016-01-01

    This research aims to determine the optimal material proportion in a natural fiber-cement composite as an alternative to an asbestos fibercement composite while the materials cost is minimized and the properties still comply with Thai Industrial Standard (TIS) for applications of profile sheet roof tiles. Two experimental sets were studied in this research. First, a three-component mixture of (i) virgin natural fiber, (ii) synthetic fiber and (iii) cement was studied while the proportion of c...

  17. Determining the Optimal Inventory Management Policy for Naval Medical Center San Diego’s Pharmacy

    Science.gov (United States)

    2016-12-01

    budget. It is a very good tool for those hospitals that have a restrictive formulary. There is also the option to combine these two analyses into the ... evaluating the inventory of a pharmacy and should be used in conjunction with a tool that takes non-monetary factors into consideration. The VEN analysis ...

  18. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of accuracy compensation implementation are closely related to the choice of sampling points. Therefore, based on the error similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  19. Using an optimal CC-PLSR-RBFNN model and NIR spectroscopy for the starch content determination in corn.

    Science.gov (United States)

    Jiang, Hao; Lu, Jiangang

    2018-05-05

    Corn starch is an important material which has been traditionally used in the fields of food and chemical industry. In order to enhance the rapidness and reliability of the determination for starch content in corn, a methodology is proposed in this work, using an optimal CC-PLSR-RBFNN calibration model and near-infrared (NIR) spectroscopy. The proposed model was developed based on the optimal selection of crucial parameters and the combination of correlation coefficient method (CC), partial least squares regression (PLSR) and radial basis function neural network (RBFNN). To test the performance of the model, a standard NIR spectroscopy data set was introduced, containing spectral information and chemical reference measurements of 80 corn samples. For comparison, several other models based on the identical data set were also briefly discussed. In this process, the root mean square error of prediction (RMSEP) and coefficient of determination (Rp 2 ) in the prediction set were used to make evaluations. As a result, the proposed model presented the best predictive performance with the smallest RMSEP (0.0497%) and the highest Rp 2 (0.9968). Therefore, the proposed method combining NIR spectroscopy with the optimal CC-PLSR-RBFNN model can be helpful to determine starch content in corn. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Using an optimal CC-PLSR-RBFNN model and NIR spectroscopy for the starch content determination in corn

    Science.gov (United States)

    Jiang, Hao; Lu, Jiangang

    2018-05-01

    Corn starch is an important material which has been traditionally used in the fields of food and chemical industry. In order to enhance the rapidness and reliability of the determination for starch content in corn, a methodology is proposed in this work, using an optimal CC-PLSR-RBFNN calibration model and near-infrared (NIR) spectroscopy. The proposed model was developed based on the optimal selection of crucial parameters and the combination of correlation coefficient method (CC), partial least squares regression (PLSR) and radial basis function neural network (RBFNN). To test the performance of the model, a standard NIR spectroscopy data set was introduced, containing spectral information and chemical reference measurements of 80 corn samples. For comparison, several other models based on the identical data set were also briefly discussed. In this process, the root mean square error of prediction (RMSEP) and coefficient of determination (Rp2) in the prediction set were used to make evaluations. As a result, the proposed model presented the best predictive performance with the smallest RMSEP (0.0497%) and the highest Rp2 (0.9968). Therefore, the proposed method combining NIR spectroscopy with the optimal CC-PLSR-RBFNN model can be helpful to determine starch content in corn.
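    A hedged sketch of the pipeline described above, run on synthetic data because the 80-sample corn NIR dataset is not included here: correlation-coefficient screening of wavelengths, PLS compression, and an RBF-kernel ridge regressor standing in for the RBF neural network. All hyperparameters (number of retained wavelengths, latent variables, kernel settings) are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 80, 200
X = rng.normal(size=(n_samples, n_wavelengths))             # stand-in for NIR absorbance spectra
true_coef = np.zeros(n_wavelengths); true_coef[100:105] = 1.0
y = X @ true_coef + rng.normal(scale=0.1, size=n_samples)    # stand-in for starch content

# Step 1 (CC): keep the wavelengths most correlated with the reference values.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_wavelengths)])
selected = np.argsort(corr)[-100:]

X_train, X_test, y_train, y_test = train_test_split(X[:, selected], y, test_size=0.25, random_state=0)

# Step 2 (PLSR): compress the selected wavelengths into a few latent variables.
pls = PLSRegression(n_components=8).fit(X_train, y_train)
T_train, T_test = pls.transform(X_train), pls.transform(X_test)

# Step 3 (RBF model): kernel ridge regression with an RBF kernel as a stand-in for the RBFNN.
rbf = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.1).fit(T_train, y_train)
y_pred = rbf.predict(T_test)

rmsep = mean_squared_error(y_test, y_pred) ** 0.5
print(f"RMSEP = {rmsep:.4f}, Rp^2 = {r2_score(y_test, y_pred):.4f}")
```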

  1. Spectrophotometric determination of fluoxetine by molecularly imprinted polypyrrole and optimization by experimental design, artificial neural network and genetic algorithm

    Science.gov (United States)

    Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira

    2018-02-01

    A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE) with UV-Vis spectrophotometric detection was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as a functional monomer in the presence of FLU as a template molecule. The factors affecting the preparation and extraction ability of the MIP, such as the amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, take shaking rate, taking time and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. Further optimization was performed using a central composite design (CCD), an artificial neural network (ANN) and a genetic algorithm (GA). Under optimal conditions the calibration curve was linear over a concentration range of 10^-7 to 10^-8 M with a correlation coefficient (R2) of 0.9970. The limit of detection (LOD) for FLU was 6.56 × 10^-9 M. The repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully used for the determination of FLU in pharmaceutical, serum and plasma samples.

  2. Multivariate optimization of an ultrasound-assisted extraction procedure for Cu, Mn, Ni and Zn determination in ration to chickens

    Directory of Open Access Journals (Sweden)

    JOELIA M. BARROS

    2013-09-01

    Full Text Available In this work, multivariate optimization techniques were used to develop a method based on ultrasound-assisted extraction for copper, manganese, nickel and zinc determination in rations for chicken nutrition using flame atomic absorption spectrometry. The proportions of the extracting components (2.0 mol.L-1 nitric, hydrochloric and acetic acid solutions) were optimized using a centroid-simplex mixture design. The optimum proportions of this mixture, expressed as percentages of each component, were respectively 20%, 37% and 43%. The method variables (sample mass, sonication time and final acid concentration) were optimized using a Doehlert design. The optimum values found for these variables were respectively 0.24 g, 18 s and 3.6 mol.L-1. The developed method allows copper, manganese, nickel and zinc determination with quantification limits of 2.82, 4.52, 10.7 and 9.69 µg.g-1, and precision expressed as relative standard deviation (%RSD, 25 µg.g-1, N = 5) of 5.30, 2.13, 0.88 and 0.83%, respectively. This method was applied to the determination of the analytes in chicken rations collected from specialized retailers in Jequié city (Bahia State, Brazil). Application of the paired t-test to the results, at a 95% confidence level, showed no significant difference between the proposed method and microwave-assisted digestion.

  3. Familial placement of Wightia (Lamiales)

    DEFF Research Database (Denmark)

    Zhou, Qing-Mei; Jensen, Søren Rosendal; Liu, Guo-Li

    2014-01-01

    The familial placement of Wightia has long been a problem. Here, we present a comprehensive phylogenetic inspection of Wightia based on noncoding chloroplast loci (the rps16 intron and the trnL–F region) and nuclear ribosomal internal transcribed spacer, and on chemical analysis. A total of 70 sa...

  4. Automated Fiber Placement of Advanced Materials (Preprint)

    National Research Council Canada - National Science Library

    Benson, Vernon M; Arnold, Jonahira

    2006-01-01

    .... ATK has been working with the Air Force Research Laboratory to foster improvements in the BMI materials and in the fiber placement processing techniques to achieve rates comparable to Epoxy placement rates...

  5. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N* in the case of geometric discounting, becomes large, the optimal trial size is O(N^1/2) or O(N*^1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Waveform capnography: an alternative to physician gestalt in determining optimal intubating conditions after administration of paralytic agents.

    Science.gov (United States)

    Scoccimarro, Anthony; West, Jason R; Kanter, Marc; Caputo, Nicholas D

    2018-01-01

    We sought to evaluate the utility of waveform capnography (WC) in detecting paralysis, by using apnoea as a surrogate determinant, as compared with clinical gestalt during rapid sequence intubation. Additionally, we sought to determine if this improves the time to intubation and first pass success rates through more consistent and expedient means of detecting optimal intubating conditions (ie, paralysis). A prospective observational cohort study of consecutively enrolled patients was conducted from April to June 2016 at an academic, urban, level 1 trauma centre in New York City. Nasal cannula WC was used to determine the presence of apnoea as a surrogate measure of paralysis versus physician gestalt (ie, blink test, mandible relaxation, and so on). One hundred patients were enrolled (50 in the WC group and 50 in the gestalt group). There were higher proportions of failure to determine optimal intubating conditions (ie, paralysis) in the gestalt group (32%, n=16) versus the WC group (6%, n=3), absolute difference 26, 95% CI 10 to 40. Time to intubation was longer in the gestalt group versus the WC group (136 seconds vs 116 seconds, absolute difference 20 seconds, 95% CI 14 to 26). First pass success rates were higher in the WC group versus the gestalt group (92%, 95% CI 85 to 97 vs 88%, 95% CI 88 to 95, absolute difference 4%, 95% CI 1 to 8). These preliminary results demonstrate WC may be a useful objective measure to determine the presence of paralysis and optimal intubating conditions in RSI. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  7. Concise Approach for Determining the Optimal Annual Capacity Shortage Percentage using Techno-Economic Feasibility Parameters of PV Power System

    Science.gov (United States)

    Alghoul, M. A.; Ali, Amer; Kannanaikal, F. V.; Amin, N.; Sopian, K.

    2017-11-01

    PV power systems have been commercially available and widely used for decades. The performance of a reliable PV system that fulfils the expectations requires correct input data and careful design. Inaccurate input data of the techno-economic feasibility would affect the size, cost aspects, stability and performance of PV power system on the long run. The annual capacity shortage is one of the main input data that should be selected with careful attention. The aim of this study is to reveal the effect of different annual capacity shortages on the techno-economic feasibility parameters and determining the optimal value for Baghdad city location using HOMER simulation tool. Six values of annual capacity shortage percentages (0%, 1%, 2%, 3%, 4%, and 5%), and wide daily load profile range (10 kWh - 100 kWh) are implemented. The optimal annual capacity shortage is the value that always "wins" when each techno-economic feasibility parameter is at its optimal/ reasonable criteria. The results showed that the optimal annual capacity shortage that reduces significantly the cost of PV power system while keeping the PV system with reasonable technical feasibility is 3%. This capacity shortage value can be carried as a reference value in future works for Baghdad city location. Using this approach of analysis at other locations, annual capacity shortage can be always offered as a reference value for those locations.

  8. Development and optimization of the determination of pharmaceuticals in water samples by SPE and HPLC with diode-array detection.

    Science.gov (United States)

    Pavlović, Dragana Mutavdžić; Ašperger, Danijela; Tolić, Dijana; Babić, Sandra

    2013-09-01

    This paper describes the development, optimization, and validation of a method for the determination of five pharmaceuticals from different therapeutic classes (antibiotics, anthelmintics, glucocorticoids) in water samples. Water samples were prepared using SPE and extracts were analyzed by HPLC with diode-array detection. The efficiency of 11 different SPE cartridges to extract the investigated compounds from water was tested in preliminary experiments. Then, the pH of the water sample, elution solvent, and sorbent mass were optimized. In addition to optimization of the SPE procedure, the optimal HPLC column was selected from columns with different stationary phases from different manufacturers. The developed method was validated using spring water samples spiked with appropriate concentrations of pharmaceuticals. Good linearity was obtained in the range of 2.4-200 μg/L, depending on the pharmaceutical, with correlation coefficients >0.9930 in all cases, except for ciprofloxacin (0.9866). The method also showed low LODs (0.7-3.9 μg/L), good intra- and inter-day precision with RSD below 17%, and recoveries above 98% for all pharmaceuticals. The method has been successfully applied to the analysis of production wastewater samples from the pharmaceutical industry. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Critical Path-Based Thread Placement for NUMA Systems

    Energy Technology Data Exchange (ETDEWEB)

    Su, Chun-Yi [Virginia Polytechnic Institute and State University (Virginia Tech); Li, Dong [ORNL; Nikolopoulos, Dimitrios [FORTH-ICS; Grove, Matthew [Virginia Polytechnic Institute and State University (Virginia Tech); Cameron, Kirk W. [Virginia Polytechnic Institute and State University (Virginia Tech); de Supinski, Bronis R. [Lawrence Livermore National Laboratory (LLNL)

    2012-01-01

    Multicore multiprocessors use Non-Uniform Memory Architecture (NUMA) to improve their scalability. However, NUMA introduces performance penalties due to remote memory accesses. Without efficiently managing data layout and thread mapping to cores, scientific applications, even if they are optimized for NUMA, may suffer performance loss. In this paper, we present an algorithm that optimizes the placement of OpenMP threads on NUMA processors. By collecting information from hardware counters and defining new metrics to capture the effects of thread placement, the algorithm reduces the NUMA performance penalty by minimizing the critical path of OpenMP parallel regions and by avoiding local memory resource contention. We evaluate our algorithm with the NPB benchmarks and achieve performance improvements between 8.13% and 25.68%, compared to the OS default scheduling.

  10. Issues in the determination of the optimal portfolio of electricity supply options

    International Nuclear Information System (INIS)

    Hickey, Emily A.; Lon Carlson, J.; Loomis, David

    2010-01-01

    In recent years a growing amount of attention has been focused on the need to develop a cost-effective portfolio of electricity supply options that provides society with a measure of protection from such factors as fuel price volatility and supply interruptions. A number of strategies, including portfolio theory, real options theory, and different measures of diversity have been suggested. In this paper we begin by first considering how we might characterize an optimal portfolio of supply options and identify a number of constraints that must be satisfied as part of the optimization process. We then review the strengths and limitations of each approach listed above. The results of our review lead us to conclude that, of the strategies we consider, using the concept of diversity to assess the viability of an electricity supply portfolio is most appropriate. We then provide an example of how a particular measure of diversity, the Shannon-Weiner Index, can be used to assess the diversity of the electricity supply portfolio in the state of Illinois, the region served by the Midwest Independent System Operator (MISO), and the continental United States. (author)
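    As a small, hedged illustration of the diversity measure mentioned above, the sketch below computes the Shannon-Wiener index for a hypothetical generation mix; the portfolio shares are invented for illustration and are not the Illinois, MISO, or US data from the paper.

```python
import math

def shannon_wiener(shares):
    """Shannon-Wiener diversity index H = -sum(p_i * ln(p_i)) over the portfolio shares."""
    total = sum(shares.values())
    return -sum((s / total) * math.log(s / total) for s in shares.values() if s > 0)

# Illustrative generation mix (shares of annual energy); not data from the paper.
portfolio = {"coal": 0.35, "natural gas": 0.25, "nuclear": 0.25, "wind": 0.10, "solar": 0.05}
h = shannon_wiener(portfolio)
h_max = math.log(len(portfolio))     # maximum diversity: all options contribute equally
print(f"H = {h:.3f} (max {h_max:.3f} for {len(portfolio)} options)")
```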

  11. Isothermal microcalorimetry as a quality by design tool to determine optimal blending sequences.

    Science.gov (United States)

    Al-Hallak, M H D Kamal; Azarmi, Shirzad; Xu, Zhenghe; Maham, Yadollah; Löbenberg, Raimar

    2010-09-01

    This study was designed to assess the value of isothermal microcalorimetry (ITMC) as a quality by design (QbD) tool to optimize blending conditions during tablet preparation. Powder mixtures that contain microcrystalline cellulose (MCC), dibasic calcium phosphate dihydrate (DCPD), and prednisone were prepared as 1:1:1 ratios using different blending sequences. ITMC was used to monitor the thermal activity of the powder mixtures before and after each blending process. Differential scanning calorimetry (DSC) and X-ray powder diffraction (XRPD) were performed on all final powder mixtures. Final powder mixtures were used to prepare tablets with 10 mg prednisone content, and dissolution tests were performed on all tablet formulations. Using ITMC, it was observed that the powder mixtures had different thermal activity depending on the blending sequences of the ingredients. All mixtures prepared by mixing prednisone with DCPD in the first stage were associated with relatively fast and significant heat exchange. In contrast, mixing prednisone with MCC in the first step resulted in slower heat exchange. Powder mixture with high thermal activity showed extra DSC peaks, and their dissolution was generally slower compared to the other tablets. Blending is considered as a critical parameter in tablet preparation. This study showed that ITMC is a simple and efficient tool to monitor solid-state reactions between excipients and prednisone depending on blending sequences. ITMC has the potential to be used in QbD approaches to optimize blending parameters for prednisone tablets.

  12. A Procedure to Determine the Optimal Sensor Positions for Locating AE Sources in Rock Samples

    Science.gov (United States)

    Duca, S.; Occhiena, C.; Sambuelli, L.

    2015-03-01

    Within a research work aimed to better understand frost weathering mechanisms of rocks, laboratory tests have been designed to specifically assess a theoretical model of crack propagation due to ice segregation process in water-saturated and thermally microcracked cubic samples of Arolla gneiss. As the formation and growth of microcracks during freezing tests on rock material is accompanied by a sudden release of stored elastic energy, the propagation of elastic waves can be detected, at the laboratory scale, by acoustic emission (AE) sensors. The AE receiver array geometry is a sensitive factor influencing source location errors, for it can greatly amplify the effect of small measurement errors. Despite the large literature on the AE source location, little attention, to our knowledge, has been paid to the description of the experimental design phase. As a consequence, the criteria for sensor positioning are often not declared and not related to location accuracy. In the present paper, a tool for the identification of the optimal sensor position on a cubic shape rock specimen is presented. The optimal receiver configuration is chosen by studying the condition numbers of each of the kernel matrices, used for inverting the arrival time and finding the source location, and obtained for properly selected combinations between sensors and sources positions.
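    As an illustrative, hedged companion to the abstract, the sketch below builds the arrival-time Jacobian (kernel) matrix for a time-of-arrival location model and ranks candidate sensor subsets on a cube by the matrix condition number. The sensor coordinates, assumed source position, and wave speed are invented, and the paper's actual kernel formulation may differ.

```python
import numpy as np
from itertools import combinations

def toa_kernel(sensors, source, velocity=4.5):
    """Jacobian of arrival times t_i = t0 + |x_i - s| / v with respect to (s_x, s_y, s_z, t0),
    with coordinates in mm and velocity in mm/us. A large condition number of this matrix means
    small timing errors are amplified into large location errors for that geometry."""
    diffs = sensors - source
    dists = np.linalg.norm(diffs, axis=1)
    return np.hstack([-diffs / (velocity * dists[:, None]), np.ones((len(sensors), 1))])

# Candidate sensor positions on the faces of a 100 mm cube (illustrative coordinates).
candidates = np.array([
    [50, 50, 0], [50, 50, 100], [0, 50, 50], [100, 50, 50],
    [50, 0, 50], [50, 100, 50], [0, 0, 50], [100, 100, 50],
], dtype=float)
source = np.array([40.0, 60.0, 55.0])   # assumed AE source location inside the sample

# Pick the 6-sensor subset with the smallest condition number for this source.
best = min(combinations(range(len(candidates)), 6),
           key=lambda idx: np.linalg.cond(toa_kernel(candidates[list(idx)], source)))
print("best 6-sensor subset:", best,
      "cond =", round(np.linalg.cond(toa_kernel(candidates[list(best)], source)), 2))
```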

  13. Han's model parameters for microalgae grown under intermittent illumination: Determined using particle swarm optimization.

    Science.gov (United States)

    Pozzobon, Victor; Perre, Patrick

    2018-01-21

    This work provides a model and the associated set of parameters allowing microalgae population growth to be computed under intermittent lighting. Han's model is coupled with a simple microalgae growth model to yield a relationship between illumination and population growth. The model parameters were obtained by fitting a dataset available in the literature using the Particle Swarm Optimization method. In that work, the authors grew microalgae in excess of nutrients under flashing conditions. The light/dark cycles used in those experiments are quite close to those found in photobioreactors, i.e. ranging from several seconds to one minute. In this work, in addition to producing the set of parameters, the robustness of Particle Swarm Optimization was assessed. To do so, two different swarm initialization techniques were used, i.e. uniform and random distribution throughout the search space. Both yielded the same results. In addition, analysis of the swarm distribution reveals that the swarm converges to a unique minimum. Thus, the produced set of parameters can be trustfully used to link light intensity to population growth rate. Furthermore, the set is capable of describing the effect of photodamage on population growth, hence accounting for the effect of light overexposure on algal growth. Copyright © 2017 Elsevier Ltd. All rights reserved.
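    The abstract does not include the fitting code, so the following is a minimal, illustrative sketch of parameter estimation with a plain global-best PSO. The light-response curve used as the fitted model, the synthetic "measurements", and the PSO coefficients (inertia 0.72, cognitive/social 1.49) are all assumptions for illustration; the paper fits the full Han three-state model to published growth data.

```python
import numpy as np

rng = np.random.default_rng(0)

def growth_rate(irradiance, params):
    """Placeholder light-response curve mu(I) = mu_max * I / (I + K), used here instead of the
    full Han model simply to demonstrate the PSO fitting step."""
    mu_max, half_sat = params
    return mu_max * irradiance / (irradiance + half_sat)

# Synthetic 'measurements' generated from known parameters plus noise (illustrative only).
I_data = np.linspace(10, 1000, 25)
mu_data = growth_rate(I_data, (1.2, 150.0)) + rng.normal(scale=0.02, size=I_data.size)

def cost(params):
    return np.sum((growth_rate(I_data, params) - mu_data) ** 2)

# Plain global-best PSO with inertia weight and cognitive/social terms.
n_particles, n_iters, dim = 30, 200, 2
lo, hi = np.array([0.0, 1.0]), np.array([5.0, 1000.0])
pos = rng.uniform(lo, hi, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

w, c1, c2 = 0.72, 1.49, 1.49
for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    costs = np.array([cost(p) for p in pos])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], costs[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("fitted (mu_max, K):", np.round(gbest, 3))
```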

  14. A Determination Method of Optimal Customization Degree of Logistics Service Supply Chain with Mass Customization Service

    Directory of Open Access Journals (Sweden)

    Weihua Liu

    2014-01-01

    Full Text Available Customization degree is a very important aspect of mass customization. Improving it can enhance customer satisfaction and further increase customer demand, but it correspondingly increases the service price, which in turn lowers customer satisfaction and demand. Therefore this paper discusses how to deal with such issues in a logistics service supply chain (LSSC) with a logistics service integrator (LSI) and a customer. With the establishment of a customer demand function for logistics services and profit functions of the LSI and the customer, three different decision modes are proposed (i.e., customization degree dominated by the LSI, customization degree dominated by the customer, and customization degree decided by the centralized supply chain), and several interesting findings are obtained. Firstly, to achieve customization cooperation between the LSI and the customer, measures should be taken to keep the unit increase cost of the customized logistics services below a certain value. Secondly, the optimal customization degree dominated by the LSI differs from that dominated by the customer, and in both cases the dominator realizes more profit than the follower. Thirdly, with a profit secondary distribution strategy, the modified decentralized decision mode can achieve the maximum profit obtained in the centralized decision mode and meanwhile reach the optimal customization degree.

  15. Determination of radial profile of ICF hot spot's state by multi-objective parameters optimization

    International Nuclear Information System (INIS)

    Dong Jianjun; Deng Bo; Cao Zhurong; Ding Yongkun; Jiang Shaoen

    2014-01-01

    A method using multi-objective parameters optimization is presented to determine the radial profile of hot spot temperature and density. And a parameter space which contain five variables: the temperatures at center and the interface of fuel and remain ablator, the maximum model density of remain ablator, the mass ratio of remain ablator to initial ablator and the position of interface between fuel and the remain ablator, is used to described the hot spot radial temperature and density. Two objective functions are set as the variances of normalized intensity profile from experiment X-ray images and the theory calculation. Another objective function is set as the variance of experiment average temperature of hot spot and the average temperature calculated by theoretical model. The optimized parameters are obtained by multi-objective genetic algorithm searching for the five dimension parameter space, thereby the optimized radial temperature and density profiles can be determined. The radial temperature and density profiles of hot spot by experiment data measured by KB microscope cooperating with X-ray film are presented. It is observed that the temperature profile is strongly correlated to the objective functions. (authors)

  16. METHODOLOGY FOR DETERMINING THE OPTIMAL CLEANING PERIOD OF HEAT EXCHANGERS BY USING THE CRITERIA OF MINIMUM COST

    Directory of Open Access Journals (Sweden)

    Yanileisy Rodríguez Calderón

    2015-04-01

    Full Text Available One of the most serious problems of the process industry is that, when planning the maintenance of heat exchangers, methodologies based on economic criteria are not applied to optimize the cleaning periods of the surfaces, resulting in additional costs for the company and for the country. This work develops and proposes a methodology based on the minimum cost criterion for determining the optimal cleaning period. An example is given of the application of this method to the intercoolers of a centrifugal compressor with a high fouling level. This occurs because sea water containing many microorganisms is used as the cooling agent, which severely fouls the water-side transfer surfaces. The methodology employed can be generalized to other applications.
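    A minimal sketch of the minimum-cost idea, under the simplifying assumption that the extra operating cost caused by fouling grows linearly with time since the last cleaning; the cost figures are invented, and the closed-form optimum T* = sqrt(2*C_clean/k) holds only for this linear-fouling assumption.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Minimum-cost cleaning interval. All numbers are illustrative assumptions.
C_CLEAN = 12000.0   # cost of one cleaning (currency units)
K = 45.0            # growth rate of the extra operating cost caused by fouling (per day per day)

def cost_per_day(T):
    """Average cost per day over one cleaning cycle of length T days:
    cleaning cost spread over the cycle plus the accumulated fouling penalty k*T/2."""
    return C_CLEAN / T + 0.5 * K * T

res = minimize_scalar(cost_per_day, bounds=(1.0, 365.0), method="bounded")
analytic = np.sqrt(2.0 * C_CLEAN / K)   # d/dT (C/T + kT/2) = 0  ->  T* = sqrt(2C/k)
print(f"numeric optimum: {res.x:.1f} days, analytic optimum: {analytic:.1f} days, "
      f"minimum cost: {res.fun:.0f} per day")
```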

  17. Problems involved in quantitative gamma, camera scintigraphy. B. Determination of the optimal spectrometric window

    International Nuclear Information System (INIS)

    Soussaline, F.; Ricard, S.; Raynaud, C.

    1976-01-01

    By means of suitable equipment, including a delay-line camera and an on-line data processing system with energy coding for each photon, a preliminary study of the camera response versus energy in terms of the dispersion function can explain the consequences of the choice of window on resolution and sensitivity. Using the results of other authors on the optimization of low-energy windows, based on largely subjective criteria, the kidney 197HgCl2 uptake values for a series of 25 patients were calculated in four digital windows of different widths and variable thresholds. From these results it was possible to estimate the relative error in each case and to choose a 25% window with a threshold corresponding to the photopeak maximum

  18. Determination of Optimal Opening Scheme for Electromagnetic Loop Networks Based on Fuzzy Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Yang Li

    2016-01-01

    Full Text Available Studying optimization and decision for opening electromagnetic loop networks plays an important role in planning and operation of power grids. First, the basic principle of fuzzy analytic hierarchy process (FAHP is introduced, and then an improved FAHP-based scheme evaluation method is proposed for decoupling electromagnetic loop networks based on a set of indicators reflecting the performance of the candidate schemes. The proposed method combines the advantages of analytic hierarchy process (AHP and fuzzy comprehensive evaluation. On the one hand, AHP effectively combines qualitative and quantitative analysis to ensure the rationality of the evaluation model; on the other hand, the judgment matrix and qualitative indicators are expressed with trapezoidal fuzzy numbers to make decision-making more realistic. The effectiveness of the proposed method is validated by the application results on the real power system of Liaoning province of China.
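    The paper works with trapezoidal fuzzy numbers; as a hedged illustration, the sketch below shows only the crisp AHP step that FAHP builds on: deriving criterion weights from a pairwise judgment matrix via the principal eigenvector and checking the consistency ratio. The judgment matrix and the criteria it compares are invented for illustration.

```python
import numpy as np

# Crisp AHP step underlying fuzzy AHP: weights from a pairwise judgment matrix.
# The matrix below (comparing, say, power loss, voltage deviation, switching cost) is illustrative.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue of the reciprocal matrix
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)        # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index for n criteria
print("weights:", np.round(weights, 3), "consistency ratio:", round(ci / ri, 3))
```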

  19. Non-invasive determination of port wine stain anatomy and physiology for optimal laser treatment strategies

    Science.gov (United States)

    van Gemert, Martin J. C.; Nelson, J. Stuart; Milner, Thomas E.; Smithies, Derek J.; Verkruysse, Wim; de Boer, Johannes F.; Lucassen, Gerald W.; Goodman, Dennis M.; Tanenbaum, B. Samuel; Norvang, Lill T.; Svaasand, Lars O.

    1997-05-01

    The treatment of port wine stains (PWSs) using a flashlamp-pumped pulsed dye laser is often performed using virtually identical irradiation parameters. Although encouraging clinical results have been reported, we propose that lasers will only reach their full potential provided treatment parameters match individual PWS anatomy and physiology. The purpose of this paper is to review the progress made on the technical development and clinical implementation of (i) infrared tomography (IRT), optical reflectance spectroscopy (ORS) and optical low-coherence reflectometry (OLCR) to obtain in vivo diagnostic data on individual PWS anatomy and physiology and (ii) models of light and heat propagation, predicting irreversible vascular injury in human skin, to select optimal laser wavelength, pulse duration, spot size and radiant exposure for complete PWS blanching in the fewest possible treatment sessions. Although non-invasive optical sensing techniques may provide significant diagnostic data, development of a realistic model will require a better understanding of relevant mechanisms for irreversible vascular injury.

  20. Determining a sustainable and economically optimal wastewater treatment and discharge strategy.

    Science.gov (United States)

    Hardisty, Paul E; Sivapalan, Mayuran; Humphries, Robert

    2013-01-15

    Options for treatment and discharge of wastewater in regional Western Australia (WA) are examined from the perspective of overall sustainability and social net benefit. Current practice in the state has typically involved a basic standard of treatment deemed to be protective of human health, followed by discharge to surface water bodies. Community and regulatory pressure to move to higher standards of treatment is based on the presumption that a higher standard of treatment is more protective of the environment and society, and thus is more sustainable. This analysis tests that hypothesis for Western Australian conditions. The merits of various wastewater treatment and discharge strategies are examined by quantifying financial costs (capital and operations), and by monetising the wider environmental and social costs and benefits of each option over an expanded planning horizon (30 years). Six technical treatment-disposal options were assessed at a test site, all of which met the fundamental criterion of protecting human health. From a financial perspective, the current business-as-usual option is preferred - it is the least cost solution. However, valuing externalities such as water, greenhouse gases, ecological impacts and community amenity, the status quo is revealed as sub-optimal. Advanced secondary treatment with stream disposal improves water quality and provides overall net benefit to society. All of the other options were net present value (NPV) negative. Sensitivity analysis shows that the favoured option outperforms all of the others under a wide range of financial and externality values and assumptions. Expanding the findings across the state reveals that moving from the identified socially optimal level of treatment to higher (tertiary) levels of treatment would result in a net loss to society equivalent to several hundred million dollars. In other words, everyone benefits from improving treatment to the optimum point. But society, the environment, and

  1. Determination of Optimal Parameters for Diffusion Bonding of Semi-Solid Casting Aluminium Alloy by Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Kaewploy Somsak

    2015-01-01

    Full Text Available The liquid-state welding techniques available are prone to gas porosity problems. To avoid this, solid-state bonding is usually the preferred alternative. Among solid-state bonding techniques, diffusion bonding is often employed in welding aluminium alloy automotive parts in order to enhance their mechanical properties. However, there has been no standard procedure nor any definitive criterion for setting judicious welding parameters. It is thus important to find the set of optimal parameters for effective diffusion bonding. This work proposes the use of response surface methodology to determine such a set of optimal parameters. Response surface methodology is more efficient in dealing with complex processes than other available techniques. There are two variations of response surface methodology; the one adopted in this work is the central composite design approach, because when the initial upper and lower bounds of the desired parameters are exceeded, the central composite design approach is still capable of yielding the optimal values of parameters that appear to be outside the initially preset range. Results from the experiments show that the pressing pressure and the holding time affect the tensile strength of the joint. The data obtained from the experiment fit well to a quadratic equation with a high coefficient of determination (R2 = 94.21%). It is found that the optimal parameters for joining semi-solid casting aluminium alloy by diffusion bonding are a pressing pressure of 2.06 MPa and a holding time of 214 minutes, giving the highest tensile strength of 142.65 MPa
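    To make the response-surface step concrete, here is a hedged sketch that fits a full quadratic model in pressing pressure and holding time to central-composite-style runs and solves for the stationary point; the run data are synthetic placeholders, not the measurements from the study.

```python
import numpy as np

# Fit sigma(P, t) = b0 + b1*P + b2*t + b3*P^2 + b4*t^2 + b5*P*t and locate its stationary point.
runs = np.array([
    # pressure (MPa), holding time (min), tensile strength (MPa) -- synthetic placeholders
    [1.0, 120, 110.0], [3.0, 120, 118.0], [1.0, 300, 121.0], [3.0, 300, 125.0],
    [0.6, 210, 112.0], [3.4, 210, 126.0], [2.0,  83, 115.0], [2.0, 337, 130.0],
    [2.0, 210, 140.0], [2.0, 210, 139.0], [2.0, 210, 141.0],
])
P, t, y = runs[:, 0], runs[:, 1], runs[:, 2]

X = np.column_stack([np.ones_like(P), P, t, P**2, t**2, P * t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b3, b4, b5 = beta

# Stationary point of the fitted quadratic: solve the 2x2 linear system grad(sigma) = 0.
H = np.array([[2 * b3, b5], [b5, 2 * b4]])
opt_P, opt_t = np.linalg.solve(H, [-b1, -b2])

y_hat = X @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.3f}, stationary point: pressure = {opt_P:.2f} MPa, holding time = {opt_t:.0f} min")
```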

  2. IDEA 2004: Section 615 (k) (Placement in Alternative Educational Setting). PHP-c111

    Science.gov (United States)

    PACER Center, 2005

    2005-01-01

    School personnel may consider any unique circumstances on a case-by-case basis when determining whether to order a change in placement for a child with a disability who violates a code of student conduct. This article describes IDEA 2004: Section 615 (k), which discusses the placement of special needs children in alternative educational settings.…

  3. Sensor placement for calibration of spatially varying model parameters

    Science.gov (United States)

    Nath, Paromita; Hu, Zhen; Mahadevan, Sankaran

    2017-08-01

    This paper presents a sensor placement optimization framework for the calibration of spatially varying model parameters. To account for the randomness of the calibration parameters over space and across specimens, the spatially varying parameter is represented as a random field. Based on this representation, Bayesian calibration of spatially varying parameter is investigated. To reduce the required computational effort during Bayesian calibration, the original computer simulation model is substituted with Kriging surrogate models based on the singular value decomposition (SVD) of the model response and the Karhunen-Loeve expansion (KLE) of the spatially varying parameters. A sensor placement optimization problem is then formulated based on the Bayesian calibration to maximize the expected information gain measured by the expected Kullback-Leibler (K-L) divergence. The optimization problem needs to evaluate the expected K-L divergence repeatedly which requires repeated calibration of the spatially varying parameter, and this significantly increases the computational effort of solving the optimization problem. To overcome this challenge, an approximation for the posterior distribution is employed within the optimization problem to facilitate the identification of the optimal sensor locations using the simulated annealing algorithm. A heat transfer problem with spatially varying thermal conductivity is used to demonstrate the effectiveness of the proposed method.
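
    A minimal sketch of the outer optimization loop described here is given below, assuming a 1-D domain and using a placeholder utility in place of the expected K-L divergence; computing the real expected information gain would require the Bayesian calibration, surrogate models and posterior approximation discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 1.0, 50)      # candidate sensor locations (assumed 1-D domain)
n_sensors = 5

def expected_kl_gain(locs):
    """Placeholder utility standing in for the expected K-L divergence.
    Here it simply rewards well-spread sensors; a real implementation would
    run (approximate) Bayesian calibration of the random-field parameter."""
    d = np.diff(np.sort(locs))
    return d.min() + 0.1 * d.mean()

# Simulated annealing over subsets of candidate locations.
current = rng.choice(candidates, n_sensors, replace=False)
best, best_u = current.copy(), expected_kl_gain(current)
T = 1.0
for step in range(2000):
    proposal = current.copy()
    proposal[rng.integers(n_sensors)] = rng.choice(candidates)   # move one sensor
    if len(np.unique(proposal)) < n_sensors:
        continue
    du = expected_kl_gain(proposal) - expected_kl_gain(current)
    if du > 0 or rng.random() < np.exp(du / T):                  # accept uphill or with probability
        current = proposal
        if expected_kl_gain(current) > best_u:
            best, best_u = current.copy(), expected_kl_gain(current)
    T *= 0.999                                                   # geometric cooling schedule

print("selected sensor locations:", np.sort(best))
```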

  4. Impact of placement type on the development of clinical competency in speech-language pathology students.

    Science.gov (United States)

    Sheepway, Lyndal; Lincoln, Michelle; McAllister, Sue

    2014-01-01

    Speech-language pathology students gain experience and clinical competency through clinical education placements. However, currently little empirical information exists regarding how competency develops. Existing research about the effectiveness of placement types and models in developing competency is generally descriptive and based on opinions and perceptions. The changing nature of education of speech-language pathology students, diverse student cohorts, and the crisis in finding sufficient clinical education placements mean that establishing the most effective and efficient methods for developing clinical competency in students is needed. To gather empirical information regarding the development of competence in speech-language pathology students; and to determine if growth of competency differs in groups of students completing placements that differ in terms of caseload, intensity and setting. Participants were students in the third year of a four-year undergraduate speech-language pathology degree who completed three clinical placements across the year and were assessed with the COMPASS® competency assessment tool. Competency development for the whole group across the three placements is described. Growth of competency in groups of students completing different placement types is compared. Interval-level data generated from the students' COMPASS® results were subjected to parametric statistical analyses. The whole group of students increased significantly in competency from placement to placement across different placement settings, intensities and client age groups. Groups completing child placements achieved significantly higher growth in competency when compared with the competency growth of students completing adult placements. Growth of competency was not significantly different for students experiencing different intensity of placements, or different placement settings. These results confirm that the competency of speech-language pathology students

  5. Comparison of esophageal placement of Bravo capsule system under direct endoscopic guidance with conventional placement method

    Directory of Open Access Journals (Sweden)

    Aijaz A Sofi

    2010-10-01

    Full Text Available Aijaz A Sofi, Charles Filipiak, Thomas Sodeman, Usman Ahmad, Ali Nawras, Isam Daboul. Department of Medicine, Division of Gastroenterology, University of Toledo Medical Center, Toledo, Ohio, USA. Background: Conventional placement of a wireless esophageal pH monitoring device in the esophagus requires initial endoscopy to determine the distance to the gastroesophageal junction. Blind placement of the capsule by the Bravo delivery system is followed by repeat endoscopy to confirm placement. Alternatively, the capsule can be placed under direct vision during endoscopy. Currently there are no published data comparing the efficiency of one method over the other. The objective of this study was to compare the method of Bravo wireless pH device placement under direct visualization with the conventional method. Methods: A retrospective study involving 58 patients (29 patients with indirect and 29 patients with direct visualization) who had Bravo capsule placement. The physician endoscopy procedure notes, nurse's notes, postprocedure notes, recovery notes, and pH monitoring results were reviewed. The safety of the procedures, length of the procedures, and patient tolerability were evaluated. Results: None of the 58 patients had early detachment of the device, and there were no immediate procedure-related complications. The overall incidence of complications in both groups was similar. No failures due to the technique were noted in either group. The average amount of time taken for the procedure was similar in both groups. Conclusion: The technique of placing a Bravo pH device under direct visualization is as safe and effective as the conventional method. In addition, it has the added advantage of avoiding a second endoscopic intubation. Keywords: Bravo capsule, technique, esophageal pH monitoring

  6. ADVANCED DENTAL IMPLANT PLACEMENT TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Alex M. GREENBERG

    2017-12-01

    Full Text Available The availability of in-office Cone Beam CT (CBCT) scanners, dental implant planning software, CAD/CAM milling, and rapid printing technologies allows for the precise placement of dental implants and immediate prosthetic temporization. These technologies allow for flapless implant placement, or open-flap bone reduction for “All on 4” techniques, with improved preoperative planning and intraoperative performance. CBCT provides practitioners in an office setting with powerful diagnostic capabilities for the evaluation of bone quality and quantity, as well as dental and osseous pathology, essential for better informed dental implant treatment. CBCT offers the convenience of in-office imaging and decreased radiation exposure. Rapid printing technologies provide decreased time and high accuracy for bone model and surgical guide fabrication.

  7. Economic antecedents of prone infant sleep placement among black mothers.

    Science.gov (United States)

    Bruckner, Tim A

    2008-09-01

    Black infants die from sudden infant death syndrome at twice the incidence observed among non-Hispanic white infants. Explanations for this disparity include a two-fold greater prevalence of prone (i.e., stomach) infant sleep placement among black caregivers. I test the hypothesis that the contraction of state economies may contribute to this disparity by increasing the risk of prone infant sleep placement among black mothers. I retrieved data from the Bureau of Labor Statistics employment series and 33,518 black mothers in 26 states participating in the 1996-2002 Pregnancy Risk Assessment Monitoring System. I use weighted multivariable analyses to control for individual characteristics and state and time trends. Black mothers exhibit an elevated risk of reporting prone placement one month following statewide declines in employment (adjusted odds ratio for a one percent decline = 1.11, 95% CI 1.01 to 1.22). This risk remains elevated after control for individual variables. In contrast, I find no association between the economy and prone placement among white mothers. Statewide economic decline may reduce adherence to the recommended non-prone infant sleep position among black, but not white, mothers. Additional research among black caregivers should determine which mechanisms connect economic downturns to prone infant sleep placement.

  8. Governors' travel (déplacements des gouverneurs)

    International Development Research Centre (IDRC) Digital Library (Canada)

    André Lavoie

    Transit: any location that is not considered the destination to which one is travelling on business. Traveller: refers to a governor in the context of the present policy. 5. Roles and responsibilities. 5.1. Persons involved in the process related to governors' travel. All persons, including the governors, ...

  9. Determination of optimal pollution levels through multiple-criteria decision making: an application to the Spanish electricity sector

    International Nuclear Information System (INIS)

    Linares, P.

    1999-01-01

    Efficient pollution management requires the harmonisation of often conflicting economic and environmental aspects. A compromise has to be found in which social welfare is maximised. The determination of this social optimum has been attempted with different tools, of which the most correct according to neo-classical economics may be the one based on the economic valuation of the externalities of pollution. However, this approach is still controversial, and few decision makers trust the results obtained enough to apply them. A very powerful alternative exists, however, which avoids the problem of monetising physical impacts. Multiple-criteria decision making provides methodologies for dealing with impacts in different units and for incorporating the preferences of decision makers or society as a whole, thus allowing for the determination of social optima under heterogeneous criteria, which is usually the case in pollution management decisions. In this paper, a compromise programming model is presented for the determination of the optimal pollution levels for the electricity industry in Spain for carbon dioxide, sulphur dioxide, nitrogen oxides, and radioactive waste. The preferences of several sectors of society are incorporated explicitly into the model, so that the solution obtained represents the optimal pollution level from a social point of view. Results show that cost minimisation is still the main objective for society, but the simultaneous consideration of the rest of the criteria achieves large pollution reductions at a low cost increment. (Author)
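
    As a rough illustration of the compromise programming idea described here, the sketch below minimises a weighted, normalised distance to the ideal point over a hypothetical generation mix. The technologies, per-unit figures and preference weights are invented for the example and are not taken from the study.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical per-unit criteria (cost, CO2, SO2, NOx, radioactive waste) for
# three illustrative generation technologies; none of these figures come from the paper.
criteria = np.array([
    [30.0, 0.9, 0.004, 0.003, 0.0],   # coal
    [45.0, 0.4, 0.000, 0.001, 0.0],   # gas
    [60.0, 0.0, 0.000, 0.000, 1.0],   # nuclear (waste in relative units)
])
weights = np.array([0.4, 0.2, 0.15, 0.15, 0.1])   # assumed stated preferences of "society"

def mix_criteria(shares):
    return shares @ criteria

# Ideal and anti-ideal points taken over the corners of the feasible simplex.
corners = np.eye(3)
values = np.array([mix_criteria(c) for c in corners])
ideal, nadir = values.min(axis=0), values.max(axis=0)

def compromise_distance(shares, p=2):
    z = mix_criteria(shares)
    norm = (z - ideal) / np.where(nadir > ideal, nadir - ideal, 1.0)
    return np.sum((weights * norm) ** p) ** (1.0 / p)

res = minimize(compromise_distance, x0=np.full(3, 1 / 3),
               bounds=[(0, 1)] * 3,
               constraints={"type": "eq", "fun": lambda s: s.sum() - 1})
print("compromise generation mix (coal, gas, nuclear):", np.round(res.x, 3))
```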

  10. Report of Child Placement Study Committee, January, 1969.

    Science.gov (United States)

    Rhode Island Council of Community Services, Inc., Providence.

    As a first step in determining the effectiveness of programs for children and families, the Rhode Island Council of Community Services made an overall study of the number and type of children in child placement services. The Council based its report on the characteristics of 420 randomly selected children, of which 211 were in foster homes; 214 in…

  11. Optimization and comparison of three different methods for the determination of Rn-222 in water

    Energy Technology Data Exchange (ETDEWEB)

    Belloni, P.; Ingrao, G. [ENEA CRE, Casaccia AMB-BIO, Roma (Italy); Cavaioli, M.; Notaro, M.; Torri, G.; Vasselli, R. [ANPA, National Environmental Protection Agency, DISP ARA MET, Roma (Italy); Mancini, C. [Nuclear Engineering Department, University 'La Sapienza', Roma (Italy); Santaroni, P. [National Institute of Nutrition, Roma (Italy)

    1995-10-19

    Three different systems for the determination of radon in water have been examined: liquid scintillation counting (LSC), degassification followed by Lucas cell counting (LCC) and gamma counting (GC). Particular care has been devoted to the sampling methodologies of the water. Comparative results for several environmental samples are given. A critical evaluation is also given on the basis of the final aim of the measurements.

  12. Optimization and comparison of three different methods for the determination of Rn-222 in water

    International Nuclear Information System (INIS)

    Belloni, P.; Ingrao, G.; Cavaioli, M.; Notaro, M.; Torri, G.; Vasselli, R.; Mancini, C.; Santaroni, P.

    1995-01-01

    Three different systems for the determination of radon in water have been examined: liquid scintillation counting (LSC), degassification followed by Lucas cell counting (LCC) and gamma counting (GC). Particular care has been devoted to the sampling methodologies of the water. Comparative results for several environmental samples are given. A critical evaluation is also given on the basis of the final aim of the measurements

  13. The optimal scheme of self blood pressure measurement as determined from ambulatory blood pressure recordings

    NARCIS (Netherlands)

    Verberk, Willem J.; Kroon, Abraham A.; Kessels, Alfons G. H.; Lenders, Jacques W. M.; Thien, Theo; van Montfrans, Gert A.; Smit, Andries J.; de Leeuw, Peter W.

    Objective To determine how many self-measurements of blood pressure (BP) should be taken at home in order to obtain a reliable estimate of a patient's BP. Design Participants performed self blood pressure measurement (SBPM) for 7 days (triplicate morning and evening readings). In all of them, office

  14. [Accuracy of computer-guided implant placement and influencing factors].

    Science.gov (United States)

    Jinmeng, Li; Guomin, Ou

    2017-02-01

    Digital technology is a new trend in implant dentistry and oral medical technology. Stereolithographic surgical guides, which enable computer-guided implant placement, have been introduced gradually to the market. Surgeons are attracted to this approach because it features visualized preoperative planning, a simple surgical procedure, flapless implant placement, and immediate restoration. However, surgeons are concerned about the accuracy and complications of this approach. This review aims to introduce the classification of computer-guided implant placement. The advantages, disadvantages, and accuracy of this approach are also analyzed. Moreover, factors that may affect the outcomes of computer-guided implant placement are determined. The results will provide a reference for surgeons regarding the clinical application of this approach.

  15. Commercial breaks vs. product placement: what works for young consumers?

    Directory of Open Access Journals (Sweden)

    Ovidiu Mircea ŢIEREAN

    2015-06-01

    Full Text Available The article presents the results of a quantitative marketing research study conducted on young consumers from Braşov County regarding their perceptions of commercial breaks and product placement during the most important reality shows. The purpose of this research is to determine to what extent young consumers watch the evening shows and to what extent they remember the brands advertised during commercial breaks and through product placement within the shows. For young consumers, the evening shows are time spent with family and friends. A large majority do not watch the commercial breaks, and they mostly remember brands that also practice product placement during the shows. There is a direct correlation between the number of shows watched and the percentage of consumers who remember the main sponsors of the evening shows.

  16. Humanitarian engineering placements in our own communities

    Science.gov (United States)

    VanderSteen, J. D. J.; Hall, K. R.; Baillie, C. A.

    2010-05-01

    There is an increasing interest in the humanitarian engineering curriculum, and a service-learning placement could be an important component of such a curriculum. International placements offer some important pedagogical advantages, but also have some practical and ethical limitations. Local community-based placements have the potential to be transformative for both the student and the community, although this potential is not always seen. In order to investigate the role of local placements, qualitative research interviews were conducted. Thirty-two semi-structured research interviews were conducted and analysed, resulting in a distinct outcome space. It is concluded that local humanitarian engineering placements greatly complement international placements and are strongly recommended if international placements are conducted. More importantly it is seen that we are better suited to address the marginalised in our own community, although it is often easier to see the needs of an outside populace.

  17. Consumer Buying Behaviour; A Factor of Compulsive Buying Prejudiced by Windowsill Placement

    OpenAIRE

    Hameed, Irfan; Soomro, Yasir

    2012-01-01

    This empirical research investigates the impact of windowsill placement on the compulsive buying behavior of consumers for three different types of products, i.e., convenience products, shopping products, and specialty products. A positive effect of windowsill placement on all three product categories has been hypothesized. Categorical regression (optimal scaling) was used to test the hypotheses. The data were collected via a self-administered questionnaire from Pakistan through systema...

  18. Optimization of the Analytical Method Using HPLC with Fluorescence Detection to Determine Selected Polycyclic Aromatic Compounds in Clean Water Samples

    International Nuclear Information System (INIS)

    Garcia Alonso, S.; Perez Pastor, R. M.

    2013-01-01

    A study on the comparison and evaluation of 3 miniaturized extraction methods for the determination of selected PACs in clear waters is presented. Three types of liquid-liquid extraction were used for chromatographic analysis by HPLC with fluorescence detection. The main objective was the optimization and development of simple, rapid and low cost methods, minimizing the use of extracting solvent volume. The work also includes a study on the scope of the methods developed at low and high levels of concentration and intermediate precision. (Author)

  19. Determination of an optimal dose of medetomidine-ketamine-buprenorphine for anaesthesia in the Cape ground squirrel (Xerus inauris)

    OpenAIRE

    K. E. Joubert; T. Serfontein; M. Scantlebury; M B Manjerovic; P. W. Bateman; M B Manjerovic; P. W. Bateman; N. C. Bennett; J. M. Waterman

    2011-01-01

    The optimal dose of medetomidine-ketamine-buprenorphine was determined in 25 Cape ground squirrels (Xerus inauris) undergoing surgical implantation of a temperature logger into the abdominal cavity. At the end of anaesthesia, the squirrels were given atipamezole intramuscularly to reverse the effects of medetomidine. The mean dose of medetomidine was 67.6±9.2 μg/kg, ketamine 13.6±1.9 mg/kg and buprenorphine 0.5±0.06 μg/kg. Induction time was 3.1 ± 1.4 min. This produced surgical anaesthesia f...

  20. Determination of the optimal tempering temperature in hard facing of the forging dies

    Directory of Open Access Journals (Sweden)

    Milan Mutavdžić

    2012-05-01

    Full Text Available The selection of the optimal heat treatment technology for the repair of damaged forging dies is analyzed here. These tools are manufactured from alloyed tool steels intended for operation at elevated temperatures. Such steels are prone to self-hardening, so in reparatory hard-facing they must be preheated, additionally heated and tempered. During tempering in the temperature interval 500-600°C, a secondary increase in hardness and a decrease in impact toughness occur, the so-called reversible temper brittleness. It is shown that this can be avoided by the application of metallurgical and technological measures. Metallurgical measures concern the appropriate selection of steels. Since the steels considered are inherently prone to temper brittleness, we conducted experimental investigations to define the technological measures needed to avoid it. Tests on models were conducted: tempering from different temperatures, with slow heating and cooling in still air. Hardness measurements showed that at 520°C the secondary increase in hardness occurs, with a drop in impact toughness. Additional hard-facing tests included samples tempered under various regimes. Samples were prepared for mechanical and metallographic investigations. The results presented illustrate the influence of the additional heat treatment on the structure, hardness and mechanical properties of the hard-faced layers. This enabled establishing the possibility of avoiding temper brittleness through technological measures.

  1. Using orthogonal design to determine optimal conditions for biodegradation of phenanthrene in mangrove sediment slurry.

    Science.gov (United States)

    Chen, Jian Lin; Au, Kwai Chi; Wong, Yuk Shan; Tam, Nora Fung Yee

    2010-04-15

    In the present paper, the effects of four factors, each at three levels, on the biodegradation of phenanthrene (Phe), a 3-ring PAH, in contaminated mangrove sediment slurry were investigated using an orthogonal experimental design. The factors and levels were (i) sediment type (clay loam, clayey and sandy); (ii) inoculum (Sphingomonas sp., a mixture of Sphingomonas sp. and Mycobacterium sp., and no inoculum); (iii) presence of other PAHs (fluorene, pyrene, and none); and (iv) salinity (5, 15 and 25 ppt). Variance analysis based on the percentages of Phe biodegradation showed that the presence of other PAHs had little effect on phenanthrene biodegradation. The kinetics of phenanthrene biodegradation in all experiments were best fitted by the first-order rate model. The highest first-order rate constant, k, was 0.1172 h⁻¹ with 97% Phe degradation, while the lowest k value was 0.0004 h⁻¹ and phenanthrene was not degraded throughout the 7-d experiment. The p values of k for the four factors followed the same trend as those for the biodegradation percentage. Difference analysis revealed that optimal phenanthrene biodegradation would take place in clay loam sediment slurry at low salinity (5 to 15 ppt) with the inoculation of both Sphingomonas sp. and Mycobacterium sp. 2009 Elsevier B.V. All rights reserved.
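
    Fitting the first-order rate constant mentioned above is a one-liner with a standard curve-fitting routine. The sketch below uses invented residual-fraction data purely to illustrate the calculation; the study's own time-course measurements are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical residual phenanthrene fractions over a 7-day slurry experiment
# (illustrative values only, not the study's data).
t_hours = np.array([0, 12, 24, 48, 72, 120, 168], dtype=float)
fraction_remaining = np.array([1.00, 0.28, 0.07, 0.02, 0.01, 0.005, 0.003])

def first_order(t, k):
    # C(t)/C0 = exp(-k t) for first-order decay
    return np.exp(-k * t)

(k_fit,), _ = curve_fit(first_order, t_hours, fraction_remaining, p0=[0.05])
degraded_pct = 100 * (1 - first_order(168, k_fit))
print(f"k = {k_fit:.4f} 1/h, degradation after 7 d = {degraded_pct:.1f}%")
```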

  2. The Potential for Zinc Stable Isotope Techniques and Modelling to Determine Optimal Zinc Supplementation

    Science.gov (United States)

    Tran, Cuong D.; Gopalsamy, Geetha L.; Mortimer, Elissa K.; Young, Graeme P.

    2015-01-01

    It is well recognised that zinc deficiency is a major global public health issue, particularly in young children in low-income countries with diarrhoea and environmental enteropathy. Zinc supplementation is regarded as a powerful tool to correct zinc deficiency as well as to treat a variety of physiologic and pathologic conditions. However, the dose and frequency of its use as well as the choice of zinc salt are not clearly defined regardless of whether it is used to treat a disease or correct a nutritional deficiency. We discuss the application of zinc stable isotope tracer techniques to assess zinc physiology, metabolism and homeostasis and how these can address knowledge gaps in zinc supplementation pharmacokinetics. This may help to resolve optimal dose, frequency, length of administration, timing of delivery to food intake and choice of zinc compound. It appears that long-term preventive supplementation can be administered much less frequently than daily but more research needs to be undertaken to better understand how best to intervene with zinc in children at risk of zinc deficiency. Stable isotope techniques, linked with saturation response and compartmental modelling, also have the potential to assist in the continued search for simple markers of zinc status in health, malnutrition and disease. PMID:26035248

  3. Determination of the Cascade Reservoir Operation for Optimal Firm-Energy Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Azmeri

    2013-08-01

    Full Text Available Indonesia today faces a new paradigm in water management, in which applying integrated water resources management has become an unavoidable task in the pursuit of greater effectiveness and efficiency. One of the most interesting case studies is the Citarum River, one of the most promising rivers for water supply in West Java, Indonesia. Along the river, the Saguling, Cirata and Djuanda Reservoirs have been constructed in series (cascade). Saguling and Cirata are operated primarily for hydroelectric power, while Djuanda is a multipurpose reservoir mainly operated for irrigation and contributing to the domestic water supply of Jakarta (the capital city of Indonesia). Because all three reservoirs rely on the same resource, this situation raises management and operational problems. A new management and operation approach is therefore urgently required in order to achieve effective and efficient output and to avoid conflicts over water use. This study aims to obtain the energy production of the Citarum Cascade Reservoir System using Genetic Algorithm (GA) optimization with an objective function that maximizes firm energy. Firm energy is the minimum energy that must be available in a given time period. The result obtained with the GA is then compared with that of a conventional search technique, Non-Linear Programming (NLP). The GA-derived operating curves reveal higher energy and firm energy than the NLP model.

  4. Parameter Determination of Milling Process Using a Novel Teaching-Learning-Based Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Zhibo Zhai

    2015-01-01

    Full Text Available In milling operations, cutting parameter optimization dramatically affects production time, cost, profit rate, and the quality of the final products. Aiming to select the optimum machining parameters in multitool milling operations such as corner milling, face milling, pocket milling, and slot milling, this paper presents a novel version of TLBO, TLBO with a dynamic assignment learning strategy (DATLBO), in which all the learners are divided into three categories based on their results in the “Learner Phase”: good learners, moderate learners, and poor ones. Good learners are self-motivated and try to learn by themselves; each moderate learner uses a probabilistic approach to select one of the good learners to learn from; each poor learner likewise uses a probabilistic approach to select several moderate learners to learn from. The CEC2005 contest benchmark problems are first used to illustrate the effectiveness of the proposed algorithm. Finally, the DATLBO algorithm is applied to a multitool milling process based on a maximum profit rate criterion with five practical technological constraints. The unit time, unit cost, and profit rate from the Handbook (HB), the Feasible Direction (FD) method, the Genetic Algorithm (GA) method, five other TLBO variants, and DATLBO are compared, illustrating that the proposed approach is more effective than HB, FD, GA, and the five other TLBO variants.
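
    For orientation, the sketch below implements the basic teacher and learner phases of standard TLBO on a stand-in objective. It does not reproduce the paper's DATLBO learner categories or the milling cost model; both would replace the placeholder pieces marked in the comments.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Stand-in cost surface (sphere); a real milling model would compute
    # unit cost or profit rate from speed/feed/depth-of-cut variables.
    return np.sum(x**2, axis=-1)

pop, dim, iters = 20, 3, 200
lb, ub = -5.0, 5.0
X = rng.uniform(lb, ub, (pop, dim))
f = objective(X)

for _ in range(iters):
    # Teacher phase: move learners toward the best solution, away from the mean.
    teacher = X[np.argmin(f)]
    Tf = rng.integers(1, 3)                       # teaching factor (1 or 2)
    X_new = np.clip(X + rng.random((pop, dim)) * (teacher - Tf * X.mean(axis=0)), lb, ub)
    f_new = objective(X_new)
    better = f_new < f
    X[better], f[better] = X_new[better], f_new[better]

    # Learner phase: each learner interacts with a random peer and keeps improvements.
    for i in range(pop):
        j = rng.integers(pop)
        if j == i:
            continue
        step = (X[i] - X[j]) if f[i] < f[j] else (X[j] - X[i])
        cand = np.clip(X[i] + rng.random(dim) * step, lb, ub)
        fc = objective(cand)
        if fc < f[i]:
            X[i], f[i] = cand, fc

print("best value found:", f.min())
```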

  5. Ultrasound Signal Analysis Applied to Determine the Optimal Contrast Dose for Echographic Examinations

    Directory of Open Access Journals (Sweden)

    Roberto FRANCHINI

    2010-12-01

    Full Text Available In recent years the understanding of the behaviour of currently available ultrasound contrast agents (UCAs, in the form of gas-filled microbubbles encapsulated in elastic shells, has significantly improved thanks to “ad hoc” designed “in vitro” studies. However, in several studies there has been a tendency to use high UCA concentrations, potentially reducing the safety of microbubbles in clinical applications. In this study we investigated a possible strategy to improve microbubble safety by reducing the injection dose and employing low ultrasound intensities. We measured the achievable contrast enhancement insonifying microbubbles at different low concentrations (range 0.01-0.10 µL/mL using a very low mechanical index (MI=0.08. Our results, based on the use of advanced techniques for signal processing and spectrum analysis, showed that UCA backscatter strongly depends on microbubble concentration also in the considered low range, providing useful indications towards the definition of an optimal low contrast dose, effectively employable at low MIs.

  6. The Potential for Zinc Stable Isotope Techniques and Modelling to Determine Optimal Zinc Supplementation

    Directory of Open Access Journals (Sweden)

    Cuong D. Tran

    2015-05-01

    Full Text Available It is well recognised that zinc deficiency is a major global public health issue, particularly in young children in low-income countries with diarrhoea and environmental enteropathy. Zinc supplementation is regarded as a powerful tool to correct zinc deficiency as well as to treat a variety of physiologic and pathologic conditions. However, the dose and frequency of its use as well as the choice of zinc salt are not clearly defined regardless of whether it is used to treat a disease or correct a nutritional deficiency. We discuss the application of zinc stable isotope tracer techniques to assess zinc physiology, metabolism and homeostasis and how these can address knowledge gaps in zinc supplementation pharmacokinetics. This may help to resolve optimal dose, frequency, length of administration, timing of delivery to food intake and choice of zinc compound. It appears that long-term preventive supplementation can be administered much less frequently than daily but more research needs to be undertaken to better understand how best to intervene with zinc in children at risk of zinc deficiency. Stable isotope techniques, linked with saturation response and compartmental modelling, also have the potential to assist in the continued search for simple markers of zinc status in health, malnutrition and disease.

  7. MDCT urography: retrospective determination of optimal delay time after intravenous contrast administration

    International Nuclear Information System (INIS)

    Meindl, Thomas; Coppenrath, Eva; Kahlil, Rami; Reiser, Maximilian F.; Mueller-Lisse, U.G.; Mueller-Lisse, Ulrike L.

    2006-01-01

    The optimal delay time after intravenous (i.v.) administration of contrast medium (CM) for opacification of the upper urinary tract (UUT) in multidetector computed tomography urography (MDCTU) was investigated. UUT opacification was retrospectively evaluated in 36 four-row MDCTU examinations. Single- (n=10) or dual-phase (n=26) MDCTU was performed with at least a 5-min delay after i.v. CM. The UUT was divided into four sections: intrarenal collecting system (IRCS), proximal, middle and distal ureter. Two independent readers rated UUT opacification: 1, none; 2, partial; 3, complete. Numbers and percentages of scores, and the 5%, 25%, 50%, 75% and 95% percentiles of delay time, were calculated for each UUT section. After removing diseased segments, 344 segments were analysed. The IRCS, proximal and middle ureter were completely opacified in 94% (81/86), 93% (80/86) and 77% (66/86) of cases, respectively. Median delay time was 15 min for complete opacification. The distal ureter was completely opacified in 37% (32/86) of cases and not opacified in 26% (22/86). Median delay time for complete opacification was 11 min, with 25% and 75% percentiles of 10 and 16 min, respectively. At MDCTU, opacification of the IRCS, proximal and middle ureter was hardly sensitive to delay time. Delay times between 10 and 16 min were favourable for the distal ureter. (orig.)

  8. Development of An Optimization Method for Determining Automation Rate in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun

    2014-01-01

    Since automation was introduced in various industrial fields, it has been known that automation provides positive effects, such as greater efficiency and fewer human errors, as well as a negative effect, the so-called out-of-the-loop (OOTL) problem. Thus, before introducing automation in the nuclear field, the positive and negative effects of automation on human operators should be estimated. In this paper, focusing on CPS, an optimization method to find an appropriate proportion of automation is suggested by integrating the proposed cognitive automation rate and the concept of the level of ostracism. The cognitive automation rate estimation method was suggested to express the reduction in human cognitive load, and the level of ostracism was suggested to express the difficulty in obtaining information from the automation system and the increased uncertainty of human operators' diagnoses. The maximum proportion of automation that maintains a high level of attention for monitoring the situation is derived from an experiment, and the automation rate is estimated by the suggested estimation method. This approach is expected to yield an appropriate proportion of automation that avoids the OOTL problem while providing maximum efficacy.

  9. A Placement Heuristic for a Commercial Decision Support System for Container Vessel Stowage

    DEFF Research Database (Denmark)

    Delgado-Ortegon, Alberto; Jensen, Rune Møller; Guilbert, Nicolas

    2013-01-01

    with their users and almost all require fast feedback from the optimization algorithms. We propose a placement heuristic that serves as the optimization component of a decision support system to interactively generate container vessel stowage plans, a complex problem with high economical impact within the shipping...

  10. Determining the optimal size of small molecule mixtures for high throughput NMR screening

    International Nuclear Information System (INIS)

    Mercier, Kelly A.; Powers, Robert

    2005-01-01

    High-throughput screening (HTS) using NMR spectroscopy has become a common component of the drug discovery effort and is widely used throughout the pharmaceutical industry. NMR provides additional information about the nature of small molecule-protein interactions compared to traditional HTS methods. In order to achieve comparable efficiency, small molecules are often screened as mixtures in NMR-based assays. Nevertheless, an analysis of the efficiency of mixtures and a corresponding determination of the optimum mixture size (OMS) that minimizes the amount of material and instrumentation time required for an NMR screen has been lacking. A model for calculating OMS based on the application of the hypergeometric distribution function to determine the probability of a 'hit' for various mixture sizes and hit rates is presented. An alternative method for the deconvolution of large screening mixtures is also discussed. These methods have been applied in a high-throughput NMR screening assay using a small, directed library
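
    The hypergeometric reasoning described here can be illustrated directly with scipy. The library size and hit rate below are arbitrary assumptions chosen only to show how the probability of a mixture containing at least one active compound grows with mixture size.

```python
from scipy.stats import hypergeom

library_size = 10000          # compounds in the screening library (assumed)
hit_rate = 0.01               # assumed true hit rate
n_hits = int(library_size * hit_rate)

# Probability that a mixture of size m contains at least one active compound.
for m in (5, 10, 20, 50):
    p_hit = 1.0 - hypergeom.pmf(0, library_size, n_hits, m)
    print(f"mixture size {m:>2}: P(>=1 hit) = {p_hit:.3f}")
```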

  11. A Laplace method for under-determined Bayesian optimal experimental designs

    KAUST Repository

    Long, Quan

    2014-12-17

    In Long et al. (2013), a new method based on the Laplace approximation was developed to accelerate the estimation of the post-experimental expected information gains (Kullback–Leibler divergence) in model parameters and predictive quantities of interest in the Bayesian framework. A closed-form asymptotic approximation of the inner integral and the order of the corresponding dominant error term were obtained in the cases where the parameters are determined by the experiment. In this work, we extend that method to the general case where the model parameters cannot be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the Jacobian matrix of the data model with respect to the parameters, so that the information gain can be reduced to an integration against the marginal density of the transformed parameters that are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the posterior covariance matrix projected over the aforementioned orthogonal directions. To deal with the issue of dimensionality in a complex problem, we use either Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand function. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear under-determined test cases. They include the designs of the scalar parameter in a one dimensional cubic polynomial function with two unidentifiable parameters forming a linear manifold, and the boundary source locations for impedance tomography in a square domain, where the unknown parameter is the conductivity, which is represented as a random field.
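
    For context, the expected information gain that the Laplace method above approximates can also be estimated by brute-force nested Monte Carlo. The toy sketch below does this for a one-parameter linear-Gaussian model where the gain is known in closed form, so the estimator can be checked; the noise level and candidate designs are arbitrary choices for the example, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.5                       # observation noise standard deviation (assumed)

def log_like(y, theta, d):
    # Gaussian log-likelihood for y = d * theta + noise
    return -0.5 * ((y - d * theta) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def eig_double_loop(d, n_outer=2000, n_inner=2000):
    theta = rng.standard_normal(n_outer)                 # prior draws (standard normal prior)
    y = d * theta + sigma * rng.standard_normal(n_outer)
    theta_in = rng.standard_normal(n_inner)
    # log evidence p(y) estimated by an inner Monte Carlo average over the prior
    log_evid = np.array([
        np.log(np.mean(np.exp(log_like(yi, theta_in, d)))) for yi in y
    ])
    return np.mean(log_like(y, theta, d) - log_evid)     # E[log p(y|theta) - log p(y)]

for d in (0.5, 1.0, 2.0):
    exact = 0.5 * np.log(1 + (d / sigma) ** 2)           # closed form for this linear-Gaussian case
    print(f"design d={d}: MC EIG = {eig_double_loop(d):.3f}, exact = {exact:.3f}")
```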

  12. Optimized and Validated Spectrophotometric Methods for the Determination of Enalapril Maleate in Commercial Dosage Forms

    OpenAIRE

    Rahman, Nafisur; Haque, Sk Manirul

    2008-01-01

    Four simple, rapid and sensitive spectrophotometric methods have been proposed for the determination of enalapril maleate in pharmaceutical formulations. The first method is based on the reaction of the carboxylic acid group of enalapril maleate with a mixture of potassium iodate (KIO3) and iodide (KI) to form a yellow colored product in aqueous medium at 25 ± 1°C. The reaction is followed spectrophotometrically by measuring the absorbance at 352 nm. The second, third and fourth methods are based o...

  13. A network society communicative model for optimizing the Refugee Status Determination (RSD procedures

    Directory of Open Access Journals (Sweden)

    Andrea Pacheco Pacífico

    2013-01-01

    Full Text Available This article recommends a new way to improve Refugee Status Determination (RSD) procedures by proposing a network society communicative model based on active involvement and dialogue among all implementing partners. This model, named after proposals from Castells, Habermas, Apel, Chimni, and Betts, would be mediated by the United Nations High Commissioner for Refugees (UNHCR), whose role would be modeled on the practice of the International Committee of the Red Cross (ICRC).

  14. Spectrophotometric determination of anionic surfactants: optimization by response surface methodology and application to Algiers bay wastewater.

    Science.gov (United States)

    Sini, Karima; Idouhar, Madjid; Ahmia, Aida-Cherifa; Ferradj, Abdelhak; Tazerouti, Ammal

    2017-11-23

    A simple analytical method for the quantitative determination of an anionic surfactant in aqueous solutions without liquid-liquid extraction is described. The method is based on the formation of a green-colored ion associate between sodium dodecylbenzenesulfonate (SDBS) and a cationic dye, Brilliant Green (BG), in acidic medium. Spectral changes of the dye on addition of SDBS are studied by visible spectrophotometry at a maximum wavelength of 627 nm. The interactions and micellar properties of SDBS and the cationic dye are also investigated using the surface tension method. The pH, the molar ratio ([BG]/[SDBS]), and the shaking time of the solutions are considered the main parameters affecting the formation of the ion pair. Determination of the anionic surfactant in distilled water gives a detection limit down to 3 × 10⁻⁶ M. Response surface methodology (RSM) is applied to study the absorbance. A Box-Behnken design is used to model the response as a function of the parameters, with the three main parameters each at three levels. Analysis of variance shows that only two parameters affect the absorbance of the ion pair. The statistical results obtained are promising and offer a real possibility of reaching optimum conditions for the formation of the ion pair. As the proposed method is free from interferences from major constituents of water, it has been successfully applied to the determination of anionic surfactant contents in wastewater samples collected from Algiers Bay.

  15. Determining the optimal age for recording the retinal vascular pattern image of lambs.

    Science.gov (United States)

    Rojas-Olivares, M A; Caja, G; Carné, S; Salama, A A K; Adell, N; Puig, P

    2012-03-01

    Newborn Ripollesa lambs (n = 143) were used to assess the optimal age at which the vascular pattern of the retina can be used as a reference for identification and traceability. Retinal images from both eyes were recorded from birth to yearling (d 1, 8, 30, 82, 180, and 388 of age) in duplicate (2,534 images) using a digital camera specially designed for livestock (Optibrand, Fort Collins, CO). Intra- and inter-age image comparisons (9,316 pairs of images) were carried out, and matching score (MS) was used as the exclusion criterion of lamb identity (MS ovino mayor," 6 mo of age and ~35 kg of BW, n = 59); and yearling replacement lambs (YR; >12 mo of age and ~50 kg of BW, n = 25). Values of MS were treated with a model based on the 1-inflated bivariate beta distribution, and treated data were compared by using a likelihood ratio test. Intra-age image comparisons showed that average MS and percentage of images with MS ≥70 increased (P 0.05); no differences were detected for 30-d images (97.4 and 98.0%, respectively, for RR and YR lambs; P > 0.05). Total percentage of matching was achieved when images were obtained from older lambs (180 and 388 d). In conclusion, retinal imaging was a useful tool for verifying the identity and auditing the traceability of live lambs from suckling to yearling. Matching scores were satisfactory when the reference retinal images were obtained from 1-mo-old or older lambs.

  16. Determining the optimal surveillance interval after a colonoscopic polypectomy for the Korean population?

    Directory of Open Access Journals (Sweden)

    Jung Lok Lee

    2017-01-01

    Full Text Available Background/Aims: Western surveillance strategies cannot be directly adapted to the Korean population. The aim of this study was to estimate the risk of metachronous neoplasia and the optimal surveillance interval in the Korean population. Methods: Clinical and pathological data from index colonoscopies performed between June 2006 and July 2008, with surveillance colonoscopies up to May 2015, were compared between the low- and high-risk adenoma (LRA and HRA) groups. The 3- and 5-year cumulative risks of metachronous colorectal neoplasia in both groups were compared. Results: Among 895 eligible patients, surveillance colonoscopy was performed in 399 (44.6%). Most (83.3%) patients with LRA had a surveillance colonoscopy within 5 years, and 70.2% of patients with HRA had a surveillance colonoscopy within 3 years. The cumulative risk of metachronous advanced adenoma was 3.2% within 5 years in the LRA group and only 1.7% within 3 years in the HRA group. The risk of metachronous neoplasia was similar between surveillance intervals of <5 and ≥5 years in the LRA group; however, it was slightly higher at a surveillance interval of ≥3 than <3 years in the HRA group (9.4% vs. 2.4%). In multivariate analysis, age and a ≥3-year surveillance interval were significant independent risk factors for metachronous advanced adenoma (P=0.024 and P=0.030, respectively). Conclusions: Patients had surveillance colonoscopies earlier than recommended by guidelines despite a low risk of metachronous neoplasia. However, the risk of metachronous advanced adenoma was increased in elderly patients and those with a ≥3-year surveillance interval.

  17. Cadmium and lead determination by ICPMS: Method optimization and application in carabao milk samples

    Directory of Open Access Journals (Sweden)

    Riza A. Magbitang

    2012-06-01

    Full Text Available A method utilizing inductively coupled plasma mass spectrometry (ICPMS) as the element-selective detector with microwave-assisted nitric acid digestion as the sample pre-treatment technique was developed for the simultaneous determination of cadmium (Cd) and lead (Pb) in milk samples. The estimated detection limits were 0.09 µg kg⁻¹ and 0.33 µg kg⁻¹ for Cd and Pb, respectively. The method was linear in the concentration range 0.01 to 500 µg kg⁻¹ with correlation coefficients of 0.999 for both analytes. The method was validated using certified reference material BCR 150, and the determined values for Cd and Pb were 18.24 ± 0.18 µg kg⁻¹ and 807.57 ± 7.07 µg kg⁻¹, respectively. Further validation using another certified reference material, NIST 1643e, resulted in determined concentrations of 6.48 ± 0.10 µg L⁻¹ for Cd and 21.96 ± 0.87 µg L⁻¹ for Pb. These determined values agree well with the certified values in the reference materials. The method was applied to processed and raw carabao milk samples collected in Nueva Ecija, Philippines. The Cd levels determined in the samples were in the range 0.11 ± 0.07 to 5.17 ± 0.13 µg kg⁻¹ for the processed milk samples, and 0.11 ± 0.07 to 0.45 ± 0.09 µg kg⁻¹ for the raw milk samples. The concentrations of Pb were in the range 0.49 ± 0.21 to 5.82 ± 0.17 µg kg⁻¹ for the processed milk samples, and 0.72 ± 0.18 to 6.79 ± 0.20 µg kg⁻¹ for the raw milk samples.

  18. 22 CFR 96.50 - Placement and post-placement monitoring until final adoption in incoming cases.

    Science.gov (United States)

    2010-04-01

    ... assumes responsibility for making another placement of the child. (e) The agency or person acts promptly... appropriate in light of the child's age and maturity and, when required by State law, obtains the consent of... origin, if that is determined to be in the child's best interests; (3) How the child's wishes, age...

  19. Use of Monte Carlo Simulations to Determine Optimal Carbapenem Dosing in Critically Ill Patients Receiving Prolonged Intermittent Renal Replacement Therapy.

    Science.gov (United States)

    Lewis, Susan J; Kays, Michael B; Mueller, Bruce A

    2016-10-01

    Pharmacokinetic/pharmacodynamic analyses with Monte Carlo simulations (MCSs) can be used to integrate prior information on model parameters into a new renal replacement therapy (RRT) to develop optimal drug dosing when pharmacokinetic trials are not feasible. This study used MCSs to determine initial doripenem, imipenem, meropenem, and ertapenem dosing regimens for critically ill patients receiving prolonged intermittent RRT (PIRRT). Published body weights and pharmacokinetic parameter estimates (nonrenal clearance, free fraction, volume of distribution, extraction coefficients) with variability were used to develop a pharmacokinetic model. MCS of 5000 patients evaluated multiple regimens in 4 different PIRRT effluent/duration combinations (4 L/h × 10 hours or 5 L/h × 8 hours in hemodialysis or hemofiltration) occurring at the beginning or 14-16 hours after drug infusion. The probability of target attainment (PTA) was calculated using ≥40% free serum concentrations above 4 times the minimum inhibitory concentration (MIC) for the first 48 hours. Optimal doses were defined as the smallest daily dose achieving ≥90% PTA in all PIRRT combinations. At the MIC of 2 mg/L for Pseudomonas aeruginosa, optimal doses were doripenem 750 mg every 8 hours, imipenem 1 g every 8 hours or 750 mg every 6 hours, and meropenem 1 g every 12 hours or 1 g pre- and post-PIRRT. Ertapenem 500 mg followed by 500 mg post-PIRRT was optimal at the MIC of 1 mg/L for Streptococcus pneumoniae. Incorporating data from critically ill patients receiving RRT into MCS resulted in markedly different carbapenem dosing regimens in PIRRT from those recommended for conventional RRTs because of the unique drug clearance characteristics of PIRRT. These results warrant clinical validation. © 2016, The American College of Clinical Pharmacology.
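
    A stripped-down illustration of the Monte Carlo PTA idea follows. It uses a one-compartment bolus model with invented typical clearance, volume, variability and protein-binding values, ignores the PIRRT clearance component that the study models explicitly, and is meant only to show how the ≥40% fT > 4×MIC target over 48 hours translates into code.

```python
import numpy as np

rng = np.random.default_rng(3)
n_patients = 5000
mic = 2.0                                    # mg/L (e.g. the P. aeruginosa MIC cited above)
dose, tau, n_doses = 750.0, 8.0, 6           # mg every 8 h for 48 h (illustrative regimen)
free_fraction = 0.8                          # assumed protein binding

# Log-normal inter-patient variability around assumed typical clearance (L/h) and volume (L).
cl = 8.0 * np.exp(0.3 * rng.standard_normal(n_patients))
v = 25.0 * np.exp(0.2 * rng.standard_normal(n_patients))
ke = cl / v

t = np.linspace(0, 48, 481)                  # 0.1-h time grid
conc = np.zeros((n_patients, t.size))
for i in range(n_doses):
    t_after = np.clip(t - i * tau, 0, None)
    active = (t >= i * tau)                  # each dose contributes only after it is given
    conc += active * (dose / v[:, None]) * np.exp(-ke[:, None] * t_after)

ft_above = np.mean(free_fraction * conc > 4 * mic, axis=1)   # fraction of time free conc > 4xMIC
pta = np.mean(ft_above >= 0.40)
print(f"PTA (>=40% fT > 4xMIC over 48 h): {pta:.2%}")
```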

  20. The Determination of the Optimal Material Proportion in Natural Fiber-Cement Composites Using Design of Mixture Experiments

    Directory of Open Access Journals (Sweden)

    Aramphongphun Chuckaphun

    2016-01-01

    Full Text Available This research aims to determine the optimal material proportion in a natural fiber-cement composite as an alternative to an asbestos fiber-cement composite, such that the materials cost is minimized while the properties still comply with the Thai Industrial Standard (TIS) for profile sheet roof tiles. Two experimental sets were studied in this research. First, a three-component mixture of (i) virgin natural fiber, (ii) synthetic fiber and (iii) cement was studied while the proportion of calcium carbonate was kept constant. Second, an additional material, recycled natural fiber from recycled paper, was used in the mixture, and the resulting four-component mixture was studied. A constrained mixture design was applied to design the two experimental sets above. The experimental data were then analyzed to build the mixture model. In addition, the cost of each material was used to build the materials cost model. These two mathematical models were then employed to optimize the material proportions of the natural fiber-cement composites. In the three-component mixture, the optimal material proportion was found to be 3.14% virgin natural fiber, 1.20% synthetic fiber and 75.67% cement, and the materials cost was reduced by 12%. In the four-component mixture, the optimal material proportion was found to be 3.00% virgin natural fiber, 0.50% recycled natural fiber, 1.08% synthetic fiber, and 75.42% cement, and the materials cost was reduced by 14%. Confirmation runs of 30 experiments were also analyzed statistically to verify the results.

  1. An open-source genetic algorithm for determining optimal seed distributions for low-dose-rate prostate brachytherapy.

    Science.gov (United States)

    McGeachy, P; Madamesila, J; Beauchamp, A; Khan, R

    2015-01-01

    An open source optimizer that generates seed distributions for low-dose-rate prostate brachytherapy was designed, tested, and validated. The optimizer was a simple genetic algorithm (SGA) that, given a set of prostate and urethra contours, determines the optimal seed distribution in terms of coverage of the prostate with the prescribed dose while avoiding hotspots within the urethra. The algorithm was validated in a retrospective study on 45 previously contoured low-dose-rate prostate brachytherapy patients. Dosimetric indices were evaluated to ensure solutions adhered to clinical standards. The SGA performance was further benchmarked by comparing solutions obtained from a commercial optimizer (inverse planning simulated annealing [IPSA]) with the same cohort of 45 patients. Clinically acceptable target coverage by the prescribed dose (V100) was obtained for both SGA and IPSA, with a mean ± standard deviation of 98 ± 2% and 99.5 ± 0.5%, respectively. For the prostate D90, SGA and IPSA yielded 177 ± 8 Gy and 186 ± 7 Gy, respectively, which were both clinically acceptable. Both algorithms yielded reasonable dose to the rectum, with V100 < 0.3 cc. A reduction in dose to the urethra was seen using SGA. SGA solutions showed a slight prostate volume dependence, with smaller prostates (<25 cc) yielding less desirable, although still clinically viable, dosimetric outcomes. SGA plans used, on average, fewer needles than IPSA (21 vs. 24, respectively), which may lead to a reduction in urinary toxicity and edema that alters post-implant dosimetry. An open source SGA was validated that provides a research tool for the brachytherapy community. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
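
    The sketch below gives a flavour of what a simple genetic algorithm for seed placement looks like: binary genomes over a grid of candidate seed positions, a crude inverse-square "dose" kernel, and a fitness that rewards target coverage while penalising a urethral hotspot and excess seeds. All geometry, dose values and GA settings are invented for illustration; the study's optimizer works with real contours and clinical dose calculations.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 2-D geometry (coordinates in cm): grid of candidate seed positions, target
# points standing in for the prostate, and a central "urethra" point.
candidates = np.array([(x, y) for x in np.linspace(-2, 2, 9) for y in np.linspace(-2, 2, 9)])
target_pts = candidates[np.linalg.norm(candidates, axis=1) <= 1.8]
urethra = np.array([[0.0, 0.0]])

def dose(points, genome):
    seeds = candidates[genome.astype(bool)]
    if len(seeds) == 0:
        return np.zeros(len(points))
    r2 = np.sum((points[:, None, :] - seeds[None, :, :]) ** 2, axis=2) + 0.05
    return np.sum(1.0 / r2, axis=1)              # crude inverse-square kernel, not TG-43

def fitness(genome):
    d_t = dose(target_pts, genome)
    d_u = dose(urethra, genome)[0]
    coverage = np.mean(d_t >= 5.0)                # fraction of target above a nominal "prescription"
    hotspot = max(0.0, d_u - 10.0)                # penalise urethral overdose
    return coverage - 0.05 * hotspot - 0.002 * genome.sum()   # also prefer fewer seeds

pop = rng.integers(0, 2, (40, len(candidates)))
for gen in range(150):
    f = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(-f)[:20]]            # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        cut = rng.integers(1, len(candidates))    # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(len(child)) < 0.01      # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]
print("seeds used:", int(best.sum()), "fitness:", round(float(fitness(best)), 3))
```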

  2. A scientific model to determine the optimal radiographer staffing component in a nuclear medicine department

    International Nuclear Information System (INIS)

    Shipanga, A.N.; Ellmann, A.

    2004-01-01

    Full text: Introduction: Nuclear medicine in South Africa is developing fast. Much has changed since the construction of a scientific model for determining an optimum number of radiographer posts in a Nuclear Medicine department in the late 1980s. Aim: The aim of this study was to ascertain whether the number of radiographers required by a Nuclear Medicine department can still be determined according to the norms established in 1988. Methods: A quantitative study using a non-experimental evaluation design was conducted to determine the ratios between current radiographer workload and staffing norms. The workload ratios were analysed using the procedure statistics of the Nuclear Medicine department at Tygerberg Hospital. Radiographers provided data about their activities related to patient procedures, including information about the condition of the patients, activities in the radiopharmaceutical laboratory, and patient-related administrative tasks. These were factored into an equation relating this data to working hours, including vacation and sick leave. The calculation of Activity Standards and an annual Standard Workload was used to finally calculate the staffing requirements for a Nuclear Medicine department. Results: Preliminary data confirmed that the old staffing norms cannot be used in a modern Nuclear Medicine department. Protocols for several types of study have changed, including the additional acquisition of tomographic studies. Interest in the use of time-consuming non-imaging studies has been revived and should be factored into the equation. Conclusions: All Nuclear Medicine departments in South Africa where the types of studies performed have changed over the past years should look carefully at their radiographer staffing ratio to ascertain whether the number of radiographers is adequate for the current workload. (author)
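
    The kind of workload-based staffing calculation sketched in this abstract boils down to simple arithmetic: total annual hands-on hours divided by the standard annual workload of one radiographer. The figures below are hypothetical placeholders, not the Tygerberg Hospital data.

```python
# Illustrative staffing calculation; all figures are hypothetical placeholders.
activity_standard_h = {              # hands-on time per procedure type (hours)
    "bone scan": 1.0,
    "myocardial perfusion": 2.5,
    "renogram": 1.5,
    "thyroid uptake (non-imaging)": 0.75,
}
annual_procedures = {                # procedures performed per year
    "bone scan": 1200,
    "myocardial perfusion": 600,
    "renogram": 400,
    "thyroid uptake (non-imaging)": 300,
}

annual_workload_h = sum(activity_standard_h[k] * annual_procedures[k]
                        for k in annual_procedures)

# Standard annual workload of one radiographer after leave and sick days.
hours_per_day, working_days = 8.0, 220
standard_workload_h = hours_per_day * working_days

posts_required = annual_workload_h / standard_workload_h
print(f"annual workload: {annual_workload_h:.0f} h -> {posts_required:.1f} radiographer posts")
```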

  3. Continuous production of itraconazole-based solid dispersions by hot melt extrusion: Preformulation, optimization and design space determination.

    Science.gov (United States)

    Thiry, Justine; Lebrun, Pierre; Vinassa, Chloe; Adam, Marine; Netchacovitch, Lauranne; Ziemons, Eric; Hubert, Philippe; Krier, Fabrice; Evrard, Brigitte

    2016-12-30

    The purpose of this work was to increase the solubility and the dissolution rate of itraconazole, chosen as the model drug, by obtaining an amorphous solid dispersion through hot melt extrusion. An initial preformulation study was therefore conducted using differential scanning calorimetry, thermogravimetric analysis and Hansen's solubility parameters in order to find polymers with the ability to form amorphous solid dispersions with itraconazole. Afterwards, the four polymers that met the set criteria, namely Kollidon® VA64, Kollidon® 12PF, Affinisol® HPMC and Soluplus®, were used in hot melt extrusion along with 25 wt.% of itraconazole. Differential scanning calorimetry confirmed that all four polymers were able to amorphize itraconazole. A stability study was then conducted to see which polymer would keep itraconazole amorphous the longest. Soluplus® was chosen, and the formulation was fine-tuned by adding excipients (AcDiSol®, sodium bicarbonate and a poloxamer) during the hot melt extrusion process in order to increase the release rate of itraconazole. In parallel, the range limits of the hot melt extrusion process parameters were determined. A design of experiments was performed within the previously defined ranges in order to optimize the formulation and the process parameters simultaneously. The optimal formulation was the one containing 2.5 wt.% of AcDiSol® produced at 155°C and 100 rpm. When tested with a biphasic dissolution test, more than 80% of the itraconazole was released into the organic phase after 8 h. Moreover, this formulation showed the desired thermoformability value. From these results, the design space around the optimum was determined; it corresponds to the limits within which the process would give the optimized product. It was observed that a temperature between 155 and 170°C allowed high flexibility in the screw speed, from about 75 to 130 rpm. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Determining optimal planning target volume and image guidance policy for post-prostatectomy intensity modulated radiotherapy.

    Science.gov (United States)

    Bell, Linda J; Cox, Jennifer; Eade, Thomas; Rinks, Marianne; Herschtal, Alan; Kneebone, Andrew

    2015-07-26

    There is limited information available on the optimal Planning Target Volume (PTV) expansions and image guidance for post-prostatectomy intensity modulated radiotherapy (PP-IMRT). As the prostate bed does not move in a uniform manner, there is a rationale for anisotropic PTV margins with matching to soft tissue. The aim of this study is to find the combination of PTV expansion and image guidance policy for PP-IMRT that provides the best balance of target coverage whilst minimising dose to the organs at risk. The Cone Beam CT (CBCT) images (n = 377) of 40 patients who received PP-IMRT with daily online alignment to bony anatomy (BA) were reviewed. Six different PTV expansions were assessed: 3 published PTV expansions (0.5 cm uniform, 1 cm uniform, and 1 + 0.5 cm posterior) and 3 further anisotropic PTV expansions (Northern Sydney Cancer Centre (NSCC), van Herk, and smaller anisotropic). Each was assessed for size, bladder and rectum coverage and geographic miss. Each CBCT was rematched using a superior soft tissue (SST) and an averaged soft tissue (AST) match. Potential geographic miss was assessed using all PTV expansions except the van Herk margin. The 0.5 cm uniform expansion yielded the smallest PTV (median volume = 222.3 cc) and the 1 cm uniform expansion yielded the largest (361.7 cc). The van Herk expansion includes the largest amount of bladder (28.0 %) and rectum (36.0 %) and the 0.5 cm uniform expansion the smallest (17.1 % bladder; 10.2 % rectum). The van Herk PTV expansion had the least geographic miss with BA matching (4.2 %) and the 0.5 cm uniform margin (28.4 %) the greatest. BA matching resulted in the highest geographic miss rate for all PTVs, followed by SST matching and AST matching. Changing from BA to an AST match decreases potential geographic miss by half to two thirds, depending on the PTV expansion. The suggested image guidance policy for PP-IMRT is therefore daily averaged soft tissue matching using CBCT scans with a small anisotropic PTV expansion of 0
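
    The van Herk expansion referred to above is conventionally derived from population systematic (Σ) and random (σ) setup errors using the margin recipe M = 2.5Σ + 0.7σ. Below is a minimal sketch of that recipe with assumed per-axis setup errors, not the study's measured data.

```python
# Minimal sketch of the van Herk PTV margin recipe, M = 2.5*Sigma + 0.7*sigma,
# applied per axis. The setup-error values are illustrative assumptions only.

def van_herk_margin(sigma_systematic_mm: float, sigma_random_mm: float) -> float:
    """Return the PTV margin (mm) for one axis."""
    return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

# Assumed population setup errors (systematic, random) in mm per direction.
errors = {"LR": (1.5, 2.0), "SI": (2.0, 2.5), "AP": (3.0, 3.5)}

for axis, (Sigma, sigma) in errors.items():
    print(f"{axis}: margin = {van_herk_margin(Sigma, sigma):.1f} mm")
```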

  5. Optimization and validation of a nonaqueous micellar electrokinetic chromatography method for determination of polycyclic musks in perfumes.

    Science.gov (United States)

    Lopez-Gazpio, Josu; Garcia-Arrona, Rosa; Ostra, Miren; Millán, Esmeralda

    2012-06-01

    A nonaqueous micellar electrokinetic chromatography method was developed for determination of Tonalide®, Galaxolide®, and Traseolide® polycyclic musks (PCMs). These compounds are widely used as fragrance ingredients in cosmetics. The method was optimized by using a three-variable Box-Behnken experimental design and response surface methodology. A modified chromatographic response function was defined in order to adequately weigh the terms in the response function. After optimization of the experimental conditions, an electrolyte solution of 195 mM SDS and 40 mM NaH2PO4 in formamide was selected for the separation of the three PCMs, and the applied voltage was fixed at 30 kV. The nonaqueous MEKC method was then checked in terms of linearity, limits of detection and quantification, repeatability, intermediate precision and accuracy, providing appropriate values (i.e. RSD values for precision never exceeding 7%, and accuracy 96-107%). The nonaqueous MEKC method for determination of the selected compounds was successfully applied to the analysis of commercial perfume samples. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
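
    As an illustration of the optimization approach, a three-factor Box-Behnken design with a quadratic response-surface fit can be sketched as below. The factor roles, placeholder responses and the plain least-squares fit are assumptions for demonstration only, not the study's design matrix or data.

```python
# Sketch of a three-factor Box-Behnken design and a quadratic response-surface
# fit. Responses are random placeholders standing in for measured values.
from itertools import combinations, product
import numpy as np

def box_behnken(n_factors=3, n_center=3):
    """Coded (-1, 0, +1) Box-Behnken runs: edge midpoints plus center points."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[i], run[j] = a, b
            runs.append(run)
    runs += [[0] * n_factors] * n_center
    return np.array(runs, dtype=float)

X = box_behnken()                                    # e.g. SDS, buffer, voltage
y = np.random.default_rng(0).normal(size=len(X))     # placeholder responses

# Quadratic model matrix: intercept, linear, two-way interaction, squared terms.
cols = [np.ones(len(X))] + [X[:, k] for k in range(3)]
cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
cols += [X[:, k] ** 2 for k in range(3)]
A = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("Fitted quadratic coefficients:", np.round(coef, 3))
```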

  6. Determining the Optimal Protocol for Measuring an Albuminuria Class Transition in Clinical Trials in Diabetic Kidney Disease.

    Science.gov (United States)

    Kröpelin, Tobias F; de Zeeuw, Dick; Remuzzi, Giuseppe; Bilous, Rudy; Parving, Hans-Henrik; Heerspink, Hiddo J L

    2016-11-01

    Albuminuria class transition (normo- to micro- to macroalbuminuria) is used as an intermediate end point to assess renoprotective drug efficacy. However, definitions of such class transition vary between trials. To determine the optimal protocol, we evaluated the approaches used in four clinical trials testing the effect of renin-angiotensin-aldosterone system intervention on albuminuria class transition in patients with diabetes: the BENEDICT, DIRECT, ALTITUDE, and IRMA-2 trials. The definition of albuminuria class transition used in each trial differed from the definitions used in the other trials by the number (one, two, or three) of consecutively collected urine samples at each study visit, the time interval between study visits, the requirement of an additional visit to confirm the class transition, and the requirement of a percentage increase in albuminuria from baseline in addition to the class transition. In Cox regression analysis, neither increasing the number of urine samples collected at a single study visit nor differences in the other variables used to define albuminuria class transition altered the average drug effect. However, the SEM of the treatment effect increased (decreased precision) with stricter end point definitions, resulting in a loss of statistical significance. In conclusion, the optimal albuminuria transition end point for use in drug intervention trials can be determined with a single urine collection for albuminuria assessment per study visit. A confirmation of the end point or a requirement of a minimal percentage change in albuminuria from baseline seems unnecessary. Copyright © 2016 by the American Society of Nephrology.
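
    A hedged sketch of the kind of Cox regression used in this comparison, estimating a treatment effect on time to albuminuria class transition, is shown below using the lifelines package. The data frame, column names and values are illustrative assumptions, not trial data.

```python
# Minimal sketch of a Cox proportional-hazards fit for time to albuminuria
# class transition. All rows and column names are illustrative assumptions.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time_to_transition": [24, 36, 12, 48, 30, 18, 40, 27],  # months (assumed)
    "transition_event":   [1, 0, 1, 0, 1, 1, 0, 1],          # 1 = transition observed
    "treatment":          [1, 1, 0, 0, 1, 0, 1, 0],          # 1 = intervention arm
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_transition", event_col="transition_event")
# The hazard ratio for "treatment" is the average drug effect; its standard
# error is what stricter end-point definitions inflate in the study.
cph.print_summary()
```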

  7. An optimized and validated SPE-LC-MS/MS method for the determination of caffeine and paraxanthine in hair.

    Science.gov (United States)

    De Kesel, Pieter M M; Lambert, Willy E; Stove, Christophe P

    2015-11-01

    Caffeine is the probe drug of choice to assess the phenotype of the drug metabolizing enzyme CYP1A2. Typically, molar concentration ratios of paraxanthine, caffeine's major metabolite, to its precursor are determined in plasma following administration of a caffeine test dose. The aim of this study was to develop and validate an LC-MS/MS method for the determination of caffeine and paraxanthine in hair. The different steps of a hair extraction procedure were thoroughly optimized. Following a three-step decontamination procedure, caffeine and paraxanthine were extracted from 20 mg of ground hair using a solution of protease type VIII in Tris buffer (pH 7.5). Resulting hair extracts were cleaned up on Strata-X™ SPE cartridges. All samples were analyzed on a Waters Acquity UPLC® system coupled to an AB SCIEX API 4000™ triple quadrupole mass spectrometer. The final method was fully validated based on international guidelines. Linear calibration lines for caffeine and paraxanthine ranged from 20 to 500 pg/mg. Precision (%RSD) and accuracy (%bias) were below 12% and 7%, respectively. The isotopically labeled internal standards compensated for the ion suppression observed for both compounds. Relative matrix effects were below 15%RSD. The recovery of the sample preparation procedure was high (>85%) and reproducible. Caffeine and paraxanthine were stable in hair for at least 644 days. The effect of the hair decontamination procedure was evaluated as well. Finally, the applicability of the developed procedure was demonstrated by determining caffeine and paraxanthine concentrations in hair samples of ten healthy volunteers. The optimized and validated method for determination of caffeine and paraxanthine in hair proved to be reliable and may serve to evaluate the potential of hair analysis for CYP1A2 phenotyping. Copyright © 2015 Elsevier B.V. All rights reserved.
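
    For context, CYP1A2 phenotyping relies on the paraxanthine-to-caffeine molar concentration ratio; a minimal worked sketch of that ratio applied to hair concentrations is given below. The pg/mg concentrations are assumed values, not the study's measurements; the molar masses are standard values.

```python
# Sketch of the paraxanthine/caffeine molar concentration ratio used for
# CYP1A2 phenotyping, applied to hair concentrations (assumed values).
M_CAFFEINE = 194.19      # g/mol
M_PARAXANTHINE = 180.16  # g/mol

caffeine_pg_per_mg = 250.0       # assumed hair concentration
paraxanthine_pg_per_mg = 90.0    # assumed hair concentration

molar_ratio = (paraxanthine_pg_per_mg / M_PARAXANTHINE) / (caffeine_pg_per_mg / M_CAFFEINE)
print(f"Paraxanthine/caffeine molar ratio: {molar_ratio:.2f}")
```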

  8. Optimization and Validation of Quantitative Spectrophotometric Methods for the Determination of Alfuzosin in Pharmaceutical Formulations

    Directory of Open Access Journals (Sweden)

    M. Vamsi Krishna

    2007-01-01

    Full Text Available Three accurate, simple and precise spectrophotometric methods for the determination of alfuzosin hydrochloride in bulk drugs and tablets are developed. The first method is based on the reaction of alfuzosin with ninhydrin reagent in N,N'-dimethylformamide (DMF) medium, producing a colored product which absorbs maximally at 575 nm. Beer’s law is obeyed in the concentration range 12.5-62.5 µg/mL of alfuzosin. The second method is based on the reaction of the drug with ascorbic acid in DMF medium, resulting in the formation of a colored product which absorbs maximally at 530 nm. Beer’s law is obeyed in the concentration range 10-50 µg/mL of alfuzosin. The third method is based on the reaction of alfuzosin with p-benzoquinone (PBQ) to form a colored product with λmax at 400 nm. The products of the reactions were stable for 2 h at room temperature. The optimum experimental parameters for the reactions have been studied. The validity of the described procedures was assessed. Statistical analysis of the results has been carried out, revealing high accuracy and good precision. The proposed methods could be used for the determination of alfuzosin in pharmaceutical formulations. The procedures were rapid, simple and suitable for quality control application.
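
    A minimal sketch of the Beer's-law calibration underlying such spectrophotometric methods is given below: fit absorbance against standard concentrations, then read an unknown off the regression line. The absorbance readings and the unknown sample are assumed values for illustration only.

```python
# Beer's-law linear calibration sketch: absorbance vs. concentration,
# then back-calculation of an unknown. Absorbances are assumed values.
import numpy as np

conc = np.array([12.5, 25.0, 37.5, 50.0, 62.5])        # µg/mL (method 1 range)
absorbance = np.array([0.11, 0.22, 0.34, 0.45, 0.55])  # at 575 nm (assumed)

slope, intercept = np.polyfit(conc, absorbance, 1)
r = np.corrcoef(conc, absorbance)[0, 1]

unknown_abs = 0.30
unknown_conc = (unknown_abs - intercept) / slope
print(f"slope={slope:.4f}, intercept={intercept:.4f}, r={r:.4f}")
print(f"Sample at A={unknown_abs} -> {unknown_conc:.1f} µg/mL alfuzosin")
```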

  9. The optimization of radon-222 determination in water by the Lucas cell technique

    International Nuclear Information System (INIS)

    Andrejkovicova, S.; Kuruc, J.; Kovacsova, A.; Mackova, J.; Rajec, P.

    2003-01-01

    The aim of this work was to determine the detection efficiency ε, the volume activity a_v, the lower detection limit and the minimum detectable activity for radon. Several types of water samples were collected: tap water, mineral water, thermal water, well water and bottled drinking water. As expected, the lowest radon volume activities were found in bottled drinking water (0.1 - 4.9 Bq/dm³). Higher values were found in tap water and natural mineral water (2.5 - 14.9 Bq/dm³). The highest radon volume activities were obtained in thermal water and well water (17.2 - 107.9 Bq/dm³). The method for determination of radon in water was verified at the Institute of Preventive and Clinical Medicine, Bratislava, Slovakia. The radon concentrations measured in the waters comply with the maximum accepted value for radon in water; the volume activity of our samples never exceeded the permitted limit value (300 Bq/dm³). (authors)
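
    The abstract does not give the working equations; a simplified, assumption-laden sketch of a Lucas-cell calculation is shown below: volume activity from the net count rate with decay correction, and a Currie-style minimum detectable activity. The efficiency definition (counts per radon decay), counting figures and sample volume are all assumed for illustration.

```python
# Simplified Lucas-cell radon sketch: decay-corrected volume activity from the
# net count rate, plus a Currie-style minimum detectable activity (MDA).
# All numbers and the exact efficiency definition are assumptions.
import math

T_HALF_RN222_H = 91.8                     # Rn-222 half-life in hours (3.82 d)
LAMBDA = math.log(2) / T_HALF_RN222_H

def volume_activity(gross_cps, bg_cps, efficiency, volume_l, delay_h):
    """Bq per litre (= Bq/dm3), corrected back to sampling time."""
    net_cps = gross_cps - bg_cps
    return net_cps / (efficiency * volume_l) * math.exp(LAMBDA * delay_h)

def mda_bq_per_l(bg_counts, count_time_s, efficiency, volume_l):
    """Currie minimum detectable activity."""
    ld = 2.71 + 4.65 * math.sqrt(bg_counts)
    return ld / (count_time_s * efficiency * volume_l)

print(volume_activity(gross_cps=0.50, bg_cps=0.02, efficiency=0.7,
                      volume_l=0.1, delay_h=3.0))
print(mda_bq_per_l(bg_counts=20, count_time_s=3600, efficiency=0.7, volume_l=0.1))
```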

  10. A New Algorithm for Determining Ultimate Pit Limits Based on Network Optimization

    Directory of Open Access Journals (Sweden)

    Ali Asghar Khodayari

    2013-12-01

    Full Text Available One of the main concerns of the mining industry is to determine ultimate pit limits. The final pit is a collection of blocks which can be removed with maximum profit while respecting restrictions on the slope of the mine’s walls. The size, location and final shape of an open pit are very important in designing the location of waste dumps, stockpiles, processing plants, access roads and other surface facilities as well as in developing a production program. There are numerous methods for designing ultimate pit limits. Some of these methods, such as the floating cone algorithm, are heuristic and do not guarantee to generate optimum pit limits. Other methods, like the Lerchs–Grossmann algorithm, are rigorous and always generate the true optimum pit limits. In this paper, a new rigorous algorithm is introduced. The main logic in this method is that only positive blocks which can pay the costs of their overlying non-positive blocks are able to appear in the final pit. Those costs may be paid either by the positive block itself or jointly with other positive blocks which share the same overlying negative blocks. This logic is formulated using a network model as a Linear Programming (LP) problem. The algorithm can be applied to two- and three-dimensional block models. Since there are many commercial programs available for solving LP problems, pit limits in large block models can be determined easily by using this method.
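
    The LP formulation described above can be illustrated with a toy model: maximize total block value subject to precedence constraints forcing overlying blocks to be removed before the block beneath them. The block values and precedence pairs below are an assumed miniature example, not a real block model or the authors' exact network formulation.

```python
# Minimal LP sketch of the ultimate-pit problem: maximise total value of mined
# blocks subject to precedence (a block is mined only if its overlying blocks
# are mined). Tiny assumed example with three waste blocks above one ore block.
from scipy.optimize import linprog

values = [-2.0, -1.0, -3.0, 8.0]       # blocks 0-2 are overlying waste, 3 is ore
precedence = [(3, 0), (3, 1), (3, 2)]  # (lower block, overlying block) pairs

n = len(values)
c = [-v for v in values]               # linprog minimises, so negate the values

# x_lower - x_upper <= 0 for every precedence pair.
A_ub, b_ub = [], []
for lower, upper in precedence:
    row = [0.0] * n
    row[lower], row[upper] = 1.0, -1.0
    A_ub.append(row)
    b_ub.append(0.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
print("Mined blocks:", [i for i, x in enumerate(res.x) if x > 0.5],
      "profit =", -res.fun)
```

    For precedence constraints of this form the LP relaxation has an integral optimum, which is why a plain LP (or an equivalent network-flow model) suffices without explicit integrality constraints.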

  11. Optimization of irradiation conditions for determination of LD50 in pigs

    International Nuclear Information System (INIS)

    Prochazka, Z.; Hampl, J.; Sedlacek, M.; Rodak, L.

    1975-01-01

    Radiation LD50/30 values were determined in 36 twelve-week-old pigs (with a mean body weight of 21 kg) exposed to whole-body X-ray irradiation on a revolvable table rotated at a rate of 2.5 rpm using the following conditions: 180 kV, 15 mA, focal distance 79 cm, HVT 0.9 mm Cu, dose rate 2.42 × 10⁻³ to 2.68 × 10⁻³ C kg⁻¹ min⁻¹ (9.4 to 10.4 R/min) depending upon the animal size. The coefficient of mean irradiation uniformity was 1.4. Under these conditions the LD50/30 for pigs was found to be 5.89 × 10⁻² C kg⁻¹ (228.3 R), with the biological range of effectiveness being 5.22 × 10⁻² to 6.90 × 10⁻² C kg⁻¹ (202.4 to 267.6 R). Further experiments on 77 pigs showed that the LD50 determined in this study had actually the median lethal effect. (orig.) [de

  12. Optimized determination of calcium in grape juice, wines, and other alcoholic beverages by atomic absorption spectrometry.

    Science.gov (United States)

    Olalla, Manuel; González, Maria Cruz; Cabrera, Carmen; Gimenez, Rafael; López, Maria Carmen

    2002-01-01

    This paper describes a study of the different methods of sample preparation for the determination of calcium in grape juice, wines, and other alcoholic beverages by flame atomic absorption spectrometry; results are also reported for the practical application of these methods to the analysis of commercial samples produced in Spain. The methods tested included dealcoholization, dry mineralization, and wet mineralization with heating, using different acids and/or mixtures of acids. The sensitivity, detection limit, accuracy, precision, and selectivity of each method were established. Such research is necessary because of the better analytical indexes obtained after acid digestion of the sample, as recommended by the European Union, which advocates the direct method. In addition, although high-temperature mineralization with an HNO3-HClO4 mixture gave the best analytical results, mineralization with nitric acid at 80 degrees C for 15 min gave the most satisfactory results in all cases, including those for wines with high levels of sugar and beverages with high alcoholic content. The results for table wines subjected to the latter treatment had an accuracy of 98.70-99.90%, a relative standard deviation of 2.46%, a detection limit of 19.0 microg/L, and a determination limit of 31.7 microg/L. The method was found to be sufficiently sensitive and selective. It was applied to the determination of Ca in grape juice, different types of wines, and beverages with high alcoholic content, all of which are produced and widely consumed in Spain. The values obtained for Ca were 90.00 +/- 20.40 mg/L in the grape juices, 82.30 +/- 23.80 mg/L in the white wines, 85.00 +/- 30.25 mg/L in the sweet wines, 84.92 +/- 23.11 mg/L in the red wines, 85.75 +/- 27.65 mg/L in the rosé wines, 9.51 +/- 6.65 mg/L in the brandies, 11.53 +/- 6.55 mg/L in the gin, 7.3 +/- 6.32 mg/L in the pacharán, and 8.41 +/- 4.85 mg/L in the anisettes. The method is therefore useful for routine analysis in the

  13. Determination of the optimal time of vaccination against infectious bursal disease virus (Gumboro) in Algeria.

    Science.gov (United States)

    Besseboua, Omar; Ayad, Abdelhanine; Benbarek, Hama

    2015-04-30

    This study was conducted to determine the effect of maternally derived antibody (MDA) on a live vaccine against infectious bursal disease. A total of 140 chicks selected from vaccinated parent stock were used in this investigation. Following a preset vaccination schedule, blood samples were collected to assess the actual effect. It was noticed that on day 1 the chicks contained a high level (6400.54 ± 2993.67) of maternally derived antibody that gradually decreased below a positive level within 21 days (365.86 ± 634.46). It was found that a high level of MDA interferes with the vaccine virus, resulting in no immune response. For a better immune response, it is suggested that the chickens should be vaccinated at day 21, as the uniformity of MDA is poor (coefficient of variation [CV] > 30%), and boosted at day 28. Indeed, two vaccinations are necessary to achieve good protection of the entire flock against infectious bursal disease virus.

  14. Determination of the optimal proportions of public and private funds in project budget management

    Science.gov (United States)

    Pykhtin, Kirill; Simankina, Tatyana; Karmokova, Kristina; Zonova, Alevtina

    2017-10-01

    Although the history of public-private partnership in the Russian Federation is rather short, this type of cooperation between private entrepreneurs and authorities has become a major driver of growth in areas such as construction, utilities, infrastructure and energy. However, even though foreign countries have far more experience than Russia, considerable effort is still spent on disputes and discussions over how to assess the ratio of private and public funds. The present paper is based on the idea that this ratio can be determined for each industry with the use of statistical data. The authors propose adjusting the project cost ranges within the project classification according to the “project scale” characteristic.

  15. Centrifugation protocols: tests to determine optimal lithium heparin and citrate plasma sample quality.

    Science.gov (United States)

    Dimeski, Goce; Solano, Connie; Petroff, Mark K; Hynd, Matthew

    2011-05-01

    Currently, no clear guidelines exist for the most appropriate tests to determine sample quality from centrifugation protocols for plasma sample types, with both lithium heparin in gel barrier tubes for biochemistry testing and citrate tubes for coagulation testing. Blood was collected from 14 participants in four lithium heparin and one serum tube with gel barrier. The plasma tubes were centrifuged at four different centrifuge settings and analysed for potassium (K+), lactate dehydrogenase (LD), glucose and phosphorus (Pi) at zero time, post-storage at six hours at 21 °C and six days at 2-8 °C. At the same time, three citrate tubes were collected and centrifuged at three different centrifuge settings and analysed immediately for prothrombin time/international normalized ratio, activated partial thromboplastin time, derived fibrinogen and surface-activated clotting time (SACT). The biochemistry analytes indicate plasma is less stable than serum. Plasma sample quality is higher with longer centrifugation time and much higher g force. Blood cells present in the plasma lyse with time or are damaged when transferred into the reaction vessels, causing an increase in K+, LD and Pi above the outlined limits. The cells remain active and consume glucose even in cold storage. The SACT is the only coagulation parameter that was affected by platelets >10 × 10⁹/L in the citrate plasma. In addition to the platelet count, a limited but sensitive number of assays (K+, LD, glucose and Pi for biochemistry, and SACT for coagulation) can be used to determine appropriate centrifuge settings to consistently obtain the highest quality lithium heparin and citrate plasma samples. The findings will aid laboratories to balance the need to provide the most accurate results in the best turnaround time.

  16. Meeting increased demand for total knee replacement and follow-up: determining optimal follow-up.

    Science.gov (United States)

    Meding, J B; Ritter, M A; Davis, K E; Farris, A

    2013-11-01

    The strain on clinic and surgeon resources resulting from a rise in demand for total knee replacement (TKR) requires reconsideration of when and how often patients need to be seen for follow-up. Surgeons will otherwise require increased paramedical staff or need to limit the number of TKRs they undertake. We reviewed the outcome data of 16 414 primary TKRs undertaken at our centre to determine the time to re-operation for any reason and for specific failure mechanisms. Peak risk years for failure were determined by comparing the conditional probability of failure, the number of failures divided by the total number of TKR cases, for each year. The median times to failure for the most common failure mechanisms were 4.9 years (interquartile range (IQR) 1.7 to 10.7) for femoral and tibial loosening, 1.9 years (IQR 0.8 to 3.9) for infection, 3.1 years (IQR 1.6 to 5.5) for tibial collapse and 5.6 years (IQR 3.4 to 9.3) for instability. The median time to failure for all revisions was 3.3 years (IQR 1.2 to 8.5), with an overall revision rate of 1.7% (n = 282). Results from our patient population suggest that patients be seen for follow-up at six months, one year, three years, eight years, 12 years, and every five years thereafter. Patients with higher pain in the early post-operative period or a high body mass index (≥ 41 kg/m²) should be monitored more closely.
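
    A short worked sketch of the conditional probability of failure defined above (failures in each postoperative year divided by the number of TKR cases) is given below; the yearly failure counts are assumed for illustration, while the cohort size comes from the abstract.

```python
# Worked sketch of the per-year conditional probability of failure used to
# locate peak risk years. Yearly failure counts are assumed, not registry data.
total_tkrs = 16414
failures_by_year = {1: 60, 2: 45, 3: 40, 4: 30, 5: 25}   # assumed counts

for year, failures in sorted(failures_by_year.items()):
    print(f"Year {year}: conditional failure probability = {failures / total_tkrs:.4f}")
```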

  17. Determination of tolerance dose uncertainties and optimal design of dose response experiments with small animal numbers

    International Nuclear Information System (INIS)

    Karger, C.P.; Hartmann, G.H.

    2001-01-01

    Background: Dose response experiments aim to determine the complication probability as a function of dose. Adjusting the parameters of the frequently used dose response model P(D) = 1/[1 + (D50/D)^k] to the experimental data, two intuitive quantities are obtained: the tolerance dose D50 and the slope parameter k. For mathematical reasons, however, standard statistics software uses a different set of parameters. Therefore, the resulting fit parameters of the statistics software as well as their standard errors have to be transformed to obtain D50 and k as well as their standard errors. Material and Methods: The influence of the number of dose levels on the uncertainty of the fit parameters is studied by a simulation for a fixed number of animals. For experiments with small animal numbers, statistical artifacts may prevent the determination of the standard errors of the fit parameters. Consequences for the design of dose response experiments are investigated. Results: Explicit formulas are presented which allow the parameters D50 and k, as well as their standard errors, to be calculated from the output of standard statistics software. The simulation shows that the standard errors of the resulting parameters are independent of the number of dose levels, as long as the total number of animals involved in the experiment remains constant. Conclusion: Statistical artifacts in experiments containing small animal numbers may be prevented by an adequate design of the experiment. For this, it is suggested to select a higher number of dose levels, rather than using a higher number of animals per dose level. (orig.) [de
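
    The abstract's explicit formulas are not reproduced here; the sketch below shows the standard transformation consistent with this model, not necessarily the authors' exact expressions: fitting logit P = b0 + b1·ln(D) gives k = b1 and D50 = exp(-b0/b1), and the standard error of D50 follows from the fit covariance by the delta method. The fitted coefficients and covariance matrix are assumed stand-ins for statistics-software output.

```python
# Transformation from logistic-fit output (b0, b1, covariance) to D50 and k,
# with delta-method standard errors. The numbers below are assumed values.
import math

b0, b1 = -14.2, 3.1            # intercept and slope on x = ln(D) (assumed)
cov = [[4.8, -1.0],            # covariance matrix of (b0, b1) (assumed)
       [-1.0, 0.22]]

k = b1                         # slope parameter of P(D) = 1/[1 + (D50/D)^k]
d50 = math.exp(-b0 / b1)       # tolerance dose

# Delta method: gradient of D50 with respect to (b0, b1).
g = [-d50 / b1, d50 * b0 / b1**2]
var_d50 = sum(g[i] * cov[i][j] * g[j] for i in range(2) for j in range(2))

print(f"D50 = {d50:.1f} +/- {math.sqrt(var_d50):.1f} (dose units)")
print(f"k   = {k:.2f} +/- {math.sqrt(cov[1][1]):.2f}")
```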

  18. Optimal trajectory for the atlantooccipital transarticular screw.

    Science.gov (United States)

    Lee, Kyoung Min; Yeom, Jin S; Lee, Joon Oh; Buchowski, Jacob M; Park, Kun-Woo; Chang, Bong-Soon; Lee, Choon-Ki; Riew, K Daniel

    2010-07-15

    Radiologic evaluation of computed tomography (CT) scans using screw insertion simulation software. To investigate the optimal entry point and trajectory of atlantooccipital transarticular screws. To our knowledge, no large series focusing on the placement of atlantooccipital transarticular screws has been published. We used 1.0-mm sliced CT scans and 3-dimensional screw trajectory software to simulate 4.0-mm screw placement. Four entry points were evaluated. Screw placement success rate, safe range of medial angulation, and screw length using each entry point were determined. CT scans of 126 patients were evaluated, for a total of 252 screws for each entry point. On simulation, the 2 lateral entry points showed significantly higher success rates and safe ranges of medial angulation than the 2 middle points. The 2 lateral entry points had similar success rates (98.0% for the anterolateral (AL) point and 97.6% for the posterolateral (PL) point). Although the safe range of medial angulation was significantly wider for the AL point (26.1 degrees) than for the PL point (23.7 degrees), the screw lengths were significantly longer for the PL point (32.6 mm) than for the AL point (29.4 mm). For both points, 30 degrees of medial angulation led to the highest rate of successful screw placement, but the rate was only 79.4% and 80.2%, respectively. Although there was no significant difference in success rates between the AL and PL points, the PL is likely the best entry point. Although 30 degrees medial and approximately 5 degrees upward angulation led to the highest rate of successful screw placement, the rate was only around 80%. Given the wide individual variation, we recommend that a preoperative 3-dimensional CT scan be obtained when attempting atlantooccipital transarticular screw fixation.

  19. Optimizing the Use of Electronic Health Records to Identify High-Risk Psychosocial Determinants of Health.

    Science.gov (United States)

    Oreskovic, Nicolas Michel; Maniates, Jennifer; Weilburg, Jeffrey; Choy, Garry

    2017-08-14

    Care coordination programs have traditionally focused on medically complex patients, identifying patients that qualify by analyzing formatted clinical data and claims data. However, not all clinically relevant data reside in claims and formatted data. Recently, there has been increasing interest in including patients with complex psychosocial determinants of health in care coordination programs. Psychosocial risk factors, including social determinants of health, mental health disorders, and substance abuse disorders, are less amenable to rapid and systematic data analyses, as these data are often not collected or stored as formatted data and, due to US Health Insurance Portability and Accountability Act (HIPAA) regulations, are often not available as claims data. The objective of our study was to develop a systematic approach using word recognition software to identify psychosocial risk factors within any part of a patient's electronic health record (EHR). We used QPID (Queriable Patient Inference Dossier), an ontology-driven word recognition software, to scan adult patients' EHRs to identify terms predicting a high-risk patient suitable to be followed in a care coordination program in Massachusetts, USA. Search terms identified high-risk conditions in patients known to be enrolled in a care coordination program, and were then tested against control patients. We calculated precision, recall, and balanced F-measure for the search terms. We identified 22 EHR-available search terms to define psychosocial high-risk status; the presence of 9 or more of these terms predicted that a patient would meet inclusion criteria for a care coordination program. Precision was .80, recall .98, and balanced F-measure .88 for the identified terms. For adult patients insured by Medicaid and enrolled in the program, a mean of 14 terms (interquartile range [IQR] 11-18) were present as identified by the search tool, ranging from 2 to 22 terms. For patients enrolled in the program but
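
    The reported metrics follow the usual definitions of precision, recall and balanced F-measure (F1); the sketch below simply applies those formulas to an assumed confusion matrix. The counts are illustrative only, chosen to land near the reported values, and are not the study's actual confusion matrix.

```python
# Minimal sketch of precision, recall and balanced F-measure (F1) for the
# ">= 9 terms" high-risk rule. The counts below are assumed, not study data.
def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=98, fp=24, fn=2)   # assumed confusion counts
print(f"precision={p:.2f}, recall={r:.2f}, F1={f1:.2f}")
```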

  20. Determination of the optimal energy level in spectral CT imaging for displaying abdominal vessels in pediatric patients

    International Nuclear Information System (INIS)

    Hu, Di; Yu, Tong; Duan, Xiaomin; Peng, Yun; Zhai, Renyou

    2014-01-01

    Purpose: To determine the optimal energy level in contrast-enhanced spectral CT imaging for displaying abdominal vessels in pediatric patients. Materials and methods: This retrospective study was institutional review board approved. 15 children (8 males and 7 females; age range, 6–15 years; mean age 10.1 ± 3.1 years) who underwent contrast-enhanced spectral CT imaging for diagnosing solid tumors in the abdominal and pelvic areas were included. A single contrast-enhanced scan was performed using a dual energy spectral CT mode with a new split contrast injection scheme (iodixanol at a dose of 1–1.5 ml/kg: 2/3 injected first, 1/3 at 7–15 s after the first injection). 101 sets of monochromatic images with photon energies of 40–140 keV at 1 keV intervals were reconstructed. Contrast-to-noise ratios (CNR) for the hepatic portal and hepatic veins were generated and compared at every energy level to determine the optimal energy level maximizing CNR. 2 board-certified radiologists interpreted the selected image sets independently for image quality scores. Results: CT values and CNR for the vessels increased as photon energy decreased from 140 to 40 keV: (CT value: 48.29–570.12 HU, CNR: 0.08–14.90) in the abdominal aorta, (58.48–369.73 HU, 0.64–5.87) in the inferior vena cava, and (58.48–369.73 HU, 0.06–6.96) in the portal vein. Monochromatic images at 40–50 keV (average 42.0 ± 4.67 keV) could display the vessels at all three levels clearly, with excellent image quality scores of 3.17 ± 0.58 (out of 4) (k = 0.50). The CNR values at the optimal energy level were significantly higher than those at 70 keV, the average energy corresponding to conventional 120 kVp abdominal CT imaging. Conclusion: Spectral CT imaging provides a set of monochromatic images to optimize image quality and enhance vascular visibility, especially in the hepatic portal and venous systems. The best CNR for displaying abdominal vessels in children was obtained at the 42 keV photon energy level.
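
    A minimal sketch of a contrast-to-noise ratio calculation of the kind compared across energy levels is given below, assuming CNR is taken as the difference between vessel and background ROI means divided by the background noise standard deviation; the ROI statistics are assumed values, not the study's measurements.

```python
# Sketch of a CNR calculation compared across monochromatic energy levels:
# (vessel ROI mean - background ROI mean) / background noise SD.
# All ROI statistics below are assumed illustrative values.
def cnr(vessel_hu: float, background_hu: float, noise_sd: float) -> float:
    return (vessel_hu - background_hu) / noise_sd

print("42 keV:", round(cnr(vessel_hu=570.1, background_hu=120.0, noise_sd=30.0), 1))
print("70 keV:", round(cnr(vessel_hu=180.0, background_hu=60.0, noise_sd=15.0), 1))
```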