WorldWideScience

Sample records for optimal file size

  1. Influence of cervical preflaring on apical file size determination.

    Science.gov (United States)

    Pecora, J D; Capelli, A; Guerisoli, D M Z; Spanó, J C E; Estrela, C

    2005-07-01

    To investigate the influence of cervical preflaring with different instruments (Gates-Glidden drills, Quantec Flare series instruments and LA Axxess burs) on the first file that binds at working length (WL) in maxillary central incisors. Forty human maxillary central incisors with complete root formation were used. After standard access cavities, a size 06 K-file was inserted into each canal until the apical foramen was reached. The WL was set 1 mm short of the apical foramen. Group 1 received the initial apical instrument without previous preflaring of the cervical and middle thirds of the root canal. Group 2 had the cervical and middle portion of the root canals enlarged with Gates-Glidden drills sizes 90, 110 and 130. Group 3 had the cervical and middle thirds of the root canals enlarged with nickel-titanium Quantec Flare series instruments. Titanium-nitrite treated, stainless steel LA Axxess burs were used for preflaring the cervical and middle portions of root canals from group 4. Each canal was sized using manual K-files, starting with size 08 files with passive movements until the WL was reached. File sizes were increased until a binding sensation was felt at the WL, and the instrument size was recorded for each tooth. The apical region was then observed under a stereoscopic magnifier, images were recorded digitally and the differences between root canal and maximum file diameters were evaluated for each sample. Significant differences were found between experimental groups regarding anatomical diameter at the WL and the first file to bind in the canal (P Flare instruments were ranked in an intermediary position, with no statistically significant differences between them (0.093 mm average). The instrument binding technique for determining anatomical diameter at WL is not precise. 
Preflaring of the cervical and middle thirds of the root canal improved anatomical diameter determination; the instrument used for preflaring played a major role in determining the

  2. DJFS: Providing Highly Reliable and High-Performance File System with Small-Sized NVRAM

    Directory of Open Access Journals (Sweden)

    Junghoon Kim

    2017-11-01

    Full Text Available File systems and applications implement their own update protocols to guarantee data consistency, which is one of the most crucial properties of computing systems. However, we found that storage devices are substantially under-utilized when preserving data consistency, because doing so generates massive storage write traffic with many disk cache flush operations and force-unit-access (FUA) commands. In this paper, we present DJFS (Delta-Journaling File System), which provides both a high level of performance and data consistency for different applications. We make three technical contributions to achieve our goal. First, to remove all storage accesses with disk cache flush operations and FUA commands, DJFS uses a small-sized NVRAM for the file system journal. Second, to reduce the access latency and space requirements of NVRAM, DJFS journals only the compressed differences of the modified blocks. Finally, to relieve explicit checkpointing overhead, DJFS aggressively reflects checkpoint transactions to the file system area in units of a specified region. Our evaluation on the TPC-C SQLite benchmark shows that, using our novel optimization schemes, DJFS outperforms Ext4 by up to 64.2 times with only 128 MB of NVRAM.
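
The delta-journaling idea above, journaling a compressed difference rather than the whole modified block, can be sketched in a few lines of Python. This is a toy illustration, not DJFS's actual NVRAM journal layout; the block size and function names are assumptions:

```python
import zlib

BLOCK = 4096  # assumed file-system block size

def delta_journal_entry(old: bytes, new: bytes) -> bytes:
    # XOR the old and new block images: a small in-place update leaves
    # mostly zero bytes, which compress extremely well.
    delta = bytes(a ^ b for a, b in zip(old, new))
    return zlib.compress(delta)

def replay(old: bytes, entry: bytes) -> bytes:
    # Recover the new block image from the journaled delta.
    delta = zlib.decompress(entry)
    return bytes(a ^ b for a, b in zip(old, delta))

old = bytes(BLOCK)                 # block before the update
new = bytearray(old)
new[100:108] = b"newdata!"         # small in-place modification
entry = delta_journal_entry(old, bytes(new))

assert replay(old, entry) == bytes(new)
assert len(entry) < BLOCK // 8     # far smaller than journaling the full block
```

The space saving is exactly what lets a journal of this kind fit in a small NVRAM.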

  3. Cooperative storage of shared files in a parallel computing system with dynamic block size

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
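
The block-size rule quoted above (total data divided by the number of parallel processes) and the exchange step can be sketched as follows; this is an illustration of the idea only, not the patented implementation:

```python
def block_size(total_bytes: int, n_procs: int) -> int:
    # e.g. block size = total amount of data / number of parallel processes
    return -(-total_bytes // n_procs)  # ceiling division

def repartition(chunks, n_procs):
    # Toy "exchange" step: gather uneven per-process buffers and cut them
    # into blocks of the dynamically determined size.
    data = b"".join(chunks)
    bs = block_size(len(data), n_procs)
    return [data[i * bs:(i + 1) * bs] for i in range(n_procs)]

blocks = repartition([b"aaaa", b"bb", b"cccccc"], 4)
assert [len(b) for b in blocks] == [3, 3, 3, 3]
assert b"".join(blocks) == b"aaaabbcccccc"
```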

  4. Optimal File-Distribution in Heterogeneous and Asymmetric Storage Networks

    Science.gov (United States)

    Langner, Tobias; Schindelhauer, Christian; Souza, Alexander

    We consider an optimisation problem which is motivated from storage virtualisation in the Internet. While storage networks make use of dedicated hardware to provide homogeneous bandwidth between servers and clients, in the Internet, connections between storage servers and clients are heterogeneous and often asymmetric with respect to upload and download. Thus, for a large file, the question arises how it should be fragmented and distributed among the servers to grant "optimal" access to the contents. We concentrate on the transfer time of a file, which is the time needed for one upload and a sequence of n downloads, using a set of m servers with heterogeneous bandwidths. We assume that fragments of the file can be transferred in parallel to and from multiple servers. This model yields a distribution problem that examines the question of how these fragments should be distributed onto those servers in order to minimise the transfer time. We present an algorithm, called FlowScaling, that finds an optimal solution within running time O(m log m). We formulate the distribution problem as a maximum flow problem, which involves a function that states whether a solution with a given transfer time bound exists. This function is then used with a scaling argument to determine an optimal solution within the claimed time complexity.
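
The structure of the approach, a feasibility test for a given transfer-time bound combined with a search over that bound, can be sketched with a deliberately simplified bandwidth model. The fragment-size constraint below is an assumption for illustration; the paper's FlowScaling algorithm uses a maximum-flow formulation with a scaling argument to reach O(m log m), which a plain bisection does not:

```python
def feasible(T, S, n, servers):
    # Toy model: a fragment x_i on server i costs x_i/u_i for the upload
    # plus n * x_i/d_i for the n downloads, so it fits in time T iff
    # x_i <= T / (1/u_i + n/d_i).  Feasible iff fragments can cover S.
    return sum(T / (1.0 / u + n / d) for u, d in servers) >= S

def min_transfer_time(S, n, servers, eps=1e-9):
    lo, hi = 0.0, 1.0
    while not feasible(hi, S, n, servers):   # exponential search for an upper bound
        hi *= 2.0
    while hi - lo > eps * hi:                # bisect the minimal feasible T
        mid = (lo + hi) / 2.0
        if feasible(mid, S, n, servers):
            hi = mid
        else:
            lo = mid
    return hi

# two servers with (upload, download) bandwidths; 100 units of data, 2 downloads
T = min_transfer_time(100.0, 2, [(10.0, 20.0), (5.0, 5.0)])
assert abs(T - 15.0) < 1e-6
```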

  5. A model for optimizing file access patterns using spatio-temporal parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Boonthanome, Nouanesengsy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Patchett, John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States); Ahrens, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bauer, Andy [Kitware Inc., Clifton Park, NY (United States); Chaudhary, Aashish [Kitware Inc., Clifton Park, NY (United States); Miller, Ross G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-01-01

    For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible file access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.
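
The trade-off such a model captures can be illustrated with a toy cost function: every request pays a fixed latency, and the payload streams at the aggregate bandwidth of the parallel readers. The paper's model is calibrated to a real parallel filesystem; all numbers below are invented:

```python
def read_time(n_requests, total_bytes, latency_s=1e-3, bw=500e6, n_readers=64):
    # Toy model: per-request latency plus bandwidth-bound transfer shared
    # across parallel readers.
    return n_requests * latency_s + total_bytes / (bw * n_readers)

# Same 1 TiB dataset, read as many small scattered requests vs. few large ones.
scattered = read_time(n_requests=2_000_000, total_bytes=2**40)
contiguous = read_time(n_requests=2_000, total_bytes=2**40)
assert contiguous < scattered      # fewer, larger requests dominate
```

Even this crude model shows why choosing an access pattern that matches how the file is stored can yield order-of-magnitude speedups.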

  6. Optimizing Instruction Scheduling and Register Allocation for Register-File-Connected Clustered VLIW Architectures

    Science.gov (United States)

    Tang, Haijing; Wang, Siye; Zhang, Yanjun

    2013-01-01

    Clustering has become a common trend in very long instruction words (VLIW) architecture to solve the problem of area, energy consumption, and design complexity. Register-file-connected clustered (RFCC) VLIW architecture uses the mechanism of global register file to accomplish the inter-cluster data communications, thus eliminating the performance and energy consumption penalty caused by explicit inter-cluster data move operations in traditional bus-connected clustered (BCC) VLIW architecture. However, the limit number of access ports to the global register file has become an issue which must be well addressed; otherwise the performance and energy consumption would be harmed. In this paper, we presented compiler optimization techniques for an RFCC VLIW architecture called Lily, which is designed for encryption systems. These techniques aim at optimizing performance and energy consumption for Lily architecture, through appropriate manipulation of the code generation process to maintain a better management of the accesses to the global register file. All the techniques have been implemented and evaluated. The result shows that our techniques can significantly reduce the penalty of performance and energy consumption due to access port limitation of global register file. PMID:23970841

  7. Optimizing Instruction Scheduling and Register Allocation for Register-File-Connected Clustered VLIW Architectures

    Directory of Open Access Journals (Sweden)

    Haijing Tang

    2013-01-01

    Full Text Available Clustering has become a common trend in very long instruction words (VLIW architecture to solve the problem of area, energy consumption, and design complexity. Register-file-connected clustered (RFCC VLIW architecture uses the mechanism of global register file to accomplish the inter-cluster data communications, thus eliminating the performance and energy consumption penalty caused by explicit inter-cluster data move operations in traditional bus-connected clustered (BCC VLIW architecture. However, the limit number of access ports to the global register file has become an issue which must be well addressed; otherwise the performance and energy consumption would be harmed. In this paper, we presented compiler optimization techniques for an RFCC VLIW architecture called Lily, which is designed for encryption systems. These techniques aim at optimizing performance and energy consumption for Lily architecture, through appropriate manipulation of the code generation process to maintain a better management of the accesses to the global register file. All the techniques have been implemented and evaluated. The result shows that our techniques can significantly reduce the penalty of performance and energy consumption due to access port limitation of global register file.

  8. Calculating Optimal Inventory Size

    Directory of Open Access Journals (Sweden)

    Ruby Perez

    2010-01-01

    Full Text Available The purpose of the project is to find the optimal value for the Economic Order Quantity Model and then use a lean manufacturing Kanban equation to find a numeric value that will minimize the total cost and the inventory size.
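
The classic Economic Order Quantity formula the project starts from, Q* = sqrt(2DS/H) for demand rate D, per-order cost S, and unit holding cost H, can be checked directly (the numbers below are illustrative, not from the project):

```python
from math import sqrt

def eoq(demand, order_cost, holding_cost):
    # Economic Order Quantity: minimizes total cost
    # TC(Q) = (D/Q) * S + (Q/2) * H  =>  Q* = sqrt(2 D S / H)
    return sqrt(2 * demand * order_cost / holding_cost)

q = eoq(demand=1200, order_cost=100, holding_cost=6)
assert abs(q - 200.0) < 1e-9

# sanity check: the total-cost curve is indeed minimized near Q*
tc = lambda Q: 1200 / Q * 100 + Q / 2 * 6
assert tc(q) <= tc(150) and tc(q) <= tc(250)
```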

  9. An asynchronous writing method for restart files in the gysela code in prevision of exascale systems*

    Directory of Open Access Journals (Sweden)

    Thomine O.

    2013-12-01

    Full Text Available The present work deals with an optimization procedure developed in the full-f global GYrokinetic SEmi-LAgrangian code (GYSELA). Optimizing the writing of the restart files is necessary to reduce the computational impact of crashes. These files require a very large memory space, particularly for very large mesh sizes. The limited bandwidth of the data pipe between the computing nodes and the storage system induces a non-scalable part in the GYSELA code, which increases with the mesh size, since the transfer time from memory to storage depends linearly on the file size. An asynchronous file-writing procedure is therefore essential. A new GYSELA module has been developed. This asynchronous procedure allows frequent writing of the restart files, while preventing a severe slowdown due to the limited writing bandwidth. The method has been extended to generate a checksum control of the restart files, and to automatically rerun the code after a crash, whatever its cause.
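
The pattern, a background writer that drains snapshots to storage while the compute loop continues, plus a checksum for detecting corrupt restart files, can be sketched in Python. This is not the actual GYSELA module (which targets HPC systems in Fortran/MPI); names and the single-writer design are assumptions:

```python
import hashlib, os, queue, tempfile, threading

def writer_loop(q):
    # Background thread: write each snapshot and record its checksum,
    # so the compute loop never blocks on storage bandwidth.
    while True:
        item = q.get()
        if item is None:
            break
        path, data = item
        with open(path, "wb") as f:
            f.write(data)
        with open(path + ".sha256", "w") as f:
            f.write(hashlib.sha256(data).hexdigest())
        q.task_done()

def verify(path):
    # On restart, detect a torn or corrupt restart file before using it.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    with open(path + ".sha256") as f:
        return f.read() == digest

q = queue.Queue()
t = threading.Thread(target=writer_loop, args=(q,), daemon=True)
t.start()
tmp = os.path.join(tempfile.mkdtemp(), "restart.bin")
q.put((tmp, b"simulation state"))   # compute loop hands off and continues
q.join()                            # wait only when a consistent file is needed
q.put(None)
t.join()
assert verify(tmp)
```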

  10. A New Approach for Optimal Sizing of Standalone Photovoltaic Systems

    OpenAIRE

    Khatib, Tamer; Mohamed, Azah; Sopian, K.; Mahmoud, M.

    2012-01-01

    This paper presents a new method for determining the optimal sizing of standalone photovoltaic (PV) system in terms of optimal sizing of PV array and battery storage. A standalone PV system energy flow is first analysed, and the MATLAB fitting tool is used to fit the resultant sizing curves in order to derive general formulas for optimal sizing of PV array and battery. In deriving the formulas for optimal sizing of PV array and battery, the data considered are based on five sites in Malaysia...

  11. A New Approach for Optimal Sizing of Standalone Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Tamer Khatib

    2012-01-01

    Full Text Available This paper presents a new method for determining the optimal sizing of a standalone photovoltaic (PV) system in terms of optimal sizing of the PV array and battery storage. A standalone PV system's energy flow is first analysed, and the MATLAB fitting tool is used to fit the resultant sizing curves in order to derive general formulas for optimal sizing of the PV array and battery. In deriving the formulas, the data considered are based on five sites in Malaysia: Kuala Lumpur, Johor Bharu, Ipoh, Kuching, and Alor Setar. Based on the results of a design example for a PV system installed in Kuala Lumpur, the proposed method gives satisfactory optimal sizing results.

  12. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    Science.gov (United States)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use

  13. Optimal sizing method for stand-alone photovoltaic power systems

    Energy Technology Data Exchange (ETDEWEB)

    Groumpos, P P; Papageorgiou, G

    1987-01-01

    The total life-cycle cost of stand-alone photovoltaic (SAPV) power systems is mathematically formulated. A new optimal sizing algorithm for the solar array and battery capacity is developed. The optimum value of a balancing parameter, M, for the optimal sizing of SAPV system components is derived. The proposed optimal sizing algorithm is used in an illustrative example, where a more economical life-cycle cost has been obtained. The question of cost versus reliability is briefly discussed.

  14. Optimal Image Data Compression For Whole Slide Images

    Directory of Open Access Journals (Sweden)

    J. Isola

    2016-06-01

    Differences in WSI file sizes of scanned images deemed "visually lossless" were significant. If we set the Hamamatsu Nanozoomer .NDPI file size (using its default "jpeg80" quality) as 100%, the size of a "visually lossless" JPEG2000 file was only 15-20% of that. Comparisons to Aperio and 3D-Histech files (.svs and .mrxs, at their default settings) yielded similar results. A further optimization of JPEG2000 was achieved by treating empty slide area as a uniform white-grey surface, which could be maximally compressed. Using this algorithm, JPEG2000 file sizes were only half, or even less, of the original JPEG2000 sizes. Variation was due to the proportion of empty slide area on the scan. We anticipate that wavelet-based image compression methods, such as JPEG2000, have a significant advantage in saving storage costs of scanned whole slide images. In routine pathology laboratories applying WSI technology widely to their histology material, absolute cost savings can be substantial.

  15. A Software Tool for Optimal Sizing of PV Systems in Malaysia

    Directory of Open Access Journals (Sweden)

    Tamer Khatib

    2012-01-01

    Full Text Available This paper presents a MATLAB-based, user-friendly software tool called PV.MY for optimal sizing of photovoltaic (PV) systems. The software can predict meteorological variables such as solar energy, ambient temperature and wind speed using an artificial neural network (ANN); it optimizes the PV module/array tilt angle, optimizes the inverter size, and calculates optimal capacities of the PV array, battery, wind turbine and diesel generator in hybrid PV systems. The ANN-based model for meteorological prediction uses four variables, namely sunshine ratio, day number and location coordinates. As for PV system sizing, iterative methods are used to determine the optimal sizing of three types of PV systems: standalone PV systems, hybrid PV/wind systems and hybrid PV/diesel generator systems. The loss of load probability (LLP) technique is used for optimization, in which the energy source capacities are the variables to be optimized subject to a very low LLP. As for determining the optimal PV panel tilt angle and inverter size, the Liu and Jordan model for solar energy incident on a tilted surface is used to optimize the monthly tilt angle, while a model of the inverter efficiency curve is used in the optimization of inverter size.

  16. Scale economies and optimal size in the Swiss gas distribution sector

    International Nuclear Information System (INIS)

    Alaeifar, Mozhgan; Farsi, Mehdi; Filippini, Massimo

    2014-01-01

    This paper studies the cost structure of Swiss gas distribution utilities. Several econometric models are applied to a panel of 26 companies over 1996-2000. Our main objective is to estimate the optimal size and scale economies of the industry and to study their possible variation with respect to network characteristics. The results indicate the presence of unexploited scale economies. However, very large companies in the sample, and companies with a disproportionate mixture of output and density, are an exception. Furthermore, the estimated optimal size for the majority of companies in the sample is far greater than their actual size, suggesting remarkable efficiency gains from reorganization of the industry. The results also highlight the effect of customer density on optimal size: networks with higher density or greater complexity have a lower optimal size. - Highlights: • Presence of unexploited scale economies for small and medium-sized companies. • Scale economies vary considerably with customer density. • Higher density or greater complexity is associated with lower optimal size. • Optimal size varies across companies through unobserved heterogeneity. • Firms with low density can gain more from expanding firm size.

  17. A Simulation Framework for Optimal Energy Storage Sizing

    Directory of Open Access Journals (Sweden)

    Carlos Suazo-Martínez

    2014-05-01

    Full Text Available Despite the increasing interest in Energy Storage Systems (ESS), quantification of their technical and economical benefits remains a challenge. To assess the use of ESS, a simulation approach for ESS optimal sizing is presented. The algorithm is based on an adapted Unit Commitment, including ESS operational constraints, and the use of high performance computing (HPC). Multiple short-term simulations are carried out within a multiple-year horizon. Evaluation is performed for Chile's Northern Interconnected Power System (SING). The authors show that a single-year evaluation can lead to sub-optimal results when evaluating optimal ESS size; hence, it is advisable to perform long-term evaluations of ESS. Additionally, the importance of detailed simulation for adequate assessment of ESS contributions, and to fully capture storage value, is also discussed. Furthermore, the robustness of the optimal sizing approach is evaluated by means of a sensitivity analysis. The results suggest that regulatory frameworks should recognize multiple value streams from storage in order to encourage greater ESS integration.

  18. On the optimal sizing problem

    DEFF Research Database (Denmark)

    Vidal, Rene Victor Valqui

    1994-01-01

    The paper studies the problem of determining the number and dimensions of sizes of apparel so as to maximize profits. It develops a simple one-variable bisection search algorithm that gives the optimal solution. An example is solved interactively using a Macintosh LC and Math CAD, a mathematical...

  19. Gaussian variable neighborhood search for the file transfer scheduling problem

    Directory of Open Access Journals (Sweden)

    Dražić Zorica

    2016-01-01

    Full Text Available This paper presents new modifications of the Variable Neighborhood Search approach for solving the file transfer scheduling problem. To obtain better solutions in a small neighborhood of the current solution, we implement two new local search procedures. As Gaussian Variable Neighborhood Search has shown promising results on continuous optimization problems, its implementation for the discrete file transfer scheduling problem is also presented. In order to apply this continuous optimization method to the discrete problem, a mapping of the uncountable set of feasible solutions into a finite set is performed. Both local search modifications gave better results for large instances, as well as better average performance for medium and large instances. One local search modification achieved a significant acceleration of the algorithm. The numerical experiments showed that the results obtained by the Gaussian modifications are comparable with those obtained by standard VNS-based algorithms developed for combinatorial optimization, and in some cases the Gaussian modifications gave even better results. [Project of the Ministry of Science of the Republic of Serbia, no. 174010]
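
The continuous Gaussian-neighborhood idea can be shown in a minimal sketch: neighborhoods are Gaussian perturbations of growing width, and the search returns to the smallest width after any improvement. This is a generic illustration on a toy objective, not the paper's file-transfer formulation or its discrete mapping:

```python
import random

def gaussian_vns(f, x0, sigmas=(0.1, 0.5, 1.0, 2.0), iters=100, seed=1):
    # Minimal Gaussian VNS sketch: try a Gaussian move of width sigmas[k];
    # on improvement restart from the narrowest neighborhood, else widen.
    rng = random.Random(seed)
    best, fbest = list(x0), f(x0)
    for _ in range(iters):
        k = 0
        while k < len(sigmas):
            cand = [xi + rng.gauss(0.0, sigmas[k]) for xi in best]
            fc = f(cand)
            if fc < fbest:
                best, fbest = cand, fc
                k = 0
            else:
                k += 1
    return best, fbest

sphere = lambda x: sum(t * t for t in x)   # toy continuous objective
x, fx = gaussian_vns(sphere, [5.0, -3.0])
assert fx < sphere([5.0, -3.0])
```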

  20. Optimal Sizing and Control Strategy Design for Heavy Hybrid Electric Truck

    Directory of Open Access Journals (Sweden)

    Yuan Zou

    2012-01-01

    Full Text Available Due to the complexity of the hybrid powertrain, the control is highly involved to improve the collaboration of the different components. For a specific powertrain, the components' sizing only determines whether the vehicle can be propelled; the control realizes the propulsion function. The components' sizing also constrains the control design, which causes a close coupling between sizing and control strategy design. This paper presents a parametric study focused on sizing of the powertrain components and optimization of the power split between the engine and electric motor for minimizing fuel consumption. A framework is put forward to accomplish the optimal sizing and control design for a heavy parallel pre-AMT hybrid truck under a natural driving schedule. An iterative plant-controller combined optimization methodology is adopted to optimize the key parameters of the plant and the control strategy simultaneously. A scalable powertrain model based on a bilevel optimization framework is built. Dynamic programming is applied to find the optimal control in the inner loop with a prescribed cycle. The parameters are optimized in the outer loop. The results are analysed, and the optimal sizing and control strategy are achieved simultaneously.

  1. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function quantifying the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can be reasonable even at relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
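
The decision-theoretic trade-off can be brute-forced in a toy fixed-effect version: trial patients are a cost, and the remaining N - 2n patients benefit only if the trial picked the better arm. This sketch does not reproduce the paper's exponential-family utility or its O(N^(1/2)) derivation; the effect size, variance and utility below are assumptions for illustration:

```python
from math import erf, sqrt

def p_correct(n, delta=0.5, sigma=1.0):
    # P(the better of two arms has the higher sample mean) after n patients
    # per arm, by the normal approximation: z = delta * sqrt(n/2) / sigma.
    z = delta * sqrt(n / 2.0) / sigma
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def optimal_n(N):
    # Toy utility: (N - 2n) future patients gain iff the trial chose
    # the right arm; maximize by brute force over the per-arm size n.
    return max(range(1, N // 2), key=lambda n: (N - 2 * n) * p_correct(n))

# larger populations justify larger, but sub-linearly larger, trials
assert optimal_n(1_000) < optimal_n(100_000) < 100 * optimal_n(1_000)
```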

  2. The effect of nanoparticle size on theranostic systems: the optimal particle size for imaging is not necessarily optimal for drug delivery

    Science.gov (United States)

    Dreifuss, Tamar; Betzer, Oshra; Barnoy, Eran; Motiei, Menachem; Popovtzer, Rachela

    2018-02-01

    Theranostics is an emerging field, defined as combination of therapeutic and diagnostic capabilities in the same material. Nanoparticles are considered as an efficient platform for theranostics, particularly in cancer treatment, as they offer substantial advantages over both common imaging contrast agents and chemotherapeutic drugs. However, the development of theranostic nanoplatforms raises an important question: Is the optimal particle for imaging also optimal for therapy? Are the specific parameters required for maximal drug delivery, similar to those required for imaging applications? Herein, we examined this issue by investigating the effect of nanoparticle size on tumor uptake and imaging. Anti-epidermal growth factor receptor (EGFR)-conjugated gold nanoparticles (GNPs) in different sizes (diameter range: 20-120 nm) were injected to tumor bearing mice and their uptake by tumors was measured, as well as their tumor visualization capabilities as tumor-targeted CT contrast agent. Interestingly, the results showed that different particles led to highest tumor uptake or highest contrast enhancement, meaning that the optimal particle size for drug delivery is not necessarily optimal for tumor imaging. These results have important implications on the design of theranostic nanoplatforms.

  3. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files.

    Science.gov (United States)

    Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S

    2018-06-01

    Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine-based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrative examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems.
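
The core operation each subtask performs, a k-way merge of streams already sorted by genomic locus, is a single call to heapq.merge in a single-machine Python sketch. The record layout below is a simplified stand-in for parsed VCF lines, not the schemas' actual data model:

```python
import heapq

# canonical chromosome ordering (assumed: 1..22, X, Y)
CHROM_ORDER = {c: i for i, c in
               enumerate([str(n) for n in range(1, 23)] + ["X", "Y"])}

def locus_key(rec):
    # rec = (chrom, pos, ref, alt); sort by (chromosome rank, position)
    return (CHROM_ORDER[rec[0]], rec[1])

def merge_sorted_variant_streams(*streams):
    # k-way merge of per-file variant streams, each already locus-sorted
    return heapq.merge(*streams, key=locus_key)

f1 = [("1", 100, "A", "G"), ("2", 50, "C", "T")]
f2 = [("1", 250, "G", "A"), ("2", 40, "T", "C")]
merged = list(merge_sorted_variant_streams(f1, f2))
assert merged == [("1", 100, "A", "G"), ("1", 250, "G", "A"),
                  ("2", 40, "T", "C"), ("2", 50, "C", "T")]
```

The distributed schemas in the paper split exactly this job into parallel, bottleneck-free phases across Hadoop, HBase, or Spark workers.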

  4. Efficiency optimized control of medium-size induction motor drives

    DEFF Research Database (Denmark)

    Abrahamsen, F.; Blaabjerg, Frede; Pedersen, John Kim

    2000-01-01

    The efficiency of a variable speed induction motor drive can be optimized by adaptation of the motor flux level to the load torque. In small drives (<10 kW) this can be done without considering the relatively small converter losses, but for medium-size drives (10-1000 kW) the losses can not be disregarded without further analysis. The importance of the converter losses on efficiency optimization in medium-size drives is analyzed in this paper. Based on experiments with a 90 kW drive it is found that it is not critical if the converter losses are neglected in the control, except that the robustness towards load disturbances may unnecessarily be reduced. Both displacement power factor and model-based efficiency optimizing control methods perform well in medium-size drives. The last strategy is also tested on a 22 kW drive with good results.
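
The flux-adaptation idea can be made concrete with a standard two-term motor loss model: iron losses grow with flux, copper losses with current, which scales as torque over flux. Coefficients below are invented per-unit values; real drives need measured loss maps, and the paper's point is that converter losses add a further term this sketch omits:

```python
def motor_losses(phi, torque, k_fe=0.5, k_cu=1.0):
    # toy loss model: iron losses ~ phi^2, copper losses ~ (torque/phi)^2
    return k_fe * phi ** 2 + k_cu * (torque / phi) ** 2

def optimal_flux(torque, k_fe=0.5, k_cu=1.0):
    # d(losses)/d(phi) = 0  =>  phi* = (k_cu / k_fe) ** 0.25 * sqrt(torque)
    return (k_cu / k_fe) ** 0.25 * torque ** 0.5

phi_star = optimal_flux(0.25)      # light load, per-unit torque 0.25
assert phi_star < 1.0              # loss-optimal flux below rated (~1.0 p.u.)
assert motor_losses(phi_star, 0.25) <= motor_losses(1.0, 0.25)
```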

  5. Optimal capacitor placement and sizing using combined fuzzy ...

    African Journals Online (AJOL)

    Then the sizing of the capacitors is modeled as an optimization problem and the objective function (loss minimization) is solved using Hybrid Particle Swarm Optimization (HPSO) technique. A case study with an IEEE 34 bus distribution feeder is presented to illustrate the applicability of the algorithm. A comparison is made ...

  6. Combined Optimal Sizing and Control for a Hybrid Tracked Vehicle

    Directory of Open Access Journals (Sweden)

    Huei Peng

    2012-11-01

    The optimal sizing and control of a hybrid tracked vehicle is presented and solved in this paper. A driving schedule obtained from field tests is used to represent typical tracked vehicle operations. Dynamics of the diesel engine-permanent magnet AC synchronous generator set, the lithium-ion battery pack, and the power split between them are modeled and validated through experiments. Two coupled optimizations, one for the plant parameters, forming the outer optimization loop, and one for the control strategy, forming the inner optimization loop, are used to achieve minimum fuel consumption under the selected driving schedule. The dynamic programming technique is applied to find the optimal controller in the inner loop while the component parameters are optimized iteratively in the outer loop. The results are analyzed, and the relationship between the key parameters is examined to show how optimal sizing and control are maintained simultaneously.

  7. Linear Model for Optimal Distributed Generation Size Predication

    Directory of Open Access Journals (Sweden)

    Ahmed Al Ameri

    2017-01-01

    This article presents a linear model for predicting the optimal size of Distributed Generation (DG) that achieves minimum power loss. The method rests fundamentally on the strong coupling between active power and voltage angle, and between reactive power and voltage magnitude. The paper proposes a simplified method to calculate the total power losses in an electrical grid for different distributed generation sizes and locations. The method has been implemented and tested on several IEEE bus test systems. The results show that the proposed method is capable of predicting the approximate optimal size of DG when compared with precise calculations. Linearizing a complex model in this way gave good results and can substantially reduce the required processing time. The acceptable accuracy, with less time and memory required, can help the grid operator to assess power systems integrating large-scale distributed generation.
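
    As an illustration of predicting an optimal size from a loss model (the paper's linearized formulation is not reproduced here), one can assume a quadratic loss-versus-size curve, a typical shape for loss-versus-injection characteristics, and read off the minimizing size analytically; all coefficients below are hypothetical:

```python
# Illustration only: assumes total losses vary quadratically with DG size.
# Coefficients a, b, c are hypothetical, not taken from the paper.
def total_losses(p_dg, a=0.002, b=0.8, c=120.0):
    """Assumed loss curve (kW) for a DG injection of p_dg kW."""
    return a * p_dg ** 2 - b * p_dg + c

def optimal_dg_size(a=0.002, b=0.8):
    # d(loss)/dP = 2*a*P - b = 0  ->  P* = b / (2*a)
    return b / (2 * a)

best = optimal_dg_size()  # about 200 kW under the assumed coefficients
```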

  8. Offspring fitness and individual optimization of clutch size

    Science.gov (United States)

    Both, C.; Tinbergen, J. M.; Noordwijk, A. J. van

    1998-01-01

    Within-year variation in clutch size has been claimed to be an adaptation to variation in the individual capacity to raise offspring. We tested this hypothesis by manipulating brood size to one common size, and predicted that if clutch size is individually optimized, then birds with originally large clutches have a higher fitness than birds with originally small clutches. No evidence was found that fitness was related to the original clutch size, and in this population clutch size is thus not related to the parental capacity to raise offspring. However, offspring from larger original clutches recruited better than their nest mates that came from smaller original clutches. This suggests that early maternal or genetic variation in viability is related to clutch size.

  9. Sizing Optimization and Strength Analysis for Spread-type Gear Reducers

    Directory of Open Access Journals (Sweden)

    Wei-Hsuan Hsu

    2014-08-01

    Reducers are now developed following the trend towards customization and cost saving. In this study, a sizing program for the reducer has been developed in order to replace the manual sizing process. We take the total center distance of the gear reducer as the optimization objective so as to reduce gear volume and weight. We also check constraints such as tooth root bending, tooth contact strength, gear shaft endangered cross-section, bearing life, gear shaft deflection, and torsion angle deformation to obtain reliable drive strength. Comparisons of sizes and weights before and after optimization confirm that the purpose of reducing production cost is achieved.

  10. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    International Nuclear Information System (INIS)

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

    This paper proposes a comprehensive framework including a multi-objective interval optimization model and an evidential reasoning (ER) approach to solve the unit sizing problem of small-scale integrated energy systems with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. Aiming to simultaneously consider the cost and risk of a business investment, the average and deviation of the life cycle cost (LCC) of the integrated energy system are formulated. In order to solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing an adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, the ER approach is applied to deal with the multiple interests of an investor at the business decision-making stage and to determine the final unit sizing solution from the Pareto-optimal solutions. This paper reports on the simulation results obtained using a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and ER approach in tackling the unit sizing problem of integrated energy systems considering the integration of uncertain wind and solar energies. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.

  11. AHP-Based Optimal Selection of Garment Sizes for Online Shopping

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Garment online shopping has been accepted by more and more consumers in recent years. In online shopping, a buyer chooses a garment size judged only by his own experience, without trying the garment on, so the selected garment may not be the best fit for the buyer due to the variety of body figures. Thus, we propose a method for optimal selection of garment sizes for online shopping based on the Analytic Hierarchy Process (AHP). The hierarchical structure model for optimal selection of garment sizes is constructed, and the best-fitting garment for a buyer is found by calculating the matching degrees between an individual's measurements and the corresponding key-part values of ready-to-wear clothing sizes. To demonstrate its feasibility, we provide an example of selecting the best-fitting sizes of men's bottoms. The result shows that the proposed method is useful in online clothing sales applications.

  12. Modeling and optimization of wet sizing process

    International Nuclear Information System (INIS)

    Thai Ba Cau; Vu Thanh Quang and Nguyen Ba Tien

    2004-01-01

    Mathematical simulation based on Stokes' law has been carried out for the wet sizing process on cylindrical equipment at laboratory and semi-industrial scale. The model consists of mathematical equations describing relations between variables, such as: - the residence time distribution function of emulsion particles in the separating zone of the equipment, depending on flow rate, height, diameter and structure of the equipment; - the size-distribution function of the fine and coarse fractions, depending on the residence time distribution function of emulsion particles, on characteristics of the material being processed (such as specific density and shape), and on characteristics of the classification medium (such as specific density and viscosity). - An experimental model was developed from data collected on an experimental cylindrical apparatus with a sedimentation chamber of diameter x height equal to 50 x 40 cm, for an emulsion of zirconium silicate in water. - Using this experimental model allows determination of the optimal flow rate in order to obtain a product with the desired grain size in terms of average size or size distribution function. (author)

  13. A chaos wolf optimization algorithm with self-adaptive variable step-size

    Science.gov (United States)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size was proposed. The algorithm is based on the swarm intelligence of the wolf pack, which fully simulates the predation behavior and prey distribution of wolves. It possesses three intelligent behaviors: migration, summons and siege. The "winner-take-all" competition rule and the "survival of the fittest" update mechanism are further characteristics of the algorithm. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical, complex nonlinear functions, and the obtained results were compared with those of many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The investigation results indicate that CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate, as well as high robustness and global searching ability.
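
    Two of the ingredients named above, a chaotic search sequence and a self-adaptive step size, can be sketched in isolation on a one-dimensional test function; this is not the full CWOA, whose wolf-pack behaviors (migration, summons, siege) are omitted:

```python
# Sketch: logistic-map chaos drives candidate moves, and the step size
# adapts (shrinks) whenever a candidate fails to improve on the best point.
def chaos_step_search(f, lo, hi, iters=300, z=0.7):
    best_x = lo + z * (hi - lo)
    best_f = f(best_x)
    step = (hi - lo) / 2.0               # self-adaptive step size
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)          # logistic map: chaotic values in (0, 1)
        cand = min(max(best_x + step * (2.0 * z - 1.0), lo), hi)
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
        else:
            step *= 0.98                 # shrink the step when no improvement
    return best_x, best_f

# Minimize a simple quadratic with its optimum at x = 1.
x, fx = chaos_step_search(lambda v: (v - 1.0) ** 2, -5.0, 5.0)
```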

  14. A methodology for optimal sizing of autonomous hybrid PV/wind system

    International Nuclear Information System (INIS)

    Diaf, S.; Diaf, D.; Belhamel, M.; Haddadi, M.; Louche, A.

    2007-01-01

    The present paper presents a methodology to perform the optimal sizing of an autonomous hybrid PV/wind system. The methodology aims at finding the configuration, among a set of system components, which meets the desired system reliability requirements with the lowest value of levelized cost of energy. Modelling the hybrid PV/wind system is considered as the first step in the optimal sizing procedure. In this paper, more accurate mathematical models for characterizing the PV module, wind generator and battery are proposed. The second step is to optimize the sizing of the system according to the loss of power supply probability (LPSP) and the levelized cost of energy (LCE) concepts. Considering various types and capacities of system devices, the configurations that can meet the desired system reliability are obtained by changing the type and size of the devices. The configuration with the lowest LCE gives the optimal choice. Applying this method to an assumed PV/wind hybrid system to be installed on Corsica Island, the simulation results show that the optimal configuration which meets the desired system reliability requirements (LPSP=0) with the lowest LCE is obtained for a system comprising a 125 W photovoltaic module, one wind generator (600 W) and storage batteries (253 Ah). Moreover, the choice of system devices plays an important role in cost reduction as well as in energy production.
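
    The selection logic of the second step, keeping only the configurations that meet the reliability target and then taking the cheapest, can be sketched as follows; the hourly supply profiles and LCE values are hypothetical stand-ins for the paper's component models:

```python
def lpsp(supply, demand):
    """Loss of power supply probability: unmet energy over total demand."""
    deficit = sum(max(d - s, 0.0) for s, d in zip(supply, demand))
    return deficit / sum(demand)

def best_config(configs, demand, lpsp_target=0.0):
    """Among configs (name, hourly_supply, lce), keep those meeting the
    reliability target and return the one with the lowest LCE."""
    feasible = [c for c in configs if lpsp(c[1], demand) <= lpsp_target]
    return min(feasible, key=lambda c: c[2])

# Hypothetical three-hour example.
demand = [1.0, 1.0, 1.0]
configs = [
    ("small",  [0.8, 1.0, 1.2], 0.20),   # cheapest, but fails hour 1
    ("medium", [1.0, 1.1, 1.0], 0.25),   # feasible
    ("large",  [1.5, 1.5, 1.5], 0.40),   # feasible but costlier
]
choice = best_config(configs, demand)    # picks "medium"
```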

  15. Economic Optimization of Component Sizing for Residential Battery Storage Systems

    Directory of Open Access Journals (Sweden)

    Holger C. Hesse

    2017-06-01

    Battery energy storage systems (BESS) coupled with rooftop-mounted residential photovoltaic (PV) generation, designated as PV-BESS, draw increasing attention and market penetration as more and more such systems become available. The many BESS deployed to date rely on a variety of different battery technologies and show great variation in battery size and power electronics dimensioning. However, given today's high investment costs of BESS, a well-matched design and adequate sizing of the storage system are prerequisites for end-user profitability. The economic viability of a PV-BESS also depends on the battery operation, storage technology, and aging of the system. In this paper, a general method for comprehensive PV-BESS techno-economic analysis and optimization is presented and applied to state-of-the-art PV-BESS to determine their optimal parameters. Using a linear optimization method, a cost-optimal sizing of the battery and power electronics is derived based on solar energy availability and local demand. At the same time, the power flow optimization reveals the best storage operation patterns considering a trade-off between energy purchase, feed-in remuneration, and battery aging. Using up-to-date technology-specific aging information and the investment cost of battery and inverter systems, three mature battery chemistries are compared: a lead-acid (PbA) system and two lithium-ion systems, one with a lithium-iron-phosphate (LFP) and another with a lithium-nickel-manganese-cobalt (NMC) cathode. The results show that different storage technologies and component sizings provide the best economic performance, depending on the scenario of load demand and PV generation.

  16. Optimizing Input/Output Using Adaptive File System Policies

    Science.gov (United States)

    Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.

    1996-01-01

    Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.

  17. A Transistor Sizing Tool for Optimization of Analog CMOS Circuits: TSOp

    OpenAIRE

    Y.C.Wong; Syafeeza A. R; N. A. Hamid

    2015-01-01

    Optimization of a circuit by transistor sizing is often a slow, tedious and iterative manual process which relies on designer intuition. It is highly desirable to automate the transistor sizing process towards being able to rapidly design high-performance integrated circuits. Presented here is a simple but effective algorithm for automatically optimizing the circuit parameters by exploiting the relationships among the genetic algorithm's coefficient values derived from the analog circuit design...

  18. Automatic analog IC sizing and optimization constrained with PVT corners and layout effects

    CERN Document Server

    Lourenço, Nuno; Horta, Nuno

    2017-01-01

    This book introduces readers to a variety of tools for automatic analog integrated circuit (IC) sizing and optimization. The authors provide a historical perspective on the early methods proposed to tackle automatic analog circuit sizing, with emphasis on the methodologies to size and optimize the circuit, and on the methodologies to estimate the circuit's performance. The discussion also includes robust circuit design and optimization and the most recent advances in layout-aware analog sizing approaches. The authors describe a methodology for an automatic flow for analog IC design, including details of the inputs and interfaces, multi-objective optimization techniques, and the enhancements made in the base implementation by using machine learning techniques. The Gradient model is discussed in detail, along with the methods to include layout effects in the circuit sizing. The concepts and algorithms of all the modules are thoroughly described, enabling readers to reproduce the methodologies, improve the quality...

  19. A chaos wolf optimization algorithm with self-adaptive variable step-size

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2017-10-01

    To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size was proposed. The algorithm is based on the swarm intelligence of the wolf pack, which fully simulates the predation behavior and prey distribution of wolves. It possesses three intelligent behaviors: migration, summons and siege. The “winner-take-all” competition rule and the “survival of the fittest” update mechanism are further characteristics of the algorithm. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical, complex nonlinear functions, and the obtained results were compared with those of many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The investigation results indicate that CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate, as well as high robustness and global searching ability.

  20. Distributing File-Based Data to Remote Sites Within the BABAR Collaboration

    International Nuclear Information System (INIS)

    Gowdy, Stephen J.

    2002-01-01

    BABAR [1] uses two formats for its data: Objectivity database and ROOT [2] files. This poster concerns the distribution of the latter--for Objectivity data see [3]. The BABAR analysis data is stored in ROOT files--one per physics run and analysis selection channel--maintained in a large directory tree. Currently BABAR has more than 4.5 TBytes in 200,000 ROOT files. This data is (mostly) produced at SLAC, but is required for analysis at universities and research centers throughout the US and Europe. Two basic problems confront us when we seek to import bulk data from SLAC to an institute's local storage via the network. We must determine which files must be imported (depending on the local site requirements and which files have already been imported), and we must make the optimum use of the network when transferring the data. Basic ftp-like tools (ftp, scp, etc.) do not attempt to solve the first problem. More sophisticated tools like rsync [4], the widely-used mirror/synchronization program, compare local and remote file systems, checking for changes (based on file date, size and, if desired, an elaborate checksum) in order to only copy new or modified files. However, rsync allows for only limited file selection. Also when, as in BABAR, an extremely large directory structure must be scanned, rsync can take several hours just to determine which files need to be copied. Although rsync (and scp) provides on-the-fly compression, it does not allow us to optimize the network transfer by using multiple streams, adjusting the TCP window size, or separating encrypted authentication from unencrypted data channels.
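
    The first problem, determining which files must be imported, amounts to comparing remote and local catalogues under a site-specific selection; a minimal sketch (file names and metadata are hypothetical):

```python
# Copy a remote file if it is new locally or if its (size, mtime) metadata
# differs from the local copy. `wanted` restricts the import to
# site-selected subtrees -- the kind of file selection that rsync's
# include/exclude rules make awkward at this scale.
def files_to_import(remote, local, wanted=None):
    selected = []
    for path, meta in sorted(remote.items()):
        if wanted and not any(path.startswith(w) for w in wanted):
            continue
        if local.get(path) != meta:   # absent locally, or size/mtime changed
            selected.append(path)
    return selected

# Hypothetical catalogues mapping path -> (size, mtime).
remote = {"run1/chA.root": (100, 5), "run1/chB.root": (200, 6),
          "run2/chA.root": (150, 7)}
local = {"run1/chA.root": (100, 5), "run1/chB.root": (180, 6)}
todo = files_to_import(remote, local, wanted=["run1/"])
```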

  1. Optimal sizing of a run-of-river small hydropower plant

    International Nuclear Information System (INIS)

    Anagnostopoulos, John S.; Papantonis, Dimitris E.

    2007-01-01

    The sizing of a small hydropower plant of the run-of-river type is very critical for the cost effectiveness of the investment. In the present work, a numerical method is used for the optimal sizing of such a plant comprising two hydraulic turbines operating in parallel, which can be of different type and size in order to improve its efficiency. The study and analysis of the plant performance is conducted using a newly developed evaluation algorithm that simulates the plant operation during the year in detail and computes its production results and economic indices. A parametric study is performed first in order to quantify the impact of some important construction and operation factors. Next, a stochastic evolutionary algorithm is implemented for the optimization process. The examined optimization problem uses data from a specific site and is solved in single- and two-objective modes, considering, alongside the economic objectives, additional ones such as maximization of the produced energy and the best exploitation of the water stream potential. Analyzing the results of the various optimization runs makes it possible to identify the most advantageous design alternatives for realizing the project. It was found that the use of two turbines of different size can considerably enhance both the energy production of the plant and the economic results of the investment. Finally, the sensitivity of the plant performance to other external parameters can easily be studied with the present method, and some indicative results are given for different financial or hydrologic conditions.

  2. Simplified Method of Optimal Sizing of a Renewable Energy Hybrid System for Schools

    Directory of Open Access Journals (Sweden)

    Jiyeon Kim

    2016-11-01

    Schools are suitable public buildings for renewable energy systems. Renewable energy hybrid systems (REHSs) have recently been introduced in schools following a new national regulation that mandates renewable energy utilization. An REHS combines common renewable-energy sources such as geothermal heat pumps, solar collectors for water heating, and photovoltaic systems with conventional energy systems (i.e., boilers and air-source heat pumps). Optimal design of an REHS by adequate sizing is not a trivial task because it usually requires intensive work, including detailed simulation and demand/supply analysis. This type of simulation-based approach to optimization is difficult to implement in practice. To address this, this paper proposes simplified sizing equations for the renewable-energy systems of an REHS. A conventional optimization process is used to calculate the optimal combinations of an REHS for cases with different numbers of classrooms and budgets. On the basis of the results, simplified sizing equations that use only the number of classrooms as the input are derived by regression analysis. A verification test was carried out against the initial conventional optimization process. The results show that the simplified sizing equations predict sizing results similar to those of the initial process, with similar capital costs within a 2% error.
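
    The regression step, turning a table of case-by-case optimization results into a one-input sizing equation, can be sketched with an ordinary least-squares line; the classroom counts and capacities below are hypothetical stand-ins for the paper's optimization outputs:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical optimization results: (classrooms, optimal PV capacity in kW).
rooms = [10, 20, 30, 40]
pv_kw = [12.0, 22.0, 31.0, 42.0]
a, b = fit_line(rooms, pv_kw)

# The fitted line is the "simplified sizing equation": capacity from
# classroom count alone, here evaluated for a 25-classroom school.
estimate = a * 25 + b
```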

  3. Optimal placement and sizing of multiple distributed generating units in distribution

    Directory of Open Access Journals (Sweden)

    D. Rama Prabha

    2016-06-01

    Distributed generation (DG) is becoming more important due to the increase in demand for electrical energy. DG plays a vital role in reducing real power losses and operating cost and in enhancing voltage stability, which together form the objective function of this problem. This paper proposes a multi-objective technique for optimally determining the location and sizing of multiple distributed generation (DG) units in the distribution network with different load models. The loss sensitivity factor (LSF) determines the optimal placement of the DGs. Invasive weed optimization (IWO), a population-based meta-heuristic algorithm based on the behavior of weeds, is used to find the optimal sizing of the DGs. The proposed method has been tested for different load models on IEEE 33-bus and 69-bus radial distribution systems and compared with other nature-inspired optimization methods. The simulated results illustrate the good applicability and performance of the proposed method.

  4. Experimental Analysis of File Transfer Rates over Wide-Area Dedicated Connections

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S. [ORNL; Liu, Qiang [ORNL; Sen, Satyabrata [ORNL; Hinkel, Gregory Carl [ORNL; Imam, Neena [ORNL; Foster, Ian [University of Chicago; Kettimuthu, R. [Argonne National Laboratory (ANL); Settlemyer, Bradley [Los Alamos National Laboratory (LANL); Wu, Qishi [University of Memphis; Yun, Daqing [Harrisburg University

    2016-12-01

    File transfers over dedicated connections, supported by large parallel file systems, have become increasingly important in high-performance computing and big data workflows. It remains a challenge to achieve peak rates for such transfers due to the complexities of file I/O, host, and network transport subsystems, and equally importantly, their interactions. We present extensive measurements of disk-to-disk file transfers using Lustre and XFS file systems mounted on multi-core servers over a suite of 10 Gbps emulated connections with 0-366 ms round trip times. Our results indicate that large buffer sizes and many parallel flows do not always guarantee high transfer rates. Furthermore, large variations in the measured rates necessitate repeated measurements to ensure confidence in inferences based on them. We propose a new method to efficiently identify the optimal joint file I/O and network transport parameters using a small number of measurements. We show that for XFS and Lustre with direct I/O, this method identifies configurations achieving 97% of the peak transfer rate while probing only 12% of the parameter space.
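
    The idea of identifying near-optimal joint parameters while probing only a small fraction of the parameter space can be illustrated by a coarse-probe-then-refine search over a (parallel flows, buffer size) grid; the synthetic rate function below stands in for real transfer-rate measurements, and the paper's actual method differs:

```python
# Two-stage parameter probing: measure a coarse subsample of the joint
# grid, then refine around the coarse best. Only a fraction of the full
# grid is ever measured.
def tune(measure, flows, buffers):
    coarse = [(f, b) for f in flows[::2] for b in buffers[::2]]
    best_f, best_b = max(coarse, key=lambda p: measure(*p))
    fi, bi = flows.index(best_f), buffers.index(best_b)
    near = [(flows[i], buffers[j])
            for i in range(max(fi - 1, 0), min(fi + 2, len(flows)))
            for j in range(max(bi - 1, 0), min(bi + 2, len(buffers)))]
    best = max(near, key=lambda p: measure(*p))
    return best, len(coarse) + len(near)   # config found, probes spent

# Synthetic "transfer rate" peaking at 8 flows and an 8 MB buffer.
def rate(f, b):
    return -((f - 8) ** 2) - (b - 8) ** 2

flows = [1, 2, 4, 8, 16]
buffers = [1, 2, 4, 8, 16]   # MB
best_cfg, probes = tune(rate, flows, buffers)
```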

  5. Implementing size-optimal discrete neural networks require analog circuitry

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-01

    This paper starts by overviewing results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on a constructive solution for Kolmogorov's superpositions, the authors show that implementing Boolean functions can be done using neurons having an identity transfer function. Because in this case the size of the network is minimized, it follows that size-optimal solutions for implementing Boolean functions can be obtained using analog circuitry. Conclusions and several comments on the required precision close the paper.

  6. Finite-size effect on optimal efficiency of heat engines.

    Science.gov (United States)

    Tajima, Hiroyasu; Hayashi, Masahito

    2017-07-01

    The optimal efficiency of quantum (or classical) heat engines whose heat baths are n-particle systems is given by the strong large deviation. We give the optimal work extraction process as a concrete energy-preserving unitary time evolution among the heat baths and the work storage. We show that our optimal work extraction turns the disordered energy of the heat baths to the ordered energy of the work storage, by evaluating the ratio of the entropy difference to the energy difference in the heat baths and the work storage, respectively. By comparing the statistical mechanical optimal efficiency with the macroscopic thermodynamic bound, we evaluate the accuracy of the macroscopic thermodynamics with finite-size heat baths from the statistical mechanical viewpoint. We also evaluate the quantum coherence effect on the optimal efficiency of the cycle processes without restricting their cycle time by comparing the classical and quantum optimal efficiencies.

  7. Layout Optimization of Structures with Finite-size Features using Multiresolution Analysis

    DEFF Research Database (Denmark)

    Chellappa, S.; Diaz, A. R.; Bendsøe, Martin P.

    2004-01-01

    A scheme for layout optimization in structures with multiple finite-sized heterogeneities is presented. Multiresolution analysis is used to compute reduced operators (stiffness matrices) representing the elastic behavior of material distributions with heterogeneities of sizes that are comparable...

  8. Performance Optimization of Irreversible Air Heat Pumps Considering Size Effect

    Science.gov (United States)

    Bi, Yuehong; Chen, Lingen; Ding, Zemin; Sun, Fengrui

    2018-06-01

    Considering the size of an irreversible air heat pump (AHP), heating load density (HLD) is taken as the thermodynamic optimization objective using finite-time thermodynamics. Based on a model of an irreversible AHP with infinite reservoir thermal-capacitance rate, the expression for the HLD of the AHP is put forward. The HLD optimization process is studied analytically and numerically and consists of two aspects: (1) choosing the pressure ratio, and (2) distributing the heat-exchanger inventory. Heat reservoir temperatures, the heat transfer performance of the heat exchangers, and the irreversibility of the compression and expansion processes are important factors influencing the performance of an irreversible AHP; they are characterized by the temperature ratio, heat exchanger inventory, and isentropic efficiencies, respectively. The impacts of these parameters on the maximum HLD are thoroughly studied. The research results show that HLD optimization can make the AHP system smaller and improve its compactness.

  9. Multi-objective analytical model for optimal sizing of stand-alone photovoltaic water pumping systems

    International Nuclear Information System (INIS)

    Olcan, Ceyda

    2015-01-01

    Highlights: • An analytical optimal sizing model is proposed for PV water pumping systems. • The objectives are chosen as deficiency of power supply and life-cycle costs. • The crop water requirements are estimated for a citrus tree yard in Antalya. • The optimal tilt angles are calculated for fixed, seasonal and monthly changes. • The sizing results showed the validity of the proposed analytical model. - Abstract: Stand-alone photovoltaic (PV) water pumping systems effectively use solar energy for irrigation purposes in remote areas. However, the random variability and unpredictability of solar energy hinder the penetration of PV implementations and complicate system design. An optimal sizing of these systems therefore proves essential. This paper recommends a techno-economic optimization model to optimally determine the capacity of the components of a PV water pumping system using a water storage tank. The proposed model is developed with regard to reliability and cost indicators, namely the deficiency of power supply probability and life-cycle costs, respectively. The novelty is that the proposed optimization model is analytically defined for two objectives and is able to find a compromise solution. The sizing of a stand-alone PV water pumping system comprises a detailed analysis of crop water requirements and optimal tilt angles. In addition to long solar radiation and temperature time series, accurate forecasts of water supply needs have to be determined. Calculating the optimal tilt angle at yearly, seasonal and monthly frequencies results in higher system efficiency; it is therefore suggested to change the tilt angle regularly in order to maximize solar energy output. The proposed optimal sizing model incorporates all these improvements and can accomplish a comprehensive optimization of PV water pumping systems. A case study is conducted considering the irrigation of a citrus tree yard located in Antalya, Turkey.

  10. Optimal Multi-Level Lot Sizing for Requirements Planning Systems

    OpenAIRE

    Earle Steinberg; H. Albert Napier

    1980-01-01

    The widespread use of advanced information systems such as Material Requirements Planning (MRP) has significantly altered the practice of dependent demand inventory management. Recent research has focused on the development of multi-level lot sizing heuristics for such systems. In this paper, we develop an optimal procedure for the multi-period, multi-product, multi-level lot sizing problem by modeling the system as a constrained generalized network with fixed charge arcs and side constraints. T...

  11. Component sizing optimization of plug-in hybrid electric vehicles

    International Nuclear Information System (INIS)

    Wu, Xiaolan; Cao, Binggang; Li, Xueyan; Xu, Jun; Ren, Xiaolong

    2011-01-01

    Plug-in hybrid electric vehicles (PHEVs) are considered one of the most promising means to improve the near-term sustainability of the transportation and stationary energy sectors. This paper describes a methodology for the optimization of PHEV component sizing using a parallel chaos optimization algorithm (PCOA). In this approach, the objective function is defined so as to minimize the drivetrain cost. In addition, the driving performance requirements are considered as constraints. Finally, the optimization process is performed over three different all-electric ranges (AER) and two types of batteries. The results from computer simulation show the effectiveness of the approach and the reduction in drivetrain cost while ensuring the vehicle performance.

  12. Optimizing CT radiation dose based on patient size and image quality: the size-specific dose estimate method

    Energy Technology Data Exchange (ETDEWEB)

    Larson, David B. [Stanford University School of Medicine, Department of Radiology, Stanford, CA (United States)

    2014-10-15

    The ALARA principle (keeping dose as low as reasonably achievable) calls for dose optimization rather than dose reduction per se. Optimization of CT radiation dose is accomplished by producing images of acceptable diagnostic image quality using the lowest dose method available. Because it is image quality that constrains the dose, CT dose optimization is primarily a problem of image quality rather than radiation dose. Therefore, the primary focus in CT radiation dose optimization should be on image quality. However, no reliable direct measure of image quality has been developed for routine clinical practice. Until such measures become available, size-specific dose estimates (SSDE) can be used as a reasonable image-quality estimate. The SSDE method of radiation dose optimization for CT abdomen and pelvis consists of plotting SSDE for a sample of examinations as a function of patient size, establishing an SSDE threshold curve based on radiologists' assessment of image quality, and modifying protocols to consistently produce doses that are slightly above the threshold SSDE curve. Challenges in operationalizing CT radiation dose optimization include data gathering and monitoring, managing the complexities of the numerous protocols, scanners and operators, and understanding the relationship of the automated tube current modulation (ATCM) parameters to image quality. Because CT manufacturers currently keep their ATCM algorithms secret for proprietary reasons, prospective modeling of SSDE for patient populations is not possible without reverse engineering the ATCM algorithm and, hence, optimization by this method requires a trial-and-error approach. (orig.)
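
    The SSDE-threshold workflow summarized above lends itself to a short sketch: plot SSDE against patient size, fit a trend, and flag examinations that fall below an image-quality threshold curve. The data, the exponential trend model and the 0.85 margin below are hypothetical, not from the article:

```python
import math

# (patient water-equivalent diameter in cm, SSDE in mGy) -- invented sample.
exams = [(18, 5.1), (22, 7.0), (26, 9.8), (30, 13.9), (34, 19.5)]

# Fit log(SSDE) = a + b * size by ordinary least squares.
n = len(exams)
sx = sum(d for d, _ in exams)
sy = sum(math.log(s) for _, s in exams)
sxx = sum(d * d for d, _ in exams)
sxy = sum(d * math.log(s) for d, s in exams)
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

def threshold(size, margin=0.85):
    """Minimum acceptable SSDE: the trend curve scaled down by a quality margin."""
    return margin * math.exp(a + b * size)

# Exams below the threshold curve would be flagged for protocol review.
below = [(d, s) for d, s in exams if s < threshold(d)]
```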

  13. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.

    Science.gov (United States)

    Kim, Sehwi; Jung, Inkyung

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
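
    The Gini coefficient at the heart of the approach above is straightforward to compute. The sketch below is a generic illustration (the candidate maximum cluster sizes and the per-candidate cluster statistics are invented; the actual ordinal scan statistic is not reproduced):

```python
def gini(values):
    """Gini coefficient of a list of non-negative numbers (0 = perfectly even)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

# Hypothetical test statistics of the clusters reported under each candidate
# maximum cluster size (expressed as a fraction of the population at risk).
candidates = {0.1: [9.2, 8.7, 7.9], 0.25: [11.5, 2.1, 1.8], 0.5: [14.0, 0.4]}

# Pick the candidate whose reported clusters are most unevenly distributed.
best = max(candidates, key=lambda k: gini(candidates[k]))
```

    The intuition is that a well-chosen maximum size yields one dominant cluster plus minor ones (high Gini), whereas a poor choice spreads the signal evenly across several reported clusters.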

  14. Optimizing the crystal size and habit of beta-sitosterol in suspension

    DEFF Research Database (Denmark)

    von Bonsdorff-Nikander, Anna; Rantanen, Jukka; Christiansen, Leena

    2003-01-01

    surfactant, polysorbate 80, has on crystal size distribution and the polymorphic form. This study describes the optimization of the crystallization process, with the object of preparing crystals as small as possible. Particle size distribution and habit were analyzed using optical microscopy, and the crystal...

  15. An Integrated GIS, optimization and simulation framework for optimal PV size and location in campus area environments

    International Nuclear Information System (INIS)

    Kucuksari, Sadik; Khaleghi, Amirreza M.; Hamidi, Maryam; Zhang, Ye; Szidarovszky, Ferenc; Bayraksan, Guzin; Son, Young-Jun

    2014-01-01

    Highlights: • The optimal size and locations for PV units for campus environments are achieved. • The GIS module finds the suitable rooftops and their panel capacity. • The optimization module maximizes the long-term profit of PV installations. • The simulation module evaluates the voltage profile of the distribution network. • The proposed work has been successfully demonstrated for a real university campus. - Abstract: Finding the optimal size and locations for Photovoltaic (PV) units has been a major challenge for distribution system planners and researchers. In this study, a framework is proposed that integrates Geographical Information Systems (GIS), mathematical optimization, and simulation modules to obtain the annual optimal placement and size of PV units for the next two decades in a campus area environment. First, a GIS module is developed to find the suitable rooftops and their panel capacity, considering the amount of solar radiation, slope, elevation, and aspect. The optimization module is then used to maximize the long-term net profit of PV installations, considering the various costs of investment, inverter replacement, operation, and maintenance as well as savings from consuming less conventional energy. The voltage profile of the electricity distribution network is then investigated in the simulation module. In the case of voltage limit violations caused by intermittent PV generation or load fluctuations, two mitigation strategies, reallocation of the PV units or installation of a local storage unit, are suggested. The proposed framework has been implemented in a real campus area, and the results show that it can effectively be used for long-term installation planning of PV panels considering both cost and power quality.

  16. Securing the AliEn File Catalogue - Enforcing authorization with accountable file operations

    International Nuclear Information System (INIS)

    Schreiner, Steffen; Banerjee, Subho Sankar; Betev, Latchezar; Carminati, Federico; Vladimirovna Datskova, Olga; Furano, Fabrizio; Grigoras, Alina; Grigoras, Costin; Mendez Lorenzo, Patricia; Peters, Andreas Joachim; Saiz, Pablo; Bagnasco, Stefano; Zhu Jianlin

    2011-01-01

    The AliEn Grid Services, as operated by the ALICE Collaboration in its global physics analysis grid framework, are based on a central File Catalogue together with a distributed set of storage systems and the possibility to register links to external data resources. This paper describes several identified vulnerabilities in the AliEn File Catalogue access protocol regarding fraud and unauthorized file alteration, and presents a more secure and revised design: a new mechanism, called the LFN Booking Table, is introduced in order to keep track of access authorization in the transient state of files entering or leaving the File Catalogue. Due to a simplification of the original Access Envelope mechanism for xrootd-protocol-based storage systems, fundamental computational improvements of the mechanism were achieved, as well as a reduction of the credential's size by up to 50%. By extending the access protocol with signed status messages from the underlying storage system, the File Catalogue receives trusted information about a file's size and checksum, and the protocol is no longer dependent on client trust. Altogether, the revised design complies with atomic and consistent transactions and allows for accountable, authentic, and traceable file operations. This paper describes these changes as part of, and beyond, the development of AliEn version 2.19.

  17. A Quantitative Comparison Between Size, Shape, Topology and Simultaneous Optimization for Truss Structures

    Directory of Open Access Journals (Sweden)

    T.E. Müller

    There are typically three broad categories of structural optimization, namely size, shape and topology. Over the past few decades, various researchers have focused on developing techniques for optimizing structures by considering either one or a combination of these aspects. In this paper, the efficiency of these techniques is investigated in an effort to quantify the improvement obtained by utilizing a more complex optimization routine. The percentage of structural weight saved and the computational effort required are used as measures to compare these techniques. The well-known genetic algorithm with elitism is used to perform these tests on various benchmark structures found in the literature. The results show that a simultaneous approach produces, on average, a 22% better solution than a simple size optimization and a 12% improvement when compared to a staged approach where the size, shape and topology of the structure are considered sequentially. From these results, it is concluded that a significant saving can be made by using a more complex optimization routine, such as a simultaneous approach.
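
    A minimal sketch of the kind of elitist genetic algorithm used for these tests might look as follows. The toy structure, the penalty constant and all parameter values are assumptions for illustration, not the paper's benchmark problems:

```python
import random

AREAS = [0.5, 1.0, 2.0, 4.0, 8.0]   # discrete candidate cross-sections
N_MEMBERS, POP, GENS, ELITE = 5, 30, 60, 2

def fitness(design):
    """Weight to minimize, plus a penalty for violating a stress-like constraint."""
    weight = sum(AREAS[g] for g in design)
    # Toy constraint: total cross-section must be at least 10, else penalize.
    shortfall = max(0.0, 10.0 - weight)
    return weight + 100.0 * shortfall

def evolve(seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(AREAS)) for _ in range(N_MEMBERS)]
           for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness)
        nxt = pop[:ELITE]                        # elitism: keep the best as-is
        while len(nxt) < POP:
            a, b = rng.sample(pop[:POP // 2], 2)  # mate above-average parents
            cut = rng.randrange(1, N_MEMBERS)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.2:                # mutation
                child[rng.randrange(N_MEMBERS)] = rng.randrange(len(AREAS))
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = evolve()
```

    Elitism guarantees the best design found so far survives every generation, so the best fitness is non-increasing over the run.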

  18. An Improved Genetic Algorithm for Optimal Stationary Energy Storage System Locating and Sizing

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2014-10-01

    The application of a stationary ultra-capacitor energy storage system (ESS) in urban rail transit allows for the recuperation of vehicle braking energy, increasing energy savings and improving the vehicle voltage profile. This paper aims to obtain the best energy savings and voltage profile by optimizing the location and size of the ultra-capacitors. The paper first formulates optimization objective functions from the perspectives of energy savings, regenerative braking cancellation and installation cost, respectively. Proper mathematical models of the DC (direct current) traction power supply system are then established to simulate the electrical load-flow of the traction supply network, and the optimization objectives are evaluated on the example of a Chinese metro line. Finally, a methodology for optimal ultra-capacitor energy storage system locating and sizing is put forward based on an improved genetic algorithm. The optimized result shows that preferable, compromised schemes for the ESSs’ location and size can be obtained, acting as a compromise between better energy savings, a better voltage profile and lower installation cost.

  19. Optimal Investment Timing and Size of a Logistics Park: A Real Options Perspective

    Directory of Open Access Journals (Sweden)

    Dezhi Zhang

    2017-01-01

    This paper uses a real options approach to address the optimal timing and size of a logistics park investment under logistics demand volatility. Two important problems are examined: when should the investment be introduced, and what size should it be? A real options model is proposed to explicitly incorporate the effect of government subsidies on logistics park investment. The logistics demand that triggers the threshold for investment in a logistics park project is explored analytically. Comparative static analyses of logistics park investment are also carried out. Our analytical results show that (1) investors will select smaller logistics parks and bring the investment forward if government subsidies are considered; (2) the real options approach will postpone the optimal investment timing of logistics parks compared with the net present value approach; and (3) logistics demand can significantly affect the optimal investment size and timing.

  20. Component sizing optimization of plug-in hybrid electric vehicles

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Xiaolan; Cao, Binggang; Li, Xueyan; Xu, Jun; Ren, Xiaolong [School of Mechanical Engineering, Xi' an Jiaotong University, Xi' an, 710049 (China)

    2011-03-15

    Plug-in hybrid electric vehicles (PHEVs) are considered one of the most promising means to improve the near-term sustainability of the transportation and stationary energy sectors. This paper describes a methodology for the optimization of PHEV component sizing using a parallel chaos optimization algorithm (PCOA). In this approach, the objective function is defined so as to minimize the drivetrain cost. In addition, the driving performance requirements are considered as constraints. Finally, the optimization process is performed over three different all-electric ranges (AER) and two types of batteries. The results from computer simulation show the effectiveness of the approach and the reduction in drivetrain cost while ensuring the vehicle performance. (author)

  1. Closed-loop optimization of chromatography column sizing strategies in biopharmaceutical manufacture.

    Science.gov (United States)

    Allmendinger, Richard; Simaria, Ana S; Turner, Richard; Farid, Suzanne S

    2014-10-01

    This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. An industrially relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results also demonstrate that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). This setup turns out to be more important in scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when selecting a sizing strategy to be implemented in a real industrial setting. This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.

  2. Assessment Studies regarding the Optimal Sizing of Wind Integrated Hybrid Power Plants for Off-Grid Systems

    DEFF Research Database (Denmark)

    Petersen, Lennart; Iov, Florin; Tarnowski, German Claudio

    2018-01-01

    The paper focuses on the optimal sizing of off-grid hybrid power plants (HPPs) including wind power generation. A modular and scalable system topology as well as an optimal sizing algorithm for the HPP have been presented in a previous publication. In this paper, the sizing process is evaluated by means of assessment studies. The aim is to address the impact of renewable resource data, the required power supply availability and reactive power load demand on the optimal sizing of wind integrated off-grid HPPs.

  3. A model for optimal offspring size in fish, including live-bearing and parental effects.

    Science.gov (United States)

    Jørgensen, Christian; Auer, Sonya K; Reznick, David N

    2011-05-01

    Since Smith and Fretwell's seminal article in 1974 on the optimal offspring size, most theory has assumed a trade-off between offspring number and offspring fitness, where larger offspring have better survival or fitness, but with diminishing returns. In this article, we use two ubiquitous biological mechanisms to derive the shape of this trade-off: the offspring's growth rate combined with its size-dependent mortality (predation). For a large parameter region, we obtain the same sigmoid relationship between offspring size and offspring survival as Smith and Fretwell, but we also identify parameter regions where the optimal offspring size is as small or as large as possible. With increasing growth rate, the optimal offspring size is smaller. We then integrate our model with strategies of parental care. Egg guarding that reduces egg mortality favors smaller or larger offspring, depending on how mortality scales with size. For live-bearers, the survival of offspring to birth is a function of maternal survival; if the mother's survival increases with her size, then the model predicts that larger mothers should produce larger offspring. When using parameters for Trinidadian guppies Poecilia reticulata, differences in both growth and size-dependent predation are required to predict observed differences in offspring size between wild populations from high- and low-predation environments.

  4. Geometric size optimization and behavior analysis of a dual-cooled annular fuel

    International Nuclear Information System (INIS)

    Deng Yangbin; Wu Yingwei; Zhang Dalin; Tian Wenxi; Qiu Suizheng; Su Guanghui; Zhang Weixu; Wu Junmei

    2014-01-01

    The dual-cooled annular fuel is one of the innovative fuel concepts that allow a substantial power density increase while maintaining safety margins compared with the fuel used in currently operating PWRs. In this study, a thermal-hydraulic calculation code, based on inner and outer cooling balance theory, was independently developed to optimize the geometric size of dual-cooled annular fuel elements. The optimization results show that a fuel element with the optimal geometric sizes presents excellent symmetry in temperature distribution. The optimized geometric sizes agree well with the sizes obtained by MIT (Massachusetts Institute of Technology), which in turn validates the code's reliability and accuracy. In addition, a thermo-mechanical-burnup coupling code was developed to study the thermodynamic and mechanical characteristics of fuel elements while considering irradiation and burnup effects. This coupling program was applied to perform the behavior analysis of annular fuels. The calculation results show that, when the power density increases by up to 50%, dual-cooled annular fuel elements have a much lower fuel temperature and much less fission gas release compared with conventional fuel rods. Furthermore, the results indicate that the thicknesses of the inner and outer gas gaps cannot remain the same as burnup increases, due to the mechanical deformations of fuel pellets and claddings, which results in a significantly asymmetric temperature distribution, especially in the last phase of burnup. (author)

  5. Contribution to the optimal sizing of the hybrid photovoltaic systems

    International Nuclear Information System (INIS)

    Dimitrov, Dimitar

    2009-01-01

    In this thesis, hybrid photovoltaic (HPV) systems are considered, in which electricity is generated by a photovoltaic generator and, additionally, by a diesel genset. Within this work, a software tool for optimal sizing and design was developed and used to optimize HPV systems intended to supply a small rural village. For the optimization, genetic algorithms were used, optimizing 10 HPV system parameters (rated power of the components, battery capacity, dispatching strategy parameters, etc.). The optimization objective is to size and design systems that continuously supply the load at the lowest net electricity cost. In order to speed up the optimization process, the most suitable genetic algorithm settings were chosen through a prior in-depth analysis. Using measurements, the characteristics of a PV generator working in real conditions were obtained, and the input values for the PV generator simulation model were adapted accordingly. A quasi-steady battery simulation model is introduced, which avoids the voltage and state-of-charge variation problems that arise when constant-current charging/discharging within a time step interval is assumed. This model takes into account the influence of the battery temperature on its operational characteristics. Simulation model improvements were also introduced for the other components of the HPV systems. Using long-term measurement records, the validity of the solar radiation and air temperature data was checked. The sensitivity of the optimized HPV systems to variations in component prices, fuel prices and economic rates was also analyzed. Based on multi-decade records for several locations in the Balkan region, the occurrence probability of solar radiation values was estimated. This was used to analyse the sensitivity of some HPV performance measures to the expected stochastic variations of the solar radiation values. (Author)

  6. Hydrogen production system from photovoltaic panels: experimental characterization and size optimization

    International Nuclear Information System (INIS)

    Ferrari, M.L.; Rivarolo, M.; Massardo, A.F.

    2016-01-01

    Highlights: • Plant optimization for hydrogen generation from renewable sources. • Experimental tests on a 42 kW alkaline electrolyser. • Time-dependent hierarchical thermo-economic optimization. • Italian case for electricity costs and solar irradiation (Savona). - Abstract: In this paper, an approach for determining the optimal size and management of a plant for hydrogen production from a renewable source (photovoltaic panels) is presented. Hydrogen is produced by a pressurized alkaline electrolyser (42 kW) installed at the University Campus of Savona (Italy) in 2014 and fed by electrical energy produced by photovoltaic panels. Experimental tests have been carried out in order to analyze the performance curve of the electrolyser in different operating conditions, investigating the influence of the different parameters on efficiency. The results have been implemented in a software tool in order to describe the behavior of the system in off-design conditions. Since the electrical energy produced by the photovoltaic panels and used to feed the electrolyser is strongly variable because of the random nature of the solar irradiance, a time-dependent hierarchical thermo-economic analysis is carried out to evaluate both the optimal size and the management approach for the system, considering a fixed size of 1 MW for the photovoltaic panels. The thermo-economic analysis is performed with the software tool W-ECoMP, developed by the authors’ research group; the Italian energy scenario is considered, investigating the impact of electricity cost on the results as well.

  7. Size and Topology Optimization for Trusses with Discrete Design Variables by Improved Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Yue Wu

    2017-01-01

    The Firefly Algorithm (FA, for short) is inspired by the social behavior of fireflies and their phenomenon of bioluminescent communication. Based on the fundamentals of FA, two improved strategies are proposed to conduct size and topology optimization for trusses with discrete design variables. First, the development of structural topology optimization methods and the basic principle of the standard FA are introduced in detail. Then, in order to apply the algorithm to optimization problems with discrete variables, the initial positions of the fireflies and the position updating formula are discretized. By embedding random weights and enhancing the attractiveness, the performance of the algorithm is improved, and thus an Improved Firefly Algorithm (IFA, for short) is proposed. Furthermore, using size variables that are capable of encoding topology variables, size and topology optimization for trusses with discrete variables is formulated based on the Ground Structure Approach. The essential techniques of variable elastic modulus technology and geometric construction analysis are applied in the structural analysis process. Subsequently, an optimization method for the size and topological design of trusses based on the IFA is introduced. Finally, two numerical examples are shown to verify the feasibility and efficiency of the proposed method by comparison with different deterministic methods.
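
    The standard firefly update that the IFA builds on can be sketched compactly: each firefly moves toward every brighter one, with an attractiveness that decays with distance, plus a small random step. This is a generic continuous FA illustration; the paper's discretization and random-weight strategies are not reproduced here:

```python
import math
import random

def firefly_step(xs, brightness, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """One synchronous FA move: each firefly drifts toward every brighter one."""
    rng = rng or random.Random(0)
    new = []
    for i, xi in enumerate(xs):
        x = list(xi)
        for j, xj in enumerate(xs):
            if brightness[j] > brightness[i]:
                r2 = sum((a - b) ** 2 for a, b in zip(x, xj))
                beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
                x = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                     for a, b in zip(x, xj)]
        new.append(x)
    return new

# Minimize a toy objective f(x) = sum(x_k^2); brighter = lower cost.
rng = random.Random(42)
xs = [[rng.uniform(-2, 2) for _ in range(2)] for _ in range(8)]
for _ in range(50):
    cost = [sum(v * v for v in x) for x in xs]
    xs = firefly_step(xs, [-c for c in cost], rng=rng)
best = min(sum(v * v for v in x) for x in xs)
```

    Since the brightest firefly never moves within a step, the best objective value in the swarm never worsens, while the rest of the swarm explores around it.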

  8. Improving File System Performance by Striping

    Science.gov (United States)

    Lam, Terance L.; Kutler, Paul (Technical Monitor)

    1998-01-01

    This document discusses the performance and advantages of striped file systems on the SGI AD workstations. The performance of several striped file system configurations is compared, and guidelines for optimal striping are recommended.

  9. Simultaneous Optimization of Topology and Component Sizes for Double Planetary Gear Hybrid Powertrains

    Directory of Open Access Journals (Sweden)

    Weichao Zhuang

    2016-05-01

    Hybrid powertrain technologies are successful in the passenger car market and have been actively developed in recent years. Optimal topology selection, component sizing, and controls are required for competitive hybrid vehicles, as multiple goals must be considered simultaneously: fuel efficiency, emissions, performance, and cost. Most previous studies explored these three design dimensions separately. In this paper, two novel frameworks combining the three design dimensions are presented and compared. One approach is nested optimization, which searches through the whole design space exhaustively. The second approach, called enhanced iterative optimization, executes the topology optimization and component sizing alternately. A case study shows that the latter method can converge to the global optimal design generated by the nested optimization and is much more computationally efficient. In addition, we also address a known issue of optimal designs: their sensitivity to parameters such as varying vehicle weight, which is a concern especially for the design of hybrid buses. Therefore, the iterative optimization process is applied to design a robust multi-mode hybrid electric bus under different loading scenarios as the final design challenge of this paper.
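
    The contrast between the two frameworks can be illustrated schematically: nested optimization pairs every topology with every sizing, while iterative optimization alternates the two searches until they agree. The toy loss model below is invented; in this benign example both approaches reach the same design, mirroring the case study's finding:

```python
TOPOLOGIES = ["T1", "T2", "T3"]
SIZES = range(1, 21)
BASE = {"T1": 30, "T2": 24, "T3": 27}    # topology-dependent baseline cost
IDEAL = {"T1": 9, "T2": 10, "T3": 8}     # best component size per topology

def loss(topology, size):
    """Toy design cost: baseline plus a quadratic sizing mismatch."""
    return BASE[topology] + (size - IDEAL[topology]) ** 2

def nested():
    # Exhaustively evaluate every (topology, size) pair.
    return min(((t, s) for t in TOPOLOGIES for s in SIZES),
               key=lambda ts: loss(*ts))

def iterative(t="T1", s=1):
    # Alternate: best size for the current topology, then best topology
    # for that size, until the pair stops changing.
    while True:
        s2 = min(SIZES, key=lambda x: loss(t, x))
        t2 = min(TOPOLOGIES, key=lambda y: loss(y, s2))
        if (t2, s2) == (t, s):
            return t, s
        t, s = t2, s2
```

    The nested search costs |topologies| × |sizes| evaluations, while the iterative search typically needs only a few alternations, which is the computational advantage the abstract reports.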

  10. Microeconomic principles explain an optimal genome size in bacteria.

    Science.gov (United States)

    Ranea, Juan A G; Grant, Alastair; Thornton, Janet M; Orengo, Christine A

    2005-01-01

    Bacteria can clearly enhance their survival by expanding their genetic repertoire. However, the tight packing of the bacterial genome and the fact that the most evolved species do not necessarily have the biggest genomes suggest there are other evolutionary factors limiting genome expansion. To clarify these restrictions on size, we studied the protein families contributing most significantly to bacterial-genome complexity. We found that all bacteria apply the same basic and ancestral 'molecular technology' to optimize their reproductive efficiency. The same microeconomic principles that define the optimum size of a factory can also explain the existence of a statistical optimum in bacterial genome size. This optimum is reached when the bacterial genome obtains the maximum metabolic complexity (revenue) for minimal regulatory genes (logistic cost).
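
    The microeconomic analogy can be made concrete with a toy model in which metabolic "revenue" grows sub-linearly with genome size while regulatory "cost" grows super-linearly, so the net benefit peaks at an intermediate size. All functional forms and constants are invented for illustration, not taken from the paper:

```python
import math

def net_benefit(genes):
    revenue = 120.0 * math.log(1 + genes)  # diminishing metabolic returns
    cost = 0.002 * genes ** 1.5            # accelerating regulatory burden
    return revenue - cost

# Sweep candidate genome sizes; the optimum is interior, not at either extreme.
sizes = range(500, 10001, 100)
optimum = max(sizes, key=net_benefit)
```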

  11. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    Science.gov (United States)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

    The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final state of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.

  12. An Adaptive Genetic Algorithm with Dynamic Population Size for Optimizing Join Queries

    OpenAIRE

    Vellev, Stoyan

    2008-01-01

    The problem of finding the optimal join ordering for executing a query in a relational database management system is a combinatorial optimization problem, which makes deterministic exhaustive solution search unacceptable for queries with a great number of joined relations. In this work, an adaptive genetic algorithm with dynamic population size is proposed for optimizing large join queries. The performance of the algorithm is compared with that of several classical non-determinis...
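
    For a handful of relations the join-ordering problem can still be solved by the exhaustive search that becomes unacceptable at scale, which is what motivates the genetic algorithm. A toy left-deep cost model (invented cardinalities and a flat selectivity, not the paper's model) shows the search space a GA would explore with permutation-encoded individuals:

```python
import itertools

CARD = {"A": 1000, "B": 50, "C": 5000, "D": 20}  # hypothetical table sizes
SEL = 0.01                                        # flat join selectivity

def cost(order):
    """Toy left-deep cost model: sum of intermediate result sizes."""
    size = CARD[order[0]]
    total = 0.0
    for rel in order[1:]:
        size = size * CARD[rel] * SEL
        total += size
    return total

# Exhaustive search over all n! orderings -- feasible only for small n,
# which is exactly why a GA is used for large join queries.
best = min(itertools.permutations(CARD), key=cost)
```

    The exhaustive minimum starts with the two smallest relations, keeping intermediate results small; a GA searches the same permutation space with crossover and mutation instead of enumeration.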

  13. Long term file migration. Part I: file reference patterns

    International Nuclear Information System (INIS)

    Smith, A.J.

    1978-08-01

    In most large computer installations, files are moved between on-line disk and mass storage (tape, integrated mass storage device) either automatically by the system or specifically at the direction of the user. This is the first of two papers which study the selection of algorithms for the automatic migration of files between mass storage and disk. The use of the text editor data sets at the Stanford Linear Accelerator Center (SLAC) computer installation is examined through the analysis of thirteen months of file reference data. Most files are used very few times. Of those that are used sufficiently frequently that their reference patterns may be examined, about a third show declining rates of reference during their lifetime; of the remainder, very few (about 5%) show correlated interreference intervals, and interreference intervals (in days) appear to be more skewed than would occur with a Bernoulli process. Thus, about two-thirds of all sufficiently active files appear to be referenced as a renewal process with a skewed interreference distribution. A large number of other file reference statistics (file lifetimes, interreference distributions, moments, means, number of uses per file, file sizes, file rates of reference, etc.) are computed and presented. The results are applied in the following paper to the development and comparative evaluation of file migration algorithms. 17 figures, 13 tables
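
    The basic statistic underlying this analysis, the interreference interval, is simple to compute from a file's reference history, as in this sketch (the reference days are invented, and the skewness measure is a standard sample moment rather than the paper's exact statistic):

```python
from statistics import mean, pstdev

def interreference_intervals(days):
    """Gaps, in days, between successive references to a file."""
    days = sorted(days)
    return [b - a for a, b in zip(days, days[1:])]

def skewness(xs):
    """Standardized third moment; positive values indicate a right-skewed tail."""
    m, s = mean(xs), pstdev(xs)
    if s == 0:
        return 0.0
    return mean(((x - m) / s) ** 3 for x in xs)

# Hypothetical days on which one file was referenced: bursts of activity
# followed by long idle stretches, the pattern the paper describes.
refs = [1, 2, 3, 5, 9, 30, 31, 90]
gaps = interreference_intervals(refs)
```

    A strongly positive skew here is what makes simple "migrate after N idle days" policies attractive: most gaps are short, so a file idle much longer than usual is unlikely to be referenced again soon.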

  14. Is patient size important in dose determination and optimization in cardiology?

    International Nuclear Information System (INIS)

    Reay, J; Chapple, C L; Kotre, C J

    2003-01-01

    Patient dose determination and optimization have become more topical in recent years with the implementation of the Medical Exposures Directive into national legislation, the Ionising Radiation (Medical Exposure) Regulations. This legislation incorporates a requirement for new equipment to provide a means of displaying a measure of patient exposure and introduces the concept of diagnostic reference levels. It is normally assumed that patient dose is governed largely by patient size; however, in cardiology, where procedures are often very complex, the significance of patient size is less well understood. This study considers over 9000 cardiology procedures, undertaken throughout the north of England, and investigates the relationship between patient size and dose. It uses simple linear regression to calculate both correlation coefficients and significance levels for data sorted by both room and individual clinician for the four most common examinations: left ventricle and/or coronary angiography, single vessel stent insertion, and single vessel angioplasty. This paper concludes that the correlation between patient size and dose is weak for the procedures considered. It also illustrates the use of an existing method for removing the effect of patient size from dose survey data. This allows typical doses and, therefore, reference levels to be defined for the purposes of dose optimization
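
The regression analysis described can be sketched as follows; the weight/dose pairs, the 70 kg reference patient and the linear correction are illustrative assumptions, not the study's data or exact method.

```python
def linear_regression(xs, ys):
    # Returns slope, intercept and Pearson correlation coefficient r.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / (sxx * syy) ** 0.5

# Hypothetical survey: patient weight (kg) vs dose-area product (Gy.cm2)
weights = [52, 60, 68, 75, 81, 90, 98, 110]
doses   = [18, 25, 21, 35, 28, 40, 33, 45]

slope, intercept, r = linear_regression(weights, doses)

# Remove the size effect: scale each dose to a 70 kg reference patient,
# leaving size-independent values from which reference levels can be set.
ref = slope * 70 + intercept
corrected = [d * ref / (slope * w + intercept) for d, w in zip(doses, weights)]
```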

  15. Optimal sample preparation for nanoparticle metrology (statistical size measurements) using atomic force microscopy

    International Nuclear Information System (INIS)

    Hoo, Christopher M.; Doan, Trang; Starostin, Natasha; West, Paul E.; Mecartney, Martha L.

    2010-01-01

    Optimal deposition procedures are determined for nanoparticle size characterization by atomic force microscopy (AFM). Accurate nanoparticle size distribution analysis with AFM requires non-agglomerated nanoparticles on a flat substrate. The deposition of polystyrene (100 nm), silica (300 and 100 nm), gold (100 nm), and CdSe quantum dot (2-5 nm) nanoparticles by spin coating was optimized for size distribution measurements by AFM. Factors influencing deposition include spin speed, concentration, solvent, and pH. A comparison using spin coating, static evaporation, and a new fluid cell deposition method for depositing nanoparticles is also made. The fluid cell allows for a more uniform and higher density deposition of nanoparticles on a substrate at laminar flow rates, making nanoparticle size analysis via AFM more efficient; it also offers the potential for nanoparticle analysis in liquid environments.

  16. The Jade File System. Ph.D. Thesis

    Science.gov (United States)

    Rao, Herman Chung-Hwa

    1991-01-01

    File systems have long been the most important and most widely used form of shared permanent storage. File systems in traditional time-sharing systems, such as Unix, support a coherent sharing model for multiple users. Distributed file systems implement this sharing model in local area networks. However, most distributed file systems fail to scale from local area networks to an internet. Four characteristics of scalability were recognized: size, wide area, autonomy, and heterogeneity. Owing to size and wide area, techniques such as broadcasting, central control, and central resources, which are widely adopted by local area network file systems, are not adequate for an internet file system. An internet file system must also support the notion of autonomy because an internet is made up of a collection of independent organizations. Finally, heterogeneity is the nature of an internet file system, not only because of its size, but also because of the autonomy of the organizations in an internet. The Jade File System, which provides a uniform way to name and access files in the internet environment, is presented. Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Because of autonomy, Jade is designed under the restriction that the underlying file systems may not be modified. In order to avoid the complexity of maintaining an internet-wide, global name space, Jade permits each user to define a private name space. In Jade's design, we pay careful attention to avoiding unnecessary network messages between clients and file servers in order to achieve acceptable performance. Jade's name space supports two novel features: (1) it allows multiple file systems to be mounted under one directory; and (2) it permits one logical name space to mount other logical name spaces. A prototype of Jade was implemented to examine and validate its

  17. Stabilization of microgrid with intermittent renewable energy sources by SMES with optimal coil size

    International Nuclear Information System (INIS)

    Saejia, M.; Ngamroo, I.

    2011-01-01

    A controller design of a superconducting magnetic energy storage unit is proposed. The structure of a power controller is the practical proportional-integral (PI). The PI parameters and coil size are tuned by a particle swarm optimization. The proposed method is able to effectively alleviate power fluctuations. It is well known that the superconducting coil is the vital part of a superconducting magnetic energy storage (SMES) unit. This paper deals with the power controller design of a SMES unit with an optimal coil size for stabilization of an isolated microgrid. The study microgrid consists of renewable energy sources with intermittent power outputs i.e., wind and photovoltaic. Since power generations from such renewable sources are unpredictable and variable, these result in power fluctuations in a microgrid. To stabilize power fluctuations, a SMES unit with a fast control of active and reactive power can be applied. The structure of a power controller is the practical proportional-integral (PI). Based on the minimization of the variance of power fluctuations from renewable sources as well as the initial stored energy of SMES, the optimal PI parameters and coil size are automatically and simultaneously tuned by a particle swarm optimization. Simulation studies show that the proposed SMES controller with an optimal coil size is able to effectively alleviate power fluctuations under various power patterns from intermittent renewable sources.
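
As a hedged sketch of the tuning loop described above (simultaneous particle swarm search over PI gains and coil size against a fluctuation-variance objective), the following uses a synthetic fluctuation trace and a one-bus toy model; the clamping model and every number are illustrative assumptions, not the paper's microgrid formulation.

```python
import math
import random

# Assumed renewable power fluctuation trace (per-unit), not real data.
FLUCT = [math.sin(0.3 * t) + 0.4 * math.sin(1.7 * t) for t in range(200)]

def objective(params):
    # Variance of net power with a PI-controlled SMES whose output is
    # limited by coil size, plus a coil-size cost penalty.
    kp, ki, coil = params
    integ, cost = 0.0, 0.0
    for f in FLUCT:
        err = -f                      # drive net fluctuation to zero
        integ += err
        u = max(-coil, min(coil, kp * err + ki * integ))
        net = f + u
        cost += net * net
    return cost / len(FLUCT) + 0.01 * coil

def pso(obj, bounds, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=7):
    random.seed(seed)
    dim = len(bounds)
    xs = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    pf = [obj(x) for x in xs]
    gi = min(range(n), key=lambda i: pf[i])
    gbest, gf = pbest[gi][:], pf[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                lo, hi = bounds[d]
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - xs[i][d])
                            + c2 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo), hi)
            f = obj(xs[i])
            if f < pf[i]:
                pbest[i], pf[i] = xs[i][:], f
                if f < gf:
                    gbest, gf = xs[i][:], f
    return gbest, gf

# Tune Kp, Ki and coil size together, as the abstract describes.
(kp, ki, coil), best_f = pso(objective, [(0.0, 2.0), (0.0, 0.5), (0.1, 2.0)])
```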

  18. Stabilization of microgrid with intermittent renewable energy sources by SMES with optimal coil size

    Energy Technology Data Exchange (ETDEWEB)

    Saejia, M., E-mail: samongkol@gmail.com [School of Electrical Engineering, Faculty of Engineering, King Mongkut' s Institute of Technology Ladkrabang, Bangkok 10520 (Thailand); Ngamroo, I. [School of Electrical Engineering, Faculty of Engineering, King Mongkut' s Institute of Technology Ladkrabang, Bangkok 10520 (Thailand)

    2011-11-15

    A controller design of a superconducting magnetic energy storage unit is proposed. The structure of a power controller is the practical proportional-integral (PI). The PI parameters and coil size are tuned by a particle swarm optimization. The proposed method is able to effectively alleviate power fluctuations. It is well known that the superconducting coil is the vital part of a superconducting magnetic energy storage (SMES) unit. This paper deals with the power controller design of a SMES unit with an optimal coil size for stabilization of an isolated microgrid. The study microgrid consists of renewable energy sources with intermittent power outputs i.e., wind and photovoltaic. Since power generations from such renewable sources are unpredictable and variable, these result in power fluctuations in a microgrid. To stabilize power fluctuations, a SMES unit with a fast control of active and reactive power can be applied. The structure of a power controller is the practical proportional-integral (PI). Based on the minimization of the variance of power fluctuations from renewable sources as well as the initial stored energy of SMES, the optimal PI parameters and coil size are automatically and simultaneously tuned by a particle swarm optimization. Simulation studies show that the proposed SMES controller with an optimal coil size is able to effectively alleviate power fluctuations under various power patterns from intermittent renewable sources.

  19. Optimization of detector pixel size for stent visualization in x-ray fluoroscopy

    International Nuclear Information System (INIS)

    Jiang Yuhao; Wilson, David L.

    2006-01-01

    Pixel size is of great interest in the flat-panel detector design because of its potential impact on image quality. In the particular case of angiographic x-ray fluoroscopy, small pixels are required in order to adequately visualize interventional devices such as guidewires and stents which have wire diameters as small as 200 and 50 μm, respectively. We used quantitative experimental and modeling techniques to investigate the optimal pixel size for imaging stents. Image quality was evaluated by the ability of subjects to perform two tasks: detect the presence of a stent and discriminate a partially deployed stent from a fully deployed one in synthetic images. With measurements at 50, 100, 200, and 300 μm, the 100 μm pixel size gave the maximum contrast sensitivity for the detection experiment with the idealized direct detector. For an idealized indirect detector with a scintillating layer, the optimum was obtained at a 200 μm pixel size. A channelized human observer model predicted a peak at 150 and 170 μm, for the idealized direct and indirect detectors, respectively. With regard to the stent deployment task for both detector types, smaller pixel sizes are favored and there is a steep drop in performance with larger pixels. In general, with increasing exposure, the model and measurements give enhanced contrast sensitivities and a smaller optimal pixel size. The effects of electronic noise and fill factor were investigated using the model. We believe that the experimental results and human observer model predictions can help guide flat-panel detector design. In addition, the human observer model should extend to similar images and be applicable to future model and actual flat-panel implementations

  20. GTZ: a fast compression and cloud transmission tool optimized for FASTQ files.

    Science.gov (United States)

    Xing, Yuting; Li, Gen; Wang, Zhenguo; Feng, Bolun; Song, Zhuo; Wu, Chengkun

    2017-12-28

    The dramatic development of DNA sequencing technology is generating real big data, craving for more storage and bandwidth. To speed up data sharing and bring data to computing resources faster and cheaper, it is necessary to develop a compression tool that can support efficient compression and transmission of sequencing data onto cloud storage. This paper presents GTZ, a compression and transmission tool, optimized for FASTQ files. As a reference-free lossless FASTQ compressor, GTZ treats different lines of FASTQ separately, utilizes adaptive context modelling to estimate their characteristic probabilities, and compresses data blocks with arithmetic coding. GTZ can also be used to compress multiple files or directories at once. Furthermore, as a tool to be used in the cloud computing era, it is capable of saving compressed data locally or transmitting data directly into the cloud by choice. We evaluated the performance of GTZ on diverse FASTQ benchmarks. Results show that in most cases, it outperforms many other tools in terms of the compression ratio, speed and stability. GTZ is a tool that enables efficient lossless FASTQ data compression and simultaneous data transmission onto the cloud. It emerges as a useful tool for NGS data storage and transmission in the cloud environment. GTZ is freely available online at: https://github.com/Genetalks/gtz .
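
GTZ's core idea of treating the four FASTQ line types as separate homogeneous streams can be illustrated with a small sketch; zlib stands in here for GTZ's adaptive context modelling and arithmetic coding, so the resulting sizes say nothing about GTZ itself, and the reads below are made up.

```python
import zlib

# A tiny synthetic FASTQ fragment (names and reads are made up).
fastq = b"".join(
    b"@read%d\nACGTACGTGGCA\n+\nIIIHHGGFFEED\n" % i for i in range(100)
)

def split_streams(data):
    # FASTQ records are 4 lines: id, sequence, separator, qualities.
    lines = data.split(b"\n")
    streams = ([], [], [], [])
    for i in range(0, len(lines) - 3, 4):
        for j in range(4):
            streams[j].append(lines[i + j])
    return [b"\n".join(s) for s in streams]

ids, seqs, seps, quals = split_streams(fastq)
# Each stream has its own alphabet and statistics, so a model fitted
# per stream compresses better than one model over the mixed file.
sizes = [len(zlib.compress(s, 9)) for s in (ids, seqs, seps, quals)]
```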

  1. Optimizing Greenhouse Rice Production: What Is the Best Pot Size?

    OpenAIRE

    Eddy, Robert; Acosta, Kevin; Liu, Yisi; Russell, Michael

    2016-01-01

    This publication describes our studies to determine the best pot size to optimize greenhouse rice production. We recommend a 9-cm (4-inch) diameter square pot. Pots as small as 7 cm in diameter yielded seed. This version is updated to include observations of larger pots with multiple plants. Photos of the plants growing under differing pot sizes are provided. This document is one entry in a series of questions and answers originally posted to the Purdue University Department of Horticulture & L...

  2. The Sizing and Optimization Language, (SOL): Computer language for design problems

    Science.gov (United States)

    Lucas, Stephen H.; Scotti, Stephen J.

    1988-01-01

    The Sizing and Optimization Language, (SOL), a new high level, special purpose computer language was developed to expedite application of numerical optimization to design problems and to make the process less error prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to model and optimize a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.

  3. Optimizing battery sizes of plug-in hybrid and extended range electric vehicles for different user types

    International Nuclear Information System (INIS)

    Redelbach, Martin; Özdemir, Enver Doruk; Friedrich, Horst E.

    2014-01-01

    There are ambitious greenhouse gas emission (GHG) targets for the manufacturers of light duty vehicles. To reduce the GHG emissions, plug-in hybrid electric vehicle (PHEV) and extended range electric vehicle (EREV) are promising powertrain technologies. However, the battery is still a very critical component due to the high production cost and heavy weight. This paper introduces a holistic approach for the optimization of the battery size of PHEVs and EREVs under German market conditions. The assessment focuses on the heterogeneity across drivers, by analyzing the impact of different driving profiles on the optimal battery setup from total cost of ownership (TCO) perspective. The results show that the battery size has a significant effect on the TCO. For an average German driver (15,000 km/a), battery capacities of 4 kWh (PHEV) and 6 kWh (EREV) would be cost optimal by 2020. However, these values vary strongly with the driving profile of the user. Moreover, the optimal battery size is also affected by external factors, e.g. electricity and fuel prices or battery production cost. Therefore, car manufacturers should develop a modular design for their batteries, which allows adapting the storage capacity to meet the individual customer requirements instead of “one size fits all”. - Highlights: • Optimization of the battery size of PHEVs and EREVs under German market conditions. • Focus on heterogeneity across drivers (e.g. mileage, trip distribution, speed). • Optimal battery size strongly depends on the driving profile and energy prices. • OEMs require a modular design for their batteries to meet individual requirements
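
The TCO trade-off described can be illustrated with a toy model: a larger battery costs more up front but shifts more kilometres from fuel to cheaper electricity, and the break-even point depends on the driving profile. All prices and the km-per-kWh figure below are round-number assumptions, not the paper's German market data.

```python
# All numbers are round-figure assumptions for illustration only.
BATTERY_EUR_PER_KWH = 300.0
FUEL_EUR_PER_KM = 0.09
ELEC_EUR_PER_KM = 0.04
KM_PER_KWH = 5.0     # electric range per kWh of battery capacity

def tco(capacity_kwh, daily_trips_km, years=8):
    # Total cost of ownership: battery capital plus energy cost over life.
    battery = BATTERY_EUR_PER_KWH * capacity_kwh
    energy = 0.0
    for km in daily_trips_km:
        e_km = min(km, capacity_kwh * KM_PER_KWH)   # electric kilometres first
        energy += e_km * ELEC_EUR_PER_KM + (km - e_km) * FUEL_EUR_PER_KM
    return battery + energy * 365 * years / len(daily_trips_km)

def optimal_capacity(daily_trips_km, candidates=range(2, 21)):
    # Battery size (kWh) minimising TCO for one driver's trip profile.
    return min(candidates, key=lambda c: tco(c, daily_trips_km))
```

Under these assumptions a commuter driving 40 km every day lands at a different optimum than a city driver covering 10 km, which is exactly the driver heterogeneity the abstract emphasises.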

  4. Optimal system sizing in grid-connected photovoltaic applications

    Science.gov (United States)

    Simoens, H. M.; Baert, D. H.; de Mey, G.

    A cost/benefit analysis for optimizing the combination of photovoltaic (PV) panels, batteries and an inverter for grid-interconnected systems at a 500 W/day Belgian residence is presented. It is assumed that some power purchases from the grid will always be necessary, and that excess PV power can be fed into the grid. The ratio of cost to performance is minimized for economic optimization. Shortages and excesses are calculated for PV panels of 0.5-10 kWp output, with consideration given to the advantages of a battery back-up. The minimal economic value is found to increase with the magnitude of PV output, and an inverter should never be rated at more than half the array maximum output. A maximum panel size for the Belgian residence is projected to be 6 kWp.

  5. Tapping insertional torque allows prediction for better pedicle screw fixation and optimal screw size selection.

    Science.gov (United States)

    Helgeson, Melvin D; Kang, Daniel G; Lehman, Ronald A; Dmitriev, Anton E; Luhmann, Scott J

    2013-08-01

    There is currently no reliable technique for intraoperative assessment of pedicle screw fixation strength and optimal screw size. Several studies have evaluated pedicle screw insertional torque (IT) and its direct correlation with pullout strength. However, there is limited clinical application with pedicle screw IT as it must be measured during screw placement and rarely causes the spine surgeon to change screw size. To date, no study has evaluated tapping IT, which precedes screw insertion, and its ability to predict pedicle screw pullout strength. The objective of this study was to investigate tapping IT and its ability to predict pedicle screw pullout strength and optimal screw size. In vitro human cadaveric biomechanical analysis. Twenty fresh-frozen human cadaveric thoracic vertebral levels were prepared and dual-energy radiographic absorptiometry scanned for bone mineral density (BMD). All specimens were osteoporotic with a mean BMD of 0.60 ± 0.07 g/cm(2). Five specimens (n=10) were used to perform a pilot study, as there were no previously established values for optimal tapping IT. Each pedicle during the pilot study was measured using a digital caliper as well as computed tomography measurements, and the optimal screw size was determined to be equal to or the first size smaller than the pedicle diameter. The optimal tap size was then selected as the tap diameter 1 mm smaller than the optimal screw size. During optimal tap size insertion, all peak tapping IT values were found to be between 2 in-lbs and 3 in-lbs. Therefore, the threshold tapping IT value for optimal pedicle screw and tap size was determined to be 2.5 in-lbs, and a comparison tapping IT value of 1.5 in-lbs was selected. Next, 15 test specimens (n=30) were measured with digital calipers, probed, tapped, and instrumented using a paired comparison between the two threshold tapping IT values (Group 1: 1.5 in-lbs; Group 2: 2.5 in-lbs), randomly assigned to the left or right pedicle on each

  6. Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method.

    Science.gov (United States)

    Huh, Kyung-Hoe; Baik, Jee-Seon; Yi, Won-Jin; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo

    2011-06-01

    This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm.
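
The tile counting method itself is compact enough to sketch. The routine below counts tiles containing foreground pixels at several tile sizes on a binary image and fits the slope of log N(s) against log(1/s); the study applies this to radiographs with tile sizes in millimetres, whereas the sanity checks here use synthetic patterns of known dimension.

```python
import math

def tile_count_dimension(image, tile_sizes):
    # Count occupied tiles at each tile size s, then estimate the
    # fractal dimension as the slope of log N(s) versus log(1/s).
    h, w = len(image), len(image[0])
    pts = []
    for s in tile_sizes:
        n_tiles = sum(
            1
            for ty in range(0, h, s)
            for tx in range(0, w, s)
            if any(image[y][x]
                   for y in range(ty, min(ty + s, h))
                   for x in range(tx, min(tx + s, w)))
        )
        pts.append((math.log(1.0 / s), math.log(n_tiles)))
    # Least-squares slope through the (log 1/s, log N) points.
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

# Sanity checks on known patterns: a filled plane has dimension 2,
# a straight line dimension 1.
filled = [[1] * 64 for _ in range(64)]
line = [[1 if y == 32 else 0 for _ in range(64)] for y in range(64)]
d_filled = tile_count_dimension(filled, [1, 2, 4, 8])
d_line = tile_count_dimension(line, [1, 2, 4, 8])
```

Trabecular bone images fall between these extremes, and the fitted slope is only stable over a limited range of tile sizes, which is the range the study set out to determine.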

  7. A review on recent size optimization methodologies for standalone solar and wind hybrid renewable energy system

    International Nuclear Information System (INIS)

    Al-falahi, Monaaf D.A.; Jayasinghe, S.D.G.; Enshaei, H.

    2017-01-01

    Highlights: • Possible combinations and configurations for standalone PV-WT HES were discussed. • Most recently used assessment parameters for standalone PV-WT HES were explained. • Optimization algorithms and software tools were comprehensively reviewed. • The recent trend of using hybrid algorithms over single algorithms was discussed. • Optimization algorithms for sizing standalone PV-WT HES were critically compared. - Abstract: Electricity demand in remote and island areas are generally supplied by diesel or other fossil fuel based generation systems. Nevertheless, due to the increasing cost and harmful emissions of fossil fuels there is a growing trend to use standalone hybrid renewable energy systems (HRESs). Due to the complementary characteristics, matured technologies and availability in most areas, hybrid systems with solar and wind energy have become the popular choice in such applications. However, the intermittency and high net present cost are the challenges associated with solar and wind energy systems. In this context, optimal sizing is a key factor to attain a reliable supply at a low cost through these standalone systems. Therefore, there has been a growing interest to develop algorithms for size optimization in standalone HRESs. The optimal sizing methodologies reported so far can be broadly categorized as classical algorithms, modern techniques and software tools. Modern techniques, based on single artificial intelligence (AI) algorithms, are becoming more popular than classical algorithms owing to their capabilities in solving complex optimization problems. Moreover, in recent years, there has been a clear trend to use hybrid algorithms over single algorithms mainly due to their ability to provide more promising optimization results. This paper aims to present a comprehensive review on recent developments in size optimization methodologies, as well as a critical comparison of single algorithms, hybrid algorithms, and software tools

  8. Index files for Belle II - very small skim containers

    Science.gov (United States)

    Sevior, Martin; Bloomfield, Tristan; Kuhr, Thomas; Ueda, I.; Miyake, H.; Hara, T.

    2017-10-01

    The Belle II experiment[1] employs the ROOT file format[2] for recording data and is investigating the use of “index-files” to reduce the size of data skims. These files contain pointers to the location of interesting events within the total Belle II data set and reduce the size of data skims by 2 orders of magnitude. We implement this scheme on the Belle II grid by recording the parent file metadata and the event location within the parent file. While the scheme works, it is substantially slower than a normal sequential read of standard skim files using default ROOT file parameters. We investigate the performance of the scheme by adjusting the “splitLevel” and “autoflushsize” parameters of the ROOT files in the parent data files.
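
The idea of an index file, event pointers instead of event data, can be sketched in a few lines; the 8-byte (parent file id, event number) record below is purely an illustration, not the Belle II metadata schema.

```python
import os
import struct
import tempfile

RECORD = struct.Struct("<II")  # (parent file id, event number)

def write_index(path, pointers):
    # Persist one fixed-size record per selected event.
    with open(path, "wb") as f:
        for file_id, event in pointers:
            f.write(RECORD.pack(file_id, event))

def read_index(path):
    # Recover the (parent file id, event number) pointers.
    with open(path, "rb") as f:
        data = f.read()
    return [RECORD.unpack_from(data, off)
            for off in range(0, len(data), RECORD.size)]

# Each selected event costs 8 bytes here, versus kilobytes per event in
# the full data files: that gap is the order-of-magnitude skim reduction.
pointers = [(1, 5), (1, 9), (2, 0), (2, 7)]
path = os.path.join(tempfile.mkdtemp(), "skim.idx")
write_index(path, pointers)
recovered = read_index(path)
```

The performance caveat in the abstract also follows from this layout: reading via pointers forces random access into the parent files instead of one sequential scan.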

  9. Optimal placement and sizing of wind / solar based DG sources in distribution system

    Science.gov (United States)

    Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng

    2017-06-01

    Proper placement and sizing of Distributed Generation (DG) in a distribution system can obtain the maximum potential benefits. This paper proposes a quantum particle swarm optimization (QPSO) based wind turbine generation unit (WTGU) and photovoltaic (PV) array placement and sizing approach for real power loss reduction and voltage stability improvement of distribution systems. Performance models of wind and solar generation systems are described and classified into PQ/PQ(V)/PI type models for power flow. Because WTGU and PV based DG in a distribution system is geographically restricted, the candidate area and the DG capacity limit of each bus in that area must be set before optimization; an area optimization method is therefore proposed. The method has been tested on the IEEE 33-bus radial distribution system to demonstrate the performance and effectiveness of the proposed method.

  10. Search for the optimal size of printed circuit boards for mechanical structures for electronic equipment

    Directory of Open Access Journals (Sweden)

    Yefimenko A. A.

    2014-12-01

    Full Text Available The authors present a method, an algorithm and a program, designed to determine the optimal size of printed circuit boards (PCBs) for mechanical structures and different kinds of electronic equipment. The PCB filling factor is taken as an optimization criterion. The method allows one to quickly determine the dependence of the filling factor on the size of the PCB for various components.

  11. Intelligent sizing of a series hybrid electric power-train system based on Chaos-enhanced accelerated particle swarm optimization

    International Nuclear Information System (INIS)

    Zhou, Quan; Zhang, Wei; Cash, Scott; Olatunbosun, Oluremi; Xu, Hongming; Lu, Guoxiang

    2017-01-01

    Highlights: • A novel algorithm for hybrid electric powertrain intelligent sizing is introduced and applied. • The proposed CAPSO algorithm is capable of finding the real optimal result with much higher reputation. • Logistic mapping is the most effective strategy to build CAPSO. • The CAPSO gave more reliable results and increased the efficiency by 1.71%. - Abstract: This paper firstly proposed a novel HEV sizing method using the Chaos-enhanced Accelerated Particle Swarm Optimization (CAPSO) algorithm and secondly provided a demonstration on sizing a series hybrid electric powertrain with investigations of chaotic mapping strategies to achieve the global optimization. In this paper, the intelligent sizing of a series hybrid electric powertrain is formulated as an integer multi-objective optimization issue by modelling the powertrain system. The intelligent sizing mechanism based on APSO is then introduced, and 4 types of the most effective chaotic mapping strategy are investigated to upgrade the standard APSO into CAPSO algorithms for intelligent sizing. The evaluation of the intelligent sizing systems based on standard APSO and CAPSOs are then performed. The Monte Carlo analysis and reputation evaluation indicate that the CAPSO outperforms the standard APSO for finding the real optimal sizing result with much higher reputation, and CAPSO with logistic mapping strategy is the most effective algorithm for HEV powertrain components intelligent sizing. In addition, this paper also performs the sensitivity analysis and Pareto analysis to help engineers customize the intelligent sizing system.
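
The chaotic-mapping idea can be sketched with a logistic map feeding an accelerated-PSO-style update. The toy below minimises a sphere function; the update rule, coefficients and test function are generic illustrations of the technique, not the paper's powertrain sizing model.

```python
def logistic_stream(x=0.7, r=4.0):
    # Chaotic logistic map; its values wander over (0, 1) and replace
    # the uniform random draws of standard APSO.
    while True:
        x = r * x * (1.0 - x)
        yield x

def sphere(p):
    return sum(v * v for v in p)

def capso(obj, dim=2, n=20, iters=150, lo=-5.0, hi=5.0, alpha0=0.5, beta=0.4):
    chaos = logistic_stream()
    xs = [[lo + (hi - lo) * next(chaos) for _ in range(dim)] for _ in range(n)]
    gbest = min(xs, key=obj)
    for t in range(iters):
        alpha = alpha0 * 0.95 ** t            # decaying exploration step
        for i in range(n):
            # APSO-style move: contract toward the global best, perturb
            # with a chaotic draw instead of a uniform random number.
            xs[i] = [
                min(max((1 - beta) * x + beta * g + alpha * (next(chaos) - 0.5), lo), hi)
                for x, g in zip(xs[i], gbest)
            ]
        cand = min(xs, key=obj)
        if obj(cand) < obj(gbest):
            gbest = cand
    return gbest

best = capso(sphere)
```

Because the global best is only ever replaced by a better candidate, the search never regresses; the chaotic draws simply change how the space is explored.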

  12. Optimized bolt tightening strategies for gasketed flanged pipe joints of different sizes

    International Nuclear Information System (INIS)

    Abid, Muhammad; Khan, Ayesha; Nash, David Hugh; Hussain, Masroor; Wajid, Hafiz Abdul

    2016-01-01

    Achieving a proper preload in the bolts of a gasketed bolted flanged pipe joint during joint assembly is considered important for its optimized performance. This paper presents results of detailed non-linear finite element analysis of an optimized bolt tightening strategy of different joint sizes for achieving proper preload close to the target stress values. Industrial guidelines are considered for applying recommended target stress values with TCM (torque control method) and SCM (stretch control method) using a customized optimization algorithm. Different joint components performance is observed and discussed in detail.

  13. Impact of Battery’s Model Accuracy on Size Optimization Process of a Standalone Photovoltaic System

    Directory of Open Access Journals (Sweden)

    Ibrahim Anwar Ibrahim

    2016-09-01

    Full Text Available This paper presents a comparative study of two proposed size optimization methods based on two battery models. Simple and complex battery models are utilized to optimally size a standalone photovoltaic system. Hourly meteorological data are used in this research for a specific site. Results show that by using the complex model of the battery, the cost of the system is reduced by 31%. In addition, by using the complex battery model, the sizes of the PV array and the battery are reduced by 5.6% and 30%, respectively, compared to the case based on the simple battery model. This shows the importance of utilizing accurate battery models in sizing standalone photovoltaic systems.

  14. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  15. Optimal sizing study of hybrid wind/PV/diesel power generation unit

    Energy Technology Data Exchange (ETDEWEB)

    Belfkira, Rachid; Zhang, Lu; Barakat, Georges [Groupe de Recherche en Electrotechnique et Automatique du Havre, University of Le Havre, 25 rue Philippe Lebon, BP 1123, 76063 Le Havre (France)

    2011-01-15

    In this paper, a methodology for sizing optimization of a stand-alone hybrid wind/PV/diesel energy system is presented. This approach makes use of a deterministic algorithm to suggest, among a list of commercially available system devices, the optimal number and type of units ensuring that the total cost of the system is minimized while guaranteeing the availability of the energy. Six months of wind speed, solar radiation and ambient temperature data, recorded for every hour of the day, were used. The mathematical modeling of the main elements of the hybrid wind/PV/diesel system is presented, showing the most relevant sizing variables. A deterministic algorithm is used to minimize the total cost of the system while guaranteeing the satisfaction of the load demand. A comparison between the total cost of the hybrid wind/PV/diesel energy system with batteries and the hybrid wind/PV/diesel energy system without batteries is presented. The results demonstrate the practical utility of the sizing methodology and show the influence of the battery storage on the total cost of the hybrid system. (author)
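
The deterministic, catalogue-driven search described above can be sketched as an exhaustive enumeration over unit counts, keeping only combinations that meet the load every hour and choosing the cheapest. The unit catalogue, hourly capacity-factor profiles, fuel price and lifetime below are all assumptions for illustration, not the paper's data.

```python
# Assumed unit catalogue: (rating in kW, capital cost in EUR)
WT = (10.0, 20000.0)   # wind turbine
PV = (5.0, 8000.0)     # PV array block
DG = (8.0, 6000.0)     # diesel generator

# Assumed hourly capacity factors for one typical day, and the load (kW)
WIND_CF  = [0.2, 0.3, 0.5, 0.6, 0.4, 0.3, 0.2, 0.1] * 3
SOLAR_CF = [0.0] * 6 + [0.2, 0.5, 0.8, 0.9, 0.8, 0.5, 0.2, 0.0] + [0.0] * 10
LOAD     = [6.0] * 24
FUEL_EUR_PER_KWH, YEARS = 0.3, 10

def renewables(nw, npv, h):
    return nw * WT[0] * WIND_CF[h] + npv * PV[0] * SOLAR_CF[h]

def feasible(nw, npv, ndg):
    # The load must be met every hour, with diesel as dispatchable backup.
    return all(renewables(nw, npv, h) + ndg * DG[0] >= LOAD[h] for h in range(24))

def total_cost(nw, npv, ndg):
    # Capital cost plus lifetime fuel cost for the diesel shortfall.
    capital = nw * WT[1] + npv * PV[1] + ndg * DG[1]
    diesel_kwh = sum(
        min(ndg * DG[0], max(0.0, LOAD[h] - renewables(nw, npv, h)))
        for h in range(24)
    )
    return capital + diesel_kwh * 365 * YEARS * FUEL_EUR_PER_KWH

def optimal_mix(max_units=4):
    # Deterministic exhaustive search over all unit-count combinations.
    combos = (
        (nw, npv, ndg)
        for nw in range(max_units + 1)
        for npv in range(max_units + 1)
        for ndg in range(max_units + 1)
    )
    return min((c for c in combos if feasible(*c)), key=lambda c: total_cost(*c))
```

Because the search is exhaustive over a discrete catalogue, the returned mix is the exact optimum of this toy model rather than a heuristic result.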

  16. Optimal Sizing and Placement of Battery Energy Storage in Distribution System Based on Solar Size for Voltage Regulation

    Energy Technology Data Exchange (ETDEWEB)

    Nazaripouya, Hamidreza [Univ. of California, Los Angeles, CA (United States); Wang, Yubo [Univ. of California, Los Angeles, CA (United States); Chu, Peter [Univ. of California, Los Angeles, CA (United States); Pota, Hemanshu R. [Univ. of California, Los Angeles, CA (United States); Gadh, Rajit [Univ. of California, Los Angeles, CA (United States)

    2016-07-26

    This paper proposes a new strategy to achieve voltage regulation in distributed power systems in the presence of solar energy sources and battery storage systems. The goal is to find the minimum size of battery storage and its corresponding location in the network based on the size and place of the integrated solar generation. The proposed method formulates the problem by employing the network impedance matrix to obtain an analytical solution instead of using a recursive algorithm such as power flow. The required modifications for modeling the slack and PV buses (generator buses) are utilized to increase the accuracy of the approach. The use of reactive power control alone to regulate the voltage is not always an optimal solution, since in distribution systems R/X is large. In this paper the minimum size and the best place of battery storage are determined by optimizing the amount of both active and reactive power exchanged by the battery storage and its grid-tie inverter (GTI) based on the network topology and R/X ratios in the distribution system. Simulation results for the IEEE 14-bus system verify the effectiveness of the proposed approach.
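    The reasoning about R/X ratios rests on a standard feeder linearization, ΔV ≈ (R·ΔP + X·ΔQ)/V: when resistance dominates reactance, exchanging active power moves the bus voltage far more than reactive power does, which is why a battery that absorbs real power can out-regulate a reactive-only compensator. A minimal numeric sketch (all impedances, powers, and the 400 V base are hypothetical):

```python
def voltage_rise(p_kw, q_kvar, r_ohm, x_ohm, v_kv=0.4):
    """Approximate voltage change (in volts) at a bus caused by injected
    active/reactive power, using the common distribution-feeder
    linearization dV ~= (R*P + X*Q) / V."""
    v = v_kv * 1e3
    return (r_ohm * p_kw * 1e3 + x_ohm * q_kvar * 1e3) / v

# High-R/X feeder: 50 kW of PV injection versus 50 kvar of absorption.
dv_p = voltage_rise(p_kw=50, q_kvar=0,   r_ohm=0.5, x_ohm=0.1)  # rise from P
dv_q = voltage_rise(p_kw=0,  q_kvar=-50, r_ohm=0.5, x_ohm=0.1)  # drop from Q
```

    Here the active-power term shifts the voltage five times as much as the reactive term, mirroring the paper's motivation for sizing the battery's active-power exchange.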

  17. Organometallic approach to polymer-protected antibacterial silver nanoparticles: optimal nanoparticle size-selection for bacteria interaction

    Energy Technology Data Exchange (ETDEWEB)

    Crespo, Julian; Garcia-Barrasa, Jorge; Lopez-de-Luzuriaga, Jose M.; Monge, Miguel, E-mail: miguel.monge@unirioja.es; Olmos, M. Elena [Universidad de La Rioja, Centro de Investigacion en Sintesis Quimica (CISQ), Departamento de Quimica (Spain); Saenz, Yolanda; Torres, Carmen [Centro de Investigacion Biomedica de La Rioja, Area de Microbiologia Molecular (Spain)

    2012-12-15

    The optimal size-specific affinity of silver nanoparticles (Ag NPs) towards E. coli bacteria has been studied. For this purpose, Ag NPs coated with polyvinylpyrrolidone (PVP) and cellulose acetate (CA) have been prepared using an organometallic approach. The complex NBu₄[Ag(C₆F₅)₂] has been treated with AgClO₄ in a 1:1 molar ratio, giving rise to the nanoparticle precursor [Ag(C₆F₅)] in solution. Addition of an excess of PVP (1) or CA (2) and 5 h of reflux in tetrahydrofuran (THF) at 66 °C leads to Ag NPs of small size (4.8 ± 3.0 nm for PVP-Ag NPs and 3.0 ± 1.2 nm for CA-Ag NPs) that coexist in both cases with larger nanoparticles between 7 and 25 nm. Both nanomaterials display a high antibacterial effectiveness against E. coli. The TEM analysis of the nanoparticle-bacterial cell membrane interaction shows an optimal size-specific affinity for PVP-Ag NPs of 5.4 ± 0.7 nm in the presence of larger size silver nanoparticles. Graphical Abstract: An organometallic approach permits the synthesis of small size silver nanoparticles (ca. 5 nm) as a main population in the presence of larger size nanoparticles. Optimal silver nanoparticle size-selection (5.4 nm) for the interaction with the bacterial membrane is achieved.

  18. Optimal Sizing of a Stand-Alone Hybrid Power System Based on Battery/Hydrogen with an Improved Ant Colony Optimization

    Directory of Open Access Journals (Sweden)

    Weiqiang Dong

    2016-09-01

    A distributed power system with renewable energy sources has become very popular in recent years due to the rapid depletion of conventional sources of energy. Reasonable sizing for such power systems can improve power supply reliability and reduce the annual system cost. The goal of this work is to optimize the size of a stand-alone hybrid photovoltaic (PV)/wind turbine (WT)/battery (B)/hydrogen system (a hybrid system based on battery and hydrogen, HS-BH) for reliable and economic supply. Two objectives, the minimum annual system cost and the maximum system reliability, described by the loss of power supply probability (LPSP), are addressed for sizing HS-BH from a comprehensive perspective, considering the basic demand of the load, the profit from hydrogen produced by HS-BH, and an effective energy storage strategy. An improved ant colony optimization (ACO) algorithm is presented to solve the sizing problem of HS-BH. Finally, a simulation experiment demonstrates the developed results, with comparisons that emphasize the advantages of HS-BH, using data from an island of Zhejiang, China.
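    LPSP admits several formal definitions; one common one is the ratio of unserved energy to total demand over the evaluation horizon. A minimal sketch of that definition (the choice of definition and the toy numbers are assumptions here, not taken from the paper):

```python
def lpsp(load, generation):
    """Loss of power supply probability over a horizon: the fraction of
    total demand that the system fails to serve (one common definition;
    other papers count deficit hours instead of deficit energy)."""
    deficit = sum(max(l - g, 0.0) for l, g in zip(load, generation))
    return deficit / sum(load)

# Toy horizon: 4 kWh demanded every hour against variable generation.
shortfall = lpsp([4, 4, 4, 4], [5, 3, 4, 6])  # 1 kWh unserved of 16
```

    A sizing loop like the one in the paper would evaluate this reliability measure for each candidate configuration and trade it off against annual cost.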

  19. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  20. Optimal Placement and Sizing of PV-STATCOM in Power Systems Using Empirical Data and Adaptive Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Reza Sirjani

    2018-03-01

    Solar energy is a source of free, clean energy which avoids the destructive effects on the environment that have long been caused by conventional power generation. Solar energy technology rivals fossil fuels, and its development has accelerated recently. Photovoltaic (PV) solar farms can only produce active power during the day, while at night they are completely idle. At the same time, though, active power should be supported by reactive power. Reactive power compensation in power systems improves power quality and stability. The use of a PV solar farm inverter at night as a static synchronous compensator (PV-STATCOM) has recently been proposed, which can improve system performance and increase the utility of a PV solar farm. In this paper, a method for optimal PV-STATCOM placement and sizing is proposed using empirical data. Considering the objectives of power loss and cost minimization as well as voltage improvement, the two sub-problems of placement and sizing are solved by a power loss index and adaptive particle swarm optimization (APSO), respectively. Test results show that APSO not only performs better in finding optimal solutions but also converges faster compared with bee colony optimization (BCO) and the lightning search algorithm (LSA). Installation of a PV solar farm, STATCOM, and PV-STATCOM in a system are each evaluated in terms of efficiency and cost.

  1. Optimal sizing of a hybrid grid-connected photovoltaic and wind power system

    International Nuclear Information System (INIS)

    González, Arnau; Riba, Jordi-Roger; Rius, Antoni; Puig, Rita

    2015-01-01

    Highlights: • Hybrid renewable energy systems are efficient mechanisms to generate electrical power. • This work optimally sizes hybrid grid-connected photovoltaic–wind power systems. • It deals with hourly wind, solar irradiation and electricity demand data. • The system cost is minimized while matching the electricity supply with the demand. • A sensitivity analysis to detect the most critical design variables has been done. - Abstract: Hybrid renewable energy systems (HRES) have been widely identified as an efficient mechanism to generate electrical power based on renewable energy sources (RES). This kind of energy generation system is based on the combination of one or more RES, allowing the weaknesses of one to be complemented by the strengths of another and, therefore, reducing installation costs through an optimized installation. To do so, optimization methodologies are a popular mechanism because they allow attaining optimal solutions given a certain set of input parameters and variables. This work is focused on the optimal sizing of hybrid grid-connected photovoltaic–wind power systems from real hourly wind and solar irradiation data and electricity demand from a certain location. The proposed methodology is capable of finding the sizing that leads to a minimum life cycle cost of the system while matching the electricity supply with the local demand. In the present article, the methodology is tested by means of a case study in which the actual hourly electricity retail and market prices have been implemented to obtain realistic estimations of life cycle costs and benefits. A sensitivity analysis that allows detecting to which variables the system is more sensitive has also been performed. Results presented show that the model responds well to changes in the input parameters and variables while providing trustworthy sizing solutions. According to these results, a grid-connected HRES consisting of photovoltaic (PV) and wind power technologies would be

  2. A Metadata-Rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2009-01-07

    Despite continual improvements in the performance and reliability of large scale file systems, the management of file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, metadata, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS includes Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
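    The graph data model, files carrying attribute metadata plus named relationships as first-class edges, can be illustrated with a toy structure (this is an illustration of the idea only; QFS's actual Quasar query language is XPath-extended and is not reproduced here, and all names below are invented):

```python
from collections import defaultdict

class MetadataGraph:
    """Toy sketch of a metadata-rich file model: files carry attribute
    dicts, and named, directed relationships link file to file."""
    def __init__(self):
        self.attrs = {}                # file name -> {key: value}
        self.edges = defaultdict(set)  # (src, relation) -> {dst, ...}

    def add_file(self, name, **attrs):
        self.attrs[name] = attrs

    def relate(self, src, relation, dst):
        self.edges[(src, relation)].add(dst)

    def query(self, relation, **match):
        """Files reachable via `relation` from any file whose attributes
        contain all key=value pairs in `match`."""
        srcs = [f for f, a in self.attrs.items()
                if all(a.get(k) == v for k, v in match.items())]
        out = set()
        for s in srcs:
            out |= self.edges[(s, relation)]
        return out

g = MetadataGraph()
g.add_file("run1.raw", experiment="E7")
g.add_file("run1.summary", kind="derived")
g.relate("run1.raw", "derivedTo", "run1.summary")
```

    The point of making relationships first-class is visible even at this scale: "which files were derived from experiment E7's raw data?" becomes a single graph query rather than a join between a file tree and an external database.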

  3. Multi-objective energy management optimization and parameter sizing for proton exchange membrane hybrid fuel cell vehicles

    International Nuclear Information System (INIS)

    Hu, Zunyan; Li, Jianqiu; Xu, Liangfei; Song, Ziyou; Fang, Chuan; Ouyang, Minggao; Dou, Guowei; Kou, Gaihong

    2016-01-01

    Highlights: • Fuel economy, lithium battery size and powertrain system durability are incorporated in the optimization. • A multi-objective power allocation strategy taking battery size into consideration is proposed. • Influences of battery capacity and auxiliary power on strategy design are explored. • Battery capacity and fuel cell service life are optimized for the system life cycle cost. - Abstract: The powertrain system of a typical proton exchange membrane hybrid fuel cell vehicle contains a lithium battery package and a fuel cell stack. A multi-objective optimization for this powertrain system of a passenger car, taking account of fuel economy and system durability, is discussed in this paper. Based on an analysis of the optimum results obtained by dynamic programming, a soft-run strategy was proposed for real-time and multi-objective control algorithm design. The soft-run strategy was optimized by taking lithium battery size into consideration, and implemented using two real-time algorithms. When compared with the optimized dynamic programming results, the power demand-based control method proved more suitable for powertrain systems equipped with larger capacity batteries, while the state-of-charge based control method proved superior in other cases. On this basis, the life cycle cost was optimized by considering both lithium battery size and equivalent hydrogen consumption. The battery capacity selection proved more flexible when powertrain systems are equipped with larger capacity batteries. Finally, the algorithm was validated in a fuel cell city bus, achieving a good balance of fuel economy and system durability in a three-month demonstration operation.

  4. Topological and sizing optimization of reinforced ribs for a machining centre

    Science.gov (United States)

    Chen, T. Y.; Wang, C. B.

    2008-01-01

    The topology optimization technique is applied to improve rib designs of a machining centre. The ribs of the original design are eliminated and new ribs are generated by topology optimization in the same 3D design space containing the original ribs. Two-dimensional plate elements are used to replace the optimum rib topologies formed by 3D rectangular elements. After topology optimization, sizing optimization is used to determine the optimum thicknesses of the ribs. When forming the optimum design problem, multiple configurations of the structure are considered simultaneously. The objective is to minimize rib weight. Static constraints confine displacements of the cutting tool and the workpiece due to cutting forces and the heat generated by spindle bearings. The dynamic constraint requires the fundamental natural frequency of the structure to be greater than a given value in order to reduce dynamic deflection. Compared with the original design, the improvement resulting from this approach is significant.

  5. Testing the Forensic Interestingness of Image Files Based on Size and Type

    Science.gov (United States)

    2017-09-01

    When scanning a computer hard drive, many kinds of pictures are found. Digital images are not... [The remainder of this record is a fragmented extraction of the thesis front matter: a false-positive rate "down to 0.18% (Rowe, 2015)", the chapter heading "III. Image File Formats", and an abbreviation list: ...Interchange Format; JPEG, Joint Photographic Experts Group; LSH, Locality Sensitive Hashing; NSRL, National Software Reference Library; PDF, Portable Document...]

  6. Optimal sizing of plug-in fuel cell electric vehicles using models of vehicle performance and system cost

    International Nuclear Information System (INIS)

    Xu, Liangfei; Ouyang, Minggao; Li, Jianqiu; Yang, Fuyuan; Lu, Languang; Hua, Jianfeng

    2013-01-01

    Highlights: ► An analytical model for vehicle performance and power-train parameters. ► Quantitative relationships between vehicle performance and power-train parameters. ► Optimal sizing rules that help designing an optimal PEM fuel cell power-train. ► An on-road testing showing the performance of the proposed vehicle. -- Abstract: This paper presents an optimal sizing method for plug-in proton exchange membrane (PEM) fuel cell and lithium-ion battery (LIB) powered city buses. We propose a theoretical model describing the relationship between components’ parameters and vehicle performance. Analysis results show that within the working range of the electric motor, the maximal velocity and driving distance are influenced linearly by the parameters of the components, e.g. fuel cell efficiency, fuel cell output power, stored hydrogen mass, vehicle auxiliary power, battery capacity, and battery average resistance. Moreover, accelerating time is also linearly dependent on the abovementioned parameters, except for those of the battery. Next, we attempt to minimize fixed and operating costs by introducing an optimal sizing problem that uses as constraints the requirements on vehicle performance. By solving this problem, we attain several optimal sizing rules. Finally, we use these rules to design a plug-in PEM fuel cell city bus and present performance results obtained by on-road testing.
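    The linear dependence of performance measures on component parameters is what makes closed-form sizing rules possible. A toy rule in that spirit (apart from hydrogen's ~33.3 kWh/kg lower heating value, every coefficient below is a made-up placeholder, not a value from the paper):

```python
def min_hydrogen_mass(range_km, eta_fc, e_h2_kwh_per_kg=33.3,
                      consumption_kwh_per_km=1.2, battery_kwh=20.0, dod=0.8):
    """Toy linear sizing rule: the driving range is covered by usable
    battery energy plus fuel-cell electricity from stored hydrogen.
    Solve the linear relation for the hydrogen mass (kg) that meets a
    required range at a given fuel-cell efficiency eta_fc."""
    needed_kwh = range_km * consumption_kwh_per_km - battery_kwh * dod
    return max(needed_kwh, 0.0) / (eta_fc * e_h2_kwh_per_kg)

# Hypothetical bus: 300 km range requirement at 50% fuel-cell efficiency.
h2_kg = min_hydrogen_mass(300, 0.5)
```

    Because every term is linear, the same relation can be inverted for any one unknown (battery capacity, efficiency, range), which is the sense in which the paper's sizing rules follow directly from the analytical model.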

  7. Optimal design of disc-type magneto-rheological brake for mid-sized motorcycle: experimental evaluation

    Science.gov (United States)

    Sohn, Jung Woo; Jeon, Juncheol; Nguyen, Quoc Hung; Choi, Seung-Bok

    2015-08-01

    In this paper, a disc-type magneto-rheological (MR) brake is designed for a mid-sized motorcycle and its performance is experimentally evaluated. The proposed MR brake consists of an outer housing, a rotating disc immersed in MR fluid, and a copper wire coiled around a bobbin to generate a magnetic field. The structural configuration of the MR brake is first presented with consideration of the installation space for the conventional hydraulic brake of a mid-sized motorcycle. The design parameters of the proposed MR brake are optimized to satisfy design requirements such as the braking torque, total mass of the MR brake, and cruising temperature caused by the magnetic-field friction of the MR fluid. In the optimization procedure, the braking torque is calculated based on the Herschel-Bulkley rheological model, which predicts MR fluid behavior well at high shear rate. An optimization tool based on finite element analysis is used to obtain the optimized dimensions of the MR brake. After manufacturing the MR brake, mechanical performances regarding the response time, braking torque and cruising temperature are experimentally evaluated.

  8. Adding Data Management Services to Parallel File Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brandt, Scott [Univ. of California, Santa Cruz, CA (United States)

    2015-03-04

    The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in the (unstructured) file systems can only optimize to the extent that file system interfaces permit; the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file

  9. Long term developments in irradiated natural uranium processing costs. Optimal size and siting of plants

    International Nuclear Information System (INIS)

    Thiriet, L.

    1964-01-01

    The aim of this paper is to help solve the problem of the selection of optimal sizes and sites for spent nuclear fuel processing plants associated with power capacity programmes already installed. Firstly, the structure of capital and running costs of irradiated natural uranium processing plants is studied, as well as the influence of plant sizes on these costs and structures. Shipping costs from the production site to the plant must also be added to processing costs. An attempt to reach a minimum cost for the production of a country or a group of countries must therefore take into account both the size and the location of the plants. The foreseeable shipping costs and their structure (freight, insurance, container cost and depreciation) for spent natural uranium are indicated. Secondly, for various annual spent fuel reprocessing programmes, the optimal sizes and locations of the plants are determined. The sensitivity of the results to the basic assumptions relative to processing costs, shipping costs, the starting-up year of the plant programme and the length of the period considered is also tested. This rather complex combinatorial problem is solved through dynamic programming methods. It is shown that these methods can also be applied to the problem of selecting the optimal sizes and locations of processing plants for MTR type fuel elements, related to research reactor programmes, as well as to future plutonium element processing plants related to breeder reactors. Thirdly, the case where yearly extraction of the plutonium contained in the irradiated natural uranium is not compulsory is examined; some stockpiling of the fuel is then allowed in some years, entailing delayed processing. The load factor of such plants is thus greatly improved with respect to that of plants where the annual plutonium demand is strictly satisfied. By including spent natural uranium stockpiling costs, an optimal rhythm of introduction and optimal sizes for spent fuel

  10. Optimizing the passenger air bag of an adaptive restraint system for multiple size occupants.

    Science.gov (United States)

    Bai, Zhonghao; Jiang, Binhui; Zhu, Feng; Cao, Libo

    2014-01-01

    The development of the adaptive occupant restraint system (AORS) has led to an innovative way to optimize such systems for multiple size occupants. An AORS consists of multiple units such as adaptive air bags, seat belts, etc. During a collision, as a supplemental protective device, air bags provide restraint force and play a role in dissipating the crash energy of the occupants' head and thorax. This article presents an investigation into an adaptive passenger air bag (PAB). The purpose of this study is to develop a base shape of a PAB for different size occupants using an optimization method. Four typical base shapes of a PAB were designed based on geometric data on the passenger side. Then 4 PAB finite element (FE) models and a validated sled with different size dummy models were developed in MADYMO (TNO, Rijswijk, The Netherlands) to conduct the optimization to obtain the best baseline PAB that would be used in the AORS. The objective functions, i.e., the minimum total probability of injuries (∑Pcomb) for the 5th percentile female and the 50th and 95th percentile male dummies, were adopted to evaluate the optimal configurations. The injury probability (Pcomb) for each dummy was adopted from the U.S. New Car Assessment Program (US-NCAP). The parameters of the AORS were first optimized for different types of PAB base shapes in a frontal impact. Then, the contact time duration and force between the PAB and dummy head/chest were optimized by adjusting the parameters of the PAB, such as the number and position of tethers, to lower the Pcomb of the 95th percentile male dummy. According to the optimization results, the 4 typical PABs could provide effective protection to the 5th and 50th percentile dummies. However, due to the heavy and large torsos of the 95th percentile occupants, the current occupant restraint system does not demonstrate satisfactory protective function, particularly for the thorax.
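    A common way to combine per-body-region injury probabilities into a single Pcomb, used for example in US-NCAP-style rating formulas, is the complement-product rule under an independence assumption. A sketch (the region probabilities below are illustrative, and the underlying US-NCAP risk curves that produce each probability are not modeled):

```python
def combined_injury_probability(p_regions):
    """Combine per-body-region injury probabilities into one joint
    probability, assuming independence: P = 1 - prod(1 - p_i)."""
    prod = 1.0
    for p in p_regions:
        prod *= (1.0 - p)
    return 1.0 - prod

# Hypothetical head / chest / femur probabilities for one dummy.
p_comb = combined_injury_probability([0.10, 0.05, 0.02])
```

    An AORS optimizer would evaluate such a combined probability per dummy size and minimize their sum, which is exactly the structure of the ∑Pcomb objective described above.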

  11. Optimization of the size and shape of the set-in nozzle for a PWR reactor pressure vessel

    Energy Technology Data Exchange (ETDEWEB)

    Murtaza, Usman Tariq, E-mail: maniiut@yahoo.com; Javed Hyder, M., E-mail: hyder@pieas.edu.pk

    2015-04-01

    Highlights: • The size and shape of the set-in nozzle of the RPV have been optimized. • The optimized nozzle ensure the reduction of the mass around 198 kg per nozzle. • The mass of the RPV should be minimized for better fracture toughness. - Abstract: The objective of this research work is to optimize the size and shape of the set-in nozzle for a typical reactor pressure vessel (RPV) of a 300 MW pressurized water reactor. The analysis was performed by optimizing the four design variables which control the size and shape of the nozzle. These variables are inner radius of the nozzle, thickness of the nozzle, taper angle at the nozzle-cylinder intersection, and the point where taper of the nozzle starts from. It is concluded that the optimum design of the nozzle is the one that minimizes the two conflicting state variables, i.e., the stress intensity (Tresca yield criterion) and the mass of the RPV.

  12. A hybrid of ant colony optimization and artificial bee colony algorithm for probabilistic optimal placement and sizing of distributed energy resources

    International Nuclear Information System (INIS)

    Kefayat, M.; Lashkar Ara, A.; Nabavi Niaki, S.A.

    2015-01-01

    Highlights: • A probabilistic optimization framework incorporated with uncertainty is proposed. • A hybrid optimization approach combining ACO and ABC algorithms is proposed. • The problem is to deal with technical, environmental and economical aspects. • A fuzzy interactive approach is incorporated to solve the multi-objective problem. • Several strategies are implemented to compare with literature methods. - Abstract: In this paper, a hybrid configuration of ant colony optimization (ACO) with artificial bee colony (ABC) algorithm called hybrid ACO–ABC algorithm is presented for optimal location and sizing of distributed energy resources (DERs) (i.e., gas turbine, fuel cell, and wind energy) on distribution systems. The proposed algorithm is a combined strategy based on the discrete (location optimization) and continuous (size optimization) structures to achieve advantages of the global and local search ability of ABC and ACO algorithms, respectively. Also, in the proposed algorithm, a multi-objective ABC is used to produce a set of non-dominated solutions which are stored in the external archive. The objectives consist of minimizing power losses, total emissions produced by substation and resources, total electrical energy cost, and improving the voltage stability. In order to investigate the impact of the uncertainty in the output of the wind energy and load demands, a probabilistic load flow is necessary. In this study, an efficient point estimate method (PEM) is employed to solve the optimization problem in a stochastic environment. The proposed algorithm is tested on the IEEE 33- and 69-bus distribution systems. The results demonstrate the potential and effectiveness of the proposed algorithm in comparison with those of other evolutionary optimization methods.

  13. An Improved Genetic Algorithm for Optimal Stationary Energy Storage System Locating and Sizing

    OpenAIRE

    Bin Wang; Zhongping Yang; Fei Lin; Wei Zhao

    2014-01-01

    The application of a stationary ultra-capacitor energy storage system (ESS) in urban rail transit allows for the recuperation of vehicle braking energy for increasing energy savings as well as for a better vehicle voltage profile. This paper aims to obtain the best energy savings and voltage profile by optimizing the location and size of ultra-capacitors. This paper firstly raises the optimization objective functions from the perspectives of energy savings, regenerative braking cancellation a...

  14. High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems

    Science.gov (United States)

    Kolano, Paul Z.; Ciotti, Robert B.

    2012-01-01

    Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. The total speed-ups from all improvements are significant: mcp improves cp performance over 27x, msum improves md5sum performance almost 19x, and the combination of mcp and msum improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so are easily used and are available for download as open source software at http://mutil.sourceforge.net.
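    The split-file idea, letting several workers checksum different chunks of one file and then combining the chunk digests through a hash tree, can be sketched in a few lines (this mirrors the concept only; it is not mcp/msum's actual algorithm or on-disk format, and the chunk size is an arbitrary choice):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def chunked_tree_hash(data: bytes, chunk_size: int = 1 << 20, workers: int = 4) -> str:
    """Split-file checksumming sketch: hash fixed-size chunks in parallel,
    then hash the concatenated chunk digests (a two-level hash tree).
    The result differs from a plain md5 of the whole file, but it is
    reproducible for a given chunk size, which is what lets multiple
    threads (or nodes) share one checksum job."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        digests = list(pool.map(lambda c: hashlib.md5(c).digest(), chunks))
    return hashlib.md5(b"".join(digests)).hexdigest()
```

    Because each leaf digest depends only on its own chunk, the leaves can be computed in any order and on any node; only the cheap final combination step is serial.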

  15. PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems.

    Science.gov (United States)

    Mohamed, Mohamed A; Eltamaly, Ali M; Alolah, Abdulrahman I

    2016-01-01

    This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using a smart grid load management application based on the available generation. The algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. The system is formed by a photovoltaic array, wind turbines, storage batteries, and a diesel generator as a backup source of energy. Demand profile shaping, one of the smart grid applications, is introduced using load shifting based on load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from an iterative optimization technique to assess the adequacy of the proposed algorithm. The study is performed for some remote areas in Saudi Arabia and can be extended to similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers.
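    Particle swarm optimization itself is compact enough to sketch. The minimal version below uses typical textbook inertia and acceleration coefficients, and the quadratic two-variable cost is a stand-in for the paper's techno-economic sizing model (its optimum is placed at (3, 2) purely for illustration):

```python
import random

def pso(cost, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimization: each particle keeps a velocity,
    its personal best, and is pulled toward the swarm's global best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_c = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_c[i])
    gbest, gbest_c = pbest[g][:], pbest_c[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # clamp the particle back into the feasible box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            c = cost(pos[i])
            if c < pbest_c[i]:
                pbest[i], pbest_c[i] = pos[i][:], c
                if c < gbest_c:
                    gbest, gbest_c = pos[i][:], c
    return gbest, gbest_c

# Hypothetical two-variable sizing cost (e.g. PV area, battery kWh), optimum at (3, 2).
best, best_cost = pso(lambda x: (x[0] - 3) ** 2 + (x[1] - 2) ** 2, [(0, 10), (0, 10)])
```

    In a real sizing study the lambda would be replaced by a full simulation of one year of operation (cost plus reliability penalty) for the candidate component sizes.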

  16. 13 CFR 127.601 - May a protest challenging the size and status of a concern as an EDWOSB or WOSB be filed together?

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false May a protest challenging the size and status of a concern as an EDWOSB or WOSB be filed together? 127.601 Section 127.601 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION WOMEN-OWNED SMALL BUSINESS FEDERAL CONTRACT...

  17. Optimal sizing and operation of energy storage systems considering long term assessment

    Directory of Open Access Journals (Sweden)

    Gerardo Guerra

    2018-01-01

    This paper proposes a procedure for estimating the optimal sizing of Photovoltaic Generators and Energy Storage units when they are operated from the utility’s perspective. The goal is to explore the potential improvement on the overall operating conditions of the distribution system to which the Generators and Storage units will be connected. Optimization is conducted by means of a General Parallel Genetic Algorithm that seeks to maximize the technical benefits for the distribution system. The paper proposes an operation strategy for Energy Storage units based on the daily variation of load and generation; the operation strategy is optimized for an evaluation period of one year using hourly power curves. The construction of the yearly Storage operation curve results in a high-dimensional optimization problem; as a result, different day-classification methods are applied in order to reduce the dimension of the optimization. Results show that the proposed approach is capable of producing significant improvements in system operating conditions and that the best performance is obtained when the day-classification is based on the similarity among daily power curves.

  18. Fast probabilistic file fingerprinting for big data.

    Science.gov (United States)

    Tretyakov, Konstantin; Laur, Sven; Smant, Geert; Vilo, Jaak; Prins, Pjotr

    2013-01-01

    Biological data acquisition is raising new challenges, both in data analysis and handling. Not only is it proving hard to analyze the data at the rate it is generated today, but simply reading and transferring data files can be prohibitively slow due to their size. This primarily concerns logistics within and between data centers, but is also important for workstation users in the analysis phase. Common usage patterns, such as comparing and transferring files, are proving computationally expensive and are tying down shared resources. We present an efficient method for calculating file uniqueness for large scientific data files that takes less computational effort than existing techniques. This method, called Probabilistic Fast File Fingerprinting (PFFF), exploits the variation present in biological data and computes file fingerprints by sampling randomly from the file instead of reading it in full. Consequently, it has a flat performance characteristic, correlated with data variation rather than file size. We demonstrate that probabilistic fingerprinting can be as reliable as existing hashing techniques, with provably negligible risk of collisions. We measure the performance of the algorithm on a number of data storage and access technologies, identifying its strengths as well as limitations. Probabilistic fingerprinting may significantly reduce the use of computational resources when comparing very large files. Utilisation of probabilistic fingerprinting techniques can increase the speed of common file-related workflows, both in the data center and for workbench analysis. The implementation of the algorithm is available as an open-source tool named pfff, as a command-line tool as well as a C library. The tool can be downloaded from http://biit.cs.ut.ee/pfff.
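
The sampling idea can be sketched as follows. This is a simplified illustration of the approach, not the pfff tool's actual fingerprint format: hash the file length plus a fixed number of blocks read at key-seeded pseudo-random offsets, instead of reading the file in full.

```python
import hashlib
import random

def sampled_fingerprint(path, n_samples=64, block=4096, key=0):
    """Fingerprint a file from its size plus n_samples randomly placed
    blocks. `key` seeds the offsets, so two files can only be compared
    with the same key; a change confined entirely to unsampled bytes
    can go undetected -- that is the probabilistic trade-off."""
    h = hashlib.md5()
    rng = random.Random(key)
    with open(path, "rb") as f:
        f.seek(0, 2)                      # seek to end to learn the size
        size = f.tell()
        h.update(str(size).encode())
        for _ in range(n_samples):
            f.seek(rng.randrange(size + 1))
            h.update(f.read(block))
    return h.hexdigest()
```

The cost is O(n_samples * block) regardless of file size, which is what gives the flat performance characteristic described in the abstract.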

  19. Optimal plot size in the evaluation of papaya scions: proposal and comparison of methods

    Directory of Open Access Journals (Sweden)

    Humberto Felipe Celanti

    Evaluating the quality of scions is extremely important, and it can be done through characteristics of shoots and roots. This experiment evaluated the height of the aerial part, stem diameter, number of leaves, petiole length and root length of papaya seedlings. Analyses were performed on a blank trial with 240 seedlings of "Golden Pecíolo Curto". The optimum plot size was determined by applying the maximum curvature method, the maximum curvature of the coefficient of variation method, and a newly proposed method that incorporates bootstrap resampling simulation into the maximum curvature method. According to the results obtained, five is the optimal number of seedlings of papaya "Golden Pecíolo Curto" per plot. The proposed method of bootstrap simulation with replacement yields optimal plot sizes equal to or larger than those of the maximum curvature method, and the same plot size as the maximum curvature of the coefficient of variation method.
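
The bootstrap step can be sketched as follows. The data and the CV-of-plot-means criterion below are illustrative assumptions, not the authors' exact procedure: for each candidate plot size k, resample k seedling measurements with replacement, average them into a simulated plot value, and estimate how variable those plot means are.

```python
import random
import statistics

def cv_of_plot_means(values, plot_sizes, reps=2000, seed=42):
    """Bootstrap estimate of the coefficient of variation (%) of plot
    means for each candidate plot size (seedlings per plot)."""
    rng = random.Random(seed)
    cv = {}
    for k in plot_sizes:
        means = [statistics.fmean(rng.choices(values, k=k)) for _ in range(reps)]
        cv[k] = 100.0 * statistics.stdev(means) / statistics.fmean(means)
    return cv

# Simulated blank-trial data: 240 seedling heights with ~15% variation.
rng = random.Random(0)
heights = [rng.gauss(20.0, 3.0) for _ in range(240)]
cv = cv_of_plot_means(heights, [1, 2, 5, 10])
```

The CV falls roughly as 1/sqrt(k); a maximum-curvature rule then picks the plot size past which further enlargement buys little additional precision.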

  20. Cloud Optimized Image Format and Compression

    Science.gov (United States)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing require a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assume fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that can be used efficiently even from JavaScript. Combining this new cloud-based image storage format with the compression will help resolve some of the challenges of big image data on the internet.
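
The controlled-lossy principle (a guaranteed maximum reconstruction error) can be sketched with simple scalar quantization. This illustrates the error bound only; LERC's actual block-based encoding is more elaborate. All names below are illustrative.

```python
def lerc_like_encode(values, max_error):
    """Controlled-lossy sketch: store small integer bin indices at a bin
    width of 2*max_error, which guarantees every reconstructed value is
    within max_error of the original (assumes max_error > 0)."""
    base = min(values)
    step = 2.0 * max_error
    return base, step, [round((v - base) / step) for v in values]

def lerc_like_decode(base, step, indices):
    return [base + i * step for i in indices]

# Hypothetical elevation samples, tolerated error of 0.5 m.
elev = [101.37, 101.52, 103.96, 99.04, 100.00]
base, step, idx = lerc_like_encode(elev, max_error=0.5)
restored = lerc_like_decode(base, step, idx)
```

The small non-negative indices are then cheap to pack into few bits, which is where the actual compression gain comes from.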

  1. Optimization of bridging agents size distribution for drilling operations

    Energy Technology Data Exchange (ETDEWEB)

    Waldmann, Alex; Andrade, Alex Rodrigues de; Pires Junior, Idvard Jose; Martins, Andre Leibsohn [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)]. E-mails: awaldmann@petrobras.com.br; andradear.gorceix@petrobras.com.br; idvard.gorceix@petrobras.com.br; aleibsohn@petrobras.com.br

    2008-07-01

    The conventional drilling technique is based on maintaining a positive hydrostatic pressure against the well walls to prevent inflows of native fluids into the well. Such inflows can cause safety problems for the well crew and the rig. As the differential pressure between the well and the reservoir is always positive, the fluid filtrate tends to invade the reservoir rock. Minimizing drilling fluid invasion is a relevant issue in oil well drilling operations. In drilling fluid design, a common industry practice is the addition of bridging agents to the fluid composition to form a low-permeability cake at the well walls and hence restrict the invasion process. The choice of drilling fluid requires the optimization of the concentration, shape and size distribution of the particles. The ability of the fluid to prevent invasion is usually evaluated in laboratory filtration tests on consolidated porous media. This paper presents a description of the methods available in the literature for optimizing the formulation of bridging agents for drill-in fluids, including prediction of the pore throat size from reservoir rock data, and a sensitivity analysis of the main operational parameters. The analysis is based on experimental results on the impact of the size distribution and concentration of bridging agents on the filtration of drill-in fluids through porous media subjected to various differential pressures. The final objective is to develop software for use by PETROBRAS, which may relate different types and concentrations of bridging agents to the properties of the reservoir in order to minimize invasion. (author)

  2. Optimal Sizing of Energy Storage for Community Microgrids Considering Building Thermal Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Guodong [ORNL]; Li, Zhi [ORNL]; Starke, Michael R. [ORNL]; Ollis, Ben [ORNL]; Tomsovic, Kevin [University of Tennessee, Knoxville (UTK)]

    2017-07-01

    This paper proposes an optimization model for the optimal sizing of energy storage in community microgrids considering the building thermal dynamics and customer comfort preference. The proposed model minimizes the annualized cost of the community microgrid, including energy storage investment, purchased energy cost, demand charge, energy storage degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation. The decision variables are the power and energy capacity of invested energy storage. In particular, we assume the heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently by the microgrid central controller while maintaining the indoor temperature in the comfort range set by customers. For this purpose, the detailed thermal dynamic characteristics of buildings have been integrated into the optimization model. Numerical simulation shows significant cost reduction by the proposed model. The impacts of various costs on the optimal solution are investigated by sensitivity analysis.

  3. ANFIS-based genetic algorithm for predicting the optimal sizing coefficient of photovoltaic supply systems

    Energy Technology Data Exchange (ETDEWEB)

    Mellit, A. [Medea Univ., Medea (Algeria). Inst. of Science Engineering, Dept. of Electronics

    2007-07-01

    Stand-alone photovoltaic (PV) power supply systems are regarded as reliable and economical sources of electricity in rural remote areas, particularly in developing countries. However, the sizing of stand-alone PV systems is an important part of the system design. Choosing the optimal number of solar cell panels and the size of the storage battery to be used for a certain application at a particular site is an important economical problem. In this paper, a genetic algorithm (GA) and an adaptive neuro-fuzzy inference scheme (ANFIS) were proposed as a means for determining the optimal size of a PV system, particularly in isolated areas. The GA-ANFIS model was shown to be suitable for modelling the optimal sizing parameters of PV systems. The GA was used to determine the PV-array capacity and the storage capacity for 60 sites. From this database, 56 pairs relative to 56 sites were used for training the network. Four pairs were used for testing and validating the ANFIS model. A correlation of 99 per cent was achieved when complete unknown data parameters were presented to the model. The proposed technique provided more accurate results than the alternative artificial neural network (ANN) with GA. The advantage of this model was that it could estimate the PV-array area and the useful capacity of the battery from only geographical coordinates. Although the technique was applied and tested in Algeria, it can be generalized for any location in the world. 15 refs., 4 tabs., 8 figs.

  4. Optimal sizing of a multi-source energy plant for power heat and cooling generation

    International Nuclear Information System (INIS)

    Barbieri, E.S.; Dai, Y.J.; Morini, M.; Pinelli, M.; Spina, P.R.; Sun, P.; Wang, R.Z.

    2014-01-01

    Multi-source systems for the fulfilment of electric, thermal and cooling demand of a building can be based on different technologies (e.g. solar photovoltaic, solar heating, cogeneration, heat pump, absorption chiller) which use renewable, partially renewable and fossil energy sources. Therefore, one of the main issues of these kinds of multi-source systems is to find the appropriate size of each technology. Moreover, building energy demands depend on the climate in which the building is located and on the characteristics of the building envelope, which also influence the optimal sizing. This paper presents an analysis of the effect of different climatic scenarios on multi-source energy plant sizing. For this purpose a model has been developed and implemented in the Matlab® environment. The model takes into consideration the load profiles for electricity, heating and cooling for a whole year. The performance of the energy systems is modelled through a systemic approach. The optimal sizing of the different technologies composing the multi-source energy plant is investigated by using a genetic algorithm, with the goal of minimizing the primary energy consumption only, since the cost of technologies and, in particular, the actual tariff and incentive scenarios depend on the specific country. Moreover, economic considerations may lead to inadequate solutions in terms of primary energy consumption. As a case study, the Sino-Italian Green Energy Laboratory of the Shanghai Jiao Tong University has been hypothetically located in five cities in different climatic zones. The load profiles are calculated by means of a TRNSYS® model. Results show that the optimal load allocation and component sizing are strictly related to climatic data (e.g. external air temperature and solar radiation).

  5. Effects of word width and word length on optimal character size for reading of horizontally scrolling Japanese words

    Directory of Open Access Journals (Sweden)

    Wataru Teramoto

    2016-02-01

    The present study investigated whether word width and length affect the optimal character size for reading of horizontally scrolling Japanese words, using reading speed as a measure. In Experiment 1, three Japanese words, each consisting of 4 Hiragana characters, sequentially scrolled on a display screen from right to left. Participants, all Japanese native speakers, were instructed to read the words aloud as accurately as possible, irrespective of their order within the sequence. To quantitatively measure their reading performance, we used rapid serial visual presentation paradigm, where the scrolling rate was increased until the participants began to make mistakes. Thus, the highest scrolling rate at which the participants’ performance exceeded 88.9% correct rate was calculated for each character size (0.3°, 0.6°, 1.0°, and 3.0°) and scroll window size (5 or 10 character spaces). Results showed that the reading performance was highest in the range of 0.6° to 1.0°, irrespective of the scroll window size. Experiment 2 investigated whether the optimal character size observed in Experiment 1 was applicable for any word width and word length (i.e., the number of characters in a word). Results showed that reading speeds were slower for longer than shorter words and the word width of 3.6° was optimal among the word lengths tested (3, 4, and 6 character words). Considering that character size varied depending on word width and word length in the present study, this means that the optimal character size can be changed by word width and word length.

  6. Morphing Wing Weight Predictors and Their Application in a Template-Based Morphing Aircraft Sizing Environment II. Part 2; Morphing Aircraft Sizing via Multi-level Optimization

    Science.gov (United States)

    Skillen, Michael D.; Crossley, William A.

    2008-01-01

    This report presents an approach for sizing of a morphing aircraft based upon a multi-level design optimization approach. For this effort, a morphing wing is one whose planform can make significant shape changes in flight - increasing wing area by 50% or more from the lowest possible area, changing sweep by 30° or more, and/or increasing aspect ratio by as much as 200% from the lowest possible value. The top-level optimization problem seeks to minimize the gross weight of the aircraft by determining a set of "baseline" variables - these are common aircraft sizing variables, along with a set of "morphing limit" variables - these describe the maximum shape change for a particular morphing strategy. The sub-level optimization problems represent each segment in the morphing aircraft's design mission; here, each sub-level optimizer minimizes fuel consumed during each mission segment by changing the wing planform within the bounds set by the baseline and morphing limit variables from the top-level problem.
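
The two-level structure can be sketched with a toy model. All functions and numbers below are hypothetical; only the nesting mirrors the report's approach: the top level chooses a morphing limit (here, a maximum wing area) and pays a structural-weight penalty for a larger one, while each mission segment independently picks its best planform within that limit.

```python
def segment_fuel(speed, area):
    """Hypothetical per-segment fuel burn: each mission speed prefers a
    different wing area (slow loiter wants large, fast dash wants small)."""
    return (area - 100.0 / speed) ** 2 + 0.1 * speed

def size_morphing_aircraft(segment_speeds, candidate_limits):
    """Top level: pick the morphing limit minimizing total weight.
    Sub-level: each segment's fuel-optimal area is 100/speed, clipped
    to the limit (the quadratic's constrained minimizer)."""
    def total_weight(limit):
        structure = 0.5 * limit               # heavier wing for a larger limit
        best_areas = [min(100.0 / s, limit) for s in segment_speeds]
        return structure + sum(segment_fuel(s, a)
                               for s, a in zip(segment_speeds, best_areas))
    return min(candidate_limits, key=total_weight)

best_limit = size_morphing_aircraft([1.0, 2.0, 5.0], [60.0, 80.0, 100.0, 120.0])
```

With these toy numbers the slow segment's fuel penalty dominates until the limit reaches its preferred area, after which added structure is pure cost, so an intermediate limit wins.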

  7. Challenging Ubiquitous Inverted Files

    NARCIS (Netherlands)

    de Vries, A.P.

    2000-01-01

    Stand-alone ranking systems based on highly optimized inverted file structures are generally considered ‘the’ solution for building search engines. Observing various developments in software and hardware, we argue however that IR research faces a complex engineering problem in the quest for more

  8. Optimal Harvesting in a Periodic Food Chain Model with Size Structures in Predators

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Feng-Qin, E-mail: zhafq@263.net [Yuncheng University, Department of Applied Mathematics (China); Liu, Rong [Lvliang University, Department of Mathematics (China); Chen, Yuming, E-mail: ychen@wlu.ca [Yuncheng University, Department of Applied Mathematics (China)

    2017-04-15

    In this paper, we investigate a periodic food chain model with harvesting, where the predators have size structures and are described by first-order partial differential equations. First, we establish the existence of a unique non-negative solution by using the Banach fixed point theorem. Then, we provide optimality conditions by means of normal cone and adjoint system. Finally, we derive the existence of an optimal strategy by means of Ekeland’s variational principle. Here the objective functional represents the net economic benefit yielded from harvesting.

  9. Optimal Photovoltaic System Sizing of a Hybrid Diesel/PV System

    Directory of Open Access Journals (Sweden)

    Ahmed Belhamadia

    2017-03-01

    This paper presents a cost analysis study of a hybrid diesel and Photovoltaic (PV) system in Kuala Terengganu, Malaysia. It first presents the climate conditions of the city followed by the load profile of a 2 MVA network; the system was evaluated as a standalone system. The diesel generator rating was considered such that it follows ISO 8528. The maximum size of the PV system was selected such that its penetration would not exceed 25%. Several sizes were considered, but the 400 kWp system was found to be the most cost-efficient. Cost estimation was done using the Hybrid Optimization Model for Electric Renewable (HOMER). Based on the simulation results, the climate conditions and the NEC 960, the maximum and minimum numbers of series modules were suggested, as well as the maximum number of parallel strings.

  10. Optimal Energy Management, Location and Size for Stationary Energy Storage System in a Metro Line Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Huan Xia

    2015-10-01

    The installation of stationary super-capacitor energy storage systems (ESS) in metro systems can recycle the vehicle braking energy and improve the pantograph voltage profile. This paper aims to optimize the energy management, location, and size of stationary super-capacitor ESSes simultaneously and obtain the best economic efficiency and voltage profile of metro systems. Firstly, the simulation platform of an urban rail power supply system, which includes trains and super-capacitor energy storage systems, is established. Then, two evaluation functions from the perspectives of economic efficiency and voltage drop compensation are put forward. Ultimately, a novel optimization method that combines genetic algorithms and a simulation platform of the urban rail power supply system is proposed, which can obtain the best energy management strategy, location, and size for ESSes simultaneously. With actual parameters of a Chinese metro line applied in the simulation comparison, the optimal scheme for the ESSes’ energy management strategy, location, and size obtained by the proposed method achieves much better performance of metro systems from the perspectives of the two evaluation functions. The simulation result shows that, as the weight coefficient increases, the optimal energy management strategy, locations and sizes of ESSes exhibit certain regularities, and the best compromise between economic efficiency and voltage drop compensation can be obtained by the proposed method, which can provide a valuable reference for subway companies.

  11. PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems

    Science.gov (United States)

    Mohamed, Mohamed A.; Eltamaly, Ali M.; Alolah, Abdulrahman I.

    2016-01-01

    This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000

  12. Effects of Word Width and Word Length on Optimal Character Size for Reading of Horizontally Scrolling Japanese Words.

    Science.gov (United States)

    Teramoto, Wataru; Nakazaki, Takuyuki; Sekiyama, Kaoru; Mori, Shuji

    2016-01-01

    The present study investigated whether word width and length affect the optimal character size for reading of horizontally scrolling Japanese words, using reading speed as a measure. In Experiment 1, three Japanese words, each consisting of four Hiragana characters, sequentially scrolled on a display screen from right to left. Participants, all Japanese native speakers, were instructed to read the words aloud as accurately as possible, irrespective of their order within the sequence. To quantitatively measure their reading performance, we used rapid serial visual presentation paradigm, where the scrolling rate was increased until the participants began to make mistakes. Thus, the highest scrolling rate at which the participants' performance exceeded 88.9% correct rate was calculated for each character size (0.3°, 0.6°, 1.0°, and 3.0°) and scroll window size (5 or 10 character spaces). Results showed that the reading performance was highest in the range of 0.6° to 1.0°, irrespective of the scroll window size. Experiment 2 investigated whether the optimal character size observed in Experiment 1 was applicable for any word width and word length (i.e., the number of characters in a word). Results showed that reading speeds were slower for longer than shorter words and the word width of 3.6° was optimal among the word lengths tested (three, four, and six character words). Considering that character size varied depending on word width and word length in the present study, this means that the optimal character size can be changed by word width and word length in scrolling Japanese words.

  13. Optimal battery sizing in photovoltaic based distributed generation using enhanced opposition-based firefly algorithm for voltage rise mitigation.

    Science.gov (United States)

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of an enhanced opposition-based firefly algorithm in obtaining the optimal battery energy storage systems (BESS) sizing in a photovoltaic generation integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing opposition-based learning and introducing an inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for BESS. Two optimization processes are conducted, where the first optimization aims to obtain the optimal battery output power on an hourly basis and the second optimization aims to obtain the optimal BESS capacity by considering the state of charge constraint of the BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with the conventional firefly algorithm and the gravitational search algorithm. Results show that EOFA performs best in mitigating the voltage rise problem.
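
The two enhancements can be sketched on top of a bare firefly loop. This is an illustrative reading of "opposition-based learning" (evaluate each random start and its mirror point, keep the better half) and "inertia weight" (a shrinking factor on the random step), not the paper's exact EOFA; the test function is a stand-in for the BESS sizing objective.

```python
import math
import random

def opposition_init(cost, bounds, n, rng):
    """Opposition-based learning: pair each random point with its opposite
    (lo + hi - x per dimension) and keep the better half of the pool."""
    pts = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    opp = [[lo + hi - x for (lo, hi), x in zip(bounds, p)] for p in pts]
    return sorted(pts + opp, key=cost)[:n]

def eofa_like(cost, bounds, n=20, iters=80, beta0=1.0, gamma=1.0, seed=7):
    rng = random.Random(seed)
    X = opposition_init(cost, bounds, n, rng)
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters          # inertia weight damps the random step
        for i in range(n):
            for j in range(n):
                if cost(X[j]) < cost(X[i]):    # firefly i moves toward brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [min(max(a + beta * (b - a)
                                    + w * (rng.random() - 0.5), lo), hi)
                            for (lo, hi), a, b in zip(bounds, X[i], X[j])]
    return min(X, key=cost)

best = eofa_like(lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2,
                 [(-5.0, 5.0), (-5.0, 5.0)])
```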

  14. Optimal Battery Sizing in Photovoltaic Based Distributed Generation Using Enhanced Opposition-Based Firefly Algorithm for Voltage Rise Mitigation

    Directory of Open Access Journals (Sweden)

    Ling Ai Wong

    2014-01-01

    This paper presents the application of an enhanced opposition-based firefly algorithm in obtaining the optimal battery energy storage systems (BESS) sizing in a photovoltaic generation integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing opposition-based learning and introducing an inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for BESS. Two optimization processes are conducted, where the first optimization aims to obtain the optimal battery output power on an hourly basis and the second optimization aims to obtain the optimal BESS capacity by considering the state of charge constraint of the BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with the conventional firefly algorithm and the gravitational search algorithm. Results show that EOFA performs best in mitigating the voltage rise problem.

  15. Optimal unit sizing of a hybrid renewable energy system for isolated applications

    International Nuclear Information System (INIS)

    Morales, D.

    2006-07-01

    In general, the methods used to conceive a renewable energy production system overestimate the size of the generating units. These methods increase the investment cost and the production cost of energy. The work presented in this thesis proposes a methodology to optimally size a renewable energy system. This study shows that the classic approach based only on a long-term analysis of the system's behaviour is not sufficient, and a complementary methodology based on a short-term analysis is proposed. A numerical simulation was developed in which the mathematical models of the solar panel, the wind turbines and the battery are integrated. The daily average solar energy per m² is decomposed into a series of hourly energy values using the Collares-Pereira equations. The time series analysis of the wind speed is performed using the Monte Carlo simulation method. The second part of this thesis makes a detailed analysis of an isolated wind energy production system. The average energy produced by the system depends on the generator's rated power, the total swept area of the wind turbine, the gearbox's transformation ratio, the battery voltage and the wind speed probability function. The study proposes a methodology to determine the optimal matching between the rated power of the permanent magnet synchronous machine and the wind turbine's rotor size. This is done taking into account the average electrical energy produced over a period of time. (author)
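
The Monte Carlo wind-speed generation step can be sketched with the common Weibull inverse-CDF draw; the distribution choice and its parameters are assumptions, since the thesis's exact procedure is not given here.

```python
import math
import random

def weibull_wind_series(hours, shape=2.0, scale=7.0, seed=11):
    """Monte Carlo sketch: draw hourly wind speeds (m/s) from a Weibull
    distribution by inverting its CDF, v = scale * (-ln(1 - u))**(1/shape).
    shape=2 is the common Rayleigh special case; scale is roughly the
    site mean speed divided by 0.886."""
    rng = random.Random(seed)
    return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
            for _ in range(hours)]

# One synthetic year of hourly speeds for the energy-balance simulation.
speeds = weibull_wind_series(8760)
```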

  16. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of a qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of NDT reliability is necessary. A POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes and in turn reduce the conservatism associated with the POD curve derived. Not much guidance on the correct sample size can be found in the published literature, where often qualitative statements are given with no further justification. The aim of this paper is to summarise the findings of such work. (author)
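
A hit/miss POD curve of the kind reviewed here is commonly modelled as a logistic function of log flaw size. The sketch below fits that model by plain gradient ascent on the Bernoulli log-likelihood; the inspection data are synthetic and the fitter is an illustration, not the statistical packages typically used in practice.

```python
import math

def fit_pod(sizes, hits, steps=20000, lr=1.0):
    """Fit POD(a) = 1 / (1 + exp(-(b0 + b1*ln a))) to hit/miss data
    by batch gradient ascent on the log-likelihood."""
    xs = [math.log(a) for a in sizes]
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, hits):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p                   # score w.r.t. intercept
            g1 += (y - p) * x             # score w.r.t. slope on ln(size)
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def pod(a, b0, b1):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * math.log(a))))

# Synthetic hit/miss inspection data: larger flaws are detected more often.
sizes = [0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0]
hits  = [0,   0,   0,   1,   0,   1,   1,   1,   1,   1]
b0, b1 = fit_pod(sizes, hits)
```

Sample-size questions of the kind the paper studies then show up as the width of the confidence band around this fitted curve: fewer specimens mean a wider band and a larger (more conservative) a90/95 flaw size.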

  17. An analytical method to determine the optimal size of a photovoltaic plant

    Energy Technology Data Exchange (ETDEWEB)

    Barra, L; Catalanotti, S; Fontana, F; Lavorante, F

    1984-01-01

    In this paper, a simplified method for the optimal sizing of a photovoltaic system is presented. The results have been obtained for Italian meteorological data, but the methodology can be applied to any geographical area. The system studied is composed of a photovoltaic array, power tracker, battery storage, inverter and load. Computer simulation was used to obtain the performance of this system for many values of field area, battery storage value, solar flux and load, while keeping the efficiencies constant. A simple fit was used to obtain a formula relating the system variables to the performance. Finally, the formulae for the optimal values of the field area and the battery storage value are shown.

  18. Flexibility and torsional behaviour of rotary nickel-titanium PathFile, RaCe ISO 10, Scout RaCe and stainless steel K-File hand instruments.

    Science.gov (United States)

    Nakagawa, R K L; Alves, J L; Buono, V T L; Bahia, M G A

    2014-03-01

    To assess and compare the flexibility and torsional resistance of PathFile, RaCe ISO 10 and Scout RaCe instruments in relation to stainless steel K-File hand instruments. Rotary PathFile (sizes 13, 16 and 19; 0.02 taper), RaCe ISO 10 (size 10; 0.02, 0.04 and 0.06 tapers), Scout RaCe (sizes 10, 15 and 20; 0.02 taper) and hand K-File (sizes 10, 15 and 20; 0.02 taper) instruments were evaluated. Alloy chemical composition, phases present and transformation temperatures were determined for the NiTi instruments. For all instruments, diameters at each millimetre from the tip, as well as cross-sectional areas at 3 mm from the tip, were measured based on ANSI/ADA Specification No. 101 using image analysis software. Resistance to bending and torsional resistance were determined according to specification ISO 3630-1. Vickers microhardness measurements were also taken on all instruments to assess their strength. Data were analysed using analysis of variance (α = 0.05). The alloys used in the manufacture of the three types of NiTi instruments had approximately the same chemical composition, but the PathFile instruments had a higher Af transformation temperature and contained a small amount of B19' martensite. All instruments had diameter values within the standard tolerance. Bending and torsional resistance increased significantly with instrument diameter and cross-sectional area. PathFile instruments were the most flexible and the least torque resistant, whilst the stainless steel instruments were the least flexible although they were more torque resistant than the NiTi instruments. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  19. Energetic constraints, size gradients, and size limits in benthic marine invertebrates.

    Science.gov (United States)

    Sebens, Kenneth P

    2002-08-01

    Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, and the ensuing mortality schedule, provide a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and life history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.

  20. Optimal Sizing of a Photovoltaic-Hydrogen Power System for HALE Aircraft by means of Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Victor M. Sanchez

    2015-01-01

    Full Text Available Over the last decade there has been growing interest in the feasibility of using high altitude long endurance (HALE) aircraft to provide mobile communications. The use of HALEs for telecommunication networks has the potential to deliver a wide range of communication services (from high-quality voice to high-definition video, as well as high-data-rate wireless channels) cost effectively. One of the main challenges of this technology is the design of its power supply system, which must reliably provide enough energy for long-duration flights. In this paper a photovoltaic/hydrogen system is proposed as the power system for a HALE aircraft due to its high power density characteristics. In order to obtain the optimal sizing of the photovoltaic/hydrogen system, a particle swarm optimizer (PSO) is used. As a case study, the theoretical design of the photovoltaic/hydrogen power system for three different HALE aircraft located at 18° latitude is presented. At this latitude, the range of solar radiation intensity was from 310 to 450 Wh/sq·m/day. The results obtained show that the photovoltaic/hydrogen systems sized by PSO can operate for one year with efficiencies ranging between 45.82% and 47.81%. The obtained sizing ensures that the photovoltaic/hydrogen system supplies adequate energy for the HALE aircraft.
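
    The PSO step in such a sizing study can be sketched generically. The code below is a minimal, illustrative particle swarm minimizer applied to a toy sizing cost; the cost coefficients, bounds, and function names are invented for the example and are not taken from the paper.

```python
import random

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer; returns (best_position, best_cost)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Hypothetical sizing cost: oversized components are heavy and expensive,
# undersized ones incur a large energy-deficit penalty.
def sizing_cost(x):
    pv_area, tank_kg = x
    supply = 0.18 * pv_area * 5.0 + 0.6 * tank_kg   # toy daily energy supplied
    demand = 120.0                                   # toy daily demand
    deficit = max(0.0, demand - supply)
    return 2.0 * pv_area + 5.0 * tank_kg + 100.0 * deficit

best, best_cost = pso(sizing_cost, bounds=[(0.0, 500.0), (0.0, 200.0)])
```

    In a real sizing study, `sizing_cost` would be replaced by a flight/energy-balance simulation over the mission profile, with the same optimizer driving the search.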

  1. Optimal Placement and Sizing of Fault Current Limiters in Distributed Generation Systems Using a Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    N. Bayati

    2017-02-01

    Distributed Generation (DG) connection in a power system tends to increase the short circuit level in the entire system, which, in turn, could eliminate the protection coordination between the existing relays. Fault Current Limiters (FCLs) are often used to reduce the short-circuit level of the network to a desirable level, provided that they are duly placed and appropriately sized. In this paper, a method is proposed for the optimal placement of FCLs and the optimal determination of their impedance values, by which the relay operation time and the number and sizes of the FCLs are minimized while maintaining relay coordination before and after DG connection. The proposed method adopts the removal of low-impact FCLs and uses a hybrid Genetic Algorithm (GA) optimization scheme to determine the optimal placement of FCLs and the values of their impedances. The suitability of the proposed method is demonstrated by examining the results of relay coordination in a typical DG network before and after DG connection.

  2. K-file vs ProFiles in cleaning capacity and instrumentation time in primary molar root canals: An in vitro study

    Directory of Open Access Journals (Sweden)

    N Madan

    2011-01-01

    Objectives: This study compares the efficiency of manual K-files and rotary ProFiles in cleaning capacity and instrumentation time in primary molar root canals. Materials and Methods: Seventy-five maxillary and mandibular primary molar root canals were instrumented with ProFiles and K-files in the step-back manner from size #10 to #40. The teeth were decalcified, dehydrated and cleared, and analyzed for the presence of dye remaining on the root canal walls, which served as evidence of the cleaning capacity of both techniques. Results: The results showed a significant difference in the cleaning capacity of the root canals with ProFiles and K-files in the apical and coronal thirds of the root canal. ProFiles were found to be more efficient in cleaning the coronal thirds and K-files in cleaning the apical thirds of the root canals. Both techniques were almost equally effective in cleaning the middle thirds of the canals. The time taken during the cleaning of the root canals was statistically shorter with K-files than with ProFiles.

  3. Optimal sizing for SAPIEN 3 transcatheter aortic valve replacement in patients with or without left ventricular outflow tract calcification.

    Science.gov (United States)

    Maeno, Yoshio; Abramowitz, Yigal; Jilaihawi, Hasan; Israr, Sharjeel; Yoon, Sunghan; Sharma, Rahul P; Kazuno, Yoshio; Kawamori, Hiroyuki; Miyasaka, Masaki; Rami, Tanya; Mangat, Geeteshwar; Takahashi, Nobuyuki; Okuyama, Kazuaki; Kashif, Mohammad; Chakravarty, Tarun; Nakamura, Mamoo; Cheng, Wen; Makkar, Raj R

    2017-04-07

    The impact of left ventricular outflow tract calcification (LVOT-CA) on SAPIEN 3 transcatheter aortic valve replacement (S3-TAVR) is not well understood. The aims of the present study were to determine optimal device sizing for S3-TAVR in patients with or without LVOT-CA and to evaluate the influence of residual paravalvular leak (PVL) on survival after S3-TAVR in these patients. This study analysed 280 patients (LVOT-CA=144, no LVOT-CA=136) undergoing S3-TAVR. Optimal annular area sizing was defined as the percentage annular area sizing associated with lower rates of ≥mild PVL. Annular area sizing was determined as follows: (prosthesis area/CT annulus area-1)×100. Overall, ≥mild PVL was present in 25.7%. Receiver operating characteristic curve analysis for the prediction of ≥mild PVL in patients with LVOT-CA identified 7.2% annular area sizing as the optimal threshold (area under the curve [AUC] 0.71). Conversely, annular area sizing for no LVOT-CA appeared unrelated to PVL (AUC 0.58). Aortic annular injury was seen in four patients (average 15.5% annular area oversizing), three of whom had LVOT-CA. Although there was no difference in one-year survival between patients with ≥mild PVL and those without PVL (log-rank p=0.91), subgroup analysis demonstrated that patients with ≥moderate LVOT-CA who had ≥mild PVL had lower survival compared to patients with ≥mild PVL and no or mild LVOT-CA (log-rank p=0.010). In the setting of LVOT-CA, an optimally sized S3 valve is required to reduce PVL and to increase survival following TAVR.

  4. Size and Topology Optimization for Trusses with Discrete Design Variables by Improved Firefly Algorithm

    NARCIS (Netherlands)

    Wu, Yue; Li, Q.; Hu, Qingjie; Borgart, A.

    2017-01-01

    Firefly Algorithm (FA, for short) is inspired by the social behavior of fireflies and their phenomenon of bioluminescent communication. Based on the fundamentals of FA, two improved strategies are proposed to conduct size and topology optimization for trusses with discrete design variables. Firstly,

  5. Collective operations in a file system based execution model

    Science.gov (United States)

    Shinde, Pravin; Van Hensbergen, Eric

    2013-02-19

    A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, the master application indicating a multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module then implements the multi-pipe operation as identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file and the MULTI-PIPE synthetic file system module performs appropriate actions.

  6. A turbofan engine diagnostics neural network size optimization method that takes the overlearning effect into account

    Directory of Open Access Journals (Sweden)

    О.С. Якушенко

    2010-01-01

    The article is devoted to the problem of automatically recognizing gas turbine engine (GTE) technical state classes from operating parameters using neural networks. One of the main problems in creating such networks is determining their optimal size (the number of layers in the network and the number of neurons in each layer). The article considers a method of neural network size optimization intended for classifying GTE technical states. The optimization is carried out taking into account the possibility of the overlearning effect, in which a network loses its ability to generalize and begins to strictly describe the training data set. To determine the moment when the overlearning effect appears in a learning network, the method of three data sets is used, based on comparing the changes in recognition quality calculated on the training and control data sets. Overlearning is deemed to begin at the moment when recognition quality on the control data set starts deteriorating while recognition quality on the training data set is still improving. To detect this moment, the learning process is periodically paused and the network is evaluated on the training and control data sets. Two-, three- and four-layer networks were optimized and some results of the optimization are shown. An extended training set was also created, describing 16 GTE technical state classes with 200 points per class (200 possible realizations of each technical state class) instead of the 20 points used in earlier articles, in order to increase the representativeness of the data set. The article presents the optimization algorithm and some results obtained with it. The experimental results were analysed to determine the most suitable neural network structure. This structure provides the most high-quality GTE
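
    The three-data-set stopping rule described above can be sketched compactly: training is paused at checkpoints, and the checkpoint chosen is the last one at which control-set (validation) error was still improving. The function below is an illustrative reconstruction of that rule; the names, the `patience` parameter, and the toy error curves are assumptions, not the article's code.

```python
def early_stopping_point(train_errors, val_errors, patience=3):
    """Return the index of the best checkpoint under the three-set rule: stop
    once the control (validation) error has failed to improve for `patience`
    consecutive checks, even if training error keeps falling."""
    best_i, best_val, waited = 0, float("inf"), 0
    for i, (tr, va) in enumerate(zip(train_errors, val_errors)):
        if va < best_val:
            best_val, best_i, waited = va, i, 0
        else:
            waited += 1
            if waited >= patience:
                break   # overlearning detected: control error keeps worsening
    return best_i

# Toy curves: training error keeps improving, control error turns up at epoch 5.
train = [1.0, 0.7, 0.5, 0.4, 0.33, 0.28, 0.25, 0.23, 0.21, 0.20]
val   = [1.1, 0.8, 0.6, 0.5, 0.45, 0.47, 0.50, 0.55, 0.60, 0.66]
best_epoch = early_stopping_point(train, val)   # -> 4
```

    The network weights saved at `best_epoch` are the ones kept; later checkpoints describe the training set ever more strictly while generalization degrades.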

  7. Particle swarm optimization algorithm for simultaneous optimal placement and sizing of shunt active power conditioner (APC) and shunt capacitor in harmonic-distorted distribution systems

    Institute of Scientific and Technical Information of China (English)

    Mohammadi Mohammad

    2017-01-01

    Due to the development of distribution systems and the increase in electricity demand, the use of capacitor banks is increasing. At the same time, nonlinear loads generate and inject considerable harmonic currents into the power system. Under these conditions, if capacitor banks are not properly selected and placed in the power system, they can amplify and propagate these harmonics and deteriorate power quality to unacceptable levels. Given the disadvantages of passive filters, such as resonance, the use of this type of harmonic compensator is now restricted. On the other hand, one of the parallel multi-function compensating devices recently used in distribution systems is the active power conditioner (APC), which mitigates voltage sags and harmonic distortion, performs power factor correction, and improves overall power quality. The utilization of an APC in a harmonic-distorted system can therefore affect and change the optimal location and size of shunt capacitor banks under harmonic distortion conditions. This paper presents an optimization algorithm for improving power quality through the simultaneous optimal placement and sizing of APCs and shunt capacitor banks in radial distribution networks in the presence of voltage and current harmonics. The algorithm is based on particle swarm optimization (PSO). The objective function includes the cost of power losses, energy losses, and the costs of the capacitor banks and APCs.

  8. Optimal Sizing and Location of Distributed Generators Based on PBIL and PSO Techniques

    Directory of Open Access Journals (Sweden)

    Luis Fernando Grisales-Noreña

    2018-04-01

    The optimal location and sizing of distributed generation is a suitable option for improving the operation of electric systems. This paper proposes a parallel implementation of the Population-Based Incremental Learning (PBIL) algorithm to locate distributed generators (DGs), and the use of Particle Swarm Optimization (PSO) to define the size of those devices. The resulting method is a master-slave hybrid approach based on both the parallel PBIL (PPBIL) algorithm and PSO, which reduces the computation time in comparison with other techniques commonly used to address this problem. Moreover, the new hybrid method also reduces the active power losses and improves the nodal voltage profiles. In order to verify the performance of the new method, test systems with 33 and 69 buses are implemented in Matlab, using Matpower, to evaluate multiple cases. Finally, the proposed method is contrasted with the Loss Sensitivity Factor (LSF), a Genetic Algorithm (GA) and a parallel Monte-Carlo algorithm. The results demonstrate that the proposed PPBIL-PSO method provides the best balance between processing time, voltage profiles and reduction of power losses.

  9. Performance of the Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  10. Optimal capacity and buffer size estimation under Generalized Markov Fluids Models and QoS parameters

    International Nuclear Information System (INIS)

    Bavio, José; Marrón, Beatriz

    2014-01-01

    Quality of service (QoS) for internet traffic management requires good traffic models and good estimation of shared network resources. A link of a network processes all traffic and is designed with a certain capacity C and buffer size B. A Generalized Markov Fluid model (GMFM), introduced by Marrón (2011), is assumed for the sources because it describes the traffic in a versatile way, allows estimation based on traffic traces, and also permits consistent effective bandwidth estimation. QoS, interpreted as buffer overflow probability, can be estimated for a GMFM through effective bandwidth estimation and by solving the optimization problem presented in Courcoubetis (2002), the so-called inf-sup formulas. In this work we implement a code to solve the inf-sup problem and other optimizations related to it, which allows us to do traffic engineering on links of data networks to calculate both the minimum capacity required when QoS and buffer size are given, and the minimum buffer size required when QoS and capacity are given.

  11. Can E-Filing Reduce Tax Compliance Costs in Developing Countries?

    OpenAIRE

    Yilmaz, Fatih; Coolidge, Jacqueline

    2013-01-01

    The purpose of this study is to investigate the association between electronic filing (e-filing) and the total tax compliance costs incurred by small and medium size businesses in developing countries, based on survey data from South Africa, Ukraine, and Nepal. A priori, most observers expect that use of e-filing should reduce tax compliance costs, but this analysis suggests that the assum...

  12. Optimization of the size and yield of graphene oxide sheets in the exfoliation step

    OpenAIRE

    Botas, Cristina; Pérez, A.M. (Ana); Álvarez, Patricia; Santamaría, Ricardo; Granda, Marcos; Blanco, Clara; Menéndez, Rosa

    2017-01-01

    In this paper we demonstrate that the yield and size of the graphene oxide sheets (GO) obtained by sonication of graphite oxide (GrO) can be optimized not only by selecting the appropriate exfoliation conditions but also as a function of the crystalline structure of the parent graphite. A larger crystal size in the parent graphite favors GrO exfoliation and yields larger sheets in shorter sonication times, independently of the oxygen content of the GrO. A maximum yield of GO is obtained in al...

  13. Image Size Variation Influence on Corrupted and Non-viewable BMP Image

    Science.gov (United States)

    Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah

    2017-08-01

    Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG files are lossy and highly compressible, which speeds up transmission. However, corrupted JPEG images are hard to recover due to the complexity of determining the corruption point. Bitmap (BMP) images, on the other hand, are preferred in image processing over other formats because a BMP image contains all the image information in a simple format. Therefore, in order to investigate the corruption point in a JPEG, the file is first converted into BMP format. Nevertheless, many things can corrupt a BMP image, such as changes to the recorded image size that make the file non-viewable. In this paper, experiments indicate that the size field of a BMP file influences changes in the image itself under three conditions: deletion, replacement and insertion. From the experiments, we learned that correcting the file size can produce a viewable, albeit partial, file. The file can then be investigated further to identify the corruption point.
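
    The size-field correction mentioned above can be illustrated directly: in the BMP file header, bytes 2-5 hold the total file size as a little-endian 32-bit integer (the bfSize field), so rewriting that field to match the actual byte count restores a consistent header. This is a generic sketch of that one repair, not the paper's tool.

```python
import struct

def fix_bmp_size_field(data: bytes) -> bytes:
    """Rewrite the bfSize field (bytes 2-5, little-endian) of a BMP header so
    that it matches the actual length of the byte string."""
    if len(data) < 6 or data[:2] != b"BM":
        raise ValueError("not a BMP file")
    return data[:2] + struct.pack("<I", len(data)) + data[6:]

# Build a tiny header whose size field is wrong, then repair it.
broken = b"BM" + struct.pack("<I", 9999) + bytes(48)   # real length is 54
fixed = fix_bmp_size_field(broken)
assert struct.unpack("<I", fixed[2:6])[0] == len(fixed) == 54
```

    A viewer that trusts bfSize will reject or truncate the broken file; after the rewrite the header is at least self-consistent, which is what makes partial viewing possible.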

  14. Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm

    Science.gov (United States)

    Hasançebi, O.; Kazemzadeh Azad, S.

    2014-01-01

    This article presents a methodology for the design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that a standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to discrete size optimum design problems for steel trusses. A reformulation of the algorithm is proposed and implemented for the design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in the practical design optimization of truss structures.
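
    The standard BB-BC iteration the article refines alternates two phases: a "big crunch" that collapses the population to a fitness-weighted centre of mass, and a "big bang" that scatters new candidates around that centre with a radius shrinking over the iterations. The sketch below shows the plain continuous version on a toy function; the article's problem is discrete truss sizing, and all names and parameters here are illustrative assumptions.

```python
import random

def big_bang_big_crunch(cost, bounds, n=40, iters=100, seed=0):
    """Minimal continuous BB-BC sketch: crunch to a cost-weighted centre of
    mass, then bang new candidates around it with a 1/k shrinking radius."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    best, best_cost = None, float("inf")
    for k in range(1, iters + 1):
        costs = [cost(p) for p in pop]
        for p, c in zip(pop, costs):
            if c < best_cost:
                best, best_cost = p[:], c
        # Big crunch: centre of mass weighted by inverse cost.
        w = [1.0 / (c + 1e-12) for c in costs]
        W = sum(w)
        centre = [sum(wi * p[d] for wi, p in zip(w, pop)) / W
                  for d in range(dim)]
        # Big bang: scatter around the centre, radius shrinking as 1/k.
        pop = []
        for _ in range(n):
            cand = []
            for d, (lo, hi) in enumerate(bounds):
                x = centre[d] + rng.gauss(0, 1) * (hi - lo) / (2 * k)
                cand.append(min(max(x, lo), hi))
            pop.append(cand)
    return best, best_cost

# Toy check on a convex function with its minimum at (3, -2).
f = lambda x: (x[0] - 3) ** 2 + (x[1] + 2) ** 2
sol, val = big_bang_big_crunch(f, [(-10, 10), (-10, 10)])
```

    For discrete sizing, each coordinate would be snapped to the nearest entry of a section table before evaluating `cost`; the premature-shrinkage of the 1/k radius on such landscapes is one motivation for the article's reformulation.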

  15. An Efficient Format for Nearly Constant-Time Access to Arbitrary Time Intervals in Large Trace Files

    Directory of Open Access Journals (Sweden)

    Anthony Chan

    2008-01-01

    A powerful method to aid in understanding the performance of parallel applications uses log or trace files containing time-stamped events and states (pairs of events). These trace files can be very large, often hundreds or even thousands of megabytes. Because of the cost of accessing and displaying such files, other methods are often used that reduce the size of the trace files at the cost of sacrificing detail or other information. This paper describes a hierarchical trace file format that provides for the display of an arbitrary time window in a time independent of the total size of the file and roughly proportional to the number of events within the time window. This format eliminates the need to sacrifice data to achieve a smaller trace file size (since storage is inexpensive, it is necessary only to make efficient use of bandwidth to that storage). The format can be used to organize a trace file or to create a separate file of annotations that may be used with conventional trace files. We present an analysis of the time to access all of the events relevant to an interval of time, and we describe experiments demonstrating the performance of this file format.
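
    The access-cost property claimed above (time roughly proportional to the number of events in the window, plus a logarithmic search) can be illustrated on a flat, time-sorted event list. The hierarchical annotations of the actual format are omitted here, and the trace data is invented.

```python
import bisect

def events_in_window(events, t0, t1):
    """Return events with t0 <= t < t1 in O(log n + k) time via binary search
    over a time-sorted list of (timestamp, payload) pairs."""
    times = [t for t, _ in events]   # in practice, keep this index precomputed
    lo = bisect.bisect_left(times, t0)
    hi = bisect.bisect_left(times, t1)
    return events[lo:hi]

trace = [(0.5, "send"), (1.2, "recv"), (2.0, "barrier"),
         (3.7, "send"), (4.1, "recv")]
window = events_in_window(trace, 1.0, 4.0)
# -> [(1.2, 'recv'), (2.0, 'barrier'), (3.7, 'send')]
```

    The paper's format layers a hierarchy on top of this idea so that states spanning the window boundaries are also found without scanning the whole file.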

  16. Translator for Optimizing Fluid-Handling Components

    Science.gov (United States)

    Landon, Mark; Perry, Ernest

    2007-01-01

    A software interface has been devised to facilitate optimization of the shapes of valves, elbows, fittings, and other components used to handle fluids under extreme conditions. This software interface translates data files generated by PLOT3D (a NASA grid-based plotting-and-data-display program) and by computational fluid dynamics (CFD) software into a format in which the files can be read by Sculptor, which is a shape-deformation-and-optimization program. Sculptor enables the user to interactively, smoothly, and arbitrarily deform the surfaces and volumes in two- and three-dimensional CFD models. Sculptor also includes design-optimization algorithms that can be used in conjunction with the arbitrary-shape-deformation components to perform automatic shape optimization. In the optimization process, the output of the CFD software is used as feedback while the optimizer strives to satisfy design criteria that could include, for example, improved values of pressure loss, velocity, flow quality, mass flow, etc.

  17. Method for data compression by associating complex numbers with files of data values

    Science.gov (United States)

    Feo, John Thomas; Hanks, David Carlton; Kraay, Thomas Arthur

    1998-02-10

    A method for compressing data for storage or transmission. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file.
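
    The root-finding step at the heart of the RGDF scheme can be sketched with Newton's method: each complex point is iterated until it converges to one of the polynomial's roots, and the value attached to that root becomes the entry. The root locations, value map, and function names below are illustrative assumptions, not taken from the patent.

```python
def newton_root_index(z, roots, iters=50, tol=1e-9):
    """Iterate Newton's method on p(z) = prod(z - r) and report which root
    (index into `roots`) the starting point converges to."""
    for _ in range(iters):
        p = 1.0 + 0j    # polynomial value, accumulated Horner-style
        dp = 0.0 + 0j   # derivative, accumulated via the product rule
        for r in roots:
            dp = dp * (z - r) + p
            p *= (z - r)
        if abs(dp) < tol:
            break
        z -= p / dp
    # The nearest root wins once the iteration has settled.
    return min(range(len(roots)), key=lambda i: abs(z - roots[i]))

# Hypothetical root-generated data file: cube roots of unity, values 0/1/2.
roots = [1 + 0j, -0.5 + 0.8660254j, -0.5 - 0.8660254j]
values = [0, 1, 2]
entry = values[newton_root_index(0.9 + 0.1j, roots)]   # near 1+0j -> 0
```

    Decompression maps each entry's plane coordinates through this iteration; compression is the harder inverse search the patent describes, finding roots and a value map that regenerate a target file.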

  18. Size, shape, and topology optimization of planar and space trusses using mutation-based improved metaheuristics

    Directory of Open Access Journals (Sweden)

    Ghanshyam G. Tejani

    2018-04-01

    In this study, simultaneous size, shape, and topology optimization of planar and space trusses is investigated. The trusses are subjected to constraints on element stresses, nodal displacements, and kinematic stability conditions. Truss Topology Optimization (TTO) removes superfluous elements and nodes from the ground structure. In this method, difficulties arise from unacceptable and singular topologies; therefore, Grubler's criterion and positive definiteness are used to handle such issues. Moreover, TTO is challenging due to its search space, which is implicit, non-convex and non-linear, and often leads to divergence. Therefore, mutation-based metaheuristics are proposed to investigate these problems. This study compares the performance of four improved metaheuristics (viz. Improved Teaching-Learning-Based Optimization (ITLBO), Improved Heat Transfer Search (IHTS), Improved Water Wave Optimization (IWWO), and Improved Passing Vehicle Search (IPVS)) and four basic metaheuristics (viz. TLBO, HTS, WWO, and PVS) in solving structural optimization problems. Keywords: Structural optimization, Mutation operator, Improved metaheuristics, Modified algorithms, Truss topology optimization

  19. A Joint Optimal Decision on Shipment Size and Carbon Reduction under Direct Shipment and Peddling Distribution Strategies

    Directory of Open Access Journals (Sweden)

    Daiki Min

    2017-11-01

    Recently, much research has focused on lowering carbon emissions in logistics. This paper contributes to the literature on joint shipment size and carbon reduction decisions by developing novel models for distribution systems under direct shipment and peddling distribution strategies. Unlike studies that have simply investigated the effects of carbon costs on operational decisions, we address how to reduce carbon emissions and logistics costs by adjusting shipment size and making an optimal decision on carbon reduction investment. An optimal decision is made by analyzing the distribution cost, including not only logistics and carbon trading costs but also the cost of adjusting carbon emission factors. Prior research has not explicitly considered the two sources of carbon emissions, so we develop a model covering the differences in managing carbon emissions from transportation and storage. Structural analysis shows how to determine an optimal shipment size and emission factors in closed form. Moreover, we analytically prove the possibility of reducing the distribution cost and carbon emissions at the same time. Numerical analysis validates the results and demonstrates some interesting findings on carbon and distribution cost reduction.
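
    A closed-form shipment size of the kind such a structural analysis yields can be illustrated with an EOQ-style model in which the carbon price is folded into the per-shipment and holding cost terms. The formula and every parameter value below are illustrative assumptions, not the paper's exact model.

```python
import math

def optimal_shipment_size(demand, fixed_cost, holding_cost,
                          carbon_price, e_ship, e_hold):
    """EOQ-style sketch with carbon trading folded into the cost terms:
    Q* = sqrt(2 * D * (K + p*e_ship) / (h + p*e_hold)), where e_ship is
    emissions per shipment and e_hold emissions per unit held per period."""
    return math.sqrt(2 * demand * (fixed_cost + carbon_price * e_ship)
                     / (holding_cost + carbon_price * e_hold))

# Toy parameters: annual demand 1200 units, $50 per shipment, $2/unit holding,
# $25/tCO2 carbon price, 0.4 t per shipment, 0.01 t per unit-year stored.
Q = optimal_shipment_size(demand=1200, fixed_cost=50.0, holding_cost=2.0,
                          carbon_price=25.0, e_ship=0.4, e_hold=0.01)   # ≈ 253
```

    Note how the carbon price pulls in both directions, a larger per-shipment emission raises Q* (fewer, fuller shipments) while a larger holding emission lowers it, which is the trade-off the joint decision balances.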

  20. SU-F-T-172: A Method for Log File QA On An IBA Proteus System for Patient Specific Spot Scanning Quality Assurance

    Energy Technology Data Exchange (ETDEWEB)

    Tang, S; Ho, M; Chen, C; Mah, D [ProCure NJ, Somerset, NJ (United States); Rice, I; Doan, D; Mac Rae, B [IBA, Somerset, NJ (United States)

    2016-06-15

    Purpose: The use of log files to perform patient specific quality assurance for both protons and IMRT has been established. Here, we extend that approach to a proprietary log file format and compare our results to measurements in phantom. Our goal was to generate a system that would permit gross errors to be found within 3 fractions, before direct measurements could be made. This approach could eventually replace direct measurements. Methods: Spot scanning protons pass through multi-wire ionization chambers, which provide information about the charge, location, and size of each delivered spot. We have generated a program that calculates the dose in phantom from these log files and compares it with the plan. The program has 3 different spot shape models: single Gaussian, double Gaussian and the ASTROID model. The program was benchmarked across different treatment sites for 23 patients and 74 fields. Results: The doses calculated from the log files were compared to those generated by the treatment planning system (RayStation). While the double Gaussian model often gave better agreement, overall, the ASTROID model gave the most consistent results. Using a 5%-3 mm gamma with a 90% passing criterion, and excluding doses below 20% of prescription, all patient samples passed. However, the degree of agreement of the log file approach was slightly worse than that of the chamber array measurement approach. Operationally, this implies that if the beam passes the log file QA, it should pass direct measurement. Conclusion: We have established and benchmarked a model for log file QA on an IBA Proteus Plus system. The choice of optimal spot model for a given class of patients may be affected by factors such as site, field size, and range shifter, and will be investigated further.
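
    The 5%-3 mm gamma comparison used above can be sketched in one dimension: each measured point passes if some reference point is simultaneously close in dose (relative to the global maximum) and close in position. This is a generic global-gamma illustration with invented profiles and names, not the clinical program.

```python
import math

def gamma_1d(ref, meas, spacing, dd=0.05, dta=3.0):
    """Per-point 1D gamma index: global dose-difference criterion `dd`
    (fraction of max reference dose) and distance-to-agreement `dta` in mm;
    a point passes when its gamma is <= 1."""
    dmax = max(ref)
    out = []
    for i, dm in enumerate(meas):
        g = min(math.hypot((dm - dr) / (dd * dmax),
                           (i - j) * spacing / dta)
                for j, dr in enumerate(ref))
        out.append(g)
    return out

# Toy dose profiles on a 1 mm grid (same shape, small dose perturbations).
ref  = [0.0, 0.2, 0.90, 1.00, 0.90, 0.2, 0.0]
meas = [0.0, 0.2, 0.88, 1.02, 0.92, 0.2, 0.0]
gammas = gamma_1d(ref, meas, spacing=1.0)
passing = sum(g <= 1 for g in gammas) / len(gammas)
```

    A clinical implementation works on 2D/3D dose grids, interpolates the reference between points, and applies the low-dose cutoff mentioned in the abstract before computing the passing rate.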

  1. Comparison of bi-level optimization frameworks for sizing and control of a hybrid electric vehicle

    NARCIS (Netherlands)

    Silvas, E.; Bergshoeff, N.D.; Hofman, T.; Steinbuch, M.

    2015-01-01

This paper discusses the integrated design problem of determining the power specifications of the main subsystems (sizing) together with the supervisory control (energy management). Different bi-level optimization methods are compared, with the outer loop using algorithms such as Genetic Algorithms, Sequential
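
The nested structure of such a bi-level framework can be sketched as follows: an outer loop searches over sizing candidates while an inner loop evaluates each candidate with an energy-management policy. Everything here (component costs, the C/2 discharge limit, the 35% engine efficiency, the greedy controller) is invented for illustration and does not reproduce the paper's models.

```python
def inner_control_cost(batt_kwh, demand_kw):
    """Inner loop: greedy energy management. The battery serves the load up
    to a C/2 power limit, the engine covers the rest. Returns engine fuel
    energy used (kWh) over an hourly demand profile."""
    soc = batt_kwh  # start with a full battery
    fuel = 0.0
    for p in demand_kw:
        from_batt = min(p, soc, 0.5 * batt_kwh)  # power and energy limits
        soc -= from_batt
        fuel += (p - from_batt) / 0.35  # assumed 35% engine efficiency
    return fuel

def outer_sizing(demand_kw, sizes_kwh, batt_cost_per_kwh=120.0,
                 fuel_cost_per_kwh=0.6):
    """Outer loop: pick the battery size minimizing component plus fuel cost
    (exhaustive search standing in for a Genetic Algorithm)."""
    return min(sizes_kwh,
               key=lambda s: s * batt_cost_per_kwh
               + fuel_cost_per_kwh * inner_control_cost(s, demand_kw))
```

In the paper the outer loop is a metaheuristic and the inner loop an optimal-control problem; the exhaustive search and greedy rule here only show how the two levels nest.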

  2. Optimized sizing model for renewable energy systems in rural areas; Modelo de dimensionamento otimizado para sistemas energeticos renovaveis em ambiente rurais

    Energy Technology Data Exchange (ETDEWEB)

    Nogueira, Carlos E.C. [UNIOESTE, Cascavel, PR (Brazil). Centro de Ciencias Exatas e Tecnologicas]. E-mail: cecn@correios.net.br; Zuern, Hans H. [Santa Catarina Univ., Florianopolis, SC (Brazil). Dept. de Engenharia Eletrica

    2005-05-15

The purpose of this research was to develop a methodology for sizing integrated renewable energy systems for rural areas, using simulation and optimization tools developed in MATLAB 6.0. The sizing model produces a system with minimum cost and a high reliability level, based on the concept of loss of power supply probability (LPSP) applied to consecutive hours. An optimization model is presented, and three different sizing scenarios are calculated and compared, showing flexibility in the elaboration of different project conceptions. The results show a complete sizing of the energy conversion devices and a long-term cost evaluation. (author)
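
The LPSP reliability concept used by the sizing model can be illustrated with a toy hourly simulation. The battery efficiency, initial state of charge, and series values below are invented; the paper's formulation, which tracks deficits over consecutive hours, is more detailed.

```python
def lpsp(pv_kwh, load_kwh, batt_capacity_kwh, eff=0.9):
    """Loss of power supply probability over an hourly series: the fraction
    of hours in which PV plus battery cannot cover the load."""
    soc = 0.0  # battery assumed empty at the start (pessimistic)
    unmet_hours = 0
    for pv, load in zip(pv_kwh, load_kwh):
        balance = pv - load
        if balance >= 0:
            soc = min(batt_capacity_kwh, soc + eff * balance)  # store surplus
        elif soc + balance >= 0:
            soc += balance  # discharge to cover the deficit
        else:
            soc = 0.0
            unmet_hours += 1  # loss of power supply this hour
    return unmet_hours / len(load_kwh)
```

A sizing loop would then search for the cheapest (array, battery) pair keeping lpsp(...) below the target reliability level.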

3. How do Hedstrom files fail during clinical use? A retrieval study based on SEM, optical microscopy and micro-XCT analysis.

    Science.gov (United States)

    Zinelis, Spiros; Al Jabbari, Youssef S

    2018-05-01

This study was conducted to evaluate the failure mechanism of clinically failed Hedstrom (H-) files. Discarded H-files (n=160) from ISO sizes #08 to #40 were collected from different dental clinics. Retrieved files were classified according to their macroscopic appearance and investigated under scanning electron microscopy (SEM) and X-ray micro-computed tomography (micro-XCT). The files were then embedded in resin along their longitudinal axis and, after metallographic grinding and polishing, studied under an incident light microscope. The macroscopic evaluation showed that small ISO sizes (#08-#15) failed by extensive plastic deformation, while larger sizes (≥#20) tended to fracture. Light microscopy and micro-XCT results coincided, showing that unused and plastically deformed files were free of internal defects, while fractured files demonstrated intense cracking in the flute region. SEM analysis revealed striations attributed to a fatigue mechanism. Secondary cracks were also identified by optical microscopy, and their distribution was consistent with fatigue under bending loading. The experimental results demonstrated that while overloading of the cutting instrument is the predominant failure mechanism for small file sizes (#08-#15), fatigue should be considered the fracture mechanism for larger sizes (≥#20).

  4. The Optimal Inhomogeneity for Superconductivity: Finite Size Studies

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, W-F.

    2010-04-06

We report the results of exact diagonalization studies of Hubbard models on a 4×4 square lattice with periodic boundary conditions and various degrees and patterns of inhomogeneity, represented by inequivalent hopping integrals t and t′. We focus primarily on two patterns, the checkerboard and the striped cases, for a large range of values of the on-site repulsion U and doped hole concentration, x. We present evidence that superconductivity is strongest for U of order the bandwidth, and intermediate inhomogeneity, 0 < t′ < t. The maximum value of the 'pair-binding energy' we have found with purely repulsive interactions is Δ_pb = 0.32t for the checkerboard Hubbard model with U = 8t and t′ = 0.5t. Moreover, for near-optimal values, our results are insensitive to changes in boundary conditions, suggesting that the correlation length is sufficiently short that finite size effects are already unimportant.

  5. Reliability analysis of a replication with limited number of journaling files

    International Nuclear Information System (INIS)

    Kimura, Mitsutaka; Imaizumi, Mitsuhiro; Nakagawa, Toshio

    2013-01-01

Recently, replication mechanisms using journaling files have been widely used in server systems. We have already discussed a model of an asynchronous replication system using journaling files [8]. This paper formulates a stochastic model of a server system with replication that takes into account the number of transmitted journaling files. The server updates the storage database and transmits a journaling file whenever a client requests a data update. The server transmits the database content to a backup site either at a constant time or after a constant number of transmitted journaling files. We derive the expected numbers of replications and of transmitted journaling files, calculate the expected cost, and discuss the optimal replication interval that minimizes it. Finally, numerical examples are given.
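
A generic checkpointing-style cost model (not the paper's equations; the cost constants are invented) shows how an optimal replication interval arises: replicating after every n journaling files amortizes the replication cost over n updates but increases the expected replay work for outstanding journals.

```python
def avg_cost(n, c_rep, c_log):
    """Average cost per update when replicating after every n journaling
    files: replication cost amortized over n updates, plus an expected
    replay cost growing linearly with the number of outstanding journals."""
    return c_rep / n + c_log * (n + 1) / 2.0

def optimal_interval(c_rep, c_log, n_max=10_000):
    """Brute-force minimizer of the average cost over integer intervals."""
    return min(range(1, n_max + 1), key=lambda n: avg_cost(n, c_rep, c_log))
```

The brute-force search agrees with the closed-form minimizer n* ≈ sqrt(2·c_rep/c_log) of the continuous relaxation, the classic checkpoint-interval result.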

  6. The NGDC Seafloor Sediment Grain Size Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NGDC (now NCEI) Seafloor Sediment Grain Size Database contains particle size data for over 17,000 seafloor samples worldwide. The file was begun by NGDC in 1976...

7. Optimizing the particle size of coal for CWM in view of fluidity. [Bimodal

    Energy Technology Data Exchange (ETDEWEB)

    Matsuo, Seiji; Nonaka, Michio; Okano, Yasuhiko; Inoue, Toshio

    1987-10-25

As is well known, the viscosity of CWM is considerably influenced by the distribution of coal particle sizes, which in turn bears on particle packing density (porosity). A model representing the viscosity of CWM in terms of particle porosity and specific surface was designed, and experimental verification was conducted for a method of optimizing particle size on a two-stage grinding system. The results are as follows: the viscosity of CWM is influenced not only by the porosity of the coal particles but also by their specific surface, and it is correlated to the distance between suspended particles. In the two-stage grinding experiments, a particle size distribution leading to a low viscosity was obtained by mixing coarse and fine particles at 4:1, demonstrating that the use of an agitating mill for the fine particles is helpful. (11 figs, 2 tabs, 6 refs)

  8. Combustion of palm kernel shell in a fluidized bed: Optimization of biomass particle size and operating conditions

    International Nuclear Information System (INIS)

    Ninduangdee, Pichet; Kuprianov, Vladimir I.

    2014-01-01

Highlights: • Safe burning of palm kernel shell is achievable in a FBC using alumina as the bed material. • Thermogravimetric analysis of the shell with different particle sizes is performed. • Optimal values of the shell particle size and excess air lead to the minimum emission costs. • Combustion efficiency of 99.4-99.7% is achievable when operated under optimal conditions. • CO and NO emissions of the FBC are at levels substantially below national emission limits. - Abstract: This work presents a study on the combustion of palm kernel shell (PKS) in a conical fluidized-bed combustor (FBC) using alumina sand as the bed material to prevent bed agglomeration. Prior to the combustion experiments, a thermogravimetric analysis was performed in nitrogen and dry air to investigate the effects of biomass particle size on the thermal and combustion reactivity of PKS. During the combustion tests, the biomass with different mean particle sizes (1.5 mm, 4.5 mm, 7.5 mm, and 10.5 mm) was burned at a 45 kg/h feed rate, while excess air was varied from 20% to 80%. Temperature and gas concentrations (O2, CO, CxHy as CH4, and NO) were recorded along the axial direction in the reactor as well as at the stack. The experimental results indicated that the biomass particle size and excess air had substantial effects on the behavior of gaseous pollutants (CO, CxHy, and NO) in different regions inside the reactor, as well as on the combustion efficiency and emissions of the conical FBC. The CO and CxHy emissions can be effectively controlled by decreasing the feedstock particle size and/or increasing excess air, whereas the NO emission can be mitigated using coarser biomass particles and/or lower excess air. A cost-based approach was applied to determine the optimal values of biomass particle size and excess air, ensuring minimum emission costs of burning the biomass in the proposed combustor. From the optimization analysis, the best combustion and emission performance of the

  9. Utilization of Supercapacitors in Adaptive Protection Applications for Resiliency against Communication Failures: A Size and Cost Optimization Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Habib, Hany F [Florida Intl Univ., Miami, FL (United States); El Hariri, Mohamad [Florida Intl Univ., Miami, FL (United States); Elsayed, Ahmed [Florida Intl Univ., Miami, FL (United States); Mohammed, Osama [Florida Intl Univ., Miami, FL (United States)

    2017-03-30

Microgrids' adaptive protection techniques rely on communication signals from the point of common coupling to adjust the corresponding relays' settings for either grid-connected or islanded modes of operation. However, during communication outages or in the event of a cyberattack, relay settings are not changed, and adaptive protection schemes are rendered unsuccessful. Due to their fast response, supercapacitors, which are present in the microgrid to feed pulse loads, could also be utilized to enhance the resiliency of adaptive protection schemes to communication outages. Proper sizing of the supercapacitors is therefore important in order to maintain stable system operation and also regulate the protection scheme's cost. This paper presents a two-level optimization scheme for minimizing the supercapacitor size along with optimizing its controllers' parameters. The latter leads to a reduction of the supercapacitor's fault current contribution and an increase in that of the other AC resources in the microgrid in the extreme case of a fault occurring simultaneously with a pulse load. It was also shown that the size of the supercapacitor can be reduced if the pulse load is temporarily disconnected during the transient fault period. Simulations showed that the supercapacitor size and controller parameters resulting from the proposed two-level optimization scheme fed enough fault current for different types of faults while minimizing the cost of the protection scheme.

  10. Transport in ratchets with single-file constraint

    Indian Academy of Sciences (India)

    file diffusion” in the literature. Anomaly in protein molecule diffusion on a cell ... tic media [20]. The rectification effect vanishes in the adiabatically slow modulation limit and optimizes in a driving frequency range. In other related work on flash-.

  11. Optimal sizing method for constituent elements of stand-alone photovoltaic power systems

    Energy Technology Data Exchange (ETDEWEB)

    Otsuka, Hirotada; Oi, Yoichi [Hokuriku Electric Power Co., Inc. Toyama (Japan)

    1988-12-25

The purpose of the report was to calculate the optimal sizes of the constituent elements of stand-alone photovoltaic power systems, based on previously measured distributions of global radiation on an inclined surface (hereinafter called flux of solar radiation) and on the size of the load to be supplied. The least power generation cost was calculated, assuming a load of 176 kWh/month and a loss of load probability (LOLP) of 1%, using the actual amount of solar radiation in May 1985. The cost was divided into two components: one proportional to the size of the solar cell array, the other proportional to the battery capacity. From these, the cost of twenty-year operation (TLC) was calculated. The array size and battery capacity which minimize the cost can be determined by differentiating TLC. Since no auxiliary power source is attached to this system, it is necessary to restrict the load in order to cope with electric power shortages. At 1984 construction costs, the standard model with the least power generation cost is a photovoltaic system with an array size of A = 49.0 m² and a battery capacity of Q = 568 Ah. 4 refs., 9 figs., 10 tabs.
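
The structure of such a sizing calculation, a cost linear in array area A and battery capacity Q minimized subject to a reliability constraint, can be sketched with a toy grid search standing in for the differentiation of TLC. All prices, efficiencies and the autonomy rule below are invented and do not reproduce the paper's A = 49.0 m², Q = 568 Ah result.

```python
def tlc_optimal_sizing(load_kwh_month=176.0, sun_kwh_m2_day=5.5,
                       panel_eff=0.10, autonomy_days=2.0,
                       cost_per_m2=900.0, cost_per_kwh_storage=150.0):
    """Grid search for array area A (m^2) and battery capacity Q (kWh)
    minimizing a cost linear in A and Q, subject to a daily energy balance
    and an autonomy constraint standing in for the LOLP <= 1% criterion.
    Returns (cost, A, Q) or None if infeasible."""
    daily_load = load_kwh_month / 30.0
    best = None
    for a10 in range(1, 1001):               # A from 0.1 to 100 m^2 in 0.1 steps
        a = a10 / 10.0
        if a * sun_kwh_m2_day * panel_eff < daily_load:
            continue                          # cannot meet the average daily load
        q = autonomy_days * daily_load        # smallest battery meeting autonomy
        cost = cost_per_m2 * a + cost_per_kwh_storage * q
        if best is None or cost < best[0]:
            best = (cost, a, q)
    return best
```

With a linear cost the search simply picks the smallest feasible array; the paper's analytic treatment differentiates TLC instead.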

  12. Optimal Lot Sizing with Scrap and Random Breakdown Occurring in Backorder Replenishing Period

    OpenAIRE

    Ting, Chia-Kuan; Chiu, Yuan-Shyi; Chan, Chu-Chai

    2011-01-01

This paper is concerned with the determination of the optimal lot size for an economic production quantity model with scrap and random breakdowns occurring in the backorder replenishing period. In most real-life manufacturing systems, the generation of defective items and random breakdowns of production equipment are inevitable. To deal with stochastic machine failures, production planners typically calculate the mean time between failures (MTBF) and establish a robust plan accordingly, in terms of opt...

  13. Dynamic Non-Hierarchical File Systems for Exascale Storage

    Energy Technology Data Exchange (ETDEWEB)

    Long, Darrell E. [Univ. of California, Santa Cruz, CA (United States); Miller, Ethan L [Univ. of California, Santa Cruz, CA (United States)

    2015-02-24

    appliances. These search applications are often optimized for a single file system, making it difficult to move files and their metadata between file systems. Users have tried to solve this problem in several ways, including the use of separate databases to index file properties, the encoding of file properties into file names, and separately gathering and managing provenance data, but none of these approaches has worked well, either due to limited usefulness or scalability, or both. Our research addressed several key issues: High-performance, real-time metadata harvesting: extracting important attributes from files dynamically and immediately updating indexes used to improve search; Transparent, automatic, and secure provenance capture: recording the data inputs and processing steps used in the production of each file in the system; Scalable indexing: indexes that are optimized for integration with the file system; Dynamic file system structure: our approach provides dynamic directories similar to those in semantic file systems, but these are the native organization rather than a feature grafted onto a conventional system. In addition to these goals, our research effort will include evaluating the impact of new storage technologies on the file system design and performance. In particular, the indexing and metadata harvesting functions can potentially benefit from the performance improvements promised by new storage class memories.

  14. Hybrid cryptosystem for image file using elgamal and double playfair cipher algorithm

    Science.gov (United States)

    Hardi, S. M.; Tarigan, J. T.; Safrina, N.

    2018-03-01

In this paper, we present an implementation of image file encryption using hybrid cryptography. We chose the ElGamal algorithm for asymmetric encryption and Double Playfair for symmetric encryption. Our objective is to show that these algorithms are capable of encrypting an image file with an acceptable running time and encrypted file size while maintaining the level of security. The application was built in the C# programming language and runs as a stand-alone desktop application under the Windows operating system. Our tests show that the system can encrypt an image with a resolution of 500×500 to a size of 976 kilobytes within an acceptable running time.
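
The shape of such a hybrid scheme, an asymmetric cipher transporting a session key that a symmetric cipher then uses on the bulk data, can be sketched with a textbook ElGamal key transport plus a placeholder XOR stream standing in for Double Playfair. The 127-bit group and every parameter below are toy values for illustration only; real systems use far larger groups and a real symmetric cipher.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; toy size, for illustration only
G = 3

def elgamal_keygen():
    x = secrets.randbelow(P - 2) + 1       # private key
    return x, pow(G, x, P)                 # (private, public = G^x mod P)

def elgamal_encrypt(pub, m):
    k = secrets.randbelow(P - 2) + 1       # ephemeral key
    return pow(G, k, P), (m * pow(pub, k, P)) % P

def elgamal_decrypt(priv, c1, c2):
    # m = c2 * c1^(-priv) mod P, using Fermat: c1^(P-1-priv) = c1^(-priv)
    return (c2 * pow(c1, P - 1 - priv, P)) % P

def xor_stream(key_int, data):
    """Placeholder symmetric cipher standing in for Double Playfair:
    repeat the key bytes and XOR (encryption == decryption)."""
    key = key_int.to_bytes(16, "big")
    return bytes(b ^ key[i % 16] for i, b in enumerate(data))

def hybrid_encrypt(pub, image_bytes):
    session_key = secrets.randbelow(2**120) + 1   # random symmetric key
    c1, c2 = elgamal_encrypt(pub, session_key)    # key transported via ElGamal
    return c1, c2, xor_stream(session_key, image_bytes)

def hybrid_decrypt(priv, c1, c2, blob):
    return xor_stream(elgamal_decrypt(priv, c1, c2), blob)
```

The asymmetric step encrypts only the short session key, so the bulk image cost is that of the symmetric pass, which is the point of the hybrid design.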

  15. Multi-level, automatic file management system using magnetic disk, mass storage system and magnetic tape

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1979-12-01

A simple, effective file management system using magnetic disk, mass storage system (MSS) and magnetic tape is described. The following concepts and techniques are introduced in this file management system. (1) The distribution of files and the continuity of file references are closely approximated by a memory retention function; a density function based on this memory retention function is defined. (2) A method of computing the cost/benefit lines for magnetic disk, MSS and magnetic tape is presented. (3) A decision process for an optimal organization of file facilities, incorporating the distribution of file demands to the respective file devices, is presented. (4) A simple, practical, effective method of automatic file management, incorporating multi-level file management, space management and file migration control, is proposed. (author)
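
The retention-function idea behind such migration control can be sketched as a recency score with exponential decay driving device placement. The half-life and thresholds below are invented, and the paper's actual density function differs; this only illustrates the disk/MSS/tape tiering decision.

```python
import math

def retention_score(access_times, now, half_life_days=30.0):
    """Memory-retention-style score: each past reference contributes a
    weight that decays exponentially with its age."""
    lam = math.log(2) / (half_life_days * 86400.0)
    return sum(math.exp(-lam * (now - t)) for t in access_times)

def assign_device(score, disk_threshold=1.0, mss_threshold=0.1):
    """Map a file's retention score to a storage tier."""
    if score >= disk_threshold:
        return "disk"   # hot: keep on magnetic disk
    if score >= mss_threshold:
        return "mss"    # warm: mass storage system
    return "tape"       # cold: migrate to magnetic tape
```

With a 30-day half-life, a single reference 60 days old scores 0.25 (two half-lives), which lands the file in the MSS tier under these thresholds.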

  16. Image files - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Archived gel image files from the RPD database. Download: ftp://ftp.biosciencedbc.jp/archive/rpd/LATEST/rpd_gel_image.zip (file size: 38.5 MB). License, update history and site policy are available on the archive site.

  17. Design and Implementation of a Metadata-rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

Despite continual improvements in the performance and reliability of large-scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large-scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first-class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPath-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user-metadata-intensive operations, and superior performance on normal file metadata operations.
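
The graph data model described above, files as nodes carrying user-defined attributes, with typed relationships between them, can be sketched in a few lines. This is a hypothetical miniature, not the actual QFS or Quasar API.

```python
class MetadataGraph:
    """Toy metadata-rich store: files are nodes with attribute dicts,
    links are (source, relation, target) triples."""

    def __init__(self):
        self.attrs = {}   # file name -> {attribute: value}
        self.links = []   # (source, relation, target)

    def add_file(self, name, **attributes):
        self.attrs[name] = attributes

    def link(self, src, relation, dst):
        self.links.append((src, relation, dst))

    def query(self, relation=None, **wanted):
        """Files whose attributes match `wanted`, optionally restricted to
        sources of a given relationship type (a Quasar-flavoured query)."""
        hits = {f for f, a in self.attrs.items()
                if all(a.get(k) == v for k, v in wanted.items())}
        if relation is not None:
            hits &= {s for s, r, _ in self.links if r == relation}
        return sorted(hits)
```

Here attribute predicates and relationship traversal combine in one query, which is the kind of operation the separate file-system-plus-database standard handles awkwardly.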

  18. Optimization of Blending Parameters and Fiber Size of Kenaf-Bast-Fiber-Reinforced the Thermoplastic Polyurethane Composites by Taguchi Method

    Directory of Open Access Journals (Sweden)

    Y. A. El-Shekeil

    2013-01-01

Kenaf-fiber- (KF-) reinforced thermoplastic polyurethane (TPU) composites were prepared by the melt-blending method followed by compression molding. Composite specimens were cut from the compression-molded sheets. The optimization criterion was the ultimate tensile strength measured in tensile tests. The aim of this study is to optimize the processing parameters (processing temperature, time, and speed) and fiber size using the Taguchi approach. These four parameters were investigated at three levels each, and the L9 orthogonal array was used based on the numbers of parameters and levels selected. Furthermore, analysis of variance (ANOVA) was used to determine the significance of the different parameters. The results showed that the optimum values were 180°C, 50 rpm, 13 min, and 125-300 microns for processing temperature, processing speed, processing time, and fiber size, respectively. According to the ANOVA, processing temperature had the highest significance, followed by fiber size; processing time and speed did not show any significant effect on the optimization of TPU/KF.

  19. Distributing file-based data to remote sites within the BABAR collaboration

    International Nuclear Information System (INIS)

    Adye, T.; Dorigo, A.; Forti, A.; Leonardi, E.

    2001-01-01

BABAR uses two formats for its data: Objectivity database and ROOT files. This poster concerns the distribution of the latter--for Objectivity data see. The BABAR analysis data is stored in ROOT files, one per physics run and analysis selection channel, maintained in a large directory tree. Currently BABAR has more than 4.5 TBytes in 200,000 ROOT files. This data is (mostly) produced at SLAC, but is required for analysis at universities and research centres throughout the US and Europe. Two basic problems confront us when we seek to import bulk data from SLAC to an institute's local storage via the network. We must determine which files must be imported (depending on the local site requirements and which files have already been imported), and we must make optimum use of the network when transferring the data. Basic ftp-like tools (ftp, scp, etc.) do not attempt to solve the first problem. More sophisticated tools like rsync, the widely used mirror/synchronisation program, compare local and remote file systems, checking for changes (based on file date, size and, if desired, an elaborate checksum) in order to copy only new or modified files. However, rsync allows only limited file selection. Also, when, as in BABAR, an extremely large directory structure must be scanned, rsync can take several hours just to determine which files need to be copied. Although rsync (and scp) provides on-the-fly compression, it does not allow us to optimise the network transfer by using multiple streams, adjusting the TCP window size, or separating encrypted authentication from unencrypted data channels
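
The first problem, deciding which files to import, reduces to a manifest comparison once each site publishes a path -> (size, mtime) catalogue, avoiding the full tree scan that makes rsync slow here. The manifest format and selection predicate below are invented for illustration.

```python
def files_to_import(remote, local, selected=None):
    """Compare manifests mapping path -> (size, mtime); return the paths
    that are new or changed remotely, optionally filtered by a site's
    selection predicate."""
    out = []
    for path, meta in remote.items():
        if selected is not None and not selected(path):
            continue  # this site does not want this channel/run
        if local.get(path) != meta:  # missing locally, or size/mtime differ
            out.append(path)
    return sorted(out)
```

The actual transfer of the resulting list can then be handed to a tool tuned for throughput (multiple streams, adjusted TCP windows), which is the second problem in the abstract.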

  20. Sizing and Optimization for Hybrid Central in South Algeria Based on Three Different Generators

    Directory of Open Access Journals (Sweden)

    Chouaib Ammari

    2017-11-01

In this paper, we size an optimal hybrid plant combining two renewable generators (solar photovoltaic and wind power) with a diesel generator and a storage system. New generating plants have begun to adopt green power technology for a better future, so this plant will use all the available green power resources and distribute energy to a small isolated village in southwest Algeria named Timiaouine. The consumption of this village is estimated in detail for two seasons: low consumption (winter) and high consumption (summer). The hybrid plant is optimized with the Hybrid Optimization Model for Electric Renewables (HOMER PRO) program, which simulates two configurations, the first with a storage system and the second without, and finally chooses the best configuration as a mixture of the economic and ecologic configurations. This plant guarantees the energy continuity of the village. Article History: Received May 18th 2017; Received in revised form July 17th 2017; Accepted Sept 3rd 2017. How to Cite This Article: Ammari, C., Hamouda, M., and Makhloufi, S. (2017) Sizing and Optimization for Hybrid Central in South Algeria Based on Three Different Generators. International Journal of Renewable Energy Development, 6(3), 263-272. http://doi.org/10.14710/ijred.6.3.263-272

  1. Optimal sizing of grid-independent hybrid photovoltaic–battery power systems for household sector

    International Nuclear Information System (INIS)

    Bianchi, M.; Branchini, L.; Ferrari, C.; Melino, F.

    2014-01-01

Highlights: • A feasibility study of a stand-alone solar-battery power generation system is carried out. • An in-house calculation code able to estimate photovoltaic panel behaviour is described. • The feasibility of replacing grid electricity with the hybrid system is examined. • Guidelines for optimal photovoltaic design are given. • Guidelines for optimal storage sizing in terms of battery number and capacity are given. - Abstract: The penetration of renewable sources into the grid, particularly wind and solar, has been increasing in recent years. As a consequence, there have been serious concerns over the reliable and safe operation of power systems. One possible solution to improve grid stability is to integrate energy storage devices into the power system network: storing energy produced in periods of low demand for later use, ensuring full exploitation of intermittent available sources. Focusing on stand-alone photovoltaic (PV) energy systems, energy storage is needed to ensure continuous power flow and to minimize or even eliminate electrical grid supply. A comprehensive study of a hybrid stand-alone photovoltaic power system using two different energy storage technologies has been performed. The study examines the feasibility of replacing electricity provided by the grid with a hybrid system to meet household demand. In particular, this paper presents first results for the photovoltaic (PV)/battery (B) hybrid configuration. The main objective is to recommend an optimal hybrid system design in terms of PV module number, PV module tilt, and number and capacity of batteries, so as to minimize or, if possible, eliminate grid supply. This paper is the early stage of a theoretical and experimental study in which two different hybrid power system configurations will be evaluated and compared: (i) a PV/B system and (ii) a PV/B/fuel cell (FC) system. The aim of the overall study will be the definition of the

  2. Optimizing supercritical antisolvent process parameters to minimize the particle size of paracetamol nanoencapsulated in L-polylactide

    Directory of Open Access Journals (Sweden)

    Kalani M

    2011-05-01

Mahshid Kalani, Robiah Yunus, Norhafizah Abdullah. Chemical and Environmental Engineering, Faculty of Engineering, Universiti Putra Malaysia, Selangor Darul Ehsan, Malaysia. Background: The aim of this study was to optimize the process parameters, including pressure, temperature, and polymer concentration, to produce fine small spherical particles with a narrow particle size distribution using a supercritical antisolvent method for drug encapsulation. The interactions between the different process parameters were also investigated. Methods and results: The optimized process parameters resulted in production of paracetamol nanoencapsulated in L-polylactide with a mean diameter of approximately 300 nm at 120 bar, 30°C, and a polymer concentration of 16 ppm. Thermogravimetric analysis illustrated the thermal characteristics of the nanoparticles. The high electrical charge on the surface of the nanoparticles caused the particles to repel each other, with the high negative zeta potential preventing flocculation. Conclusion: Our results illustrate the effect of the different process parameters on particle size and morphology, and validate results obtained via RSM statistical software. Furthermore, the in vitro drug-release profile is consistent with a Korsmeyer-Peppas kinetic model. Keywords: supercritical, antisolvent, encapsulation, nanoparticles, biodegradable polymer, optimization, drug delivery

  3. An optimization of robust SMES with specified structure H∞ controller for power system stabilization considering superconducting magnetic coil size

    International Nuclear Information System (INIS)

    Ngamroo, Issarachai

    2011-01-01

Although the superconducting magnetic energy storage (SMES) unit is a smart stabilizing device in electric power systems, its installation cost is very high; in particular, the superconducting magnetic coil, which is the critical part of the SMES, must be well designed. At the same time, the various system operating conditions give rise to system uncertainties, and an SMES power controller designed without taking such uncertainties into account may fail to stabilize the system. Considering both coil size and system uncertainties, this paper addresses the optimization of a robust SMES controller. Without the need for exact mathematical equations, the normalized coprime factorization is applied to model the system uncertainties. Based on the normalized integral square error index of the inter-area rotor angle difference and a specified-structure H∞ loop shaping optimization, the robust SMES controller with the smallest coil size can be found by a genetic algorithm. The robustness of the proposed SMES with the smallest coil size is confirmed by a simulation study.

  4. Radiology Teaching Files on the Internet

    International Nuclear Information System (INIS)

    Lim, Eun Chung; Kim, Eun Kyung

    1996-01-01

There is increasing attention to radiology teaching files on the Internet in the field of diagnostic radiology. The purpose of this study was to aid the creation of new radiology teaching files by analysing many aspects of the present radiology teaching file sites on the Internet and evaluating the images on those sites, using a Macintosh IIci computer, a 28.8 kbps TelePort Fax/Modem, and Netscape Navigator 2.0 software. The results were as follows: 1. Analysis of radiology teaching file sites. (1) The country distribution was highest for the USA (57.5%). (2) The average number of cases was 186, and 9 sites (22.5%) had a search engine. (3) Regarding the method of case arrangement, the anatomic-area type and the diagnosis type were each found at 10 sites (25%), and the question-and-answer type at 9 sites (22.5%). (4) Radiology teaching file sites covering oro-maxillofacial disorders numbered 9 (22.5%). (5) Regarding image format, the GIF format was found at 14 sites (35%) and the JPEG format at 14 sites (35%). (6) The creation year was most frequently 1995 (43.7%). (7) Continuing case upload was found at 35 sites (87.5%). 2. Evaluation of images in the radiology teaching files. (1) The average file size of the GIF format (71 KByte) was greater than that of the JPEG format (24 KByte). (P<0.001) (2) The image quality of the GIF format was better than that of the JPEG format. (P<0.001)

  5. Optimal Caching in Multicast 5G Networks with Opportunistic Spectrum Access

    KAUST Repository

    Emara, Mostafa

    2018-01-15

Cache-enabled small base station (SBS) densification is foreseen as a key component of 5G cellular networks. This architecture enables storing popular files at the network edge (i.e., in SBS caches), which empowers local communication and alleviates traffic congestion at the core/backhaul network. This paper develops a mathematical framework, based on stochastic geometry, to characterize the hit probability of a cache-enabled multicast 5G network with SBS multi-channel capabilities and opportunistic spectrum access. To this end, we first derive the hit probability by characterizing opportunistic spectrum access success probabilities, service distance distributions, and coverage probabilities. The optimal caching distribution that maximizes the hit probability is then computed. The performance and trade-offs of the derived optimal caching distributions are then assessed and compared with two widely employed caching distribution schemes, namely uniform and Zipf caching, through numerical results and extensive simulations. It is shown that Zipf caching is almost optimal only in scenarios with a large number of available channels and large cache sizes.
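
Stripped of the stochastic-geometry machinery, the intuition behind comparing Zipf and uniform caching can be shown with popularity vectors alone: a popularity-aware cache stores the head of the Zipf distribution, while a uniform cache hits with probability proportional to its size. The exponent and sizes below are arbitrary illustrations, not the paper's parameters.

```python
def zipf_popularity(n_files, gamma=0.8):
    """File request popularities following a Zipf law with exponent gamma."""
    w = [i ** -gamma for i in range(1, n_files + 1)]
    s = sum(w)
    return [x / s for x in w]

def hit_probability(pop, cache_size, policy="zipf"):
    """Hit probability when each SBS caches cache_size whole files:
    popularity-aware ("zipf") caching stores the most popular files,
    while uniform caching stores a random subset."""
    if policy == "zipf":
        return sum(sorted(pop, reverse=True)[:cache_size])
    return cache_size / len(pop)  # uniform: each file equally likely cached
```

In the paper the optimal distribution also accounts for channel availability and coverage, which is why Zipf caching is only near-optimal in the many-channel, large-cache regime.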

  6. Optimization Specifications for CUDA Code Restructuring Tool

    KAUST Repository

    Khan, Ayaz

    2017-01-01

    and convert it into an optimized CUDA kernel with user directives in a configuration file for guiding the compiler. RTCUDA also allows transparent invocation of the most optimized external math libraries like cuSparse and cuBLAS enabling efficient design

  7. Sizing optimization of skeletal structures using teaching-learning based optimization

    Directory of Open Access Journals (Sweden)

    Vedat Toğan

    2017-03-01

    Teaching Learning Based Optimization (TLBO) is one of the non-traditional techniques that translate a natural phenomenon into a numerical algorithm. TLBO mimics the teaching-learning process that occurs between a teacher and students in a classroom. A parameter known as the teaching factor, TF, appears to be the only tuning parameter in TLBO. Although the value of the teaching factor is determined by an equation, researchers have generally used a value of 1 or 2 for TF. This study explores the effect of varying the teaching factor TF on the performance of TLBO. This effect is demonstrated by solving structural optimization problems, including truss and frame structures, under stress and displacement constraints. The results indicate that varying TF in the TLBO process does not change the results obtained at the end of the optimization procedure when the computational cost of TLBO is ignored.
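
    As a rough illustration of the algorithm the abstract describes, here is a minimal TLBO sketch on a simple test function (function names and parameters are hypothetical, not the authors' implementation; with tf=None the teaching factor is drawn from {1, 2}, as discussed above):

```python
import random

def tlbo_minimize(f, dim, bounds, n_learners=20, n_iter=100, tf=None, seed=1):
    """Minimal TLBO sketch: a teacher phase and a learner phase per iteration."""
    rng = random.Random(seed)
    lo, hi = bounds

    def clip(v):
        return max(lo, min(hi, v))

    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_learners)]
    cost = [f(x) for x in pop]
    for _ in range(n_iter):
        # Teacher phase: move learners toward the current best solution.
        teacher = pop[min(range(n_learners), key=cost.__getitem__)]
        mean = [sum(x[d] for x in pop) / n_learners for d in range(dim)]
        for i in range(n_learners):
            TF = tf if tf is not None else rng.choice((1, 2))
            cand = [clip(pop[i][d] + rng.random() * (teacher[d] - TF * mean[d]))
                    for d in range(dim)]
            c = f(cand)
            if c < cost[i]:               # greedy acceptance
                pop[i], cost[i] = cand, c
        # Learner phase: move toward a better classmate, away from a worse one.
        for i in range(n_learners):
            j = rng.randrange(n_learners)
            if j == i:
                continue
            sign = 1.0 if cost[j] < cost[i] else -1.0
            cand = [clip(pop[i][d] + sign * rng.random() * (pop[j][d] - pop[i][d]))
                    for d in range(dim)]
            c = f(cand)
            if c < cost[i]:
                pop[i], cost[i] = cand, c
    best = min(range(n_learners), key=cost.__getitem__)
    return pop[best], cost[best]

sphere = lambda x: sum(v * v for v in x)
best_x, best_cost = tlbo_minimize(sphere, dim=3, bounds=(-5.0, 5.0))
```

    Passing tf=1 or tf=2 fixes the teaching factor, which is the comparison the study performs on truss and frame problems.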

  8. Bioresorbable scaffolds for bone tissue engineering: optimal design, fabrication, mechanical testing and scale-size effects analysis.

    Science.gov (United States)

    Coelho, Pedro G; Hollister, Scott J; Flanagan, Colleen L; Fernandes, Paulo R

    2015-03-01

    Bone scaffolds for tissue regeneration require an optimal trade-off between biological and mechanical criteria. Optimal designs may be obtained using topology optimization (homogenization approach) and prototypes produced using additive manufacturing techniques. However, the process from design to manufacture remains a research challenge and will be a requirement of FDA design controls for engineered scaffolds. This work investigates how the design-to-manufacture chain affects the reproducibility of complex optimized design characteristics in the manufactured product. The design and prototypes are analyzed taking into account the computational assumptions and the final mechanical properties determined through mechanical tests. The scaffold is an assembly of unit cells, and thus scale-size effects on the mechanical response considering finite periodicity are investigated and compared with the predictions from the homogenization method, which assumes in the limit infinitely repeated unit cells. Results show that a limited number of unit cells (3-5 repeated on a side) introduces some scale effects, but the discrepancies are below 10%. Higher discrepancies are found when comparing the experimental data to numerical simulations, due to differences between the manufactured and designed scaffold feature shapes and sizes as well as micro-porosities introduced by the manufacturing process. However, good regression correlations (R² > 0.85) were found between numerical and experimental values, with slopes close to 1 for 2 out of 3 designs. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  9. Inverse problem for particle size distributions of atmospheric aerosols using stochastic particle swarm optimization

    International Nuclear Information System (INIS)

    Yuan Yuan; Yi Hongliang; Shuai Yong; Wang Fuqiang; Tan Heping

    2010-01-01

    As a part of resolving optical properties in atmosphere radiative transfer calculations, this paper focuses on obtaining aerosol optical thicknesses (AOTs) in the visible and near-infrared wave band through an indirect method, by retrieving the values of aerosol particle size distribution parameters. Although various inverse techniques have been applied to obtain values for these parameters, we choose a stochastic particle swarm optimization (SPSO) algorithm to perform the inverse calculation. Computational performances of different inverse methods are investigated, and the influence of swarm size (the number of particles) on the inverse computation is examined. Next, computational efficiencies for various particle size distributions and the influences of measurement errors on computational accuracy are compared. Finally, we recover particle size distributions for atmospheric aerosols over Beijing using the measured AOT data (at wavelengths λ=0.400, 0.690, 0.870, and 1.020 μm) obtained from AERONET at different times, and then calculate other AOT values for this band based on the inversion results. With the calculations agreeing with measured data, the SPSO algorithm shows good practicability.
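
    The inverse approach can be sketched with a standard particle swarm optimizer (not the authors' stochastic variant) fitted to a simplified Ångström-law forward model, tau(λ) = β·λ^(−α), standing in for the full Mie computation; all names and values below are illustrative only.

```python
import random

def angstrom_aot(wavelength_um, alpha, beta):
    """Angstrom turbidity law: tau(lambda) = beta * lambda**(-alpha)."""
    return beta * wavelength_um ** (-alpha)

def pso_fit(wavelengths, measured, n_particles=30, n_iter=200, seed=7):
    """Fit (alpha, beta) to measured AOTs by minimizing squared residuals."""
    rng = random.Random(seed)

    def cost(p):
        a, b = p
        return sum((angstrom_aot(w, a, b) - t) ** 2
                   for w, t in zip(wavelengths, measured))

    pos = [[rng.uniform(0.0, 3.0), rng.uniform(0.0, 1.0)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=pbest_cost.__getitem__)
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.72, 1.49, 1.49      # common inertia/acceleration weights
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # Keep particles in a sane box to avoid numeric blow-ups.
                pos[i][d] = max(-10.0, min(10.0, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Synthetic "measurements" at the AERONET wavelengths quoted in the abstract.
lams = [0.400, 0.690, 0.870, 1.020]
true_alpha, true_beta = 1.3, 0.15
meas = [angstrom_aot(l, true_alpha, true_beta) for l in lams]
(alpha, beta), err = pso_fit(lams, meas)
```

    With noiseless synthetic data the swarm recovers the generating parameters; the paper's study of measurement errors corresponds to perturbing `meas` before fitting.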

  10. Productivity growth, case mix and optimal size of hospitals. A 16-year study of the Norwegian hospital sector.

    Science.gov (United States)

    Anthun, Kjartan Sarheim; Kittelsen, Sverre Andreas Campbell; Magnussen, Jon

    2017-04-01

    This paper analyses productivity growth in the Norwegian hospital sector over a period of 16 years, 1999-2014. This period was characterized by a large ownership reform with subsequent hospital reorganizations and mergers. We describe how technological change, technical productivity, scale efficiency and the estimated optimal size of hospitals have evolved during this period. Hospital admissions were grouped into diagnosis-related groups using a fixed-grouper logic. Four composite outputs were defined and inputs were measured as operating costs. Productivity and efficiency were estimated with bootstrapped data envelopment analyses. Mean productivity increased by 24.6% points from 1999 to 2014, an average annual change of 1.5%. There was a substantial growth in productivity and hospital size following the ownership reform. After the reform (2003-2014), average annual growth was case mix between hospitals, and thus provides a framework for future studies. The study adds to the discussion on optimal hospital size. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. PVFS 2000: An operational parallel file system for Beowulf

    Science.gov (United States)

    Ligon, Walt

    2004-01-01

    The approach has been to develop Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code. The architecture comprises server and client components. BMI is the network abstraction layer. It is designed with a common driver and modules for each protocol supported. The interface is non-blocking, and provides mechanisms for optimizations including pinning user buffers. Currently TCP/IP and GM (Myrinet) modules have been implemented. Trove is the storage abstraction layer. It provides for storing both data spaces and name/value pairs. Trove can also be implemented using different underlying storage mechanisms including native files, raw disk partitions, SQL and other databases. The current implementation uses native files for data spaces and Berkeley DB for name/value pairs.

  12. Optimal Location and Sizing of UPQC in Distribution Networks Using Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    Seyed Abbas Taher

    2012-01-01

    The differential evolution (DE) algorithm is used to determine the optimal location of a unified power quality conditioner (UPQC), considering its size, in radial distribution systems. The problem is formulated to find the optimum location of the UPQC based on an objective function (OF) defined for improving voltage and current profiles, reducing power loss and minimizing investment costs, considering the OF's weighting factors. Hence, a steady-state model of the UPQC is derived and embedded in a forward/backward sweep load flow. Studies are performed on the IEEE 33-bus and 69-bus standard distribution networks. Accuracy was evaluated by reapplying the procedures using both a genetic algorithm (GA) and an immune algorithm (IA). Comparative results indicate that DE comes closer to the global optimum in minimizing the OF, and satisfies all the desired conditions better, than GA and IA.
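
    For orientation, a minimal textbook DE/rand/1/bin sketch is shown below on a generic stand-in objective; the paper's actual objective (the UPQC siting/sizing OF evaluated via the sweep load flow) is replaced by a simple quadratic, and all names are hypothetical.

```python
import random

def de_minimize(f, dim, bounds, pop_size=20, F=0.7, CR=0.9, n_gen=150, seed=3):
    """Textbook DE/rand/1/bin: mutate with a scaled difference vector,
    binomially cross over, and keep the trial only if it is no worse."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = rng.sample([k for k in range(pop_size) if k != i], 3)
            jrand = rng.randrange(dim)     # force at least one mutated gene
            trial = [max(lo, min(hi, pop[a][d] + F * (pop[b][d] - pop[c][d])))
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            tc = f(trial)
            if tc <= cost[i]:              # greedy one-to-one selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Generic stand-in objective with its optimum at (1, 1, 1, 1).
quad = lambda x: sum((v - 1.0) ** 2 for v in x)
best_x, best_cost = de_minimize(quad, dim=4, bounds=(-5.0, 5.0))
```

    In the paper's setting, each candidate vector would encode a bus index and UPQC size, and `f` would run the load flow and evaluate the weighted OF.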

  13. Optimized Sizing, Selection, and Economic Analysis of Battery Energy Storage for Grid-Connected Wind-PV Hybrid System

    OpenAIRE

    Fathima, Hina; Palanisamy, K.

    2015-01-01

    Energy storage is emerging as a predominant sector for renewable energy applications. This paper focuses on a feasibility study to integrate battery energy storage with a hybrid wind-solar grid-connected power system to effectively dispatch wind power by incorporating peak shaving and ramp rate limiting. The sizing methodology is optimized using the bat optimization algorithm to minimize the cost of investment and the losses incurred by the system in the form of load shedding and wind curtailment. The ...

  14. Physician Fee Schedule National Payment Amount File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The significant size of the Physician Fee Schedule Payment Amount File-National requires that database programs (e.g., Access, dBase, FoxPro, etc.) be used to read...

  15. Optimal Sizing Of An Off-Grid Small Hydro-Photovoltaic-Diesel Generator Hybrid Power System For A Distant Village

    Directory of Open Access Journals (Sweden)

    Adebanji B.

    2017-08-01

    This paper presents an optimal sizing technique for an off-grid hybrid system consisting of a Small Hydro Power (SHP) system, Photovoltaic (PV) modules, Battery (BATT) banks and a Diesel Generator (DG). The objective cost function (Annualized Cost of System) and the Loss of Power Supply Probability (LPSP) were minimized by applying a Genetic Algorithm (GA) in order to reduce the Cost of Energy (COE) generation. Compared with other conventional optimization methods, GA can attain a global optimum more easily. The decision variables are the number of small hydro turbines (NSHP), the number of solar panels (NPV), the number of battery banks (NBATT) and the capacity of the DG (PDG). The proposed method was applied to a typical rural village, Itapaji-Ekiti, in Nigeria. The monthly average solar irradiance data were converted into hourly solar irradiance data for uniformity. A sensitivity analysis was also performed to identify the most important parameters influencing the optimized hybrid system. The optimal sizing result of the HPS is 954 kW of SHP, 290 kW of PV panels, 9500 sets of 600 Ah battery strings and 350 kW of DG. The optimal Loss of Power Supply Probability (LPSP) is 0.0054 and the Renewable Fraction (RF) is 0.62, a significant improvement for the environment and better than any other combination considered for the system.

  16. Optimal sizing of energy storage system for microgrids

    Indian Academy of Sciences (India)

    strategies and optimal allocation methods of the ESS devices are required for the MG. ... for the optimal design of systems managed optimally according to different .... Energy storage hourly operating and maintenance cost is defined as a ...

  17. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  18. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    Science.gov (United States)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
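
    The Tikhonov step the abstract builds on amounts to solving the regularized normal equations (AᵀA + λLᵀL)f = Aᵀg, with L a second-order difference operator as the study recommends. The pure-Python sketch below (hypothetical kernel and data, not the paper's model) only illustrates why this stabilizes an ill-posed retrieval.

```python
import math

def matmul(A, B):
    """Dense matrix product (matrices as lists of rows)."""
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= m * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def tikhonov(A, g, lam, L):
    """Solve (A^T A + lam L^T L) f = A^T g, the normal equations of
    min ||A f - g||^2 + lam ||L f||^2."""
    At = list(map(list, zip(*A)))
    Lt = list(map(list, zip(*L)))
    AtA, LtL = matmul(At, A), matmul(Lt, L)
    n = len(AtA)
    M = [[AtA[i][j] + lam * LtL[i][j] for j in range(n)] for i in range(n)]
    Atg = [sum(At[i][j] * g[j] for j in range(len(g))) for i in range(n)]
    return solve(M, Atg)

n = 12
# Ill-conditioned smoothing kernel, a smooth "true" PSD, mildly noisy data.
A = [[math.exp(-0.5 * ((i - j) / 1.5) ** 2) for j in range(n)] for i in range(n)]
f_true = [math.exp(-0.5 * ((i - n / 2) / 2.0) ** 2) for i in range(n)]
g = [sum(A[i][j] * f_true[j] for j in range(n)) + 1e-3 * (-1) ** i
     for i in range(n)]
# Second-order difference matrix as the regularization operator.
L = [[0.0] * n for _ in range(n - 2)]
for i in range(n - 2):
    L[i][i], L[i][i + 1], L[i][i + 2] = 1.0, -2.0, 1.0

f_reg = tikhonov(A, g, 1e-3, L)
f_raw = solve([row[:] for row in A], g[:])   # no regularization
err = lambda f: math.sqrt(sum((a - b) ** 2 for a, b in zip(f, f_true)))
```

    Even tiny noise destroys the unregularized solution, while the second-difference penalty keeps the retrieval close to the smooth truth; choosing a kernel matrix via multi-wavelength incidence, as the paper proposes, further conditions A itself.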

  19. Tuning HDF5 subfiling performance on parallel file systems

    Energy Technology Data Exchange (ETDEWEB)

    Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chaarawi, Mohamad [Intel Corp. (United States); Koziol, Quincey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mainzer, John [The HDF Group (United States); Willmore, Frank [The HDF Group (United States)

    2017-05-12

    Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single shared file approach, which instigates lock contention problems on parallel file systems, and having one file per process, which results in a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune parallel I/O performance of this feature on parallel file systems of the Cray XC40 system at NERSC (Cori), which include a burst buffer storage and a Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show a 1.2x to 6x performance advantage with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets for storing files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations of the subfiling feature.

  20. Measurement and visualization of file-to-wall contact during ultrasonically activated irrigation in simulated canals

    NARCIS (Netherlands)

    Boutsioukis, C.; Verhaagen, B.; Walmsley, A.D.; Versluis, Michel; van der Sluis, L.W.M.

    2013-01-01

    Aim (i) To quantify in a simulated root canal model the file-to-wall contact during ultrasonic activation of an irrigant and to evaluate the effect of root canal size, file insertion depth, ultrasonic power, root canal level and previous training, (ii) To investigate the effect of file-to-wall

  2. A comparative evaluation of gutta percha removal and extrusion of apical debris by rotary and hand files.

    Science.gov (United States)

    Chandrasekar; Ebenezar, A V Rajesh; Kumar, Mohan; Sivakumar, A

    2014-11-01

    The aim of this study was to evaluate the efficacy of ProTaper retreatment files in comparison with RaCe, K3 and H-files for the removal of gutta-percha and apically extruded debris, using volumetric analysis. Forty extracted single-rooted maxillary incisor teeth with straight canals and mature apices were selected for the study. After access cavity preparation, apical patency was confirmed with a size 10 K-file extending 1 mm beyond the point at which it was first visible at the apical end. Working lengths were determined using a size 15 K-file. The canals were prepared with a step-back technique and the master apical file was size 30 for all teeth. 3% sodium hypochlorite was used as an irrigant after each instrumentation. Before the final rinse, a size 20 K-file was passed 1 mm beyond the apex to remove any dentinal shaving plugs and maintain apical patency. The canals were then dried with paper points. The root canals were filled with standard gutta-percha points and zinc oxide eugenol sealer using the lateral condensation technique. The teeth were then randomly divided into four groups of ten teeth each, based on the instrument used for gutta-percha removal. All the rotary instruments used in this study were rotated at 300 rpm. The instruments used were: Group 1 - RaCe files, Group 2 - ProTaper retreatment files, Group 3 - K3 files and Group 4 - H-files. The volume of the obturating material was calculated before and after removal using volumetric analysis with spiral CT. The removal efficacy of each instrument was calculated and statistically analysed. The results show that the ProTaper retreatment files (Group 2, 97.4%) had the highest efficiency in removing the obturating material, followed in decreasing order by RaCe (95.74%), K3 (92.86%) and H-files (90.14%). Similarly, the mean apical extrusion with H-files (0.000 ± 0.002) was significantly lower than with all the rotary instruments.
However, the difference among the

  3. Flash X-Ray (FXR) Accelerator Optimization Electronic Time-Resolved Measurement of X-Ray Source Size

    International Nuclear Information System (INIS)

    Jacob, J; Ong, M; Wargo, P

    2005-01-01

    Lawrence Livermore National Laboratory (LLNL) is currently investigating various approaches to minimize the x-ray source size on the Flash X-Ray (FXR) linear induction accelerator in order to improve x-ray flux and increase resolution for hydrodynamic radiography experiments. In order to effectively gauge improvements to the final x-ray source size, a fast, robust, and accurate system for measuring the spot size is required. Timely feedback on x-ray source size allows new and improved accelerator tunes to be deployed and optimized within the limited run-time constraints of a production facility with a busy experimental schedule; in addition, time-resolved measurement capability allows the investigation of not only the time-averaged source size, but also the evolution of the source size, centroid position, and x-ray dose throughout the 70 ns beam pulse. Combined with time-resolved measurements of electron beam parameters such as emittance, energy, and current, key limiting factors can be identified, modeled, and optimized for the best possible spot size. The roll-bar technique is widely used for x-ray source size measurement, and has been the method of choice at FXR for many years. A thick bar of tungsten or other dense metal with a sharp edge is inserted into the path of the x-ray beam so as to heavily attenuate the lower half of the beam, resulting in a half-light, half-dark image as seen downstream of the roll-bar; by measuring the width of the transition from light to dark across the edge of the roll-bar, the source size can be deduced. For many years, film has been the imaging medium of choice for roll-bar measurements thanks to its high resolution, linear response, and excellent contrast ratio. Film measurements, however, are fairly cumbersome and require considerable setup and analysis time; moreover, with the continuing trend towards all-electronic measurement systems, film is becoming increasingly difficult and expensive to procure. Here, we shall

  4. NOTES ON OPTIMAL ALLOCATION FOR FIXED SIZE CONFIDENCE REGIONS OF THE DIFFERENCE OF TWO MULTINORMAL MEANS

    OpenAIRE

    Hyakutake, Hiroto; Kawasaki, Hidefumi

    2004-01-01

    We consider the problem of constructing a fixed-size confidence region of the difference of two multinormal means when the covariance matrices have intraclass correlation structure. When the covariance matrices are known, we derive an optimal allocation. A two-stage procedure is given for the problem with unknown covariance matrices.

  5. Decay data file based on the ENSDF file

    Energy Technology Data Exchange (ETDEWEB)

    Katakura, J. [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-03-01

    A decay data file in the JENDL (Japanese Evaluated Nuclear Data Library) format, based on the ENSDF (Evaluated Nuclear Structure Data File) file, was produced as a tentative special-purpose file of JENDL. The problem of using the ENSDF file as the primary data source for the JENDL decay data file is presented. (author)

  6. Ventricular arrhythmia burst is an independent indicator of larger infarct size even in optimal reperfusion in STEMI

    NARCIS (Netherlands)

    van der Weg, Kirian; Majidi, Mohamed; Haeck, Joost D. E.; Tijssen, Jan G. P.; Green, Cynthia L.; Koch, Karel T.; Kuijt, Wichert J.; Krucoff, Mitchell W.; Gorgels, Anton P. M.; de Winter, Robbert J.

    2016-01-01

    We hypothesized that ventricular arrhythmia (VA) bursts during the reperfusion phase are a marker of larger infarct size despite optimal epicardial and microvascular perfusion. 126 STEMI patients were studied with 24-hour continuous, 12-lead Holter monitoring. Myocardial blush grade (MBG) was determined and

  7. Angular deflection of rotary nickel titanium files: a comparative study

    Directory of Open Access Journals (Sweden)

    Gianluca Gambarini

    2009-12-01

    A new manufacturing method of twisting nickel titanium wire to produce rotary nickel titanium (RNT) files has recently been developed. The aim of the present study was to evaluate whether the new manufacturing process increased the angular deflection of RNT files, by comparing instruments produced using the new manufacturing method (Twisted Files) with instruments produced by the traditional grinding process. Testing was performed on a total of 40 instruments of the following commercially available RNT files: Twisted Files (TF), ProFile, K3 and M2 (NRT). All instruments tested had the same dimensions (taper 0.06 and tip size 25). Test procedures strictly followed ISO 3630-1. Data were collected and statistically analyzed by means of an ANOVA test. The results showed that TF demonstrated significantly higher average angular deflection levels (P<0.05) than RNT files manufactured by a grinding process. Since angular deflection represents the amount of rotation (and consequently deformation) that an RNT file can withstand before torsional failure, such a significant improvement is a favorable property for the clinical use of the tested RNT files.

  8. Optimal Siting and Sizing of Energy Storage System for Power Systems with Large-scale Wind Power Integration

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Huang, Shaojun

    2015-01-01

    This paper proposes algorithms for optimal siting and sizing of an Energy Storage System (ESS) for the operation planning of power systems with large-scale wind power integration. The ESS in this study aims to mitigate wind power fluctuations during the interval between two rolling Economic Dispatches (EDs) in order to maintain the generation-load balance. The charging and discharging of the ESS is optimized considering the operation cost of conventional generators, the capital cost of the ESS and transmission losses. The statistics from simulated system operations are then coupled to the planning process to determine the

  9. STANDALONE PHOTOVOLTAIC SYSTEMS SIZING OPTIMIZATION USING DESIGN SPACE APPROACH: CASE STUDY FOR RESIDENTIAL LIGHTING LOAD

    Directory of Open Access Journals (Sweden)

    D. F. AL RIZA

    2015-07-01

    This paper presents a sizing optimization methodology for panel and battery capacity in a standalone photovoltaic system with a lighting load. Performance of the system is assessed by performing a Loss of Power Supply Probability (LPSP) calculation. The input data used for the calculation are the daily weather data and the system component parameters. Capital Cost and Life Cycle Cost (LCC) are calculated as optimization parameters. The design space for the optimum system configuration is identified based on a given LPSP value, Capital Cost and Life Cycle Cost. The excess energy value is used as an over-design indicator in the design space. An economic analysis, including cost of energy and payback period, for selected configurations is also presented.
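
    A minimal sketch of the LPSP computation, using the common energy-deficit definition LPSP = Σ max(demand − supply, 0) / Σ demand (one of several definitions in the literature; the hourly profiles below are invented for illustration):

```python
def lpsp(supply_kw, demand_kw):
    """Loss of Power Supply Probability, energy-deficit form:
    the fraction of demanded energy the system fails to deliver."""
    deficit = sum(max(d - s, 0.0) for s, d in zip(supply_kw, demand_kw))
    return deficit / sum(demand_kw)

# Hypothetical one-day hourly profile: PV/battery supply vs. lighting load.
supply = [0.0] * 6 + [1.2] * 12 + [0.4] * 6
demand = [0.5] * 6 + [1.0] * 12 + [0.8] * 6
day_lpsp = lpsp(supply, demand)
```

    A sizing search such as the one in the paper sweeps panel and battery capacities (which reshape `supply`), keeps configurations whose LPSP meets the target, and then ranks them by Capital Cost and LCC.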

  10. LHCb trigger streams optimization

    Science.gov (United States)

    Derkach, D.; Kazeev, N.; Neychev, R.; Panin, A.; Trofimov, I.; Ustyuzhanin, A.; Vesterinen, M.

    2017-10-01

    The LHCb experiment stores around 10^11 collision events per year. A typical physics analysis deals with a final sample of up to 10^7 events. Event preselection algorithms (lines) are used for data reduction. Since the data are stored in a format that requires sequential access, the lines are grouped into several output file streams, in order to increase the efficiency of user analysis jobs that read these data. The scheme efficiency heavily depends on the stream composition. By putting similar lines together and balancing the stream sizes it is possible to reduce the overhead. We present a method for finding an optimal stream composition. The method is applied to a part of the LHCb data (Turbo stream) at the stage where it is prepared for user physics analysis. This results in an expected improvement of 15% in the speed of user analysis jobs, and will be applied to data recorded in 2017.
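
    The idea of putting similar lines together can be sketched with a greedy heuristic (a simplification for illustration, not the LHCb method; the line/event sets below are toy data). Each line is represented by the set of events it selects; co-locating overlapping lines shrinks the total number of stored events.

```python
def compose_streams(line_events, n_streams):
    """Greedy sketch: assign each trigger line (a set of selected event ids)
    to the stream whose current event union it overlaps most, breaking
    ties toward the smaller stream to keep sizes balanced."""
    streams = [set() for _ in range(n_streams)]
    members = [[] for _ in range(n_streams)]
    # Place the largest lines first so overlaps are meaningful.
    order = sorted(range(len(line_events)), key=lambda i: -len(line_events[i]))
    for i in order:
        ev = line_events[i]
        best = max(range(n_streams),
                   key=lambda s: (len(streams[s] & ev), -len(streams[s])))
        streams[best] |= ev
        members[best].append(i)
    return members, streams

# Toy example: lines 0/1 select overlapping events, as do lines 2/3.
lines = [{1, 2, 3}, {2, 3, 4}, {10, 11}, {11, 12}]
members, streams = compose_streams(lines, 2)
```

    Here the overlapping pairs end up in the same stream, storing 7 events in total instead of the 10 a poor split would duplicate; the published method optimizes a cost model of user job read time rather than this simple union size.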

  11. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in the life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method, derived from the item count technique, that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum-variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, had not been studied before. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. The theoretical considerations are integrated with a number of simulation studies based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
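
    For the stratified design, minimum-variance allocation of a fixed total sample reduces to the classical Neyman rule, n_h ∝ N_h·S_h. A small sketch with invented strata (sizes and standard deviations of the scrambled item-sum responses are hypothetical):

```python
import math

def neyman_allocation(n_total, stratum_sizes, stratum_sds):
    """Neyman allocation: n_h proportional to N_h * S_h, which minimizes
    the variance of the stratified mean for a fixed total sample size."""
    weights = [N * S for N, S in zip(stratum_sizes, stratum_sds)]
    total = sum(weights)
    raw = [n_total * w / total for w in weights]
    # Round while preserving the total (largest-remainder method).
    alloc = [math.floor(r) for r in raw]
    order = sorted(range(len(raw)), key=lambda h: raw[h] - alloc[h],
                   reverse=True)
    for h in order[: n_total - sum(alloc)]:
        alloc[h] += 1
    return alloc

# Hypothetical strata: population sizes and response standard deviations.
alloc = neyman_allocation(200, [1000, 3000, 6000], [8.0, 5.0, 2.0])
```

    Note how the small but highly variable first stratum receives far more than its proportional share; the article derives the analogous rule for the IST under a generic design.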

  12. Analyzing endosonic root canal file oscillations: an in vitro evaluation.

    Science.gov (United States)

    Lea, Simon C; Walmsley, A Damien; Lumley, Philip J

    2010-05-01

    Passive ultrasonic irrigation may be used to improve bacterial reduction within the root canal. The technique relies on a small file being driven to oscillate freely within the canal and activating an irrigant solution through biophysical forces such as microstreaming. There is limited information regarding a file's oscillation patterns when it is operated surrounded by fluid, as is the case within a root canal. Files of different sizes (#10 and #30, 27 mm and 31 mm) were connected to an ultrasound generator via a 120-degree file holder. The files were immersed in a water bath, and a laser vibrometer was set up with measurement lines superimposed over the files. The laser vibrometer was scanned over the oscillating files. Measurements were repeated 10 times for each file/power setting used. File mode shapes comprise a series of nodes and antinodes, with thinner, longer files producing more antinodes. The maximum vibration occurred at the free end of the file. Increasing generator power had no significant effect on this maximum amplitude (p > 0.20). Maximum displacement amplitudes were 17 to 22 microm (#10 file, 27 mm), 15 to 21 microm (#10 file, 31 mm), 6 to 9 microm (#30 file, 27 mm), and 5 to 7 microm (#30 file, 31 mm) for all power settings. Antinodes occurring along the remaining file length were significantly larger at generator power 1 than at powers 2 through 5 (p < 0.05). At higher generator powers, energy delivered to the file is dissipated in unwanted vibration, resulting in reduced vibration displacement amplitudes. This may reduce the occurrence of the biophysical forces necessary to maximize the technique's effectiveness. Copyright (c) 2010 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  13. Classification and Processing Optimization of Barley Milk Production Using NIR Spectroscopy, Particle Size, and Total Dissolved Solids Analysis

    Directory of Open Access Journals (Sweden)

    Jasenka Gajdoš Kljusurić

    2015-01-01

    Barley is a grain whose consumption has a significant nutritional benefit for human health, as it is a very good source of dietary fibre, minerals, vitamins, and phenolic and phytic acids. Nowadays, it is increasingly used in the production of plant milk, which a growing number of consumers use to replace cow's milk in their diet. The aim of the study was to classify barley milk and determine the optimal processing conditions in barley milk production based on NIR spectra, particle size, and total dissolved solids analysis. A standard recipe for barley milk was used, without additives. Barley grain was ground and mixed in a blender for 15, 30, 45, and 60 seconds. The samples were filtered and the particle size of the grains was determined by laser diffraction particle sizing. The plant milk was also analysed using near infrared spectroscopy (NIRS) in the range from 904 to 1699 nm. Furthermore, the conductivity of each sample was determined and microphotographs were taken in order to identify the structure of fat globules and particles in the barley milk. NIR spectra, particle size distribution, and conductivity results all point to 45 seconds as the optimal blending time, since further blending results in saturation of the samples.

  14. Systematic pseudopotentials from reference eigenvalue sets for DFT calculations: Pseudopotential files

    Directory of Open Access Journals (Sweden)

    Pablo Rivero

    2015-06-01

    We present in this article a pseudopotential (PP) database for DFT calculations in the context of the SIESTA code [1–3]. Comprehensive optimized PPs in two formats (psf files and input files for the ATM program) are provided for 20 chemical elements, for LDA and GGA exchange-correlation potentials. Our data represent a validated database of PPs for SIESTA DFT calculations. Extensive transferability tests guarantee the usefulness of these PPs.

  15. Thermoeconomic optimization of small size central air conditioner

    International Nuclear Information System (INIS)

    Zhang, G.Q.; Wang, L.; Liu, L.; Wang, Z.

    2004-01-01

    The application of thermoeconomic optimization in air-conditioning system design is important in achieving an economical life-cycle cost. Previous work on thermoeconomic optimization mainly focused on directly calculating the exergy input into the system. However, this is usually difficult because of the uncertainty of the input power of the fan on the air side of the heat exchanger and of the pump in the system. This paper introduces a new concept: by conservation of exergy, the exergy input into the system can be replaced by the sum of the exergy destruction and the exergy output from the system. Although calculating the exergy destruction is also difficult for a large-scale system, it is feasible for a small-scale system, for instance a villa air conditioner (VAC). In order to perform the thermoeconomic optimization, a program is first developed to evaluate the thermodynamic properties of HFC134a on the basis of the Martin-Hou equation of state. The authors develop thermodynamic and thermoeconomic objective functions based on second-law and thermoeconomic analysis of the VAC system. Two optimization results are obtained. A VAC design aimed only at decreasing energy consumption is not comprehensive: the life-cycle cost at the thermoeconomic optimum is lower than that at the thermodynamic optimum

  16. Optimal placement and sizing of fixed and switched capacitor banks under non sinusoidal operating conditions

    International Nuclear Information System (INIS)

    Ladjevardi, M.; Masoum, M.A.S.; Fuchs, E.F.

    2004-01-01

    An iterative nonlinear algorithm is presented for optimal sizing and placement of fixed and switched capacitor banks on radial distribution lines in the presence of linear and nonlinear loads. The HARMFLOW algorithm and the maximum sensitivities selection method are used to solve the constrained optimization problem with discrete variables. To limit the computational burden and improve convergence, the problem is decomposed into two subproblems. The objective functions include minimizing system losses and capacitor cost, while IEEE 519 power quality limits are used as constraints. Results are presented and analyzed for the distorted IEEE 18-bus system. The advantage of the proposed algorithm over previous work is the consideration of harmonic couplings and the reactions of actual nonlinear loads in the distribution system

  17. Extending DIRAC File Management with Erasure-Coding for efficient storage

    CERN Document Server

    Skipsey, Samuel Cadellin; Britton, David; Crooks, David; Roy, Gareth

    2015-01-01

    The state of the art in Grid-style data management is to achieve increased resilience of data via multiple complete replicas of data files across multiple storage endpoints. While this is effective, it is not the most space-efficient approach to resilience, especially when the reliability of individual storage endpoints is sufficiently high that only a few will be inactive at any point in time. We report on work performed as part of GridPP, extending the DIRAC File Catalogue and file management interface to allow the placement of erasure-coded files: each file is distributed as N identically sized chunks of data striped across a vector of storage endpoints, encoded such that any M chunks can be lost and the original file can still be reconstructed. The tools developed are transparent to the user and, as well as allowing uploading and downloading of data to Grid storage, also provide the possibility of parallelising access across all of the distributed chunks at once, improving data transfer and IO performance. ...
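    The erasure-coding idea in the record above can be illustrated with the simplest possible scheme: N equally sized data chunks plus one XOR parity chunk, which lets any single lost chunk be rebuilt. The general "any M of N+M" case needs a code such as Reed-Solomon, and all function names below are illustrative, not DIRAC's API. A minimal sketch:

```python
# Simplest erasure-coding scheme: N data chunks + 1 XOR parity chunk,
# tolerating the loss of any ONE chunk. Names are illustrative.

def split_into_chunks(data, n):
    """Split data into n equally sized chunks, zero-padding the last."""
    size = -(-len(data) // n)  # ceiling division
    chunks = [data[i * size:(i + 1) * size] for i in range(n)]
    return [c.ljust(size, b"\x00") for c in chunks]

def xor_chunks(chunks):
    """Byte-wise XOR of equally sized chunks."""
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, b in enumerate(c):
            out[i] ^= b
    return bytes(out)

def encode(data, n):
    """Return n data chunks plus one parity chunk."""
    chunks = split_into_chunks(data, n)
    return chunks + [xor_chunks(chunks)]

def reconstruct(chunks):
    """Rebuild at most one missing chunk (marked None) from the rest."""
    missing = [i for i, c in enumerate(chunks) if c is None]
    assert len(missing) <= 1, "XOR parity tolerates only one lost chunk"
    if missing:
        chunks[missing[0]] = xor_chunks([c for c in chunks if c is not None])
    return chunks
```

    Here a file split into four chunks survives the loss of any one of them; the DIRAC extension instead stripes the chunks across separate Grid storage endpoints.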

  18. Portfolio size as function of the premium: modeling and optimization

    DEFF Research Database (Denmark)

    Asmussen, Søren; Christensen, Bent Jesper; Taksar, Michael I

    An insurance company has a large number N of potential customers characterized by i.i.d. r.v.'s A1,…,AN giving the arrival rates of claims. Customers are risk averse, and a customer accepts an offered premium p according to his A-value. The modeling further involves a discount rate d>r of the customers, where r is the risk-free interest rate. Based on calculations of the customers' present values of the alternative strategies of insuring and not insuring, the portfolio size n(p) is derived, and the rate of claims from the insured customers is also given. Further, the value of p which is optimal for minimizing the ruin probability is derived in a diffusion approximation to the Cramér-Lundberg risk process with an added liability rate L of the company. The solution involves the Lambert W function. A similar discussion is given for extensions involving customers having only partial information…

  19. Design and Optimization of Ultrasonic Wireless Power Transmission Links for Millimeter-Sized Biomedical Implants.

    Science.gov (United States)

    Meng, Miao; Kiani, Mehdi

    2017-02-01

    Ultrasound has recently been proposed as an alternative modality for efficient wireless power transmission (WPT) to biomedical implants with millimeter (mm) dimensions. This paper presents the theory and design methodology of ultrasonic WPT links that involve mm-sized receivers (Rx). For a given load (R_L) and powering distance (d), the optimal geometries of the transmitter (Tx) and Rx ultrasonic transducers, including their diameter and thickness, as well as the optimal operation frequency (f_c), are found through a recursive design procedure that maximizes the power transmission efficiency (PTE). First, a range of realistic f_c values is found based on the Rx thickness constraint. For a chosen f_c within the range, the diameter and thickness of the Rx transducer are then swept together to maximize PTE. Then, the diameter and thickness of the Tx transducer are optimized to maximize PTE. Finally, this procedure is repeated for different f_c values to find the optimal f_c and its corresponding transducer geometries that maximize PTE. A design example of an ultrasonic link has been presented and optimized for WPT to a 1 mm³ implant, including a disk-shaped piezoelectric transducer on a silicon die. In simulations, a PTE of 2.11% at an f_c of 1.8 MHz was achieved for an R_L of 2.5 [Formula: see text] at [Formula: see text]. In order to validate our simulations, an ultrasonic link was optimized for a 1 mm³ piezoelectric transducer mounted on a printed circuit board (PCB), which led to simulated and measured PTEs of 0.65% and 0.66% at an f_c of 1.1 MHz for an R_L of 2.5 [Formula: see text] at [Formula: see text], respectively.
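    The recursive sweep described above can be sketched as nested parameter searches. The acoustic link model is the hard part, so a toy stand-in `pte_model` with a peak placed near the paper's reported optimum is used here purely to show the search structure; all values, including the fixed Tx starting guess, are illustrative:

```python
import itertools
import math

def pte_model(fc, rx_diam, rx_thick, tx_diam, tx_thick):
    """Toy stand-in: smooth peak near fc=1.8 MHz, rx=(1.0, 0.5), tx=(10, 1)."""
    penalty = ((fc - 1.8) ** 2 + (rx_diam - 1.0) ** 2 + (rx_thick - 0.5) ** 2
               + (tx_diam - 10.0) ** 2 / 100.0 + (tx_thick - 1.0) ** 2)
    return 2.11 * math.exp(-penalty)

def optimize_link(fcs, rx_diams, rx_thicks, tx_diams, tx_thicks):
    """Recursive sweep: per fc, sweep Rx geometry jointly, then Tx geometry."""
    best = (-1.0, None)
    for fc in fcs:
        # Step 1: joint Rx sweep at this fc (Tx held at an assumed guess).
        rx = max(itertools.product(rx_diams, rx_thicks),
                 key=lambda g: pte_model(fc, g[0], g[1], 10.0, 1.0))
        # Step 2: Tx sweep with the chosen Rx geometry fixed.
        tx = max(itertools.product(tx_diams, tx_thicks),
                 key=lambda g: pte_model(fc, rx[0], rx[1], g[0], g[1]))
        pte = pte_model(fc, rx[0], rx[1], tx[0], tx[1])
        if pte > best[0]:
            best = (pte, {"fc": fc, "rx": rx, "tx": tx})
    return best
```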

  20. CT-angiography-based evaluation of the aortic annulus for prosthesis sizing in transcatheter aortic valve implantation (TAVI)-predictive value and optimal thresholds for major anatomic parameters.

    Science.gov (United States)

    Schwarz, Florian; Lange, Philipp; Zinsser, Dominik; Greif, Martin; Boekstegers, Peter; Schmitz, Christoph; Reiser, Maximilian F; Kupatt, Christian; Becker, Hans C

    2014-01-01

    To evaluate the predictive value of CT-derived measurements of the aortic annulus for prosthesis sizing in transcatheter aortic valve implantation (TAVI) and to calculate optimal cutoff values for the selection of various prosthesis sizes. The local IRB waived approval for this single-center retrospective analysis. Of 441 consecutive TAVI-patients, 90 were excluded (death within 30 days: 13; more than mild aortic regurgitation: 10; other reasons: 67). In the remaining 351 patients, the CoreValve (Medtronic) and the Edwards Sapien XT valve (Edwards Lifesciences) were implanted in 235 and 116 patients. Optimal prosthesis size was determined during TAVI by inflation of a balloon catheter at the aortic annulus. All patients had undergone CT-angiography of the heart or body trunk prior to TAVI. Using these datasets, the diameter of the long and short axis as well as the circumference and the area of the aortic annulus were measured. Multi-Class Receiver-Operator-Curve analyses were used to determine the predictive value of all variables and to define optimal cutoff-values. Differences between patients who underwent implantation of the small, medium or large prosthesis were significant for all except the large vs. medium CoreValve (all p's<0.05). Furthermore, mean diameter, annulus area and circumference had equally high predictive value for prosthesis size for both manufacturers (multi-class AUC's: 0.80, 0.88, 0.91, 0.88, 0.88, 0.89). Using the calculated optimal cutoff-values, prosthesis size is predicted correctly in 85% of cases. CT-based aortic root measurements permit excellent prediction of the prosthesis size considered optimal during TAVI.

  1. Accessing files in an Internet: The Jade file system

    Science.gov (United States)

    Peterson, Larry L.; Rao, Herman C.

    1991-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.
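    Jade's per-user logical name space can be sketched as a mount table of path prefixes, where several file systems mounted under the same directory form a union. This toy sketch uses dict-backed "file systems"; in Jade the backends would be NFS, AFS, FTP, and so on, and all names below are illustrative:

```python
class NameSpace:
    """Per-user logical name space: path prefixes mapped to backends."""

    def __init__(self):
        self.mounts = []  # (prefix, backend) pairs, in mount order

    def mount(self, prefix, backend):
        self.mounts.append((prefix.rstrip("/"), backend))

    def read(self, path):
        # Longest prefix first; equal prefixes keep mount order, which is
        # what makes several file systems mounted under one directory work.
        for prefix, backend in sorted(self.mounts, key=lambda m: -len(m[0])):
            if path.startswith(prefix + "/"):
                rel = path[len(prefix) + 1:]
                if rel in backend:
                    return backend[rel]
        raise FileNotFoundError(path)
```

    With two backends mounted on `/docs`, a lookup falls through the first backend to the second when the name is missing, giving the union-mount behaviour the abstract describes.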

  2. Accessing files in an internet - The Jade file system

    Science.gov (United States)

    Rao, Herman C.; Peterson, Larry L.

    1993-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  3. 78 FR 68893 - Self-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Notice of Filing...

    Science.gov (United States)

    2013-11-15

    ... that the size of the BBO equals the minimum quote size. Number of market makers actively quoting...-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Notice of Filing and Immediate Effectiveness of a Proposed Rule Change To Extend the Tier Size Pilot of FINRA Rule 6433 (Minimum Quotation Size...

  4. The rice growth image files - The Rice Growth Monitoring for The Phenotypic Functional Analysis | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available The rice growth image files, categorized based on file size. Data name: The rice growth image files. DOI: 10.18908/lsdba.nbdc00945-004. Data file: image files (directory). File URL: ftp://ftp.biosciencedbc.jp/archive/agritogo-rice-phenome/LATEST/image...

  5. Grey Wolf Optimization-Based Optimum Energy-Management and Battery-Sizing Method for Grid-Connected Microgrids

    Directory of Open Access Journals (Sweden)

    Kutaiba Sabah Nimma

    2018-04-01

    Full Text Available In the revolution of green energy development, microgrids with renewable energy sources such as solar, wind and fuel cells are becoming a popular and effective way of controlling and managing these sources. On the other hand, owing to the intermittency and wide range of dynamic responses of renewable energy sources, battery energy-storage systems have become an integral feature of microgrids. Intelligent energy management and battery sizing are essential requirements in microgrids to ensure the optimal use of the renewable sources and reduce conventional fuel utilization in such complex systems. This paper presents a novel approach to meeting these requirements by using the grey wolf optimization (GWO) technique. The proposed algorithm is implemented for different scenarios, and the numerical simulation results are compared with other optimization methods including the genetic algorithm (GA), particle swarm optimization (PSO), the bat algorithm (BA), and the improved bat algorithm (IBA). The proposed method (GWO) shows outstanding results and superior performance compared with the other algorithms in terms of solution quality and computational efficiency. The numerical results show that GWO with smart utilization of battery energy storage (BES) helped to minimize the operational costs of the microgrid by 33.185% in comparison with GA, PSO, BA and IBA.
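    For reference, a minimal grey wolf optimizer on a generic cost function might look as follows. The microgrid dispatch model of the record above is not reproduced, and the small elitist simplification (keeping the three leader wolves unchanged each iteration) is ours, not the paper's:

```python
import random

def gwo(cost, dim, n_wolves=20, iters=200, lo=-10.0, hi=10.0, seed=1):
    """Minimize cost over [lo, hi]^dim with a grey wolf optimizer."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=cost)                  # alpha, beta, delta first
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1.0 - t / iters)            # control parameter: 2 -> 0
        for i in range(3, n_wolves):           # elitist: leaders unchanged
            new = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = 2.0 * a * rng.random() - a
                    C = 2.0 * rng.random()
                    D = abs(C * leader[d] - wolves[i][d])
                    x += leader[d] - A * D     # move relative to each leader
                new.append(min(hi, max(lo, x / 3.0)))
            wolves[i] = new
    return min(wolves, key=cost)
```

    On a simple sphere cost function the pack contracts around the three best wolves as the control parameter `a` decays, which is the same exploration-to-exploitation schedule the paper relies on.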

  6. Optimal Placement and Sizing of Renewable Distributed Generations and Capacitor Banks into Radial Distribution Systems

    Directory of Open Access Journals (Sweden)

    Mahesh Kumar

    2017-06-01

    Full Text Available In recent years, renewable types of distributed generation in the distribution system have been much appreciated due to their enormous technical and environmental advantages. This paper proposes a methodology for optimal placement and sizing of renewable distributed generations (i.e., wind, solar and biomass) and capacitor banks in a radial distribution system. The intermittency of wind speed and solar irradiance is handled with multi-state modeling using suitable probability distribution functions. Three objective functions, i.e., power loss reduction, voltage stability improvement, and voltage deviation minimization, are optimized using an advanced Pareto-front non-dominated sorting multi-objective particle swarm optimization method. First, a set of non-dominated Pareto-front solutions is obtained from the algorithm. A fuzzy decision technique is then applied to extract the trade-off solution set. The effectiveness of the proposed methodology is tested on the standard IEEE 33-bus test system. The overall results reveal that the combination of renewable distributed generations and capacitor banks is dominant in power loss reduction, voltage stability and voltage profile improvement.
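    The non-dominated (Pareto-front) filtering step mentioned above reduces to a pairwise dominance test. A small sketch, with all objectives treated as minimized:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the non-dominated points (all objectives minimized)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

    In the paper's setting each point would be a (loss, voltage-deviation, stability-index) triple for one candidate placement; the fuzzy decision step then picks one compromise solution off this front.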

  7. Fast probabilistic file fingerprinting for big data

    NARCIS (Netherlands)

    Tretjakov, K.; Laur, S.; Smant, G.; Vilo, J.; Prins, J.C.P.

    2013-01-01

    Background: Biological data acquisition is raising new challenges, both in data analysis and handling. Not only is it proving hard to analyze the data at the rate it is generated today, but simply reading and transferring data files can be prohibitively slow due to their size. This primarily

  8. A digital imaging teaching file by using the internet, HTML and personal computers

    International Nuclear Information System (INIS)

    Chun, Tong Jin; Jeon, Eun Ju; Baek, Ho Gil; Kang, Eun Joo; Baik, Seung Kug; Choi, Han Yong; Kim, Bong Ki

    1996-01-01

    A film-based teaching file takes up space, and the need to search through such a file places limits on the extent to which it is likely to be used. Furthermore, it is not easy for doctors in a medium-sized hospital to experience a variety of cases, and so for these reasons we created an easy-to-use digital imaging teaching file with HTML (Hypertext Markup Language) and downloaded images via World Wide Web (WWW) services on the Internet. This was suitable for use by computer novices. We used WWW Internet services as a resource for various images and three different IBM-PC compatible computers (386DX, 486DX-II, and Pentium) in downloading the images and in developing a digitalized teaching file. These computers were connected to the Internet through a high-speed dial-up modem (28.8 Kbps), and Twinsock and Netscape were used to navigate the Internet. A Korean word-processing package (version 3.0) was used to create the HTML files, and the downloaded images were linked to them. In this way, a digital imaging teaching file program was created. Access to a Web service via the Internet required a high-speed computer (at least a 486DX-II with 8 MB RAM) for comfortable use; this also ensured that the quality of downloaded images was not degraded during downloading and that these were good enough to use as a teaching file. The time needed to retrieve the text and related images depends on the size of the file, the speed of the network, and the network traffic at the time of connection. For computer novices, a digital image teaching file using HTML is easy to use. Our method of creating a digital imaging teaching file using the Internet and HTML is easy to follow, and radiologists with little computer experience who want to study various digital radiologic imaging cases should find the result easy to use

  9. Optimizing indomethacin-loaded chitosan nanoparticle size, encapsulation, and release using Box-Behnken experimental design.

    Science.gov (United States)

    Abul Kalam, Mohd; Khan, Abdul Arif; Khan, Shahanavaj; Almalik, Abdulaziz; Alshamsan, Aws

    2016-06-01

    Indomethacin chitosan nanoparticles (NPs) were developed by ionotropic gelation and optimized with respect to chitosan concentration, tripolyphosphate (TPP) concentration, and stirring time using a 3-factor, 3-level Box-Behnken experimental design. The optimal concentrations of chitosan (A) and TPP (B) were found to be 0.6 mg/mL and 0.4 mg/mL, with a stirring time (C) of 120 min, under the applied constraints of minimizing particle size (R1) and maximizing encapsulation efficiency (R2) and drug release (R3). Based on the obtained 3D response surface plots, factors A, B and C were found to have a synergistic effect on R1, while factor A had a negative impact on R2 and R3. The AB interaction was negative on R1 and R2 but positive on R3. The AC interaction had a synergistic effect on R1 and R3, while having a negative effect on R2. The BC interaction was positive on all responses. NPs were found in the size range of 321-675 nm with zeta potentials of +25 to +32 mV after 6 months of storage. Encapsulation, drug release, and drug content were in the ranges of 56-79%, 48-73% and 98-99%, respectively. In vitro drug-release data were fitted to different kinetic models, and the pattern of drug release followed the Higuchi matrix type. Copyright © 2016 Elsevier B.V. All rights reserved.
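    A 3-factor, 3-level Box-Behnken design like the one above is built from all ±1 combinations of each factor pair with the remaining factor held at its mid level, plus replicated centre points. The mapping of coded levels to real units below is an illustrative assumption; only the optimal values (0.6 mg/mL, 0.4 mg/mL, 120 min) come from the abstract, here taken as the mid levels:

```python
from itertools import combinations, product

def box_behnken(n_factors, n_center=3):
    """Coded Box-Behnken design: +/-1 pairs, remaining factors at 0."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * n_factors
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * n_factors for _ in range(n_center)]  # centre points
    return runs

def decode(row, levels):
    """Map coded -1/0/+1 levels to real (low, mid, high) values."""
    return [levels[k][c + 1] for k, c in enumerate(row)]

design = box_behnken(3)          # 12 edge runs + 3 centre points = 15 runs
real_runs = [decode(r, [(0.2, 0.6, 1.0),   # A: chitosan, mg/mL (assumed)
                        (0.2, 0.4, 0.6),   # B: TPP, mg/mL (assumed)
                        (60, 120, 180)])   # C: stirring time, min (assumed)
             for r in design]
```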

  10. Chemomechanical preparation by hand instrumentation and by Mtwo engine-driven rotary files, an ex vivo study.

    Science.gov (United States)

    Krajczár, Károly; Tigyi, Zoltán; Papp, Viktória; Marada, Gyula; Sára, Jeges; Tóth, Vilmos

    2012-07-01

    To compare the disinfecting efficacy of sodium hypochlorite irrigation during root canal preparation with stainless steel hand files (taper 0.02) and nickel-titanium Mtwo files (taper 0.04-0.06). Forty extracted human teeth were sterilized and then inoculated with Enterococcus faecalis (ATCC 29212). After a 6-day incubation the root canals were prepared by hand with K-files (n=20) and by engine-driven Mtwo files (VDW, Munich, Germany) (n=20). Irrigation was carried out with 2.5% NaOCl in both cases. Samples were taken from the root canals before and after preparation with instruments #25 and #35, and colony-forming units (CFU) were determined. A significant reduction in bacterial count was found after filing in both groups. The number of bacteria kept decreasing as the apical preparation diameter was extended. There was no significant difference in bacterial counts between hand and engine-driven instrumentation at the same apical preparation size. Statistical analysis was carried out with the Mann-Whitney test, paired t-test and independent-sample t-test. A significant reduction in CFU was achieved after root canal preparation completed with 2.5% NaOCl irrigation, with both stainless steel hand and nickel-titanium rotary files. The root canals remained slightly infected after chemomechanical preparation in both groups. Key words: Chemomechanical preparation, root canal disinfection, nickel-titanium, conicity, greater taper, apical size.

  11. Design optimization and tolerance analysis of a spot-size converter for the taper-assisted vertical integration platform in InP.

    Science.gov (United States)

    Tolstikhin, Valery; Saeidi, Shayan; Dolgaleva, Ksenia

    2018-05-01

    We report on the design optimization and tolerance analysis of a multistep lateral-taper spot-size converter based on indium phosphide (InP), performed using the Monte Carlo method. Being a natural fit to (and a key building block of) the regrowth-free taper-assisted vertical integration platform, such a spot-size converter enables efficient and displacement-tolerant fiber coupling to InP-based photonic integrated circuits at a wavelength of 1.31 μm. An exemplary four-step lateral-taper design featuring 0.35 dB coupling loss at optimal alignment of a standard single-mode fiber; ≥7  μm 1 dB displacement tolerance in any direction in a facet plane; and great stability against manufacturing variances is demonstrated.
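    The Monte Carlo tolerance analysis mentioned above can be sketched as repeated sampling of manufacturing deviations through a loss model and counting how often the loss stays within budget. The quadratic loss model, taper widths and process spreads below are illustrative stand-ins, not the paper's:

```python
import random

def mc_yield(loss_model, nominal, sigma, budget_db, trials=2000, seed=3):
    """Fraction of Monte Carlo samples whose coupling loss meets the budget."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        # Draw an independent Gaussian deviation for each taper parameter.
        sample = {k: rng.gauss(v, sigma[k]) for k, v in nominal.items()}
        if loss_model(sample) <= budget_db:
            ok += 1
    return ok / trials

# Stand-in model: loss (dB) grows quadratically away from nominal widths (um).
nominal = {"w1": 2.0, "w2": 1.2, "w3": 0.7, "w4": 0.35}
sigma = {k: 0.02 for k in nominal}                 # assumed process spread
loss = lambda s: 0.35 + sum(40.0 * (s[k] - nominal[k]) ** 2 for k in nominal)
yield_frac = mc_yield(loss, nominal, sigma, budget_db=0.5)
```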

  12. Derivative load voltage and particle swarm optimization to determine optimum sizing and placement of shunt capacitor in improving line losses

    Directory of Open Access Journals (Sweden)

    Mohamed Milad Baiek

    2016-12-01

    Full Text Available The purpose of this research is to study the optimal size and placement of a shunt capacitor in order to minimize line losses. The derivative of the load bus voltage was calculated to determine the most sensitive load buses, which were then candidates for placement of the shunt capacitor. Particle swarm optimization (PSO) was demonstrated on the IEEE 14-bus power system to find the optimum size of the shunt capacitor for reducing line losses. The objective function was applied to determine the proper placement of the capacitor and obtain solutions that satisfy the constraints. The simulation was run in Matlab under two scenarios, namely a base case and a 100% load increase. The derivative of the load bus voltage was simulated to determine the most sensitive load bus, and PSO was carried out to determine the optimum sizing of the shunt capacitor at that bus. The results show that the most sensitive bus was bus 14 for both the base case and the 100% load increase. The optimum size was 8.17 Mvar for the base case and 23.98 Mvar for the 100% load increase. Line losses were reduced by approximately 0.98% in the base case and by about 3.16% with the 100% load increase. The proposed method also proved better than the harmony search algorithm (HSA): HSA recorded a loss reduction ratio of about 0.44% for the base case and 2.67% when the load was increased by 100%, while PSO achieved loss reduction ratios of about 1.12% and 4.02% for the base case and the 100% load increase, respectively. The results of this study support the previous study, and it is concluded that PSO is able to solve such engineering problems and to determine shunt capacitor sizing in a power system simply and accurately compared with other evolutionary optimization methods.
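    A minimal PSO of the kind used above can be sketched as follows. The load-flow loss calculation is replaced by a toy one-dimensional loss curve whose minimum is placed at the base-case optimum of 8.17 Mvar reported in the abstract; all other constants are illustrative:

```python
import random

def pso(loss, lo, hi, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Minimize a 1-D loss over [lo, hi] with particle swarm optimization."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n)]   # positions (Mvar)
    v = [0.0] * n                                 # velocities
    pbest = x[:]                                  # personal bests
    gbest = min(x, key=loss)                      # global best
    for _ in range(iters):
        for i in range(n):
            v[i] = (w * v[i]
                    + c1 * rng.random() * (pbest[i] - x[i])
                    + c2 * rng.random() * (gbest - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))
            if loss(x[i]) < loss(pbest[i]):
                pbest[i] = x[i]
                if loss(x[i]) < loss(gbest):
                    gbest = x[i]
    return gbest

# Toy loss curve with its minimum at the reported base-case optimum.
toy_loss = lambda q: (q - 8.17) ** 2 + 5.0
q_opt = pso(toy_loss, 0.0, 30.0)
```

    In the paper the loss function would be a full load-flow run on the IEEE 14-bus system with the capacitor of size q placed at bus 14.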

  13. Optimal sizing and control strategy of isolated grid with wind power and energy storage system

    International Nuclear Information System (INIS)

    Luo, Yi; Shi, Lin; Tu, Guangyu

    2014-01-01

    Highlights: • An energy storage sizing scheme for a wind-powered isolated grid is developed. • A bi-level control strategy for a wind-battery isolated grid is proposed. • The energy storage type selection method for the Nan’ao island grid is presented. • The sizing method and the control strategy are verified on the Nan’ao island grid. • The wind-battery demonstration system has great benefit for remote areas. - Abstract: Integrating renewable energy and energy storage systems provides a prospective way to supply power to remote areas. Focusing on isolated grids comprising renewable energy generation and energy storage, this paper presents an energy storage sizing method that takes the reliability requirement into account and a bi-level control strategy for the isolated grid. Based on a comparative analysis of current energy storage characteristics and practicability, a sodium-sulfur battery is recommended for power balance control in the isolated grid. The optimal size of the energy storage system is determined by a genetic algorithm and sequential simulation. The annualized cost, including the compensation cost of curtailed wind power and load, is minimized while the reliability requirement is satisfied. The sizing method emphasizes the tradeoff between energy storage size and reliability of power supply. The bi-level control strategy is designed as upper-level wide-area power balance control on the dispatch timescale and lower-level battery energy storage system V/f control in real time for isolated operation. Mixed-timescale simulation results for the Nan’ao Island grid verify the effectiveness of the proposed sizing method and control strategy

  14. Distributed PACS using distributed file system with hierarchical meta data servers.

    Science.gov (United States)

    Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato

    2012-01-01

    In this research, we propose a new distributed PACS (Picture Archiving and Communication System) that can integrate the several PACSs that exist in individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into metadata and image data, which are stored individually. Because the entire file does not have to be accessed for every operation, tasks such as finding files and changing titles can be performed at high speed. At the same time, since a distributed file system is used, access to image files is also fast and highly fault tolerant. A further significant point of the proposed system is the simplicity of integrating several PACSs: only the metadata servers need to be integrated to construct the combined system. The system also scales file access with the number and size of files. On the other hand, because the metadata server is centralized, it is the weak point of the system. To remove this weakness, hierarchical metadata servers are introduced, which increases both fault tolerance and the scalability of file access. To evaluate the proposed system, a prototype was implemented using Gfarm, and the file search times of Gfarm and NFS were compared.
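    The metadata/image split described above can be sketched with two separate stores, so that searching and retitling never touch pixel data. Both "servers" are plain dicts here, and the class and field names are illustrative, not the paper's implementation:

```python
class MiniPACS:
    """Toy PACS: metadata on a meta-data server, image bytes elsewhere."""

    def __init__(self):
        self.meta = {}    # study_id -> metadata dict (meta-data server)
        self.blobs = {}   # study_id -> raw image bytes (distributed FS)

    def store(self, study_id, metadata, pixels):
        self.meta[study_id] = dict(metadata)
        self.blobs[study_id] = pixels

    def find(self, **query):
        # Metadata-only search: fast because no image bytes are read.
        return [sid for sid, m in self.meta.items()
                if all(m.get(k) == v for k, v in query.items())]

    def retitle(self, study_id, new_title):
        self.meta[study_id]["title"] = new_title  # blobs untouched

    def fetch(self, study_id):
        return self.blobs[study_id]
```

    Integrating two institutions then amounts to merging only the `meta` stores; the image blobs stay where they are on each site's distributed file system.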

  15. Multiobjective optimization applied to structural sizing of low cost university-class microsatellite projects

    Science.gov (United States)

    Ravanbakhsh, Ali; Franchini, Sebastián

    2012-10-01

    In recent years there has been continuing interest in the participation of university research groups in space technology studies by means of their own microsatellites. Involvement in such projects has some inherent challenges, such as limited budget and facilities. Also, because the main objective of these projects is educational, there are usually uncertainties regarding the in-orbit mission and scientific payloads in the early phases of the project. On the other hand, there are predetermined limitations on mass and volume budgets, since most of these satellites are launched as auxiliary payloads, which reduces the launch cost considerably. The satellite structure subsystem is the one most affected by the launcher constraints, which can affect different aspects, including dimensions, strength and frequency requirements. In this paper, the main focus is on developing a structural design sizing tool containing not only the primary structure properties as variables but also system-level variables such as the payload mass budget and the satellite total mass and dimensions. This approach enables the design team to obtain better insight into the design in an extended design envelope. The structural design sizing tool is based on analytical structural design formulas and appropriate assumptions, including both static and dynamic models of the satellite. Finally, a genetic algorithm (GA) multiobjective optimization is applied to the design space. The result is a Pareto-optimal front based on two objectives, minimum satellite total mass and maximum payload mass budget, which gives useful insight to the design team in the early phases of the design.

  16. CT-angiography-based evaluation of the aortic annulus for prosthesis sizing in transcatheter aortic valve implantation (TAVI)-predictive value and optimal thresholds for major anatomic parameters.

    Directory of Open Access Journals (Sweden)

    Florian Schwarz

    Full Text Available BACKGROUND/OBJECTIVES: To evaluate the predictive value of CT-derived measurements of the aortic annulus for prosthesis sizing in transcatheter aortic valve implantation (TAVI) and to calculate optimal cutoff values for the selection of various prosthesis sizes. METHODS: The local IRB waived approval for this single-center retrospective analysis. Of 441 consecutive TAVI-patients, 90 were excluded (death within 30 days: 13; more than mild aortic regurgitation: 10; other reasons: 67). In the remaining 351 patients, the CoreValve (Medtronic) and the Edwards Sapien XT valve (Edwards Lifesciences) were implanted in 235 and 116 patients. Optimal prosthesis size was determined during TAVI by inflation of a balloon catheter at the aortic annulus. All patients had undergone CT-angiography of the heart or body trunk prior to TAVI. Using these datasets, the diameter of the long and short axis as well as the circumference and the area of the aortic annulus were measured. Multi-Class Receiver-Operator-Curve analyses were used to determine the predictive value of all variables and to define optimal cutoff-values. RESULTS: Differences between patients who underwent implantation of the small, medium or large prosthesis were significant for all except the large vs. medium CoreValve (all p's<0.05). Furthermore, mean diameter, annulus area and circumference had equally high predictive value for prosthesis size for both manufacturers (multi-class AUC's: 0.80, 0.88, 0.91, 0.88, 0.88, 0.89). Using the calculated optimal cutoff-values, prosthesis size is predicted correctly in 85% of cases. CONCLUSION: CT-based aortic root measurements permit excellent prediction of the prosthesis size considered optimal during TAVI.

  17. Towards Optimal Buffer Size in Wi-Fi Networks

    KAUST Repository

    Showail, Ahmad J.

    2016-01-19

    Buffer sizing is an important network configuration parameter that impacts the quality of data traffic. Falling memory cost and the fallacy that ‘more is better’ lead to over-provisioning network devices with large buffers. Over-buffering, the so-called ‘bufferbloat’ phenomenon, creates excessive end-to-end delay in today’s networks. On the other hand, under-buffering results in frequent packet loss and subsequent under-utilization of network resources. The buffer sizing problem has been studied extensively for wired networks. However, there is little work addressing the unique challenges of the wireless environment. In this dissertation, we discuss buffer sizing challenges in wireless networks, classify the state-of-the-art solutions, and propose two novel buffer sizing schemes. The first scheme targets buffer sizing in wireless multi-hop networks, where the radio spectral resource is shared among a set of contending nodes; hence, it sizes the buffer collectively and distributes it over the set of interfering devices. The second buffer sizing scheme is designed to cope with recent Wi-Fi enhancements. It adapts the buffer size based on measured link characteristics and network load, and enforces limits on the buffer size to maximize frame aggregation benefits. Both mechanisms are evaluated using simulation as well as testbed implementation over half-duplex and full-duplex wireless networks. Experimental evaluation shows that our proposal reduces latency by an order of magnitude.
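The wired-network baselines the dissertation contrasts itself against have a compact form: the bandwidth-delay product rule and its square-root scaling for many flows. A sketch (these are the classic wired heuristics, not the schemes proposed in the dissertation):

```python
import math

def bdp_buffer_bytes(bottleneck_bps, rtt_s, n_flows=1):
    """Rule-of-thumb buffer size for a bottleneck link, in bytes.

    The classic wired rule is B = C * RTT (the bandwidth-delay product);
    for n long-lived TCP flows the later Stanford result scales this
    down to B = C * RTT / sqrt(n).  These heuristics do not account for
    shared wireless spectrum or frame aggregation, which is the gap the
    dissertation addresses.
    """
    bdp_bytes = bottleneck_bps * rtt_s / 8.0
    return bdp_bytes / math.sqrt(max(1, n_flows))
```

For a 100 Mbps link with a 100 ms RTT this gives 1.25 MB for a single flow, shrinking to 125 kB when 100 flows share the bottleneck.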

  18. Optimal city size and population density for the 21st century.

    Science.gov (United States)

    Speare, A; White, M J

    1990-10-01

    The thesis that large scale urban areas result in greater efficiency, reduced costs, and a better quality of life is reexamined. The environmental and social costs are measured for different scales of settlement. The desirability and perceived problems of a particular place are examined in relation to size of place, and the consequences of population decline are considered. New York City is described as providing both opportunities in employment, shopping, and cultural activities and a high cost of living, crime, and pollution. The historical development of large cities in the US is described. Immigration has contributed to a greater concentration of population than would otherwise have occurred. The spatial proximity of goods and services argument (agglomeration economies) has changed with advancements in technology such as roads, trucking, and electronic communication. There is no optimal city size. The overall effect of agglomeration can be assessed by determining whether the markets for goods and labor are adequate to maximize well-being and balance the negative and positive aspects of urbanization. The environmental costs of cities increase with size when air quality, water quality, sewage treatment, and hazardous waste disposal are considered. Smaller scale and lower density cities have the advantage of a lower concentration of pollutants. Also, mobilization for program support is easier with a homogeneous population. Lower population growth in large cities would contribute to a higher quality of life, since large metropolitan areas have a concentration of immigrants, younger age distributions, and minority groups with higher than average birth rates. The negative consequences of decline can be avoided if reduction of population in large cities takes place gradually; for example, poorer quality housing can be removed for open space. Cities should, however, still attract all classes of people, with opportunities equally available.

  19. Algorithms for optimal sequencing of dynamic multileaf collimators

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, Srijit [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Sahni, Sartaj [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Palta, Jatinder [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Ranka, Sanjay [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States)

    2004-01-07

    Dynamic multileaf collimator (DMLC) intensity modulated radiation therapy (IMRT) is used to deliver intensity modulated beams using a multileaf collimator (MLC), with the leaves in motion. DMLC-IMRT requires the conversion of a radiation intensity map into a leaf sequence file that controls the movement of the MLC while the beam is on. It is imperative that the intensity map delivered using the leaf sequence file be as close as possible to the intensity map generated by the dose optimization algorithm, while satisfying hardware constraints of the delivery system. Optimization of the leaf-sequencing algorithm has been the subject of several recent investigations. In this work, we present a systematic study of the optimization of leaf-sequencing algorithms for dynamic multileaf collimator beam delivery and provide rigorous mathematical proofs of optimized leaf sequence settings in terms of monitor unit (MU) efficiency under the most common leaf movement constraints that include leaf interdigitation constraint. Our analytical analysis shows that leaf sequencing based on unidirectional movement of the MLC leaves is as MU efficient as bi-directional movement of the MLC leaves.
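The MU-efficiency quantity at the heart of this result has a compact discrete form: for a single leaf pair sweeping in one direction, the minimum beam-on time for a 1D intensity profile is the first sample plus the sum of its positive increments. A sketch of that calculation (ignoring the leaf-speed and interdigitation constraints that the paper treats rigorously):

```python
def min_mu_unidirectional(profile):
    """Minimum MU needed to deliver a 1D discrete intensity profile with
    one leaf pair sweeping left-to-right: the first sample plus every
    positive step up.  Downward steps are absorbed by the trailing leaf
    and cost no extra beam-on time."""
    total = max(0, profile[0])
    for prev, cur in zip(profile, profile[1:]):
        if cur > prev:
            total += cur - prev
    return total
```

For the profile [2, 5, 3, 6] this gives 2 + 3 + 3 = 8 MU; a flat profile costs exactly its height, which is why unidirectional sweeps can match bidirectional ones in MU efficiency.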

  20. Algorithms for optimal sequencing of dynamic multileaf collimators

    International Nuclear Information System (INIS)

    Kamath, Srijit; Sahni, Sartaj; Palta, Jatinder; Ranka, Sanjay

    2004-01-01

    Dynamic multileaf collimator (DMLC) intensity modulated radiation therapy (IMRT) is used to deliver intensity modulated beams using a multileaf collimator (MLC), with the leaves in motion. DMLC-IMRT requires the conversion of a radiation intensity map into a leaf sequence file that controls the movement of the MLC while the beam is on. It is imperative that the intensity map delivered using the leaf sequence file be as close as possible to the intensity map generated by the dose optimization algorithm, while satisfying hardware constraints of the delivery system. Optimization of the leaf-sequencing algorithm has been the subject of several recent investigations. In this work, we present a systematic study of the optimization of leaf-sequencing algorithms for dynamic multileaf collimator beam delivery and provide rigorous mathematical proofs of optimized leaf sequence settings in terms of monitor unit (MU) efficiency under the most common leaf movement constraints that include leaf interdigitation constraint. Our analytical analysis shows that leaf sequencing based on unidirectional movement of the MLC leaves is as MU efficient as bi-directional movement of the MLC leaves.

  1. Optimization of the LENS process for steady molten pool size

    Energy Technology Data Exchange (ETDEWEB)

    Wang, L. [Center for Advanced Vehicular Systems, Mississippi State University, Mississippi State, MS 39762 (United States); Felicelli, S. [Mechanical Engineering Department, Mississippi State University, Mississippi State, MS 39762 (United States)], E-mail: felicelli@me.msstate.edu; Gooroochurn, Y. [ESI Group, Bloomfield Hills, MI 48304 (United States); Wang, P.T.; Horstemeyer, M.F. [Center for Advanced Vehicular Systems, Mississippi State University, Mississippi State, MS 39762 (United States)

    2008-02-15

    A three-dimensional finite element model was developed and applied to analyze the temperature and phase evolution in deposited stainless steel 410 (SS410) during the Laser Engineered Net Shaping (LENS) rapid fabrication process. The effect of solid phase transformations is taken into account by using temperature and phase dependent material properties and the continuous cooling transformation (CCT) diagram. The laser beam is modeled as a Gaussian distribution of heat flux from a moving heat source with conical shape. The laser power and translational speed during deposition of a single-wall plate are optimized in order to maintain a steady molten pool size. It is found that, after an initial transient due to the cold substrate, the dependency of laser power with layer number is approximately linear for all travel speeds analyzed. The temperature distribution and cooling rate surrounding the molten pool are predicted and compared with experiments. Based upon the predicted thermal cycles and cooling rate, the phase transformations and their effects on the hardness of the part are discussed.

  2. PENENTUAN PRODUCTION LOT SIZES DAN TRANSFER BATCH SIZES DENGAN PENDEKATAN MULTISTAGE (Determination of Production Lot Sizes and Transfer Batch Sizes with a Multistage Approach)

    Directory of Open Access Journals (Sweden)

    Purnawan Adi W

    2012-02-01

    optimal lot size in a system of production with several types. Analysis of production batches (production lots) using hybrid analytic simulation is one kind of research on optimal lot size. That research used a single-stage system approach, in which there is no relationship between processes at any stage; in other words, each process is independent of the others. Using the same research object, this research takes up the problem of determining production lot sizes with a multi-stage approach. First, the optimal production lot size is determined by a linear program using the same data as the previous research. Then, the production lot size is used as simulation input to determine the transfer batch size. Average queue length and waiting time are the performance measures used as references in selecting the transfer batch size from several alternatives. This research shows that the production lot size equals the demand in each period. The transfer batch size determined by simulation was then implemented in the model. The result is a decrease in inventory of 76.35% for the connector product and 50.59% for the box connector product, compared to inventory under the single-stage approach. Keywords: multistage, production lot, transfer batch
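The record does not spell out its linear program, but the single-stage baseline it compares against is classically solved by the Wagner-Whitin dynamic program, sketched here with illustrative costs (not the paper's data). Note how a high holding cost drives the optimum toward lot-for-lot, i.e. lot size equal to demand in each period, which matches the paper's finding:

```python
def wagner_whitin(demand, setup_cost, hold_cost):
    """Classic single-item dynamic lot-sizing DP (Wagner-Whitin).

    best[t] = minimum cost to satisfy demand for periods 0..t-1.
    Producing in period i for periods i..j incurs one setup plus
    holding cost hold_cost * (j - i) per unit carried to period j.
    Returns the minimum total cost over the horizon."""
    n = len(demand)
    INF = float("inf")
    best = [0.0] + [INF] * n
    for i in range(n):
        if best[i] == INF:
            continue
        carry = 0.0
        for j in range(i, n):
            carry += hold_cost * (j - i) * demand[j]
            cost = best[i] + setup_cost + carry
            if cost < best[j + 1]:
                best[j + 1] = cost
    return best[n]
```

With demand [10, 10], setup 5 and holding cost 1, producing every period (lot-for-lot, cost 10) beats batching (cost 15); drop the holding cost to 0.1 and a single batch wins at cost 6.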

  3. On optimal (non-Trojan) semi-Latin squares with side n and block size n: Construction procedure and admissible permutations

    International Nuclear Information System (INIS)

    Chigbu, P.E.; Ukekwe, E.C.; Ikekeonwu, G.A.M.

    2006-12-01

    There is a special family of the (n x n)/k semi-Latin squares called the Trojan squares which are optimal among semi-Latin squares of equivalent sizes. Unfortunately, Trojan squares do not exist for all k; for instance, there is no Trojan square for k ≥ n. However, the need usually arises for constructing optimal semi-Latin squares where no Trojan squares exist. Bailey made a conjecture on optimal semi-Latin squares for k ≥ n and based on this conjecture, optimal non-Trojan semi-Latin squares are here constructed for k = n, considering the inherent Trojan squares for k < n. A lemma substantiating this conjecture for k = n is given and proved. In addition, the properties for the admissible permutation sets used in constructing these optimal squares are made evident based on the systematic-group-theoretic algorithm of Bailey and Chigbu. Algorithms for identifying the admissible permutations as well as constructing the optimal non-Trojan (n x n)/k = n semi-Latin squares for odd n and n = 4 are given. (author)

  4. Effect of repetitive pecking at working length for glide path preparation using G-file

    Directory of Open Access Journals (Sweden)

    Jung-Hong Ha

    2015-05-01

    Full Text Available Objectives Glide path preparation is recommended to reduce torsional failure of nickel-titanium (NiTi) rotary instruments and to prevent root canal transportation. This study evaluated whether repetitive insertions of G-files to the working length maintain the apical size as well as provide sufficient lumen as a glide path for subsequent instrumentation. Materials and Methods The G-file system (Micro-Mega), composed of G1 and G2 files for glide path preparation, was used with J-shaped, simulated resin canals. After inserting a G1 file twice, a G2 file was inserted to the working length 1, 4, 7, or 10 times for each of the four experimental groups, respectively (n = 10). The canals were then cleaned by copious irrigation and lubricated with a separating gel medium. Canal replicas were made using silicone impression material, and the diameter of the replicas was measured at the working length (D0) and at the 1 mm level (D1) under a scanning electron microscope. Data were analysed by one-way ANOVA and post-hoc tests (p = 0.05). Results The diameter at the D0 level did not show any significant difference between the 1, 2, 4, and 10 repetitive pecking insertions of G2 files at working length. However, 10 pecking motions with the G2 file resulted in a significantly larger canal diameter at D1 (p < 0.05). Conclusions Under the limitations of this study, the repetitive insertion of a G2 file up to 10 times at working length created an adequate lumen for subsequent apical shaping with rotary files bigger than International Organization for Standardization (ISO) size 20, without apical transportation at the D0 level.

  5. Modified Electric System Cascade Analysis for optimal sizing of an autonomous Hybrid Energy System

    International Nuclear Information System (INIS)

    Zahboune, Hassan; Zouggar, Smail; Yong, Jun Yow; Varbanov, Petar Sabev; Elhafyani, Mohammed; Ziani, Elmostafa; Zarhloule, Yassine

    2016-01-01

    Ensuring sufficient generation to cover the power demand at minimum system cost is the goal of using renewable energy on isolated sites. Solar and wind capture are the most widely used means of generating clean electricity. Their availability is generally shifted in time; therefore, it is advantageous to consider both sources simultaneously when designing the electrical power supply module of the studied system. A specific challenge in this context is to find the optimal sizes of the power generation and storage facilities that minimise the overall system cost while still satisfying the demand. In this work, a new design algorithm minimising the system cost is presented, based on Electric System Cascade Analysis and Power Pinch Analysis. The algorithm takes as inputs the wind speed, solar irradiation, and cost data for the generation and storage facilities. It has also been applied to minimise the loss of power supply probability (LPSP) and to ensure the minimum number of storage units without using outsourced electricity. The algorithm has been demonstrated on a case study with a daily electrical energy demand of 18.7 kWh, resulting in a combination of PV panels, a wind turbine, and batteries at minimal cost. For the conditions in Oujda city, the case study results indicate that it is possible to achieve a Levelised Cost of Electricity of 0.25 €/kWh for the generated power. - Highlights: • Renewable electricity systems for remote locations. • Optimal sizes of the power generation and storage facilities. • Improved Power Pinch procedure. • Achieves viable power cost levels.
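The cascade idea behind this class of method can be sketched in a few lines: accumulate the hourly surplus/deficit of generation minus demand, and read the required storage capacity off the deepest point of the cascade. This is a deliberately simplified, lossless single-battery sketch, not the paper's full algorithm:

```python
def size_storage_kwh(generation, demand):
    """Miniature Electric System Cascade Analysis: run a cumulative
    surplus/deficit cascade over the time horizon; the deepest deficit
    reached is the storage capacity needed to ride through without
    outside electricity (a core Power Pinch Analysis idea).
    Inputs are per-step energies in kWh; losses are ignored."""
    cumulative, worst = 0.0, 0.0
    for gen, dem in zip(generation, demand):
        cumulative += gen - dem
        worst = min(worst, cumulative)
    return -worst
```

A profile that generates all its energy late in the day needs storage equal to the energy consumed before generation catches up; a profile that generates early needs none.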

  6. Optimization of detector size and collimator for PG-SPECT

    International Nuclear Information System (INIS)

    Ishikawa, M.; Kobayashi, T.; Kanda, K.

    2000-01-01

    The current absorbed-dose evaluation method in Boron Neutron Capture Therapy derives the boron reaction rate indirectly, from the boron concentration of the affected region inferred from the neutron flux and the boron concentration in blood measured by gold-wire activation, and converts it into an absorbed dose. We therefore devised a PG-SPECT (Prompt Gamma-ray Single Photon Emission Computed Tomography) system to evaluate the absorbed dose directly by measuring prompt gamma-rays. An ordinary SPECT system uses a large NaI scintillator as detector, so that measurement is done in a low-background gamma-ray environment. However, a conventional detector and collimator system cannot simply be applied to a PG-SPECT system, because abundant background radiation coexists (the PG-SPECT system is set up in the irradiation room). Accordingly, PG-SPECT requires a dedicated detector and collimator system. In order to reduce the efficiency for background gamma-rays, we arranged detectors inside a collimator to shield them from background gamma-rays, and examined the most suitable collimator shape. The optimization conditions for the dedicated collimator system are as follows: 1) the smallest particle size that can be distinguished is 1 cm; 2) the necessary count at the measurement target center is not less than 10,000. (author)

  7. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms

    Directory of Open Access Journals (Sweden)

    Ambika Ramamoorthy

    2016-01-01

    Full Text Available Power grid becomes smarter nowadays along with technological development. The benefits of smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all the renewable sources, solar power takes the prominent position due to its availability in abundance. The proposed methodology presented in this paper is aimed at minimizing network power losses and at improving the voltage stability within the framework of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on the system losses in a transmission system. In this paper, combined nature inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In a first step, the best size of DG is determined through PSO metaheuristics, and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by the Loss Sensitivity Factor (LSF) and weak (WK) bus methods, and the results are compared. In a second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with varying numbers of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and P and Q both) are also analyzed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology.
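The PSO step described above is a standard box-constrained particle swarm; in the paper the objective is the network loss from a power-flow solve, but the optimizer itself is generic. A minimal sketch with conventional parameter values (inertia and acceleration constants are illustrative, not taken from the paper):

```python
import random

def pso(objective, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer for a box-constrained
    minimization problem.  `bounds` is a list of (lo, hi) per
    dimension; any callable objective works (e.g. a DG-sizing loss
    function).  Returns (best position, best value)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp the updated position to the feasible box.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

Swapping the quadratic test objective for a loss-evaluating power-flow routine gives the sizing step of the two-step scheme; the GSA and hybrid PSOGSA variants differ only in the velocity update.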

  8. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms.

    Science.gov (United States)

    Ramamoorthy, Ambika; Ramachandran, Rajeswari

    2016-01-01

    Power grid becomes smarter nowadays along with technological development. The benefits of smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all the renewable sources, solar power takes the prominent position due to its availability in abundance. The proposed methodology presented in this paper is aimed at minimizing network power losses and at improving the voltage stability within the framework of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on the system losses in a transmission system. In this paper, combined nature inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In a first step, the best size of DG is determined through PSO metaheuristics, and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by Loss Sensitivity Factor (LSF) and weak (WK) bus methods, and the results are compared. In a second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with varying numbers of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and P and Q both) are also analyzed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology.

  9. Impact of Spot Size and Spacing on the Quality of Robustly Optimized Intensity Modulated Proton Therapy Plans for Lung Cancer.

    Science.gov (United States)

    Liu, Chenbin; Schild, Steven E; Chang, Joe Y; Liao, Zhongxing; Korte, Shawn; Shen, Jiajian; Ding, Xiaoning; Hu, Yanle; Kang, Yixiu; Keole, Sameer R; Sio, Terence T; Wong, William W; Sahoo, Narayan; Bues, Martin; Liu, Wei

    2018-06-01

    To investigate how spot size and spacing affect plan quality, robustness, and interplay effects of robustly optimized intensity modulated proton therapy (IMPT) for lung cancer. Two robustly optimized IMPT plans were created for 10 lung cancer patients: first by a large-spot machine with in-air energy-dependent large spot size at isocenter (σ: 6-15 mm) and spacing (1.3 σ), and second by a small-spot machine with in-air energy-dependent small spot size (σ: 2-6 mm) and spacing (5 mm). Both plans were generated by optimizing radiation dose to the internal target volume on averaged 4-dimensional computed tomography scans using an in-house-developed IMPT planning system. The dose-volume histogram band method was used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effects with randomized starting phases for each field per fraction. Patient anatomy voxels were mapped phase-to-phase via deformable image registration, and doses were scored using in-house-developed software. Dose-volume histogram indices, including internal target volume dose coverage, homogeneity, and organs at risk (OARs) sparing, were compared using the Wilcoxon signed-rank test. Compared with the large-spot machine, the small-spot machine resulted in significantly lower heart and esophagus mean doses, with comparable target dose coverage, homogeneity, and protection of other OARs. Plan robustness was comparable for targets and most OARs. With interplay effects considered, significantly lower heart and esophagus mean doses with comparable target dose coverage and homogeneity were observed using smaller spots. Robust optimization with a small-spot machine significantly improves heart and esophagus sparing, with comparable plan robustness and interplay effects compared with robust optimization with a large-spot machine. A small-spot machine uses a larger number of spots to cover the same tumors compared with a large-spot machine.

  10. Optimal chest drain size: the rise of the small-bore pleural catheter.

    Science.gov (United States)

    Fysh, Edward T H; Smith, Nicola A; Lee, Y C Gary

    2010-12-01

    Drainage of the pleural space is not a modern concept, but the optimal size of chest drains to use remains debated. Conventional teaching advocates blunt dissection and large-bore tubes; but in recent years, small-bore catheters have gained popularity. In the absence of high-quality randomized data, this review summarizes the available literature on the choice of chest drains. The objective data supporting the use of large-bore tubes is scarce in most pleural diseases. Increasing evidence shows that small-bore catheters induce less pain and are of comparable efficacy to large-bore tubes, including in the management of pleural infection, malignant effusion, and pneumothoraces. The onus now is on those who favor large tubes to produce clinical data to justify the more invasive approach. © Thieme Medical Publishers.

  11. Inverse estimation of the particle size distribution using the Fruit Fly Optimization Algorithm

    International Nuclear Information System (INIS)

    He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming

    2015-01-01

    The Fruit Fly Optimization Algorithm (FOA) is applied to retrieve the particle size distribution (PSD) for the first time. The direct problems are solved by the modified Anomalous Diffraction Approximation (ADA) and the Lambert–Beer Law. Firstly, three commonly used monomodal PSDs, i.e. the Rosin–Rammler (R–R) distribution, the normal (N–N) distribution and the logarithmic normal (L–N) distribution, and the bimodal Rosin–Rammler distribution function are estimated in the dependent model. All the results show that the FOA can be used as an effective technique to estimate PSDs under the dependent model. Then, an optimal wavelength selection technique is proposed to improve the retrieval of bimodal PSDs. Finally, combined with two general functions, i.e. the Johnson's SB (J-SB) function and the modified beta (M-β) function, the FOA is employed to recover actual measured aerosol PSDs over Beijing and Hangzhou obtained from the Aerosol Robotic Network (AERONET). All the numerical simulations and experimental results demonstrate that the FOA can retrieve actual measured PSDs, and that more reliable and accurate results are obtained if the J-SB function is employed.
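The inverse problem above can be illustrated with a much-simplified variant of FOA: flies scatter randomly around the swarm location, and the best "smell" (lowest squared error against the measured distribution) pulls the swarm. This sketch fits the two Rosin-Rammler parameters directly to cumulative-fraction data, skipping the ADA/Lambert-Beer forward model; all numbers are synthetic:

```python
import math
import random

def rosin_rammler(d, d_bar, n):
    """R-R cumulative undersize fraction at diameter d."""
    return 1.0 - math.exp(-((d / d_bar) ** n))

def foa_fit(diams, measured, iters=300, flies=30, step=0.5, seed=7):
    """Simplified Fruit Fly Optimization: at each iteration a cloud of
    flies samples randomly around the current swarm location in
    (d_bar, n) space; any fly with a lower squared error relocates the
    swarm.  Returns the fitted (d_bar, n) and the residual error."""
    rng = random.Random(seed)
    loc = [1.0, 1.0]                         # initial (d_bar, n) guess

    def err(p):
        d_bar, n = max(p[0], 1e-9), max(p[1], 1e-9)
        return sum((rosin_rammler(d, d_bar, n) - m) ** 2
                   for d, m in zip(diams, measured))

    best = err(loc)
    for _ in range(iters):
        for _ in range(flies):
            cand = [loc[0] + step * (2 * rng.random() - 1),
                    loc[1] + step * (2 * rng.random() - 1)]
            e = err(cand)
            if e < best:
                loc, best = cand, e
    return loc, best
```

The canonical FOA uses a distance-to-smell transform (S = 1/D) rather than perturbing parameters directly; this sketch keeps only the scatter-and-follow-the-best mechanism.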

  12. SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y; Tian, Z; Song, T; Jia, X; Gu, X; Jiang, S [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MATLAB to facilitate the commissioning. Methods: An FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and a 2D array of scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted to the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of the individual beamlets, calculated with the fitted profile parameters and scaled using the scaling factors, these factors could be determined by solving an optimization problem that minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned an FSPB algorithm for three linac photon beams (6 MV, 15 MV and 6 MV FFF). Doses for four field sizes (6×6 cm2, 10×10 cm2, 15×15 cm2 and 20×20 cm2) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.
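The profile-parameter fit can be illustrated with a toy version of the penumbra step: model the beam edge as an error-function profile whose width is the kernel sigma, and search for the sigma that best matches the reference profile. A grid search stands in for the Levenberg-Marquardt fit used in the abstract; the model and values are illustrative only:

```python
import numpy as np
from math import erf

def fit_penumbra_sigma(x, dose, edge=0.0,
                       sigmas=np.linspace(0.5, 10.0, 191)):
    """Fit an FSPB-style kernel width to a measured beam edge.

    The penumbra of a broad beam with a Gaussian kernel of width sigma
    follows 0.5 * (1 + erf((edge - x) / (sqrt(2) * sigma))).  We pick
    the sigma on a grid that minimizes the squared error against the
    reference dose profile (a stand-in for Levenberg-Marquardt)."""
    x = np.asarray(x, float)
    dose = np.asarray(dose, float)
    best_s, best_e = None, np.inf
    for s in sigmas:
        arg = (edge - x) / (np.sqrt(2.0) * s)
        model = 0.5 * (1 + np.array([erf(v) for v in arg]))
        e = float(np.sum((model - dose) ** 2))
        if e < best_e:
            best_s, best_e = float(s), e
    return best_s
```

Given a profile generated with sigma = 3 mm, the search recovers it to within the grid resolution; in the real tool the off-axis/longitudinal scaling factors are then solved for in a second optimization.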

  13. Multi-objective optimization of water quality, pumps operation, and storage sizing of water distribution systems.

    Science.gov (United States)

    Kurek, Wojciech; Ostfeld, Avi

    2013-01-30

    A multi-objective methodology utilizing the Strength Pareto Evolutionary Algorithm (SPEA2) linked to EPANET for trading-off pumping costs, water quality, and tanks sizing of water distribution systems is developed and demonstrated. The model integrates variable speed pumps for modeling the pumps operation, two water quality objectives (one based on chlorine disinfectant concentrations and one on water age), and tanks sizing cost which are assumed to vary with location and diameter. The water distribution system is subject to extended period simulations, variable energy tariffs, Kirchhoff's laws 1 and 2 for continuity of flow and pressure, tanks water level closure constraints, and storage-reliability requirements. EPANET Example 3 is employed for demonstrating the methodology on two multi-objective models, which differ in the imposed water quality objective (i.e., either with disinfectant or water age considerations). Three-fold Pareto optimal fronts are presented. Sensitivity analysis on the storage-reliability constraint, its influence on pumping cost, water quality, and tank sizing are explored. The contribution of this study is in tailoring design (tank sizing), pumps operational costs, water quality of two types, and reliability through residual storage requirements, in a single multi-objective framework. The model was found to be stable in generating multi-objective three-fold Pareto fronts, while producing explainable engineering outcomes. The model can be used as a decision tool for both pumps operation, water quality, required storage for reliability considerations, and tank sizing decision-making. Copyright © 2012 Elsevier Ltd. All rights reserved.
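The three-fold fronts that SPEA2 approximates rest on the Pareto dominance relation; a minimal sketch of extracting the nondominated set from candidate objective vectors (illustrative (pumping cost, water age, tank cost) style tuples, all minimized):

```python
def dominates(q, p):
    """q dominates p if q is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return (all(qi <= pi for qi, pi in zip(q, p))
            and any(qi < pi for qi, pi in zip(q, p)))

def pareto_front(points):
    """Return the nondominated subset of objective vectors, preserving
    input order.  SPEA2 maintains an archive of exactly such points."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

The quadratic scan is fine for archive-sized sets; SPEA2 additionally assigns strength-based fitness and a density estimate to steer the search toward a well-spread front.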

  14. JENDL special purpose file

    International Nuclear Information System (INIS)

    Nakagawa, Tsuneo

    1995-01-01

    In JENDL-3.2, data on all reactions having significant cross sections over the neutron energy range from 0.01 meV to 20 MeV are given for 340 nuclides. Its range of application extends widely: neutron engineering, shielding and other aspects of fast reactors, thermal neutron reactors and nuclear fusion reactors. This is a general purpose data file. In contrast, a file in which only the data required for a specific application field are collected is called a special purpose file. The file for dosimetry is a typical special purpose file. The Nuclear Data Center, Japan Atomic Energy Research Institute, is preparing ten kinds of JENDL special purpose files. The files, for which the working groups of the Sigma Committee are responsible, are listed. As to the format of the files, the ENDF format is used, as in JENDL-3.2. The dosimetry file, activation cross section file, (α,n) reaction data file, fusion file, actinoid file, high energy data file, photonuclear data file, PKA/KERMA file, gas production cross section file and decay data file are described in terms of their contents, the course of development and their verification. The dosimetry file and gas production cross section file have already been completed. For the others, the expected time of completion is shown. When these files are completed, they will be opened to the public. (K.I.)

  15. Optimal Sizing of Decentralized Photovoltaic Generation and Energy Storage Units for Malaysia Residential Household Using Iterative Method

    Directory of Open Access Journals (Sweden)

    Rahman Hasimah Abdul

    2016-01-01

    Full Text Available The world's fuel sources are decreasing, and the global warming phenomenon makes the search for alternative energy sources urgent. A photovoltaic generating system has high potential, since it is a clean, environmentally friendly and secure energy source. This paper presents optimal sizing of a decentralized photovoltaic system and electrical energy storage for a residential household using an iterative method. The cost of energy, payback period, degree of autonomy and degree of own-consumption are defined as optimization parameters. A case study is conducted employing Kuala Lumpur meteorological data, a typical load profile from a rural area in Malaysia, and a decentralized photovoltaic generation unit with electrical storage, analyzed on an hourly basis. An iterative method, with the photovoltaic array varied from 0.1 kW to 4.0 kW and the storage system varied from 50 Ah to 400 Ah, was used to determine the optimal design for the proposed system.
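The iterative method amounts to sweeping the two design variables, simulating the hourly energy balance for each combination, and keeping the cheapest feasible design. A toy version (lossless battery, hour counted as unmet when the load cannot be served, illustrative prices and profiles rather than the Malaysian data):

```python
def size_pv_battery(solar_kwh_per_kw, load_kwh, pv_cost, batt_cost,
                    pv_sizes, batt_sizes_kwh, max_lpsp=0.0):
    """Iterative PV/storage sizing sketch: sweep PV size (kW) and
    storage capacity (kWh), simulate the hourly balance starting from a
    full battery, and keep the cheapest combination whose
    loss-of-power-supply probability (fraction of hours with unmet
    load) does not exceed max_lpsp.  Returns (cost, pv, cap, lpsp)."""
    best = None
    for pv in pv_sizes:
        for cap in batt_sizes_kwh:
            soc, unmet = cap, 0
            for sun, load in zip(solar_kwh_per_kw, load_kwh):
                soc = min(cap, soc + pv * sun) - load   # charge, then serve load
                if soc < 0:
                    unmet += 1                          # loss-of-load hour
                    soc = 0.0
            lpsp = unmet / len(load_kwh)
            cost = pv * pv_cost + cap * batt_cost
            if lpsp <= max_lpsp and (best is None or cost < best[0]):
                best = (cost, pv, cap, lpsp)
    return best
```

A real study would simulate a full year, add charge/discharge efficiencies and depth-of-discharge limits, and compute the cost of energy and payback period from the surviving designs.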

  16. Data Hiding in Video Files Using the LSB and DCT Methods

    Directory of Open Access Journals (Sweden)

    Mahmuddin Yunus

    2014-01-01

    Full Text Available Hiding data in video files is known as video steganography. Well-known steganography methods include the Least Significant Bit (LSB) and Discrete Cosine Transform (DCT) methods. In this research, data were hidden in video files using the LSB method, the DCT method, and a combined LSB-DCT method, and the quality of the video files after embedding was measured using the Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR). The experiments varied the video file size, the size of the embedded secret file, and the video resolution. The results show success rates of 38% for the LSB method, 90% for the DCT method, and 64% for the combined LSB-DCT method. The MSE of the DCT method was the lowest of the three, and the combined LSB-DCT method had a lower MSE than the LSB method. Correspondingly, the PSNR of the DCT method was higher than that of the LSB and combined LSB-DCT methods, and the PSNR of the combined LSB-DCT method was higher than that of the LSB method. Keywords: Steganography, Video, Least Significant Bit (LSB), Discrete Cosine Transform (DCT), Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR)
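A minimal illustration of the LSB idea from the abstract, operating on a bytearray as a stand-in for raw video frame samples (real video steganography must also contend with container formats and lossy compression, which this sketch ignores):

```python
def embed_lsb(carrier: bytearray, message: bytes) -> bytearray:
    """Write message bits (MSB first) into the LSB of successive carrier bytes."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit       # clear LSB, then set message bit
    return out

def extract_lsb(carrier: bytearray, n_bytes: int) -> bytes:
    """Read n_bytes back out of the carrier's least significant bits."""
    bits = [b & 1 for b in carrier[:n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

frame = bytearray(range(256))                # stand-in for one frame of samples
stego = embed_lsb(frame, b"secret")
assert extract_lsb(stego, 6) == b"secret"
```

Because each carrier byte changes by at most 1, the distortion (and hence MSE) of plain LSB embedding is small but spread over many samples; DCT-domain embedding instead modifies transform coefficients.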

  17. 11 CFR 100.19 - File, filed or filing (2 U.S.C. 434(a)).

    Science.gov (United States)

    2010-01-01

    ... a facsimile machine or by electronic mail if the reporting entity is not required to file..., including electronic reporting entities, may use the Commission's website's on-line program to file 48-hour... the reporting entity is not required to file electronically in accordance with 11 CFR 104.18. [67 FR...

  18. Storage of sparse files using parallel log-structured file system

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-11-07

    A sparse file is stored without holes by storing a data portion of the sparse file using a parallel log-structured file system; and generating an index entry for the data portion, the index entry comprising a logical offset, physical offset and length of the data portion. The holes can be restored to the sparse file upon a reading of the sparse file. The data portion can be stored at a logical end of the sparse file. Additional storage efficiency can optionally be achieved by (i) detecting a write pattern for a plurality of the data portions and generating a single patterned index entry for the plurality of the patterned data portions; and/or (ii) storing the patterned index entries for a plurality of the sparse files in a single directory, wherein each entry in the single directory comprises an identifier of a corresponding sparse file.
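The index-entry mechanism can be sketched in a few lines: data portions are appended to a packed log, each with a (logical offset, physical offset, length) entry, and holes reappear as zeros on read. This is an illustrative model of the idea, not the patented implementation:

```python
class SparseStore:
    """Toy log-structured store for one sparse file."""

    def __init__(self):
        self.log = bytearray()   # data portions packed without holes
        self.index = []          # (logical_offset, physical_offset, length)

    def write(self, logical_off: int, data: bytes):
        # Record where this portion lives in the log and where it belongs logically.
        self.index.append((logical_off, len(self.log), len(data)))
        self.log += data

    def read(self, size: int) -> bytes:
        buf = bytearray(size)    # holes are restored as zero bytes
        for lo, po, ln in self.index:
            buf[lo:lo + ln] = self.log[po:po + ln]
        return bytes(buf)

s = SparseStore()
s.write(0, b"head")
s.write(100, b"tail")            # bytes 4..99 are a hole, never stored
img = s.read(104)
assert img[:4] == b"head" and img[100:] == b"tail" and img[50] == 0
```

Only 8 bytes of payload are stored for a 104-byte logical file; the patterned-index optimization in the patent extends this by collapsing many regularly spaced entries into one.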

  19. Impact of mobility on call block, call drops and optimal cell size in small cell networks

    OpenAIRE

    Ramanath , Sreenath; Voleti , Veeraruna Kavitha; Altman , Eitan

    2011-01-01

    We consider small cell networks and study the impact of user mobility. Assuming Poisson call arrivals at random positions with random velocities, we discuss the characterization of handovers at the boundaries. We derive explicit expressions for call block and call drop probabilities using tools from spatial queuing theory. We also derive expressions for the average virtual server held-up time. These expressions are used to derive optimal cell sizes for various profiles of velocities in small c...

  20. Intra-manufacture Diameter Variability of Rotary Files and Their Corresponding Gutta-Percha Cones Using Laser Scan Micrometre.

    Science.gov (United States)

    Mirmohammadi, Hesam; Sitarz, Monika; Shemesh, Hagay

    2018-01-01

    Manufacturers offer gutta-percha (GP) cones matched with different sizes of endodontic files as an attempt to simplify the obturation process and create a tight seal in the canal. The purpose of this study was to evaluate whether intra-manufacture GP diameters matched the diameters of their corresponding files at different levels using a laser micrometre. Twenty files and corresponding GP master cones of Reciproc R40 (40/0.06) (VDW, Munich, Germany), WaveOne Large (40/0.08) (Dentsply Maillefer, Ballaigues, Switzerland), ProTaper F3 (30/0.09) (Dentsply Maillefer, Ballaigues, Switzerland), and Mtwo 40/0.06 (VDW, Munich, Germany) were examined using a laser micrometre (LSM 6000 by Mitutoyo, Japan) with an accuracy of 1 nm to establish their actual diameter at D0, D1, D3 and D6. The data were analysed using the independent t-test. Differences were considered significant at the 0.05 level. The diameter of GP master cones was significantly larger than that of the corresponding files at all levels in all brands. The ProTaper GP diameter was closest to the file diameter at D1 (GP = 0.35 mm, file = 0.35 mm) and D3 (GP = 0.48 mm, file = 0.49 mm). Within the same manufacturer, GP cone diameters do not match the diameters of their corresponding files. Clinicians are advised to use a GP gauge to cut the tip so as to obtain the appropriate diameter from a smaller-sized GP cone.

  1. File Type Identification of File Fragments using Longest Common Subsequence (LCS)

    Science.gov (United States)

    Rahmat, R. F.; Nicholas, F.; Purnamawati, S.; Sitompul, O. S.

    2017-01-01

    Computer forensic analysts are in charge of investigation and evidence tracking. In certain cases, a file needed as digital evidence has been deleted. It is difficult to reconstruct such a file, because it has often lost its header and cannot be identified while being restored. Therefore, a method is required for identifying the file type of file fragments. In this research, we propose a Longest Common Subsequence (LCS) approach that consists of three steps, namely training, testing and validation, to identify the file type of file fragments. From all testing results we can conclude that our proposed method works well and achieves 92.91% accuracy in identifying the file type of file fragments for three data types.
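The core similarity measure named in the title is the standard dynamic-programming LCS length, sketched below; how the method builds per-file-type profiles in its training step is not shown here:

```python
def lcs_len(a: bytes, b: bytes) -> int:
    """Length of the longest common subsequence, O(len(a)*len(b)) time,
    rolling-row DP so memory is O(len(b))."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            # extend a match, or carry the best of the two neighbours
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

assert lcs_len(b"AGGTAB", b"GXTXAYB") == 4   # classic example: "GTAB"
```

A fragment would be scored against representative byte sequences of each candidate type, with the highest LCS score deciding the classification.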

  2. 49 CFR 564.5 - Information filing; agency processing of filings.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 6 2010-10-01 2010-10-01 false Information filing; agency processing of filings... HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION REPLACEABLE LIGHT SOURCE INFORMATION (Eff. until 12-01-12) § 564.5 Information filing; agency processing of filings. (a) Each manufacturer...

  3. Reconstruction of point cross-section from ENDF data file for Monte Carlo applications

    International Nuclear Information System (INIS)

    Kumawat, H.; Saxena, A.; Carminati, F.

    2016-12-01

    Monte Carlo neutron transport codes are among the best tools to simulate complex systems such as fission and fusion reactors, Accelerator Driven Sub-critical systems, radioactivity management of spent fuel and waste, optimization and characterization of neutron detectors, optimization of Boron Neutron Capture Therapy, imaging, etc. Neutron cross-sections and secondary particle emission properties are the main input parameters of such codes. The fission, capture and elastic scattering cross-sections have complex resonance structures. The Evaluated Nuclear Data File (ENDF) contains these cross-sections and secondary parameters. We report the development of a reconstruction procedure to generate point cross-sections and probabilities from an ENDF data file. The cross-sections are compared with the values obtained from the PREPRO and, in some cases, NJOY codes. The results are in good agreement. (author)
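Once reconstruction has produced a pointwise (energy, cross-section) grid, a transport code evaluates σ(E) by table interpolation. A minimal sketch assuming the linear-linear interpolation law (ENDF INT=2), with toy grid values:

```python
import bisect

def sigma_at(energies, xs, e):
    """Linear-linear interpolation on a sorted pointwise (E, sigma) grid."""
    i = bisect.bisect_right(energies, e)
    if i == 0 or i == len(energies):
        raise ValueError("energy outside tabulated range")
    e0, e1 = energies[i - 1], energies[i]
    s0, s1 = xs[i - 1], xs[i]
    return s0 + (s1 - s0) * (e - e0) / (e1 - e0)

grid_e = [1.0, 10.0, 100.0]   # eV, toy grid
grid_s = [4.0, 2.0, 1.0]      # barns, toy values
assert sigma_at(grid_e, grid_s, 5.5) == 3.0
```

ENDF also defines log-linear, linear-log and log-log laws; resolved-resonance regions need a dense enough reconstructed grid that the chosen law stays within the evaluation's tolerance.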

  4. File Detection On Network Traffic Using Approximate Matching

    Directory of Open Access Journals (Sweden)

    Frank Breitinger

    2014-09-01

    Full Text Available In recent years, Internet technologies have changed enormously, allowing faster Internet connections, higher data rates and mobile usage. Hence, it is possible to send huge amounts of data/files easily, which is often exploited by insiders or attackers to steal intellectual property. As a consequence, data leakage prevention systems (DLPS) have been developed, which analyze network traffic and alert in case of a data leak. Although the overall concepts of the detection techniques are known, the systems are mostly closed and commercial. Within this paper we present a new technique for network traffic analysis based on approximate matching (a.k.a. fuzzy hashing), which is very common in digital forensics to correlate similar files. This paper demonstrates how to optimize and apply it to single network packets. Our contribution is a straightforward concept which does not need a comprehensive configuration: hash the file and store the digest in the database. Within our experiments we obtained false positive rates between 10^-4 and 10^-5 and an algorithm throughput of over 650 Mbit/s.
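As a rough illustration of the digest-and-compare idea (not the authors' algorithm: tools in this area use rolling, context-triggered piecewise hashing), a fixed-block digest with Jaccard scoring looks like this. An insertion in the middle of a file would shift block alignment and break the match, which is precisely why production approximate matching prefers content-defined chunk boundaries:

```python
import hashlib

def digest(data: bytes, block: int = 64) -> set:
    """Set of block hashes; real tools use content-defined chunking instead."""
    return {hashlib.sha256(data[i:i + block]).digest()
            for i in range(0, len(data), block)}

def similarity(d1: set, d2: set) -> float:
    """Jaccard similarity of two digests, in [0, 1]."""
    return len(d1 & d2) / max(len(d1 | d2), 1)

a = bytes(range(256)) * 16               # 4 KiB sample "file"
b = a + b"appended tail bytes"           # grown near-duplicate
assert similarity(digest(a), digest(b)) > 0.5
assert similarity(digest(a), digest(bytes(4096))) < 0.2
```

For the packet-matching use case in the paper, each packet payload would be digested the same way and scored against the database of known-file digests.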

  5. Optimal sample size for probability of detection curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2013-01-01

    Highlights: • We investigate sample size requirements to develop probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes the manufacturing of test pieces with representative flaws, in sufficient numbers so as to draw statistical conclusions on the reliability of the NDT system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of NDE reliability is necessary to quantify risk reduction after inspection, and probability of detection (POD) curves provide such a metric. The Joint Research Centre, Petten, The Netherlands, supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier, manufacturing of test pieces with defects that are typically found in nuclear power plants (NPPs) is usually quite expensive. Thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POD curves is the appropriate sample size. Not
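A well-known consequence of the binomial statistics behind POD demonstrations is the aeronautical "29 of 29" rule: 29 consecutive detections form the smallest all-hit sample demonstrating 90% POD at 95% confidence, because a system whose true POD were below 0.90 would produce 29 straight hits with probability under 5%. A short check:

```python
def min_hits_for_pod(pod=0.90, confidence=0.95):
    """Smallest n such that n detections out of n demonstrate POD >= pod:
    the chance of n straight hits at true POD = pod must not exceed
    1 - confidence."""
    n = 1
    while pod ** n > 1.0 - confidence:
        n += 1
    return n

assert min_hits_for_pod() == 29        # the classic "29 of 29" sample size
```

The all-hit case is the best-case demonstration; fitting a full POD-versus-flaw-size curve (e.g. by hit/miss logistic regression) needs considerably more targets, which is exactly the cost pressure the abstract describes.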

  6. Cut-and-Paste file-systems: integrating simulators and file systems

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.

    1995-01-01

    We have implemented an integrated and configurable file system called the Pegasus file system (PFS) and a trace-driven file-system simulator called Patsy. Patsy is used for off-line analysis of file-system algorithms, PFS is used for on-line file-system data storage. Algorithms are first analyzed in

  7. ATLAS File and Dataset Metadata Collection and Use

    CERN Document Server

    Albrand, S; The ATLAS collaboration; Lambert, F; Gallas, E J

    2012-01-01

    The ATLAS Metadata Interface (“AMI”) was designed as a generic cataloguing system, and as such it has found many uses in the experiment, including software release management, tracking of reconstructed event sizes and control of dataset nomenclature. The primary use of AMI is to provide a catalogue of datasets (file collections) which is searchable using physics criteria. In this paper we discuss the various mechanisms used for filling the AMI dataset and file catalogues. By correlating information from different sources we can derive aggregate information which is important for physics analysis; for example the total number of events contained in a dataset, and possible reasons for missing events such as a lost file. Finally we describe some specialized interfaces which were developed for the Data Preparation and reprocessing coordinators. These interfaces manipulate information from both the dataset domain held in AMI, and the run-indexed information held in the ATLAS COMA application (Conditions and ...

  8. Evaluation of Contact Friction in Fracture of Rotationally Bent Nitinol Endodontic Files

    Science.gov (United States)

    Haimed, Tariq Abu

    2011-12-01

    The high flexibility of rotary Nitinol (Ni-Ti) files has helped clinicians perform root canal treatments with fewer technical errors than seen with stainless steel files. However, intracanal file fracture can occur, compromising the outcome of the treatment. Ni-Ti file fracture incidence is roughly around 4% amongst specialists and higher amongst general practitioners. Therefore, eliminating or reducing this problem should improve patient care. The aim of this project was to isolate and examine the role of friction between files and the canal walls of the glass tube model, and of bending-related maximum strain amplitudes, on Ni-Ti file lifetimes to fracture in the presence of different irrigant solutions and file coatings. A specifically designed device was used to test over 300 electropolished EndoSequence™ Ni-Ti files for number of cycles to failure (NCF) in smooth, bent glass tube models at 45 and 60 degrees during dry, coated and liquid-lubricated rotation at 600 rpm. Fractured files were examined under scanning electron microscopy (SEM) afterwards. Four different file sizes 25.04, 25.06, 35.04, 35.06 (diameter in mm/taper %) and six surface modification conditions were used independently. These conditions included three solutions: (1) a surfactant-based solution, Surface-Active-Displacement-Solution (SADS), (2) a mouthwash proven to remove biofilms, Delmopinol 1% (DEL), and (3) Bleach 6% (vol.%), the most common antibacterial endodontic irrigant solution. The conditions also included two low-friction silane-based coating groups, 3-Hepta-fluoroisopropyl-propoxymethyl-dichlorosilane (3-HEPT) and Octadecyltrichlorosilane (ODS), in addition to an as-received file control group (Dry). The coefficient of friction (CF) between the file and the canal walls was measured for each condition, as well as the surface tension of the irrigant solutions and the critical surface tension of the coated and uncoated files by contact angle measurements. The radius of curvature and

  9. Cyclic fatigue resistance, torsional resistance, and metallurgical characteristics of M3 Rotary and M3 Pro Gold NiTi files

    Science.gov (United States)

    2018-01-01

    Objectives To evaluate the mechanical properties and metallurgical characteristics of the M3 Rotary and M3 Pro Gold files (United Dental). Materials and Methods One hundred and sixty new M3 Rotary and M3 Pro Gold files (sizes 20/0.04 and 25/0.04) were used. Torque and angle of rotation at failure (n = 20) were measured according to ISO 3630-1. Cyclic fatigue resistance was tested by measuring the number of cycles to failure in an artificial stainless steel canal (60° angle of curvature and a 5-mm radius). The metallurgical characteristics were investigated by differential scanning calorimetry. Data were analyzed using analysis of variance and the Student-Newman-Keuls test. Results Comparing the same size of the 2 different instruments, cyclic fatigue resistance was significantly higher in the M3 Pro Gold files than in the M3 Rotary files (p < 0.05). The M3 Rotary files showed 1 small peak on the heating curve and 1 small peak on the cooling curve. Conclusions The M3 Pro Gold files showed greater flexibility and angular rotation than the M3 Rotary files, without decrement of their torque resistance. The superior flexibility of M3 Pro Gold files can be attributed to their martensite phase. PMID:29765904

  10. A simple sizing optimization technique for an impact limiter based on dynamic material properties

    International Nuclear Information System (INIS)

    Choi, Woo-Seok; Seo, Ki-Seog

    2010-01-01

    According to IAEA regulations, a transportation package for radioactive material should perform its intended function of containing the radioactive contents after a drop test, which is one of the hypothetical accident conditions. Impact limiters attached to a transport cask absorb most of the impact energy, so it is important to determine their shape, size and material properly. The material data needed for this determination are dynamic data. In this study, several candidate impact limiter materials were tested with drop-weight equipment to acquire dynamic material characteristics. The impact-absorbing volume of the impact limiter was derived mathematically for each drop condition, and a size optimization of the impact limiter was conducted with the derived impact-absorbing volumes applied as constraints: these volumes should be less than the critical volumes obtained from the dynamic material characteristics. The derived procedure for deciding the shape of the impact limiter can be useful at the preliminary design stage, when the transportation package's outline is only roughly determined, as an input to the design.

  11. Optimal Power Flow by Interior Point and Non Interior Point Modern Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Marcin Połomski

    2013-03-01

    Full Text Available The idea of optimal power flow (OPF) is to determine the optimal settings for control variables while respecting various constraints; in general it is related to power system operational and planning optimization problems. A vast number of optimization methods have been applied to solve the OPF problem, but their performance is highly dependent on the size of the power system being optimized. The development of OPF has recently tracked significant progress both in numerical optimization techniques and in their computer implementation. In recent years, the application of interior point methods to the OPF problem has received great attention, since interior point (IP) methods are among the fastest algorithms and are well suited to large-scale nonlinear optimization problems. This paper presents a primal-dual interior point method based optimal power flow algorithm and a new variant of the non-interior point method algorithm, applied to the optimal power flow problem. The described algorithms were implemented in custom software. The experiments show the usefulness of the computational software and implemented algorithms for solving the optimal power flow problem, including system models of size comparable to the National Power System.
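The barrier idea at the heart of interior point methods can be shown on a one-variable toy problem; this illustrates the log-barrier principle only, not the paper's primal-dual algorithm:

```python
def barrier_solve(mu, x):
    """Newton's method on the barrier subproblem: min x^2 - mu*log(x - 1)."""
    for _ in range(50):
        g = 2 * x - mu / (x - 1)           # gradient of the barrier objective
        h = 2 + mu / (x - 1) ** 2          # second derivative, always > 0
        step = g / h
        while x - step <= 1:               # backtrack to stay strictly interior
            step /= 2
        x -= step
    return x

# minimize x^2 subject to x >= 1: shrink the barrier parameter mu -> 0,
# warm-starting each subproblem; iterates approach x* = 1 from the interior.
x, mu = 2.0, 1.0
while mu > 1e-10:
    x = barrier_solve(mu, x)
    mu *= 0.2
assert abs(x - 1.0) < 1e-3
```

Real OPF solvers do the same thing in thousands of variables, with the barrier applied to line-flow, voltage and generation limits, and Newton steps computed on the full KKT system.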

  12. Extending DIRAC File Management with Erasure-Coding for efficient storage.

    Science.gov (United States)

    Cadellin Skipsey, Samuel; Todev, Paulin; Britton, David; Crooks, David; Roy, Gareth

    2015-12-01

    The state of the art in Grid style data management is to achieve increased resilience of data via multiple complete replicas of data files across multiple storage endpoints. While this is effective, it is not the most space-efficient approach to resilience, especially when the reliability of individual storage endpoints is sufficiently high that only a few will be inactive at any point in time. We report on work performed as part of GridPP[1], extending the Dirac File Catalogue and file management interface to allow the placement of erasure-coded files: each file distributed as N identically-sized chunks of data striped across a vector of storage endpoints, encoded such that any M chunks can be lost and the original file can be reconstructed. The tools developed are transparent to the user, and, as well as allowing up and downloading of data to Grid storage, also provide the possibility of parallelising access across all of the distributed chunks at once, improving data transfer and IO performance. We expect this approach to be of most interest to smaller VOs, who have tighter bounds on the storage available to them, but larger (WLCG) VOs may be interested as their total data increases during Run 2. We provide an analysis of the costs and benefits of the approach, along with future development and implementation plans in this area. In general, overheads for multiple file transfers provide the largest issue for competitiveness of this approach at present.
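The chunk-and-reconstruct idea can be illustrated with the simplest erasure code, a single XOR parity chunk (the M = 1 case); the work described above would use a stronger code such as Reed-Solomon to tolerate M > 1 lost chunks:

```python
from functools import reduce

def encode(data: bytes, n: int):
    """Split into n equal chunks (zero-padded) plus one XOR parity chunk."""
    size = -(-len(data) // n)                 # ceiling division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(n)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))
    return chunks, parity

def reconstruct(chunks, parity, lost):
    """Rebuild the lost chunk as the XOR of all surviving chunks and parity."""
    survivors = [c for i, c in enumerate(chunks) if i != lost] + [parity]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

data = b"striped across a vector of storage endpoints"
chunks, parity = encode(data, 4)
assert reconstruct(chunks, parity, 2) == chunks[2]
```

The storage overhead here is 1/n of the data size, versus 100% per extra copy under full replication, which is the space saving the paper quantifies.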

  13. Cut-and-Paste file-systems : integrating simulators and file systems

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.

    1996-01-01

    We have implemented an integrated and configurable file system called the PFS and a trace-driven file-system simulator called Patsy. Patsy is used for off-line analysis of file-system algorithms, PFS is used for on-line file-system data storage. Algorithms are first analyzed in Patsy and when we are

  14. Screw-in forces during instrumentation by various file systems.

    Science.gov (United States)

    Ha, Jung-Hong; Kwak, Sang Won; Kim, Sung-Kyo; Kim, Hyeon-Cheol

    2016-11-01

    The purpose of this study was to compare the maximum screw-in forces generated during the movement of various Nickel-Titanium (NiTi) file systems. Forty simulated canals in resin blocks were randomly divided into 4 groups for the following instruments: Mtwo size 25/0.07 (MTW, VDW GmbH), Reciproc R25 (RPR, VDW GmbH), ProTaper Universal F2 (PTU, Dentsply Maillefer), and ProTaper Next X2 (PTN, Dentsply Maillefer; n = 10 each). All the artificial canals were prepared to obtain a standardized lumen using ProTaper Universal F1. Screw-in forces were measured using a custom-made experimental device (AEndoS-k, DMJ system) during instrumentation with each NiTi file system using the designated movement. The rotation speed was set at 350 rpm with an automatic 4 mm pecking motion at a speed of 1 mm/sec. The pecking depth was increased by 1 mm for each pecking motion until the file reached the working length. Forces were recorded during file movement, and the maximum force was extracted from the data. Maximum screw-in forces were analyzed by one-way ANOVA and Tukey's post hoc comparison at a significance level of 95%. Reciproc and ProTaper Universal files generated the highest maximum screw-in forces among all the instruments, while Mtwo and ProTaper Next showed the lowest (p < 0.05). The use of files with smaller cross-sectional area for higher flexibility is recommended.

  15. Preparation and Surface Sizing Application of Sizing Agent Based on Collagen from Leather Waste

    Directory of Open Access Journals (Sweden)

    Xuechuan Wang

    2015-09-01

    Full Text Available Collagen extracted from leather waste was modified with maleic anhydride. Then, using ammonium persulfate as an initiator and reacting the pre-modified collagen with styrene and ethyl acrylate monomers, a vinyl-grafted collagen sizing agent (VGCSA) for paper was prepared. The performance of VGCSA was tested, and the VGCSA emulsion was applied to the surface sizing of corrugated paper. Effects on paper properties of the amount of VGCSA and of its compounding proportions with starch and with styrene-acrylic emulsion (SAE) were studied. The morphological changes of the paper before and after sizing were characterized by SEM. It was found that the collagen reacted with the styrene and ethyl acrylate monomers. Through the grafting of vinyl monomers onto collagen, the crystallinity and thermal stability of VGCSA increased. The VGCSA particles were spherical and uniform in size, with an average particle size of approximately 350 to 400 nm. After sizing, the surface fibers of the paper became smooth and orderly. The optimal sizing amount of VGCSA was 8 g/m2. The optimal proportion of VGCSA to starch was 4:6, and the optimal proportion of VGCSA to SAE was 2:8. The research indicates that collagen extracted from leather waste can be used as a biomaterial, creating environmental and economic benefits as well.

  16. Optimal Sizing of Energy Storage Systems for the Energy Procurement Problem in Multi-Period Markets under Uncertainties

    Directory of Open Access Journals (Sweden)

    Ryusuke Konishi

    2018-01-01

    Full Text Available In deregulated electricity markets, minimizing the procurement costs of electricity is a critical problem for procurement agencies (PAs). However, uncertainty is inevitable for PAs and includes multiple factors such as market prices, photovoltaic (PV) system output and demand. This study focuses on settlements in multi-period markets (a day-ahead market and a real-time market) and on the installation of energy storage systems (ESSs). ESSs can be utilized for time arbitrage in the day-ahead market and to reduce the purchasing/selling of electricity in the real-time market. However, the high costs of an ESS mean the size of the system needs to be minimized. In addition, when determining the size of an ESS, it is important to identify the size appropriate for each role. Therefore, we employ the concept of a “slow” and a “fast” ESS to quantify the size of each role, based on the values associated with the various uncertainties. Because the problem includes nonlinearity and non-convexity, we solve it within a realistic computational burden by reformulating it under reasonable assumptions. This study thereby identifies the optimal sizes of ESSs and procurement, taking into account the uncertainties of prices in multi-period markets, PV output and demand.
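The day-ahead arbitrage role of an ESS can be illustrated with a deliberately simplified calculation: given known hourly prices, charge in the cheapest hours and discharge in the priciest ones, up to the energy capacity. The prices and efficiency below are hypothetical, and the greedy pairing ignores the requirement that charging precede discharging, so it is an upper-bound sketch rather than a dispatch algorithm:

```python
def arbitrage_profit(prices, capacity_mwh, eff=0.9):
    """Pair the cheapest charge hours with the priciest discharge hours,
    at 1 MWh per hour, counting only pairs that profit after losses."""
    hours = int(capacity_mwh)
    buy = sorted(prices)[:hours]
    sell = sorted(prices, reverse=True)[:hours]
    return sum(s * eff - b for b, s in zip(buy, sell) if s * eff > b)

prices = [30, 25, 20, 22, 35, 50, 80, 90, 70, 40, 28, 24]   # $/MWh, hypothetical
assert round(arbitrage_profit(prices, 2), 1) == 111.0
```

Sizing the ESS then trades this arbitrage value (plus the avoided real-time imbalance cost) against the capital cost per MWh, which is the balance the paper's optimization formalizes under price, PV and demand uncertainty.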

  17. File-System Workload on a Scientific Multiprocessor

    Science.gov (United States)

    Kotz, David; Nieuwejaar, Nils

    1995-01-01

    Many scientific applications have intense computational and I/O requirements. Although multiprocessors have permitted astounding increases in computational performance, the formidable I/O needs of these applications cannot be met by current multiprocessors and their I/O subsystems. To prevent I/O subsystems from forever bottlenecking multiprocessors and limiting the range of feasible applications, new I/O subsystems must be designed. The successful design of computer systems (both hardware and software) depends on a thorough understanding of their intended use. A system designer optimizes the policies and mechanisms for the cases expected to be most common in the user's workload. In the case of multiprocessor file systems, however, designers have been forced to build file systems based only on speculation about how they would be used, extrapolating from file-system characterizations of general-purpose workloads on uniprocessor and distributed systems or scientific workloads on vector supercomputers (see sidebar on related work). To help these system designers, in June 1993 we began the Charisma Project, so named because the project sought to characterize I/O in scientific multiprocessor applications from a variety of production parallel computing platforms and sites. The Charisma project is unique in recording individual read and write requests in live, multiprogramming, parallel workloads (rather than from selected or nonparallel applications). In this article, we present the first results from the project: a characterization of the file-system workload on an iPSC/860 multiprocessor running production, parallel scientific applications at NASA's Ames Research Center.

  18. Optimized synthesis of nano-sized LiFePO4/C particles with excellent rate capability for lithium ion batteries

    International Nuclear Information System (INIS)

    Liu, Houbin; Miao, Cui; Meng, Yan; He, Yan-Bing; Xu, Qiang; Zhang, Xinhe; Tang, Zhiyuan

    2014-01-01

    Olivine-type LiFePO4/C composite with excellent rate capability and cycling stability is synthesized by an optimized ethylene glycol assisted solution-phase method. In an attempt to improve the electrochemical performance, the size of the LiFePO4/C particles is reduced by optimizing the reaction time and temperature. The results show that the LiFePO4/C synthesized at 130 °C for 5 h consists of well-distributed nano-particles about 50 nm in diameter and 100 nm in length, uniformly coated with a carbon layer about 3.0 nm thick. The material synthesized at 130 °C exhibits a lower charge-transfer resistance than the LiFePO4/C synthesized at 120 and 140 °C. The specific capacity of the optimized LiFePO4/C at a discharge rate of 0.1 C reaches 166.5 mAh g−1, close to the theoretical capacity. Even at high rates of 5, 10, 20 and 30 C, specific capacities of 132.3, 120.4, 97.3 and 66.6 mAh g−1 are achieved, respectively, with no significant capacity fading after 100 cycles. This is a promising method for the industrial synthesis of LiFePO4/C composites with excellent performance.

  19. Measurement and visualization of file-to-wall contact during ultrasonically activated irrigation in simulated canals.

    Science.gov (United States)

    Boutsioukis, C; Verhaagen, B; Walmsley, A D; Versluis, M; van der Sluis, L W M

    2013-11-01

    (i) To quantify in a simulated root canal model the file-to-wall contact during ultrasonic activation of an irrigant and to evaluate the effect of root canal size, file insertion depth, ultrasonic power, root canal level and previous training, (ii) To investigate the effect of file-to-wall contact on file oscillation. File-to-wall contact was measured during ultrasonic activation of the irrigant performed by 15 trained and 15 untrained participants in two metal root canal models. Results were analyzed by two 5-way mixed-design ANOVAs. The level of significance was set at P root canal (P root canal (P irrigant activation. Therefore, the term 'Passive Ultrasonic Irrigation' should be amended to 'Ultrasonically Activated Irrigation'. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  20. The optimal fraction size in high-dose-rate brachytherapy: dependency on tissue repair kinetics and low-dose rate

    International Nuclear Information System (INIS)

    Sminia, Peter; Schneider, Christoph J.; Fowler, Jack F.

    2002-01-01

    Background and Purpose: Indications of the existence of long repair half-times on the order of 2-4 h for late-responding human normal tissues have been obtained from continuous hyperfractionated accelerated radiotherapy (CHART). Recently, these data were used to explain, on the basis of the biologically effective dose (BED), the potential superiority of fractionated high-dose rate (HDR) with large fraction sizes of 5-7 Gy over continuous low-dose rate (LDR) irradiation at 0.5 Gy/h in cervical carcinoma. We investigated the optimal fraction size in HDR brachytherapy and its dependency on treatment choices (overall treatment time, number of HDR fractions, and time interval between fractions) and treatment conditions (reference low-dose rate, tissue repair characteristics). Methods and Materials: Radiobiologic model calculations were performed using the linear-quadratic model for incomplete mono-exponential repair. An irradiation dose of 20 Gy was assumed to be applied either with HDR in 2-12 fractions or continuously with LDR for a range of dose rates. HDR and LDR treatment regimens were compared on the basis of the BED and BED ratio of normal tissue and tumor, assuming repair half-times between 1 h and 4 h. Results: With the assumption that the repair half-time of normal tissue was three times longer than that of the tumor, hypofractionation in HDR relative to LDR could result in relative normal tissue sparing if the optimum fraction size is selected. By dose reduction while keeping the tumor BED constant, absolute normal tissue sparing might therefore be achieved. This optimum HDR fraction size was found to be largely dependent on the LDR dose rate. 
On the basis of the BED(NT)/BED(tumor) ratio of HDR over LDR, 3 x 6.7 Gy would be the optimal HDR fractionation scheme for replacement of an LDR scheme of 20 Gy in 10-30 h (dose rate 2-0.67 Gy/h), while at a lower dose rate of 0.5 Gy/h, four fractions of 5 Gy would be preferable, still assuming large differences between tumor
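
    The BED comparison underlying this analysis can be sketched with the standard linear-quadratic formulas; the sketch below assumes complete repair between HDR fractions and uses Dale's continuous-irradiation expression for LDR, with illustrative parameter values rather than the paper's.

```python
import math

def bed_hdr(n, d, ab):
    """BED for n HDR fractions of size d (Gy), assuming complete repair
    between fractions: BED = n*d*(1 + d/(alpha/beta))."""
    return n * d * (1 + d / ab)

def bed_ldr(rate, dose, ab, t_half):
    """Dale's BED for continuous LDR irradiation at a constant dose rate (Gy/h),
    with mono-exponential repair of half-time t_half (h)."""
    mu = math.log(2) / t_half          # repair rate constant (1/h)
    T = dose / rate                    # treatment time (h)
    g = (2 * rate / (mu * ab)) * (1 - (1 - math.exp(-mu * T)) / (mu * T))
    return dose * (1 + g)
```

    For example, bed_hdr(3, 6.7, 10) evaluates the 3 x 6.7 Gy scheme, while bed_ldr(0.5, 20, 3, 1.5) evaluates 20 Gy delivered at 0.5 Gy/h for a tissue with alpha/beta = 3 Gy and a 1.5 h repair half-time.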

  1. File sharing

    NARCIS (Netherlands)

    van Eijk, N.

    2011-01-01

    ‘File sharing’ has become generally accepted on the Internet. Users share files for downloading music, films, games, software etc. In this note, we have a closer look at the definition of file sharing, the legal and policy-based context as well as enforcement issues. The economic and cultural

  2. Temperature increases on the external root surface during endodontic treatment using single file systems.

    Science.gov (United States)

    Özkocak, I; Taşkan, M M; Göktürk, H; Aytac, F; Karaarslan, E Şirin

    2015-01-01

    The aim of this study is to evaluate increases in temperature on the external root surface during endodontic treatment with different rotary systems. Fifty human mandibular incisors with a single root canal were selected. All root canals were instrumented using a size 20 Hedstrom file, and the canals were irrigated with 5% sodium hypochlorite solution. The samples were randomly divided into the following three groups of 15 teeth: Group 1: The OneShape Endodontic File no.: 25; Group 2: The Reciproc Endodontic File no.: 25; Group 3: The WaveOne Endodontic File no.: 25. During the preparation, the temperature changes were measured in the middle third of the roots using a noncontact infrared thermometer. The temperature data were transferred from the thermometer to the computer and were observed graphically. Statistical analysis was performed using the Kruskal-Wallis analysis of variance at a significance level of 0.05. The increases in temperature caused by the OneShape file system were lower than those of the other files (P < 0.05); the Reciproc and WaveOne files showed the highest temperature increases, with no significant differences between them. The single file rotary systems used in this study may be recommended for clinical use.
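
    The Kruskal-Wallis statistic used for this comparison can be computed directly from pooled ranks; a minimal sketch (no tie correction, illustrative data rather than the study's measurements):

```python
def kruskal_h(groups):
    """Kruskal-Wallis H statistic for several independent samples
    (simplified: omits the tie-correction factor)."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    ranks = {}                     # value -> average rank (handles ties)
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2
        i = j
    h = 12 / (n * (n + 1)) * sum(
        sum(ranks[x] for x in g) ** 2 / len(g) for g in groups)
    return h - 3 * (n + 1)
```

    The resulting H is then compared against a chi-square quantile with k-1 degrees of freedom at the chosen 0.05 level.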

  3. Optimal Sizing for Wind/PV/Battery System Using Fuzzy c-Means Clustering with Self-Adapted Cluster Number

    Directory of Open Access Journals (Sweden)

    Xin Liu

    2017-01-01

    Integrating wind generation, photovoltaic power, and battery storage to form hybrid power systems has been recognized to be promising in renewable energy development. However, considering the system complexity and uncertainty of renewable energies, such as wind and solar types, it is difficult to obtain practical solutions for these systems. In this paper, optimal sizing for a wind/PV/battery system is realized by trade-offs between technical and economic factors. Firstly, the fuzzy c-means clustering algorithm was modified with self-adapted parameters to extract useful information from historical data. Furthermore, the Markov model is combined to determine the chronological system states of natural resources and load. Finally, a power balance strategy is introduced to guide the optimization process with the genetic algorithm to establish the optimal configuration with minimized cost while guaranteeing reliability and environmental factors. A case of an island hybrid power system is analyzed, and the simulation results are compared with the general FCM method and the chronological method to validate the effectiveness of the proposed method.
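
    The core fuzzy c-means iteration alternates membership and center updates; a minimal 1-D sketch with a fixed cluster number and deterministic initialization (the paper's self-adaptation and Markov components are omitted):

```python
def fcm(data, c, m=2.0, iters=50):
    """Fuzzy c-means on 1-D data; returns sorted cluster centers (c >= 2)."""
    # deterministic initialization: spread centers across the data range
    lo, hi = min(data), max(data)
    centers = [lo + (hi - lo) * j / (c - 1) for j in range(c)]
    for _ in range(iters):
        u = []                             # u[i][j]: membership of point i in cluster j
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]
            u.append([1.0 / sum((d[j] / d[k]) ** (2 / (m - 1)) for k in range(c))
                      for j in range(c)])
        centers = [sum(u[i][j] ** m * x for i, x in enumerate(data)) /
                   sum(u[i][j] ** m for i in range(len(data)))
                   for j in range(c)]
    return sorted(centers)
```

    With well-separated data the centers converge close to the group means, e.g. fcm([0.9, 1.0, 1.1, 4.9, 5.0, 5.1], 2) returns centers near 1.0 and 5.0.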

  4. Partial storage optimization and load control strategy of cloud data centers.

    Science.gov (United States)

    Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela

    2015-01-01

    We present a novel approach to solve the cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual direction download of the files from multiple cloud nodes. Partitions of the files are saved on the cloud rather than the full files, which provide a good optimization to the cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve the performance and optimize the storage usage by providing the DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased since multiple cloud servers will collaborate to provide the data to the cloud clients in a faster manner.
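
    The dual-direction idea can be sketched as two nodes serving the same partitioned file, with one cursor moving forward from the first block and one backward from the last until they meet (a simplified single-threaded sketch; the paper's scheme is concurrent and replica-aware):

```python
def dual_direction_fetch(num_blocks, fetch_front, fetch_back):
    """Fetch blocks 0..num_blocks-1 of a partitioned file:
    node A serves from the start, node B from the end, until the cursors meet."""
    result = {}
    front, back = 0, num_blocks - 1
    while front <= back:
        result[front] = fetch_front(front)   # node A: next block from the front
        front += 1
        if front <= back:
            result[back] = fetch_back(back)  # node B: next block from the back
            back -= 1
    return b"".join(result[i] for i in range(num_blocks))
```

    Each callback stands in for a network request to one cloud node holding the needed partitions.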

  6. Optimal resonance configuration for ultrasonic wireless power transmission to millimeter-sized biomedical implants.

    Science.gov (United States)

    Miao Meng; Kiani, Mehdi

    2016-08-01

    In order to achieve efficient wireless power transmission (WPT) to biomedical implants with millimeter (mm) dimensions, ultrasonic WPT links have recently been proposed. Operating both transmitter (Tx) and receiver (Rx) ultrasonic transducers at their resonance frequency (fr) is key in improving power transmission efficiency (PTE). In this paper, different resonance configurations for Tx and Rx transducers, including series and parallel resonance, have been studied to help the designers of ultrasonic WPT links to choose the optimal resonance configuration for Tx and Rx that maximizes PTE. The geometries for disk-shaped transducers of four different sets of links, operating at series-series, series-parallel, parallel-series, and parallel-parallel resonance configurations in Tx and Rx, have been found through finite-element method (FEM) simulation tools for operation at fr of 1.4 MHz. Our simulation results suggest that operating the Tx transducer with parallel resonance increases PTE, while the resonance configuration of the mm-sized Rx transducer highly depends on the load resistance, Rl. For applications that involve large Rl in the order of tens of kΩ, a parallel resonance for a mm-sized Rx leads to higher PTE, while series resonance is preferred for Rl in the order of several kΩ and below.

  7. Optimizing UML Class Diagrams

    Directory of Open Access Journals (Sweden)

    Sergievskiy Maxim

    2018-01-01

    Most object-oriented development technologies rely on the use of the universal modeling language UML; class diagrams play a very important role in the design process, being used to build a software system model. Modern CASE tools, which are the basic tools for object-oriented development, can’t be used to optimize UML diagrams. In this manuscript we explain how, based on the use of design patterns and anti-patterns, class diagrams can be verified and optimized. Certain transformations can be carried out automatically; in other cases, potential inefficiencies will be indicated and recommendations given. This study also discusses additional CASE tools for validating and optimizing UML class diagrams. For this purpose, a plugin has been developed that analyzes an XMI file containing a description of class diagrams.
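
    Reading class information out of an XMI file is straightforward with a standard XML parser; a minimal sketch over a hypothetical, heavily simplified fragment (real XMI uses UML namespaces and xmi:id cross-references):

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified XMI-like fragment for illustration only
xmi = """<XMI><Model>
  <Class name="Shape"/>
  <Class name="Circle" general="Shape"/>
  <Class name="Square" general="Shape"/>
</Model></XMI>"""

def class_names(xmi_text):
    """Return the class names found in the (simplified) XMI document."""
    root = ET.fromstring(xmi_text)
    return [c.get("name") for c in root.iter("Class")]
```

    A real analysis plugin would walk such elements to match pattern and anti-pattern structures, e.g. several classes sharing the same generalization.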

  8. Seismic image watermarking using optimized wavelets

    International Nuclear Information System (INIS)

    Mufti, M.

    2010-01-01

    Geotechnical processes and technologies are becoming more and more sophisticated through the use of computer and information technology. This has made the availability, authenticity and security of geotechnical data even more important. One of the most common methods of storing and sharing seismic data images is the standardized SEG-Y file format. The geotechnical industry is now primarily data centric. The analytic and detection capability of a seismic processing tool is heavily dependent on the correctness of the contents of the SEG-Y data file. This paper describes a method, based on an optimized wavelet transform technique, which prevents unauthorized alteration and/or use of seismic data. (author)

  9. Optimal siting and sizing of wind farms

    NARCIS (Netherlands)

    Cetinay-Iyicil, H.; Kuipers, F.A.; Guven, A. Nezih

    2017-01-01

    In this paper, we propose a novel technique to determine the optimal placement of wind farms, thereby taking into account wind characteristics and electrical grid constraints. We model the long-term variability of wind speed using a Weibull distribution according to wind direction intervals, and

  10. Optimal Design of Wireless Power Transmission Links for Millimeter-Sized Biomedical Implants.

    Science.gov (United States)

    Ahn, Dukju; Ghovanloo, Maysam

    2016-02-01

    This paper presents a design methodology for RF power transmission to millimeter-sized implantable biomedical devices. The optimal operating frequency and coil geometries are found such that power transfer efficiency (PTE) and tissue-loss-constrained allowed power are maximized. We define receiver power reception susceptibility (Rx-PRS) and transmitter figure of merit (Tx-FoM) such that their multiplication yields the PTE. Rx-PRS and Tx-FoM define the roles of the Rx and Tx in the PTE, respectively. First, the optimal Rx coil geometry and operating frequency range are identified such that the Rx-PRS is maximized for given implant constraints. Since the Rx is very small and has less design freedom than the Tx, the overall operating frequency is restricted mainly by the Rx. Rx-PRS identifies such operating frequency constraint imposed by the Rx. Second, the Tx coil geometry is selected such that the Tx-FoM is maximized under the frequency constraint at which the Rx-PRS was saturated. This aligns the target frequency range of Tx optimization with the frequency range at which Rx performance is high, resulting in the maximum PTE. Finally, we have found that even in the frequency range at which the PTE is relatively flat, the tissue loss per unit delivered power can be significantly different for each frequency. The Rx-PRS can predict the frequency range at which the tissue loss per unit delivered power is minimized while PTE is maintained high. In this way, frequency adjustment for the PTE and tissue-loss-constrained allowed power is realized by characterizing the Rx-PRS. The design procedure was verified through full-wave electromagnetic field simulations and measurements using a de-embedding method. A prototype implant, 1 mm in diameter, achieved a PTE of 0.56% (-22.5 dB) and power delivered to load (PDL) of 224 μW at 200 MHz with 12 mm Tx-to-Rx separation in the tissue environment.

  11. Renewal-anomalous-heterogeneous files

    International Nuclear Information System (INIS)

    Flomenbom, Ophir

    2010-01-01

    Renewal-anomalous-heterogeneous files are solved. A simple file is made of Brownian hard spheres that diffuse stochastically in an effective 1D channel. Generally, Brownian files are heterogeneous: the spheres' diffusion coefficients are distributed and the initial spheres' density is non-uniform. In renewal-anomalous files, the distribution of waiting times for individual jumps is not exponential as in Brownian files, but instead obeys ψ_α(t) ~ t^(-1-α), 0 < α < 1. The mean square displacement (MSD) in such a file, ⟨r²⟩, obeys ⟨r²⟩ ~ (⟨r²⟩_nrml)^α, where ⟨r²⟩_nrml is the MSD in the corresponding Brownian file. This scaling is an outcome of an exact relation (derived here) connecting probability density functions of Brownian files and renewal-anomalous files. It is also shown that non-renewal-anomalous files are slower than the corresponding renewal ones.

  12. Size reduction of complex networks preserving modularity

    Energy Technology Data Exchange (ETDEWEB)

    Arenas, A.; Duch, J.; Fernandez, A.; Gomez, S.

    2008-12-24

    The ubiquity of modular structure in real-world complex networks has been the focus of many attempts to understand the interplay between network topology and functionality. The best approaches to the identification of modular structure are based on the optimization of a quality function known as modularity. However, this optimization is a hard task, since the computational complexity of the problem is in the NP-hard class. Here we propose an exact method for reducing the size of weighted (directed and undirected) complex networks while keeping their modularity invariant. This size reduction allows the heuristic algorithms that optimize modularity a better exploration of the modularity landscape. We compare the modularity obtained in several real complex networks by using the Extremal Optimization algorithm, before and after the size reduction, showing the improvement obtained. We speculate that the proposed analytical size reduction could be extended to an exact coarse graining of the network in the scope of real-space renormalization.
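
    The invariance claimed above can be checked on a toy graph: collapsing each community into a single weighted node, with a self-loop absorbing the internal edge weight, leaves modularity Q unchanged. A minimal sketch:

```python
def modularity(edges, community):
    """Q for a weighted undirected graph given as (u, v, w) edges;
    self-loops (u == v) contribute 2w to their community's degree."""
    m = sum(w for _, _, w in edges)
    inn, tot = {}, {}
    for u, v, w in edges:
        cu, cv = community[u], community[v]
        tot[cu] = tot.get(cu, 0) + w
        tot[cv] = tot.get(cv, 0) + w
        if cu == cv:
            inn[cu] = inn.get(cu, 0) + 2 * w
    return sum(inn.get(c, 0) / (2 * m) - (tot[c] / (2 * m)) ** 2 for c in tot)

def reduce_graph(edges, community):
    """Collapse each community to one node; internal weight becomes a self-loop."""
    agg = {}
    for u, v, w in edges:
        cu, cv = sorted((community[u], community[v]))
        agg[(cu, cv)] = agg.get((cu, cv), 0) + w
    return [(cu, cv, w) for (cu, cv), w in agg.items()]
```

    For two triangles joined by a single edge, Q = 5/14 both before and after the reduction.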

  13. PC Graphic file programing

    International Nuclear Information System (INIS)

    Yang, Jin Seok

    1993-04-01

    This book describes basic graphics knowledge and the understanding and realization of graphic file formats. The first part deals with graphic data, the storage and compression of graphic data, and programming topics such as assembly, the stack, compiling and linking programs, and practice and debugging. The next part covers graphic file formats such as MacPaint, GEM/IMG, PCX, GIF, and TIFF files, hardware considerations such as monochrome and high-speed color screen drivers, the basic concept of dithering, and format conversion.

  14. Truss systems and shape optimization

    Science.gov (United States)

    Pricop, Mihai Victor; Bunea, Marian; Nedelcu, Roxana

    2017-07-01

    Structure optimization is an important topic because of its benefits and wide applicability range, from civil engineering to the aerospace and automotive industries, contributing to a greener industry and life. Truss finite elements are still in use in many research/industrial codes for their simple stiffness matrix and naturally match the requirements for cellular materials, especially considering various 3D printing technologies. Optimality Criteria combined with Solid Isotropic Material with Penalization is the optimization method of choice, particularized for truss systems. Globally locked structures are obtained using locally locked lattice organization, corresponding to structured or unstructured meshes. Post-processing is important for downstream application of the method, to make a faster link to CAD systems. To export the optimal structure in CATIA, a CATScript file is automatically generated. Results, findings and conclusions are given for two- and three-dimensional cases.
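
    As a stand-in for the Optimality Criteria update, a fully-stressed sizing pass conveys the idea for statically determinate trusses, where member forces are fixed by equilibrium (all values below are illustrative, not the paper's data):

```python
def size_members(forces, lengths, sigma_allow, density, a_min):
    """Size each truss member so it is fully stressed (|force|/area = allowable),
    subject to a minimum area; returns areas and total weight.
    A simplified stand-in for an Optimality Criteria update."""
    areas = [max(abs(f) / sigma_allow, a_min) for f in forces]
    weight = density * sum(a * L for a, L in zip(areas, lengths))
    return areas, weight
```

    For indeterminate trusses the forces depend on the areas, so this sizing step is iterated with a reanalysis, which is what the Optimality Criteria scheme does systematically.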

  15. Optimal Sizing of a Solar-Plus-Storage System For Utility Bill Savings and Resiliency Benefits: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Simpkins, Travis; Anderson, Kate; Cutler, Dylan; Olis, Dan

    2016-11-01

    Solar-plus-storage systems can achieve significant utility savings in behind-the-meter deployments in buildings, campuses, or industrial sites. Common applications include demand charge reduction, energy arbitrage, time-shifting of excess photovoltaic (PV) production, and selling ancillary services to the utility grid. These systems can also offer some energy resiliency during grid outages. It is often difficult to quantify the amount of resiliency that these systems can provide, however, and this benefit is often undervalued or omitted during the design process. We propose a method for estimating the resiliency that a solar-plus-storage system can provide at a given location. We then present an optimization model that can optimally size the system components to minimize the lifecycle cost of electricity to the site, including the costs incurred during grid outages. The results show that including the value of resiliency during the feasibility stage can result in larger systems and increased resiliency.
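
    The sizing search can be sketched as minimizing a lifecycle cost that includes an outage penalty. Everything below is a toy model with made-up coefficients (not the authors' formulation), shown only to illustrate how valuing resiliency pulls the optimum toward larger systems:

```python
def lifecycle_cost(pv_kw, batt_kwh, load_kwh=100.0, outage_kwh=10.0,
                   pv_cost=1000.0, batt_cost=400.0, tariff=0.2,
                   outage_penalty=50.0, years=20, pv_yield=4.0):
    """Toy lifecycle cost: capex + grid purchases + unserved-outage penalty.
    All coefficients are illustrative assumptions."""
    capex = pv_cost * pv_kw + batt_cost * batt_kwh
    daily_pv = pv_yield * pv_kw                      # kWh/day produced
    grid = max(load_kwh - daily_pv, 0) * tariff * 365 * years
    unserved = max(outage_kwh - batt_kwh, 0) * outage_penalty * years
    return capex + grid + unserved

def best_size(pv_opts, batt_opts):
    """Exhaustive search over candidate sizes; returns (cost, pv_kw, batt_kwh)."""
    return min(((lifecycle_cost(p, b), p, b) for p in pv_opts for b in batt_opts))
```

    With these coefficients the outage penalty makes a 10 kWh battery worthwhile even though it never pays for itself through tariff savings alone.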

  16. MMTF-An efficient file format for the transmission, visualization, and analysis of macromolecular structures.

    Directory of Open Access Journals (Sweden)

    Anthony R Bradley

    2017-06-01

    Recent advances in experimental techniques have led to a rapid growth in the complexity, size, and number of macromolecular structures that are made available through the Protein Data Bank. This creates a challenge for macromolecular visualization and analysis. Macromolecular structure files, such as PDB or PDBx/mmCIF files, can be slow to transfer and parse, and hard to incorporate into third-party software tools. Here, we present a new binary and compressed data representation, the MacroMolecular Transmission Format, MMTF, as well as software implementations in several languages that have been developed around it, which address these issues. We describe the new format and its APIs and demonstrate that it is several times faster to parse, and about a quarter of the file size of the current standard format, PDBx/mmCIF. As a consequence of the new data representation, it is now possible to visualize structures with millions of atoms in a web browser, and to keep the whole PDB archive in memory or parse it within a few minutes on average computers, which opens up a new way of thinking about how to design and implement efficient algorithms in structural bioinformatics. The PDB archive is available in MMTF file format through web services and data that are updated on a weekly basis.
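
    The gains come from the usual binary-encoding pattern: quantize, delta-encode, pack as fixed-width integers, then compress. The sketch below illustrates that pattern with gzip; it is not the actual MMTF codec (MMTF specifies a MessagePack container with dedicated per-field encodings):

```python
import gzip
import struct

def encode(coords, scale=1000):
    """Quantize floats to ints, delta-encode, pack as little-endian int32,
    and gzip the result."""
    ints = [round(c * scale) for c in coords]
    deltas = [ints[0]] + [b - a for a, b in zip(ints, ints[1:])]
    return gzip.compress(struct.pack(f"<{len(deltas)}i", *deltas))

def decode(blob, scale=1000):
    """Invert encode(): decompress, unpack, cumulative-sum, rescale."""
    raw = gzip.decompress(blob)
    deltas = struct.unpack(f"<{len(raw) // 4}i", raw)
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc / scale)
    return out
```

    Delta encoding makes neighboring atom coordinates compress far better than their textual representation, which is the intuition behind the reported size reduction.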

  17. Experiences on File Systems: Which is the best file system for you?

    CERN Document Server

    Blomer, J

    2015-01-01

    The distributed file system landscape is scattered. Besides a plethora of research file systems, there is also a large number of production grade file systems with various strengths and weaknesses. The file system, as an abstraction of permanent storage, is appealing because it provides application portability and integration with legacy and third-party applications, including UNIX utilities. On the other hand, the general and simple file system interface makes it notoriously difficult for a distributed file system to perform well under a variety of different workloads. This contribution provides a taxonomy of commonly used distributed file systems and points out areas of research and development that are particularly important for high-energy physics.

  18. Multi-material size optimization of a ladder frame chassis

    Science.gov (United States)

    Baker, Michael

    The Corporate Average Fuel Economy (CAFE) is an American fuel standard that sets regulations on fuel economy in vehicles. This law ultimately shapes development and design research for automakers. Reducing the weight of conventional cars offers a way to improve fuel efficiency. This research investigated the optimality of an automobile's ladder frame chassis (LFC) by conducting multi-objective optimization on the LFC in order to reduce the weight of the chassis. The focus of the design and optimization was a ladder frame chassis commonly used for mass-production light motor vehicles with an open-top rear cargo area. This thesis comprises two major sections. The first performed thickness optimization of the outer walls of the ladder frame. In the second section, several multi-material distributions, including steel and aluminium varieties, were investigated. A simplified model was used to do an initial hand-calculation analysis of the problem. This was used to create a baseline validation to compare the theory with the modeling. A CAD model of the LFC was designed. From the CAD model, a finite element model was extracted and joined using weld and bolt connectors. Following this, a linear static analysis was performed to look at displacements and stresses under loading conditions that simulate harsh driving. The analysis showed significant values of stress and displacement on the ends of the rails, suggesting improvements could be made elsewhere. An optimization scheme was applied to an all-steel frame, and an optimal thickness distribution was found. This provided a 13% weight reduction over the initial model. To push the weight savings further, a multi-material approach was used. Several material distributions were analyzed; the lightest utilized aluminium in all but the most heavily loaded components.
This enabled a reduction in weight of 15% over the initial model, equivalent to

  19. Evaluation of the incidence of microcracks caused by Mtwo and ProTaper Next rotary file systems versus the self-adjusting file: A scanning electron microscopic study.

    Science.gov (United States)

    Saha, Suparna Ganguly; Vijaywargiya, Neelam; Saxena, Divya; Saha, Mainak Kanti; Bharadwaj, Anuj; Dubey, Sandeep

    2017-01-01

    To evaluate the incidence of microcrack formation during canal preparation with two rotary nickel-titanium systems, Mtwo and ProTaper Next, along with the self-adjusting file system. One hundred and twenty mandibular premolar teeth were selected. Standardized access cavities were prepared and the canals were manually prepared up to size 20 after coronal preflaring. The teeth were divided into three experimental groups and one control group (n = 30). Group 1: The canals were prepared using Mtwo rotary files. Group 2: The canals were prepared with ProTaper Next files. Group 3: The canals were prepared with self-adjusting files. Group 4: The canals were left unprepared and used as a control. The roots were sectioned horizontally 3, 6, and 9 mm from the apex and examined under a scanning electron microscope to check for the presence of microcracks. The Pearson's Chi-square test was applied. The highest incidence of microcracks was associated with the ProTaper Next group, 80% (P = 0.00), followed by the Mtwo group, 70% (P = 0.000), and the least number of microcracks was noted in the self-adjusting file group, 10% (P = 0.068). No significant difference was found between the ProTaper Next and Mtwo groups (P = 0.368), while a significant difference was observed between the ProTaper Next and self-adjusting file groups (P = 0.000) as well as the Mtwo and self-adjusting file groups (P = 0.000). All nickel-titanium rotary instrument systems were associated with microcracks. However, the self-adjusting file system had significantly fewer microcracks when compared with the Mtwo and ProTaper Next systems.

  20. Storing files in a parallel computing system using list-based index to identify replica files

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Zhang, Zhenhua; Grider, Gary

    2015-07-21

    Improved techniques are provided for storing files in a parallel computing system using a list-based index to identify file replicas. A file and at least one replica of the file are stored in one or more storage nodes of the parallel computing system. An index for the file comprises at least one list comprising a pointer to a storage location of the file and a storage location of the at least one replica of the file. The file comprises one or more of a complete file and one or more sub-files. The index may also comprise a checksum value for one or more of the file and the replica(s) of the file. The checksum value can be evaluated to validate the file and/or the file replica(s). A query can be processed using the list.
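
    A minimal sketch of such a list-based index entry, with a checksum for validating the file or its replicas (paths and field names are illustrative, not the patented implementation):

```python
import hashlib

def make_index(primary_path, replica_paths, data):
    """Build a list-based index entry: pointers to the file's storage location
    and its replicas, plus a checksum of the contents."""
    return {"locations": [primary_path] + list(replica_paths),
            "checksum": hashlib.sha256(data).hexdigest()}

def validate(index, data):
    """Check data read back from any listed location against the checksum."""
    return hashlib.sha256(data).hexdigest() == index["checksum"]
```

    A query resolves a file by walking the location list and validating whichever copy it reads first.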

  1. Application- and patient size-dependent optimization of x-ray spectra for CT

    International Nuclear Information System (INIS)

    Kalender, Willi A.; Deak, Paul; Kellermeier, Markus; Straten, Marcel van; Vollmar, Sabrina V.

    2009-01-01

    Although x-ray computed tomography (CT) has been in clinical use for over 3 decades, spectral optimization has not been a topic of great concern; high voltages around 120 kV have been in use since the beginning of CT. It is the purpose of this study to analyze, in a rigorous manner, the energies at which the patient dose necessary to provide a given contrast-to-noise ratio (CNR) for various diagnostic tasks can be minimized. The authors used cylindrical water phantoms and quasianthropomorphic phantoms of the thorax and the abdomen with inserts of 13 mm diameter mimicking soft tissue, bone, and iodine for simulations and measurements. To provide clearly defined contrasts, these inserts were made of solid water with a 1% difference in density (DD) to represent an energy-independent soft-tissue contrast of 10 Hounsfield units (HU), calcium hydroxyapatite (Ca) representing bone, and iodine (I) representing the typical contrast medium. To evaluate CT of the thorax, an adult thorax phantom (300 x 200 mm²) plus extension rings up to a size of 460 x 300 mm² to mimic different patient cross sections were used. For CT of the abdomen, we used a phantom of 360 x 200 mm² and an extension ring of 460 x 300 mm². The CT scanner that the authors used was a SOMATOM Definition (Siemens Healthcare, Forchheim, Germany) at 80, 100, 120, and 140 kV. Further voltage settings of 60, 75, 90, and 105 kV were available in an experimental mode. The authors determined contrast for the density difference, calcium, and iodine, and noise and 3D dose distributions for the available voltages by measurements. Additional voltage values and monoenergetic sources were evaluated by simulations. The dose-weighted contrast-to-noise ratio (CNRD) was used as the parameter for optimization. Simulations and measurements were in good agreement with respect to absolute values and trends regarding the dependence on energy for the parameters investigated.
For soft-tissue imaging, the standard settings of 120-140 k
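
    The figure of merit used here, the dose-weighted contrast-to-noise ratio, is commonly defined as the CNR divided by the square root of dose so that it is independent of the dose level at which it is measured; a one-line sketch with illustrative numbers:

```python
import math

def cnrd(contrast_hu, noise_hu, dose_mgy):
    """Dose-weighted CNR: CNR normalized by the square root of dose,
    making spectra comparable independent of the applied dose level."""
    return (contrast_hu / noise_hu) / math.sqrt(dose_mgy)
```

    Maximizing this quantity over tube voltage is equivalent to minimizing the dose needed to reach a prescribed CNR.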

  2. Optimization of solid state fermentation of sugar cane by Aspergillus niger considering particles size effect

    Energy Technology Data Exchange (ETDEWEB)

    Echevarria, J.; Rodriguez, L.J.A.; Delgado, G. (Instituto Cubano de Investigaciones de los Derivados de la Cana de Azucar (ICIDCA), La Habana (Cuba)); Espinosa, M.E. (Centro Nacional de Investigaciones Cientificas, La Habana (Cuba))

    1991-01-01

    The protein enrichment of sugar cane by solid state fermentation employing Aspergillus niger was optimized in a packed bed column using a two-factor central composite design (α = 2), considering as independent factors the particle diameter, corresponding to different grinding times for a sample, and the air flow rate. Significant effects were found for the air flow rate (optimum 4.34 VKgM) and the particle diameter (optimum 0.136 cm). The average particle size distribution, shape factor, specific surface, volume-surface mean diameter, number of particles, real and apparent density and hollowness for the different grinding times were determined in order to characterize the samples. (orig.).

  3. IMPROVED COMPRESSION OF XML FILES FOR FAST IMAGE TRANSMISSION

    Directory of Open Access Journals (Sweden)

    S. Manimurugan

    2011-02-01

    The eXtensible Markup Language (XML) is a format that is widely used as a tool for data exchange and storage. It is being increasingly used in secure transmission of image data over wireless networks and the World Wide Web. Verbose in nature, XML files can be tens of megabytes long. Thus, to reduce their size and to allow faster transmission, compression becomes vital. Several general purpose compression tools have been proposed without satisfactory results. This paper proposes a novel technique using a modified BWT for compressing XML files in a lossless fashion. The experimental results show that the performance of the proposed technique outperforms both general purpose and XML-specific compressors.
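
    The classic (unmodified) Burrows-Wheeler transform at the heart of such schemes permutes the text so that similar contexts cluster, letting a following move-to-front or run-length stage compress well; a textbook sketch (the paper's modification is not described in the abstract):

```python
def bwt(s):
    """Burrows-Wheeler transform via sorted rotations; '$' marks the end
    and must not occur in the input."""
    s += "$"
    table = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in table)

def ibwt(r):
    """Invert the BWT by repeatedly prepending the transform column and sorting."""
    table = [""] * len(r)
    for _ in range(len(r)):
        table = sorted(c + t for c, t in zip(r, table))
    return next(row for row in table if row.endswith("$"))[:-1]
```

    The sorted-rotations construction is O(n² log n) and meant only to show the idea; production implementations use suffix arrays.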

  4. FEDGROUP - A program system for producing group constants from evaluated nuclear data of files disseminated by IAEA

    International Nuclear Information System (INIS)

    Vertes, P.

    1976-06-01

    A program system for calculating group constants from several evaluated nuclear data files has been developed. These files are distributed by the Nuclear Data Section of the IAEA. Our program system - FEDGROUP - has certain advantages over well-known similar codes: 1. it requires only a medium-sized computer (greater than or approximately equal to 20,000 words of memory), 2. it is easily adaptable to any type of computer, 3. it is flexible with respect to the input evaluated nuclear data file and to the output group constant file. Nowadays, FEDGROUP calculates practically all types of group constants needed for reactor physics calculations by using the most frequent representations of evaluated data. (author)
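
    The central operation of any group-constant code is the flux-weighted collapse of fine-bin cross sections onto coarse groups, sigma_g = sum(sigma*flux)/sum(flux) over the fine bins in group g; a generic sketch (not FEDGROUP's actual processing chain):

```python
def group_constants(sigma, flux, group_of):
    """Collapse fine-bin cross sections onto coarse groups using the
    flux as the weighting function; group_of maps each fine bin to a group."""
    num, den = {}, {}
    for s, f, g in zip(sigma, flux, group_of):
        num[g] = num.get(g, 0.0) + s * f
        den[g] = den.get(g, 0.0) + f
    return {g: num[g] / den[g] for g in num}
```

    In a real code the weighting spectrum itself depends on the problem, which is why codes like FEDGROUP support several representations of the evaluated data.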

  5. On Optimal Geographical Caching in Heterogeneous Cellular Networks

    NARCIS (Netherlands)

    Serbetci, Berksan; Goseling, Jasper

    2017-01-01

    In this work we investigate optimal geographical caching in heterogeneous cellular networks where different types of base stations (BSs) have different cache capacities. Users request files from a content library according to a known probability distribution. The performance metric is the total hit

  6. Selection of window sizes for optimizing occupational comfort and hygiene based on computational fluid dynamics and neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Stavrakakis, G.M.; Karadimou, D.P.; Zervas, P.L.; Markatos, N.C. [Computational Fluid Dynamics Unit, School of Chemical Engineering, National Technical University of Athens, Heroon Polytechniou 9, GR-15780 Athens (Greece); Sarimveis, H. [Unit of Process Control and Informatics, School of Chemical Engineering, National Technical University of Athens, Heroon Polytechniou 9, GR-15780 Athens (Greece)

    2011-02-15

    The present paper presents a novel computational method to optimize window sizes for thermal comfort and indoor air quality in naturally ventilated buildings. The methodology is demonstrated by means of a prototype case, which corresponds to a single-sided naturally ventilated apartment. Initially, the airflow in and around the building is simulated using a Computational Fluid Dynamics (CFD) model. Local prevailing weather conditions are imposed in the CFD model as inlet boundary conditions. The produced airflow patterns are utilized to predict thermal comfort indices, i.e. the PMV and its modifications for non-air-conditioned buildings, as well as indoor air quality indices, such as ventilation effectiveness based on carbon dioxide and volatile organic compounds removal. Mean values of these indices (output/objective variables) within the occupied zone are calculated for different window sizes (input/design variables), to generate a database of input-output data pairs. The database is then used to train and validate Radial Basis Function Artificial Neural Network input-output "meta-models". The produced meta-models are used to formulate an optimization problem, which takes into account special constraints recommended by design guidelines. It is concluded that the proposed methodology determines appropriate window architectural designs for pleasant and healthy indoor environments. (author)
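
    An RBF metamodel interpolates the simulator-generated input-output pairs by solving a small linear system for the kernel weights; a minimal 1-D sketch with a Gaussian kernel (the training data are illustrative, not the paper's CFD database):

```python
import math

def rbf_train(xs, ys, gamma=1.0):
    """Fit an exact RBF interpolant (Gaussian kernel) by solving Phi w = y
    with Gaussian elimination and partial pivoting."""
    n = len(xs)
    A = [[math.exp(-gamma * (xs[i] - xs[j]) ** 2) for j in range(n)] + [ys[i]]
         for i in range(n)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(A[r][col]))  # pivot row
        A[col], A[p] = A[p], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    w = [0.0] * n
    for i in reversed(range(n)):                              # back substitution
        w[i] = (A[i][n] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

def rbf_predict(xs, w, x, gamma=1.0):
    """Evaluate the fitted RBF model at a new design point x."""
    return sum(wi * math.exp(-gamma * (x - xi) ** 2) for wi, xi in zip(w, xs))
```

    Once trained, the metamodel replaces the expensive CFD run inside the optimization loop.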

  7. Optimal Weight Assignment for a Chinese Signature File.

    Science.gov (United States)

    Liang, Tyne; And Others

    1996-01-01

    Investigates the performance of a character-based Chinese text retrieval scheme in which monogram keys and bigram keys are encoded into document signatures. Tests and verifies the theoretical predictions of the optimal weight assignments and the minimal false hit rate in experiments using a real Chinese corpus for disyllabic queries of different…
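The superimposed-coding idea behind such signature files can be sketched as follows: each key (a monogram or bigram) sets a few bit positions in a fixed-width document signature, and a query key is "possibly present" only if all of its positions are set. This is a generic sketch, not the paper's scheme; the signature width, bits per key, and MD5-based hashing are arbitrary illustrative choices.

```python
import hashlib

WIDTH = 64        # signature length in bits (illustrative)
BITS_PER_KEY = 3  # bits set per key; the "weight" whose optimal value is studied

def _positions(key):
    # derive BITS_PER_KEY pseudo-independent bit positions from hashes
    for i in range(BITS_PER_KEY):
        digest = hashlib.md5(f"{i}:{key}".encode("utf-8")).hexdigest()
        yield int(digest, 16) % WIDTH

def signature(keys):
    sig = 0
    for key in keys:
        for p in _positions(key):
            sig |= 1 << p
    return sig

def may_contain(sig, key):
    # True means "possibly present": unrelated keys can collide on the
    # same bits, which is exactly the false-hit rate being minimized
    return all(sig & (1 << p) for p in _positions(key))
```

Because collisions are possible, every positive answer must be verified against the actual document; tuning the bits-per-key weight trades signature density against the false-hit rate.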

  8. PCF File Format.

    Energy Technology Data Exchange (ETDEWEB)

    Thoreson, Gregory G [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-08-01

    PCF files are binary files designed to contain gamma spectra and neutron count rates from radiation sensors. It is the native format for the Gamma Detector Response and Analysis Software (GADRAS) package [1]. It can contain multiple spectra and information about each spectrum such as energy calibration. This document outlines the file format in sufficient detail to allow one to write a computer program that parses and writes such files.
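The abstract's stated goal is to let one write such a parser. Below is a toy binary-spectra parser in that spirit, using Python's `struct` module. The record layout here is hypothetical, for illustration only; the actual PCF layout must be taken from the GADRAS documentation.

```python
import struct

def read_spectra(path):
    # Hypothetical little-endian layout (NOT the real PCF spec):
    #   uint32 spectrum count, then per spectrum:
    #   uint32 channel count, two float32 energy-calibration terms,
    #   and float32 channel counts.
    spectra = []
    with open(path, "rb") as f:
        (n_spectra,) = struct.unpack("<I", f.read(4))
        for _ in range(n_spectra):
            n_ch, e0, e1 = struct.unpack("<Iff", f.read(12))
            counts = struct.unpack(f"<{n_ch}f", f.read(4 * n_ch))
            spectra.append({"calibration": (e0, e1),
                            "counts": list(counts)})
    return spectra
```

A writer is the mirror image: pack the same fields with `struct.pack` in the same order.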

  9. An improved algorithm to convert CAD model to MCNP geometry model based on STEP file

    International Nuclear Information System (INIS)

    Zhou, Qingguo; Yang, Jiaming; Wu, Jiong; Tian, Yanshan; Wang, Junqiong; Jiang, Hai; Li, Kuan-Ching

    2015-01-01

    Highlights: • Fully exploits common features of cells, making the processing efficient. • Accurately provides the cell position. • Flexible to add new parameters in the structure. • Application of a novel structure in INP file processing to conveniently evaluate cell location. - Abstract: MCNP (Monte Carlo N-Particle Transport Code) is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport. Its input file, the INP file, has a complicated format and is error-prone when describing geometric models. A conversion algorithm that converts a general geometric model to an MCNP model during MCNP-aided modeling is therefore highly desirable. In this paper, we revised and incorporated a number of improvements over our previous work (Yang et al., 2013), which was proposed and targeted after the STEP file and INP file were analyzed. Results of experiments show that the revised algorithm is more applicable and efficient than the previous work, with optimized extraction of the geometry and topology information of the STEP file, as well as improved production efficiency of the output INP file. This proposed research is promising, and serves as a valuable reference for researchers involved in MCNP-related research

  10. Variation in clutch size in relation to nest size in birds

    OpenAIRE

    Moller Anders P.; Adriaensen Frank; Artemyev Alexandr; Banbura Jerzy; Barba Emilio; Biard Clotilde; Blondel Jacques; Bouslama Zihad; Bouvier Jean-Charles; Camprodon Jordi; Cecere Francesco; Charmantier Anne; Charter Motti; Cichon Mariusz; Cusimano Camillo

    2014-01-01

    © 2014 The Authors. Nests are structures built to support and protect eggs and/or offspring from predators, parasites, and adverse weather conditions. Nests are mainly constructed prior to egg laying, meaning that parent birds must make decisions about nest site choice and nest building behavior before the start of egg-laying. Parent birds should be selected to choose nest sites and to build optimally sized nests, yet our current understanding of clutch size-nest size relationships is limited...

  11. Optimal sizing of utility-scale photovoltaic power generation complementarily operating with hydropower: A case study of the world’s largest hydro-photovoltaic plant

    International Nuclear Information System (INIS)

    Fang, Wei; Huang, Qiang; Huang, Shengzhi; Yang, Jie; Meng, Erhao; Li, Yunyun

    2017-01-01

    Highlights: • Feasibility of complementary hydro-photovoltaic operation across the world is revealed. • Three scenarios of the novel operation mode are proposed to satisfy different load demand. • A method for optimally sizing a utility-scale photovoltaic plant is developed by maximizing the net revenue during lifetime. • The influence of complementary hydro-photovoltaic operation upon water resources allocation is investigated. - Abstract: The high variability of solar energy poses major challenges to the integration of utility-scale photovoltaic power generation into the power system. In this paper, complementary hydro-photovoltaic operation is explored, aiming at improving the power quality of photovoltaics and promoting their integration into the system. First, solar-rich and hydro-rich regions across the world that are suitable for implementing complementary hydro-photovoltaic operation are identified. Then, three practical scenarios of the novel operation mode are proposed for better satisfying different types of load demand. Moreover, a method for optimal sizing of a photovoltaic plant integrated into a hydropower plant is developed by maximizing the net revenue during its lifetime. The Longyangxia complementary hydro-photovoltaic project, currently the world's largest hydro-photovoltaic power plant, is selected as a case study, and its optimal photovoltaic capacities for different scenarios are calculated. Results indicate that hydropower installed capacity and annual solar curtailment rate play crucial roles in the size optimization of a photovoltaic plant, and that complementary hydro-photovoltaic operation exerts little adverse effect upon the water resources allocation of the Longyangxia reservoir. The novel operation mode not only improves the penetration of utility-scale photovoltaic power generation but also provides a valuable reference for the large-scale utilization of other kinds of renewable energy worldwide.
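The sizing idea (choose the photovoltaic capacity that maximizes lifetime net revenue, with curtailment rising once PV outgrows the hydro plant's ability to absorb it) can be caricatured with a toy model. Every number below is invented for illustration; the study's actual model simulates the reservoir and market constraints.

```python
def net_revenue(pv_mw, hydro_mw=1280.0, cap_factor=0.18,
                price=60.0, life_yr=25,
                capex_per_mw=9e5, opex_frac=0.02):
    # Toy lifetime net revenue (USD). Curtailment is assumed to grow
    # quadratically once PV capacity exceeds half the hydro capacity,
    # a stand-in for the study's reservoir simulation.
    over = max(0.0, pv_mw - 0.5 * hydro_mw) / hydro_mw
    curtail = min(1.0, over ** 2)
    energy_mwh = pv_mw * cap_factor * 8760 * (1.0 - curtail) * life_yr
    cost = pv_mw * capex_per_mw * (1.0 + opex_frac * life_yr)
    return energy_mwh * price - cost

# brute-force scan over candidate capacities (MW)
best = max(range(50, 2001, 50), key=net_revenue)
```

The scan reproduces the qualitative result above: the optimum sits where the marginal revenue lost to curtailment balances the marginal cost of extra capacity, and it shifts with the assumed hydro capacity and curtailment behavior.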

  12. Evaluation of Optimal Pore Size of (3-Aminopropyl)triethoxysilane-Grafted MCM-41 for Improved CO2 Adsorption

    Directory of Open Access Journals (Sweden)

    Zhilin Liu

    2015-01-01

    An array of new MCM-41 materials with substantially larger average pore diameters was synthesized by adding 1,3,5-trimethylbenzene (TMB) as a swelling agent, to explore the effect of pore size on final adsorbent properties. The pore-expanded MCM-41 was also grafted with (3-aminopropyl)triethoxysilane (APTES) to determine the optimal pore size for CO2 adsorption. The pore-expanded mesoporous MCM-41s showed relatively less structural regularity but significant increments in pore diameter (4.64 to 7.50 nm); the fraction of mesopore volume also increased. The adsorption heat values correlated with the order of the adsorption capacities of the pore-expanded MCM-41s. After amine functionalization, the adsorption capacities and heat values showed a significant increase. APTES-grafted pore-expanded MCM-41s show high potential for CO2 capture, despite the major drawback of the high energy required for regeneration.

  13. Comparison of Geometric Design of a Brand of Stainless Steel K-Files: An In Vitro Study.

    Science.gov (United States)

    Saeedullah, Maryam; Husain, Syed Wilayat

    2018-04-01

    The purpose of this experimental study was to determine the diametric variations of a brand of hand-held stainless-steel K-files, acquired from different countries, in accordance with the available standards. Twenty Mani stainless-steel K-files of identical size (ISO #25) were acquired from Pakistan and designated as Group A, while 20 Mani K-files were purchased from London, UK and designated as Group B. Files were assessed using a Nikon B 24V profile projector. Data were statistically compared with ISO 3630-1 and ADA 101 by a one-sample t-test. A significant difference was found between Groups A and B. The average discrepancy of Group A fell within the tolerance limit while that of Group B exceeded the limit. The findings in this study call attention to adherence to the dimensional standards of stainless-steel endodontic files.

  14. Optimization Specifications for CUDA Code Restructuring Tool

    KAUST Repository

    Khan, Ayaz

    2017-03-13

    In this work we have developed a restructuring software tool (RT-CUDA) following the proposed optimization specifications to bridge the gap between high-level languages and the machine-dependent CUDA environment. RT-CUDA takes a C program and converts it into an optimized CUDA kernel, with user directives in a configuration file guiding the compiler. RT-CUDA also allows transparent invocation of the most optimized external math libraries such as cuSPARSE and cuBLAS, enabling efficient design of linear algebra solvers. We expect RT-CUDA to be needed by many KSA industries dealing with science and engineering simulation on massively parallel computers like NVIDIA GPUs.

  15. Provider of Services File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The POS file consists of two data files, one for CLIA labs and one for 18 other provider types. The file names are CLIA and OTHER. If downloading the file, note it...

  16. Optimal integrated sizing and planning of hubs with midsize/large CHP units considering reliability of supply

    International Nuclear Information System (INIS)

    Moradi, Saeed; Ghaffarpour, Reza; Ranjbar, Ali Mohammad; Mozaffari, Babak

    2017-01-01

    Highlights: • New hub planning formulation is proposed to exploit assets of midsize/large CHPs. • Linearization approaches are proposed for two-variable nonlinear CHP fuel function. • Efficient operation of addressed CHPs & hub devices at contingencies are considered. • Reliability-embedded integrated planning & sizing is formulated as one single MILP. • Noticeable results for costs & reliability-embedded planning due to mid/large CHPs. - Abstract: Use of multi-carrier energy systems and the energy hub concept has recently been a widespread trend worldwide. However, most of the related research focuses on CHP systems with constant electricity/heat ratios and linear operating characteristics. In this paper, integrated energy hub planning and sizing is developed for energy systems with mid-scale and large-scale CHP units, by taking their wide operating range into consideration. The proposed formulation is aimed at making the best use of the beneficial degrees of freedom associated with these units for decreasing total costs and increasing reliability. High-accuracy piecewise linearization techniques with approximation errors of about 1% are introduced for the nonlinear two-dimensional CHP input-output function, making it possible to successfully integrate the CHP sizing. Efficient operation of the CHP and the hub at contingencies is captured via a new formulation, which is developed to be incorporated into the planning and sizing problem. Optimal operation, planning, sizing and contingency operation of hub components are integrated and formulated as a single comprehensive MILP problem. Results on a case study with midsize CHPs reveal a 33% reduction in total costs, and it is demonstrated that the proposed formulation eliminates the need for additional components/capacities for increasing reliability of supply.
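The roughly 1% linearization error quoted in the highlights can be illustrated in one dimension. For linear interpolation on segments of width h, the worst-case error is h²·max|f″|/8, so the segment count directly controls the approximation error. The quadratic below is only a stand-in for the study's two-variable CHP fuel function.

```python
import bisect

def pwl_eval(x, knots, values):
    # evaluate the piecewise-linear interpolant through (knots, values)
    i = bisect.bisect_right(knots, x) - 1
    i = max(0, min(i, len(knots) - 2))
    t = (x - knots[i]) / (knots[i + 1] - knots[i])
    return values[i] + t * (values[i + 1] - values[i])

knots = [k / 5 for k in range(6)]   # 5 segments on [0, 1]
values = [k * k for k in knots]     # samples of f(x) = x^2
err = max(abs(pwl_eval(x / 100, knots, values) - (x / 100) ** 2)
          for x in range(101))      # worst error = 0.2**2 * 2 / 8 = 0.01
```

Inside a MILP the same interpolant is encoded with SOS2 or binary segment-selection variables, which is how such formulations keep a nonlinear fuel function linear in the sizing problem.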

  17. Text File Comparator

    Science.gov (United States)

    Kotler, R. S.

    1983-01-01

    The File Comparator program, IFCOMP, is a text file comparator for IBM OS/VS-compatible systems. IFCOMP accepts two text files as input and produces a listing of differences in pseudo-update form. IFCOMP is very useful in monitoring changes made to software at the source-code level.
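Python's standard `difflib` produces a comparable difference listing; the snippet below is a modern stand-in for what IFCOMP does on OS/VS systems, not a port of it.

```python
import difflib

old = ["alpha\n", "beta\n", "gamma\n"]
new = ["alpha\n", "BETA\n", "gamma\n", "delta\n"]

# unified-diff listing: '-' lines deleted from old, '+' lines added in new
diff = list(difflib.unified_diff(old, new, fromfile="old.txt", tofile="new.txt"))
print("".join(diff))
```

The '-'/'+' markers play the role of the pseudo-update form mentioned above: they state exactly which source lines must be deleted or inserted to turn one file into the other.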

  18. TH-CD-209-05: Impact of Spot Size and Spacing On the Quality of Robustly-Optimized Intensity-Modulated Proton Therapy Plans for Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Liu, W; Ding, X; Hu, Y; Shen, J; Korte, S; Bues, M [Mayo Clinic Arizona, Phoenix, AZ (United States); Schild, S; Wong, W [Mayo Clinic AZ, Phoenix, AZ (United States); Chang, J [MD Anderson Cancer Center, Houston, TX (United States); Liao, Z; Sahoo, N [UT MD Anderson Cancer Center, Houston, TX (United States); Herman, M [Mayo Clinic, Rochester, MN (United States)

    2016-06-15

    Purpose: To investigate how spot size and spacing affect plan quality, especially plan robustness and the impact of the interplay effect, of robustly-optimized intensity-modulated proton therapy (IMPT) plans for lung cancer. Methods: Two robustly-optimized IMPT plans were created for 10 lung cancer patients: (1) one for a proton beam with in-air energy-dependent large spot size at isocenter (σ: 5–15 mm) and spacing (1.53σ); (2) the other for a proton beam with small spot size (σ: 2–6 mm) and spacing (5 mm). Both plans were generated on the average CTs with internal-gross-tumor-volume density overridden to irradiate the internal target volume (ITV). The root-mean-square-dose volume histograms (RVHs) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curves were used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate the interplay effect, with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Dose-volume-histogram indices including ITV coverage, homogeneity, and organs-at-risk (OAR) sparing were compared using Student's t-test. Results: Compared to large spots, small spots resulted in significantly better OAR sparing with comparable ITV coverage and homogeneity in the nominal plan. Plan robustness was comparable for the ITV and most OARs. With the interplay effect considered, significantly better OAR sparing with comparable ITV coverage and homogeneity was observed using smaller spots. Conclusion: Robust optimization with smaller spots significantly improves OAR sparing with comparable plan robustness and a similar impact of the interplay effect compared to larger spots. A small spot size requires the use of a larger number of spots, which gives the optimizer more freedom to render a plan more robust. The ratio between spot size and spacing was found to be more relevant to determine plan

  19. TH-CD-209-05: Impact of Spot Size and Spacing On the Quality of Robustly-Optimized Intensity-Modulated Proton Therapy Plans for Lung Cancer

    International Nuclear Information System (INIS)

    Liu, W; Ding, X; Hu, Y; Shen, J; Korte, S; Bues, M; Schild, S; Wong, W; Chang, J; Liao, Z; Sahoo, N; Herman, M

    2016-01-01

    Purpose: To investigate how spot size and spacing affect plan quality, especially plan robustness and the impact of the interplay effect, of robustly-optimized intensity-modulated proton therapy (IMPT) plans for lung cancer. Methods: Two robustly-optimized IMPT plans were created for 10 lung cancer patients: (1) one for a proton beam with in-air energy-dependent large spot size at isocenter (σ: 5–15 mm) and spacing (1.53σ); (2) the other for a proton beam with small spot size (σ: 2–6 mm) and spacing (5 mm). Both plans were generated on the average CTs with internal-gross-tumor-volume density overridden to irradiate the internal target volume (ITV). The root-mean-square-dose volume histograms (RVHs) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curves were used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate the interplay effect, with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Dose-volume-histogram indices including ITV coverage, homogeneity, and organs-at-risk (OAR) sparing were compared using Student's t-test. Results: Compared to large spots, small spots resulted in significantly better OAR sparing with comparable ITV coverage and homogeneity in the nominal plan. Plan robustness was comparable for the ITV and most OARs. With the interplay effect considered, significantly better OAR sparing with comparable ITV coverage and homogeneity was observed using smaller spots. Conclusion: Robust optimization with smaller spots significantly improves OAR sparing with comparable plan robustness and a similar impact of the interplay effect compared to larger spots. A small spot size requires the use of a larger number of spots, which gives the optimizer more freedom to render a plan more robust. The ratio between spot size and spacing was found to be more relevant to determine plan

  20. Comparison of quality of obturation and instrumentation time using hand files and two rotary file systems in primary molars: A single-blinded randomized controlled trial.

    Science.gov (United States)

    Govindaraju, Lavanya; Jeevanandan, Ganesh; Subramanian, E M G

    2017-01-01

    In the permanent dentition, different rotary systems are used for canal cleaning and shaping. Rotary instrumentation in pediatric dentistry is an emerging concept, and very few studies have compared the efficiency of rotary instrumentation for canal preparation in primary teeth. Hence, this study was performed to compare the obturation quality and instrumentation time of two rotary file systems (ProTaper and Mtwo) with hand files in primary molars. Forty-five primary mandibular molars were randomly allotted to one of three groups. Instrumentation was done using K-files in Group 1, ProTaper in Group 2, and Mtwo in Group 3. Instrumentation time was recorded. The canal filling quality was assessed as underfill, optimal fill, or overfill. Statistical analysis was done using the Chi-square test, ANOVA, and post hoc Tukey test. No significant difference was observed in the quality of obturation among the three groups. Intergroup comparison of the instrumentation time showed a statistically significant difference between the three groups. The use of rotary instrumentation in primary teeth results in a marked reduction in instrumentation time and improves the quality of obturation.

  1. Status and evaluation methods of JENDL fusion file and JENDL PKA/KERMA file

    International Nuclear Information System (INIS)

    Chiba, S.; Fukahori, T.; Shibata, K.; Yu Baosheng; Kosako, K.

    1997-01-01

    The status of evaluated nuclear data in the JENDL fusion file and PKA/KERMA file is presented. The JENDL fusion file was prepared in order to improve the quality of the JENDL-3.1 data especially on the double-differential cross sections (DDXs) of secondary neutrons and gamma-ray production cross sections, and to provide DDXs of secondary charged particles (p, d, t, ³He and α-particle) for the calculation of PKA and KERMA factors. The JENDL fusion file contains evaluated data of 26 elements ranging from Li to Bi. The data in JENDL fusion file reproduce the measured data on neutron and charged-particle DDXs and also on gamma-ray production cross sections. Recoil spectra in PKA/KERMA file were calculated from secondary neutron and charged-particle DDXs contained in the fusion file with two-body reaction kinematics. The data in the JENDL fusion file and PKA/KERMA file were compiled in ENDF-6 format with an MF=6 option to store the DDX data. (orig.)

  2. Swarm size and iteration number effects to the performance of PSO algorithm in RFID tag coverage optimization

    Science.gov (United States)

    Prathabrao, M.; Nawawi, Azli; Sidek, Noor Azizah

    2017-04-01

    A Radio Frequency Identification (RFID) system offers multiple benefits that can improve the operational efficiency of an organization: it records data systematically and quickly, reduces human and system errors, and updates the database automatically and efficiently. Often, several readers are needed when installing an RFID system, which makes the system more complex. As a result, an RFID network planning process is needed to ensure the RFID system works perfectly. The planning process is also an optimization and power-adjustment process, because the coordinates of each RFID reader must be determined. Therefore, nature-inspired algorithms are often used. In this study, the PSO algorithm is used because it has few parameters, simulates quickly, and is easy to use and very practical. However, PSO parameters must be adjusted correctly for robust and efficient use of PSO; failure to do so may degrade performance and yield poorer optimization results. To ensure the efficiency of PSO, this study examines the effects of two parameters on the performance of the PSO algorithm in RFID tag coverage optimization: the swarm size and the iteration number. In addition, the study recommends the most optimal setting for both parameters, namely 200 for the number of iterations and 800 for the number of swarms. Finally, the results of this study will enable PSO to operate more efficiently when optimizing RFID network planning.
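A minimal PSO, with swarm size and iteration count exposed as the two knobs the study examines, is sketched below on a toy objective (the sphere function stands in for the RFID coverage objective; the inertia and acceleration coefficients are conventional defaults, not the study's settings).

```python
import random

def pso(objective, dim, bounds, swarm_size=30, iterations=100,
        w=0.7, c1=1.5, c2=1.5, seed=42):
    # classic global-best PSO minimizing `objective`
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(swarm_size), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iterations):
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda p: sum(x * x for x in p)  # stand-in objective
best, best_val = pso(sphere, dim=2, bounds=(-5.0, 5.0))
```

Raising `iterations` buys convergence at linear cost in runtime, while raising `swarm_size` buys exploration per iteration; the study's recommendation (200 iterations, a swarm of 800) is the same trade-off made for the much rougher RFID coverage landscape.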

  3. The Cost-Optimal Size of Future Reusable Launch Vehicles

    Science.gov (United States)

    Koelle, D. E.

    2000-07-01

    The paper answers the question: what is the optimum vehicle size — in terms of LEO payload capability — for a future reusable launch vehicle? It is shown that there exists an optimum vehicle size that results in minimum specific transportation cost. The optimum vehicle size depends on the total annual cargo mass (LEO equivalent) envisaged, which defines at the same time the optimum number of launches per year (LpA). Based on the TRANSCOST-Model algorithms, a wide range of vehicle sizes — from 20 to 100 Mg payload in LEO — as well as launch rates — from 2 to 100 per year — have been investigated. It is shown in a design chart how much the vehicle size as well as the launch rate influence the specific transportation cost (in MYr/Mg and US$/kg). The comparison with actual ELVs (Expendable Launch Vehicles) and Semi-Reusable Vehicles (a combination of a reusable first stage with an expendable second stage) shows that there exists only one economic solution for an essential reduction of space transportation cost: the Fully Reusable Vehicle Concept, with rocket propulsion and vertical take-off. The Single-stage Configuration (SSTO) has the best economic potential; its feasibility is not only a matter of technology level but also of the vehicle size as such. Increasing the vehicle size (launch mass) reduces the technology requirements because the law of scale provides a better mass fraction and payload fraction — practically at no cost. The optimum vehicle design (after specification of the payload capability) requires a trade-off between lightweight (and more expensive) technology vs. more conventional (and cheaper) technology. It is shown that the use of more conventional technology and accepting a somewhat larger vehicle is the more cost-effective and less risky approach.

  4. The European Southern Observatory-MIDAS table file system

    Science.gov (United States)

    Peron, M.; Grosbol, P.

    1992-01-01

    The new and substantially upgraded version of the Table File System (TFS) in MIDAS is presented as a scientific database system. MIDAS applications for performing database operations on tables are discussed, for instance the exchange of data to and from the TFS, the selection of objects, the uncertainty joins across tables, and the graphical representation of data. This upgraded version of the TFS is a full implementation of the binary table extension of the FITS format; in addition, it also supports arrays of strings. Different storage strategies for optimal access to very large data sets are implemented and are addressed in detail. As a simple relational database, the TFS may be used for the management of personal data files. This opens the way to intelligent pipeline processing of large amounts of data. One of the key features of the Table File System is also to provide an extensive set of tools for the analysis of the final results of a reduction process. Column operations using standard and special mathematical functions as well as statistical distributions can be carried out; commands for linear regression and model fitting using nonlinear least-squares methods and user-defined functions are available. Finally, statistical hypothesis tests and multivariate methods can also operate on tables.

  5. Control of minimum member size in parameter-free structural shape optimization by a medial axis approximation

    Science.gov (United States)

    Schmitt, Oliver; Steinmann, Paul

    2017-09-01

    We introduce a manufacturing constraint for controlling the minimum member size in structural shape optimization problems, which is for example of interest for components fabricated in a molding process. In a parameter-free approach, whereby the coordinates of the FE boundary nodes are used as design variables, the challenging task is to find a generally valid definition for the thickness of non-parametric geometries in terms of their boundary nodes. Therefore we use the medial axis, which is the union of all points with at least two closest points on the boundary of the domain. Since the effort for the exact computation of the medial axis of geometries given by their FE discretization highly increases with the number of surface elements we use the distance function instead to approximate the medial axis by a cloud of points. The approximation is demonstrated on three 2D examples. Moreover, the formulation of a minimum thickness constraint is applied to a sensitivity-based shape optimization problem of one 2D and one 3D model.
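The distance-function approximation described above can be illustrated on a coarse 2D grid: a multi-source breadth-first search gives each solid cell its distance to the boundary, and cells whose distance is a (weak) local maximum approximate the medial axis. This is a discrete toy assuming a 4-neighborhood and Manhattan distances, not the authors' FE-based point-cloud construction.

```python
from collections import deque

def boundary_distance(solid):
    # Manhattan-distance transform via multi-source BFS from void cells
    rows, cols = len(solid), len(solid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    for r in range(rows):
        for c in range(cols):
            if not solid[r][c]:
                dist[r][c] = 0
                q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def medial_axis(solid):
    # solid cells whose distance is >= all neighbours approximate the axis
    dist = boundary_distance(solid)
    rows, cols = len(solid), len(solid[0])
    axis = []
    for r in range(rows):
        for c in range(cols):
            if solid[r][c]:
                nbrs = [dist[nr][nc]
                        for nr, nc in ((r + 1, c), (r - 1, c),
                                       (r, c + 1), (r, c - 1))
                        if 0 <= nr < rows and 0 <= nc < cols]
                if all(dist[r][c] >= n for n in nbrs):
                    axis.append((r, c, dist[r][c]))
    return axis

# a 3-cell-thick solid bar surrounded by void
bar = [[0] * 9] + [[0] + [1] * 7 + [0] for _ in range(3)] + [[0] * 9]
axis = medial_axis(bar)
```

Distances along the medial axis measure the local member half-thickness, which is exactly the quantity a minimum-member-size constraint bounds from below.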

  6. Optimized Sizing, Selection, and Economic Analysis of Battery Energy Storage for Grid-Connected Wind-PV Hybrid System

    Directory of Open Access Journals (Sweden)

    Hina Fathima

    2015-01-01

    Energy storage is emerging as a predominant sector for renewable energy applications. This paper focuses on a feasibility study to integrate battery energy storage with a hybrid wind-solar grid-connected power system to effectively dispatch wind power by incorporating peak shaving and ramp-rate limiting. The sizing methodology is optimized using the bat optimization algorithm to minimize the cost of investment and the losses incurred by the system in the form of load shedding and wind curtailment. The integrated system is then tested with an efficient battery management strategy which prevents overcharging/over-discharging of the battery. In the study, five major types of battery systems are considered and analyzed. They are evaluated and compared based on technoeconomic and environmental metrics as per the Indian power market scenario. The technoeconomic analysis of the battery is validated by simulations of a proposed wind-photovoltaic system at a wind site in Southern India. The environmental analysis is performed by evaluating the avoided cost of emissions.

  7. Size Optimization of 3D Stereoscopic Film Frames

    African Journals Online (AJOL)

    pc

    2018-03-22

    Keywords: Optimization; Stereoscopic Film; 3D Frames; Aspect Ratio.

  8. Differentiation of Adrenal Adenoma and Nonadenoma in Unenhanced CT: New Optimal Threshold Value and the Usefulness of Size Criteria for Differentiation

    International Nuclear Information System (INIS)

    Park, Sung Hee; Kim, Myeong Jin; Kim, Joo Hee; Lim, Joon Seok; Kim, Ki Whang

    2007-01-01

    To determine the optimal threshold for attenuation values in unenhanced computed tomography (CT) and to assess the value of size criteria for differentiating between an adrenal adenoma and a nonadenoma. The unenhanced CT images of 45 patients at our institution who underwent surgical resection of an adrenal mass between January 2001 and July 2005 were retrospectively reviewed. The 45 adrenal masses, confirmed by pathology, comprised 25 cortical adenomas, 12 pheochromocytomas, three lymphomas, and five metastases. The CT images were obtained at a slice thickness of 2-3 mm. The mAs varied from 100 to 160 and 200 to 280, while 120 kVp was maintained in all cases. The mean attenuation values of adrenal adenomas and nonadenomas were compared using an unpaired t-test. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy at thresholds of 10 HU, 20 HU, and 25 HU were compared. The diagnostic accuracy according to size criteria from 2 cm to 6 cm was also compared. The 25 adenomas showed significantly lower attenuation values, with a sensitivity > 90% but a specificity < 70%. Size criteria of 2 or 3 cm had a high specificity of 100% and 80% but a low sensitivity of 20% and 60%. The threshold attenuation values of 20 or 25 HU in unenhanced CT appear optimal for discriminating an adrenal adenoma from a nonadenoma. Size criteria are of little value in differentiating adrenal masses because of their low specificity or low sensitivity
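The threshold trade-off reported above can be made concrete with a few lines of code: classify "adenoma" when attenuation ≤ threshold, then count the four outcomes. The HU values below are invented for illustration, not the study's measurements.

```python
def sens_spec(hu_values, is_adenoma, threshold):
    # sensitivity/specificity of the rule "adenoma if HU <= threshold"
    tp = sum(1 for v, a in zip(hu_values, is_adenoma) if a and v <= threshold)
    fn = sum(1 for v, a in zip(hu_values, is_adenoma) if a and v > threshold)
    tn = sum(1 for v, a in zip(hu_values, is_adenoma) if not a and v > threshold)
    fp = sum(1 for v, a in zip(hu_values, is_adenoma) if not a and v <= threshold)
    return tp / (tp + fn), tn / (tn + fp)

hu  = [-3, 5, 12, 18, 22, 24, 28, 35, 40]   # invented attenuation values (HU)
ade = [True, True, True, True, True, False, False, False, False]
sens25, spec25 = sens_spec(hu, ade, threshold=25)
sens10, spec10 = sens_spec(hu, ade, threshold=10)
```

On data like these, raising the threshold trades specificity for sensitivity; the study identifies 20-25 HU as the best balance.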

  9. Aida-CMK multi-algorithm optimization kernel applied to analog IC sizing

    CERN Document Server

    Lourenço, Ricardo; Horta, Nuno

    2015-01-01

    This work addresses the research and development of an innovative optimization kernel applied to analog integrated circuit (IC) design. In particular, it describes the modifications inside the AIDA Framework, an electronic design automation framework fully developed by the Integrated Circuits Group-LX of the Instituto de Telecomunicações, Lisbon. It focuses on AIDA-CMK, enhancing AIDA-C, the circuit optimizer component of AIDA, with a new multi-objective multi-constraint optimization module that constructs a base for multiple algorithm implementations. The proposed solution implements three approaches to multi-objective multi-constraint optimization, namely an evolutionary approach with NSGA-II, a swarm intelligence approach with MOPSO, and a stochastic hill climbing approach with MOSA. Moreover, the implemented structure allows easy hybridization between kernels, transforming the previous simple NSGA-II optimization module into a more evolved and versatile module supporting multiple s...

  10. Size optimization of stand-alone photovoltaic (PV) room air conditioners

    International Nuclear Information System (INIS)

    Chen, Chien-Wei; Zahedi, A.

    2006-01-01

    Sizing determines the main cost of a stand-alone PV system. PV electricity cost is determined by the amount of solar energy received; hence the actual climate and weather conditions, such as solar irradiance and ambient temperature, affect the required size and cost of the system. Air conditioning demand also depends on the weather conditions. Therefore, sizing a PV-powered air conditioner must consider the characteristics of the local climate and temperature. In this paper, sizing procedures and special considerations for air conditioning under Melbourne's climatic conditions are presented. The reliability of various PV-battery size combinations is simulated in MATLAB. As a result, excellent system performance can be predicted. (Author)

  11. SU-F-T-79: Monte Carlo Investigation of Optimizing Parameters for Modulated Electron Arc Therapy

    International Nuclear Information System (INIS)

    Al Ashkar, E; Eraba, K; Imam, M; Eldib, A; Ma, C

    2016-01-01

    Purpose: Electron arc therapy provides excellent dose distributions for treating superficial tumors along curved surfaces. However, this modality has not received widespread application due to the lack of needed advancements in electron beam delivery, accurate electron dose calculation, and treatment plan optimization. The aim of the current work is to investigate possible parameters that can be optimized for electron arc (eARC) therapy. Methods: The MCBEAM code was used to generate phase-space files for 6 and 12 MeV electron beams from a Varian Trilogy machine. An electron multi-leaf collimator (eMLC) of 2 cm thickness positioned at 82 cm source-to-collimator distance was used in the study. Dose distributions for electron arcs were calculated inside a cylindrical phantom using the MCSIM code. The cylindrical phantom was constructed with 0.2 cm voxels and a 15 cm diameter. Electron arcs were delivered with two different approaches. The first approach was to deliver the arc as segments of very small field widths; in this approach we also tested the impact of the segment size and the arc increment angle. The second approach was to deliver the arc as a sum of large fields, each covering the whole target as seen from the beam's-eye view. Results: Taking 90% as the prescription isodose line, the first approach showed a buildup region preceding the prescription zone. This buildup is minimized with the second approach, removing the need for bolus. The second approach also showed less x-ray contamination. In both approaches, varying the segment size changed the size and location of the prescription isodose line. The optimization process for eARC could involve interplay between small and large segments to achieve the desired coverage. Conclusion: An advanced modulation of eARCs will allow for tailored dose distributions for superficial curved targets, such as in challenging scalp cases

  12. SU-F-T-79: Monte Carlo Investigation of Optimizing Parameters for Modulated Electron Arc Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Al Ashkar, E; Eraba, K; Imam, M [Azhar University, Nasr City, Cairo (Egypt)]; Eldib, A; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)]

    2016-06-15

    Purpose: Electron arc therapy provides excellent dose distributions for treating superficial tumors along curved surfaces. However, this modality has not received widespread application due to the lack of needed advancement in electron beam delivery, accurate electron dose calculation and treatment plan optimization. The aim of the current work is to investigate possible parameters that can be optimized for electron arc (eARC) therapy. Methods: The MCBEAM code was used to generate phase space files for 6 and 12 MeV electron beams from a Varian Trilogy machine. An electron multi-leaf collimator (eMLC) of 2 cm thickness positioned at an 82 cm source-to-collimator distance was used in the study. Dose distributions for electron arcs were calculated inside a cylindrical phantom using the MCSIM code. The cylindrical phantom was constructed with 0.2 cm voxels and a 15 cm diameter. Electron arcs were delivered with two different approaches. The first approach was to deliver the arc as segments of very small field widths. In this approach we also tested the impact of the segment size and the arc increment angle. The second approach was to deliver the arc as a sum of large fields, each covering the whole target as seen from the beam's eye view. Results: Considering 90% as the prescription isodose line, the first approach showed a buildup region preceding the prescription zone. This buildup is minimized with the second approach, removing the need for bolus. The second approach also showed less x-ray contamination. In both approaches, varying the segment size changed the size and location of the prescription isodose line. The optimization process for eARC could involve an interplay between small and large segments to achieve the desired coverage. Conclusion: An advanced modulation of eARCs will allow tailored dose distributions for superficial curved targets, as in challenging scalp cases.

  13. Multi-objective component sizing of a power-split plug-in hybrid electric vehicle powertrain using Pareto-based natural optimization machines

    Science.gov (United States)

    Mozaffari, Ahmad; Vajedi, Mahyar; Chehresaz, Maryyeh; Azad, Nasser L.

    2016-03-01

    The urgent need to meet increasingly tight environmental regulations and new fuel economy requirements has motivated system science researchers and automotive engineers to take advantage of emerging computational techniques to further advance hybrid electric vehicle and plug-in hybrid electric vehicle (PHEV) designs. In particular, research has focused on vehicle powertrain system design optimization, to reduce the fuel consumption and total energy cost while improving the vehicle's driving performance. In this work, two different natural optimization machines, namely the synchronous self-learning Pareto strategy and the elitist non-dominated sorting genetic algorithm, are implemented for component sizing of a specific power-split PHEV platform with a Toyota plug-in Prius as the baseline vehicle. To do this, a high-fidelity model of the Toyota plug-in Prius is employed for the numerical experiments using the Autonomie simulation software. Based on the simulation results, it is demonstrated that Pareto-based algorithms can successfully optimize the design parameters of the vehicle powertrain.

  14. An Optimal Mobile Service for Telecare Data Synchronization using a Role-based Access Control Model and Mobile Peer-to-Peer Technology.

    Science.gov (United States)

    Ke, Chih-Kun; Lin, Zheng-Hua

    2015-09-01

    The progress of information and communication technologies (ICT) has promoted the development of healthcare, enabling the exchange of resources and services between organizations. Organizations want to integrate mobile devices into their hospital information systems (HIS) due to the convenience for employees, who are then able to perform specific healthcare processes from any location. The collection and merging of healthcare data from discrete mobile devices, and possible ways to put these data to further use, are worth exploring, especially in remote districts without a public data network (PDN) to connect to the HIS. In this study, we propose an optimal mobile service which automatically synchronizes the telecare file resources among discrete mobile devices. The proposed service combines several technical methods. The role-based access control model defines the telecare file resource access mechanism; the symmetric data encryption method protects telecare file resources transmitted over a mobile peer-to-peer network. The multi-criteria decision analysis method ELECTRE (ELimination Et Choix Traduisant la REalité) evaluates multiple criteria of the candidate mobile devices to determine a ranking order. This optimizes the synchronization of telecare file resources among discrete mobile devices. A prototype system is implemented to examine the proposed mobile service. The results of the experiment show that the proposed mobile service can automatically and effectively synchronize telecare file resources among discrete mobile devices. The contribution of this experiment is to provide an optimal mobile service that enhances the security of telecare file resource synchronization and strengthens an organization's mobility.
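
    The device-ranking step can be sketched as a plain ELECTRE-style concordance computation: for each pair of devices, sum the weights of the criteria on which one is at least as good as the other, then rank by net concordance. The device names, criteria and weights below are invented for illustration; the study's actual ELECTRE variant also applies discordance tests.

```python
# Hypothetical ELECTRE-style concordance ranking of candidate mobile
# devices; criteria values and weights are illustrative only.

def concordance_matrix(scores, weights):
    """scores[i][k]: performance of device i on criterion k (higher is better)."""
    n, total = len(scores), sum(weights)
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # concordance: weight share of criteria where i is at least as good as j
            c[i][j] = sum(w for s_i, s_j, w in zip(scores[i], scores[j], weights)
                          if s_i >= s_j) / total
    return c

def rank_by_net_concordance(scores, weights):
    c = concordance_matrix(scores, weights)
    n = len(scores)
    # net flow: how strongly a device outranks others minus how strongly it is outranked
    net = [sum(c[i]) - sum(c[j][i] for j in range(n)) for i in range(n)]
    return sorted(range(n), key=lambda i: net[i], reverse=True)

# criteria: battery, bandwidth, storage (weights sum to 1)
devices = {"tablet": [0.9, 0.4, 0.8], "phone": [0.6, 0.9, 0.5], "laptop": [0.3, 0.7, 0.9]}
order = rank_by_net_concordance(list(devices.values()), [0.5, 0.3, 0.2])
print([list(devices)[i] for i in order])   # → ['tablet', 'phone', 'laptop']
```

    Ranking by net concordance is the simplest way to turn pairwise outranking scores into the single synchronization order the service needs.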

  15. Optimal Safety Earthing – Earth Electrode Sizing Using A ...

    African Journals Online (AJOL)

    In this paper a deterministic approach in the sizing of earth electrode using the permissible touch voltage criteria is presented. The deterministic approach is effectively applied in the sizing of the length of earth rod required for the safe earthing of residential and facility buildings. This approach ensures that the earthing ...

  16. HUD GIS Boundary Files

    Data.gov (United States)

    Department of Housing and Urban Development — The HUD GIS Boundary Files are intended to supplement boundary files available from the U.S. Census Bureau. The files are for community planners interested in...

  17. ITP Adjuster 1.0: A New Utility Program to Adjust Charges in the Topology Files Generated by the PRODRG Server

    Directory of Open Access Journals (Sweden)

    Diogo de Jesus Medeiros

    2013-01-01

    Full Text Available The suitable computation of accurate atomic charges for the GROMACS topology *.itp files of small molecules, generated by the PRODRG server, is a tricky task because the server does not calculate atomic charges using an ab initio method. Usually, additional steps of structure optimization and charge calculation are needed, followed by a tedious manual replacement of the atomic charges in the *.itp file. To assist with this task, we report here ITP Adjuster 1.0, a utility program developed to replace the PRODRG charges in the *.itp files of small molecules with ab initio charges.
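
    The replacement step such a utility automates can be sketched as a small text transformation on the [ atoms ] section of an *.itp file: rewrite the charge column with externally computed values. The field layout and sample topology below are illustrative, not actual PRODRG output.

```python
# Minimal sketch: overwrite the charge column (7th field) of the
# [ atoms ] section of a GROMACS *.itp topology with new charges.
# Sample lines are invented, not PRODRG output.

def replace_itp_charges(itp_text, new_charges):
    out, in_atoms, i = [], False, 0
    for line in itp_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("["):
            in_atoms = stripped.replace(" ", "") == "[atoms]"
        elif in_atoms and stripped and not stripped.startswith(";"):
            fields = line.split()
            fields[6] = f"{new_charges[i]:.3f}"   # charge is the 7th column
            i += 1
            line = "  ".join(fields)
        out.append(line)
    return "\n".join(out)

itp = """[ atoms ]
;  nr  type  resnr  resid  atom  cgnr  charge  mass
    1  CH3   1      LIG    C1    1     0.000   15.035
    2  OA    1      LIG    O1    1     0.000   15.999
"""
print(replace_itp_charges(itp, [0.265, -0.674]))
```

    Comment lines (starting with ";") and other sections are passed through untouched, which is what makes the replacement safe to run on a full topology.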

  18. Optimal placement of capacitors

    Directory of Open Access Journals (Sweden)

    N. Gnanasekaran

    2016-06-01

    Full Text Available Optimal size and location of shunt capacitors in the distribution system plays a significant role in minimizing energy loss and the cost of reactive power compensation. This paper presents a new efficient technique to find the optimal size and location of shunt capacitors with the objective of minimizing the cost due to energy loss and reactive power compensation of the distribution system. A new Shark Smell Optimization (SSO) algorithm is proposed to solve the optimal capacitor placement problem while satisfying the operating constraints. The SSO algorithm is a recently developed metaheuristic optimization algorithm conceptualized from the shark's hunting ability. It uses a momentum-incorporated gradient search and a rotational-movement-based local search for optimization. To demonstrate the applicability of the proposed method, it is tested on IEEE 34-bus and 118-bus radial distribution systems. The simulation results obtained are compared with previous methods reported in the literature and found to be encouraging.
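
    The momentum-incorporated gradient search at the core of SSO can be sketched as a classic heavy-ball update. The toy quadratic loss, step sizes, and finite-difference gradient below are illustrative; the sketch omits SSO's rotational local search.

```python
# Heavy-ball (momentum) gradient descent on a toy loss standing in for
# the capacitor-placement cost; coefficients are illustrative only.

def grad(f, x, h=1e-6):
    """Forward-difference gradient estimate of f at x."""
    return [(f(x[:k] + [x[k] + h] + x[k + 1:]) - f(x)) / h for k in range(len(x))]

def momentum_descent(f, x0, eta=0.1, beta=0.7, iters=200):
    x, v = list(x0), [0.0] * len(x0)
    for _ in range(iters):
        g = grad(f, x)
        v = [beta * vk - eta * gk for vk, gk in zip(v, g)]  # momentum update
        x = [xk + vk for xk, vk in zip(x, v)]
    return x

# toy "cost": distance from a (known) optimal setting at (3, -2)
loss = lambda x: (x[0] - 3) ** 2 + (x[1] + 2) ** 2
x_opt = momentum_descent(loss, [0.0, 0.0])
print([round(v, 3) for v in x_opt])   # → [3.0, -2.0]
```

    The momentum term lets the search carry velocity through flat regions, which is the "smell-following" behaviour the algorithm's shark metaphor describes.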

  19. 33 CFR 148.246 - When is a document considered filed and where should I file it?

    Science.gov (United States)

    2010-07-01

    ... filed and where should I file it? 148.246 Section 148.246 Navigation and Navigable Waters COAST GUARD... Formal Hearings § 148.246 When is a document considered filed and where should I file it? (a) If a document to be filed is submitted by mail, it is considered filed on the date it is postmarked. If a...

  20. Generic Optimization Program User Manual Version 3.0.0

    International Nuclear Information System (INIS)

    Wetter, Michael

    2009-01-01

    GenOpt is an optimization program for the minimization of a cost function that is evaluated by an external simulation program. It has been developed for optimization problems where the cost function is computationally expensive and its derivatives are not available or may not even exist. GenOpt can be coupled to any simulation program that reads its input from text files and writes its output to text files. The independent variables can be continuous variables (possibly with lower and upper bounds), discrete variables, or both continuous and discrete variables. Constraints on dependent variables can be implemented using penalty or barrier functions. GenOpt uses parallel computing to evaluate the simulations. GenOpt has a library with local and global multi-dimensional and one-dimensional optimization algorithms, and algorithms for doing parametric runs. An algorithm interface allows adding new minimization algorithms without knowing the details of the program structure. GenOpt is written in Java so that it is platform independent. The platform independence and the general interface make GenOpt applicable to a wide range of optimization problems. GenOpt has not been designed for linear programming problems, quadratic programming problems, and problems where the gradient of the cost function is available. For such problems, as well as for other problems, specially tailored software exists that is more efficient.

  1. Generic Optimization Program User Manual Version 3.0.0

    Energy Technology Data Exchange (ETDEWEB)

    Wetter, Michael

    2009-05-11

    GenOpt is an optimization program for the minimization of a cost function that is evaluated by an external simulation program. It has been developed for optimization problems where the cost function is computationally expensive and its derivatives are not available or may not even exist. GenOpt can be coupled to any simulation program that reads its input from text files and writes its output to text files. The independent variables can be continuous variables (possibly with lower and upper bounds), discrete variables, or both continuous and discrete variables. Constraints on dependent variables can be implemented using penalty or barrier functions. GenOpt uses parallel computing to evaluate the simulations. GenOpt has a library with local and global multi-dimensional and one-dimensional optimization algorithms, and algorithms for doing parametric runs. An algorithm interface allows adding new minimization algorithms without knowing the details of the program structure. GenOpt is written in Java so that it is platform independent. The platform independence and the general interface make GenOpt applicable to a wide range of optimization problems. GenOpt has not been designed for linear programming problems, quadratic programming problems, and problems where the gradient of the cost function is available. For such problems, as well as for other problems, specially tailored software exists that is more efficient.
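
    The text-file coupling described above can be sketched as a three-step loop: write the parameters to an input file, run the external command, and parse the cost from an output file. In the sketch below the "simulation" is a stand-in one-liner run via the Python interpreter, and the file names and cost model are invented for illustration; GenOpt's actual search algorithms are far richer than the parametric sweep shown.

```python
# Minimal sketch of optimizer-simulator coupling through text files and
# a system command, in the style the abstract describes.
import os
import subprocess
import sys
import tempfile

# stand-in "external simulation": reads x from a text file, writes (x-2)^2
SIM = ("import sys; x = float(open(sys.argv[1]).read()); "
       "open(sys.argv[2], 'w').write(str((x - 2.0) ** 2))")

def evaluate(x, workdir):
    inp = os.path.join(workdir, "sim.in")
    out = os.path.join(workdir, "sim.out")
    with open(inp, "w") as f:          # 1. write parameters to a text input file
        f.write(str(x))
    subprocess.run([sys.executable, "-c", SIM, inp, out], check=True)  # 2. run
    with open(out) as f:               # 3. read the cost back from a text file
        return float(f.read())

with tempfile.TemporaryDirectory() as d:
    # a crude parametric sweep standing in for the optimizer's search loop
    best = min((evaluate(x / 4, d), x / 4) for x in range(0, 17))
print(best)   # (best cost, best parameter), minimum at x = 2
```

    Because the only contract is "text in, text out, run to completion", the same loop works unchanged for any black-box simulator.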

  2. Access to DIII-D data located in multiple files and multiple locations

    International Nuclear Information System (INIS)

    McHarg, B.B. Jr.

    1993-10-01

    The General Atomics DIII-D tokamak fusion experiment is now collecting over 80 MB of data per discharge once every 10 min, and that quantity is expected to double within the next year. The size of the data files, even in compressed format, is becoming increasingly difficult to handle. Data is also being acquired now on a variety of UNIX systems as well as MicroVAX and MODCOMP computer systems. The existing computers collect all the data into a single shot file, and this data collection is taking an ever increasing amount of time as the total quantity of data increases. Data is not available to experimenters until it has been collected into the shot file, which is in conflict with the substantial need for data examination on a timely basis between shots. The experimenters are also spread over many different types of computer systems (possibly located at other sites). To improve data availability and handling, software has been developed to allow individual computer systems to create their own shot files locally. The data interface routine PTDATA that is used to access DIII-D data has been modified so that a user's code on any computer can access data from any computer where that data might be located. This data access is transparent to the user. Breaking up the shot file into separate files in multiple locations also impacts software used for data archiving, data management, and data restoration.

  3. Shaping ability of 4 different single-file systems in simulated S-shaped canals.

    Science.gov (United States)

    Saleh, Abdulrahman Mohammed; Vakili Gilani, Pouyan; Tavanafar, Saeid; Schäfer, Edgar

    2015-04-01

    The aim of this study was to compare the shaping ability of 4 different single-file systems in simulated S-shaped canals. Sixty-four S-shaped canals in resin blocks were prepared to an apical size of 25 using Reciproc (VDW, Munich, Germany), WaveOne (Dentsply Maillefer, Ballaigues, Switzerland), OneShape (Micro Méga, Besançon, France), and F360 (Komet Brasseler, Lemgo, Germany) (n = 16 canals/group) systems. Composite images were made from the superimposition of pre- and postinstrumentation images. The amount of resin removed by each system was measured by using a digital template and image analysis software. Canal aberrations and the preparation time were also recorded. The data were statistically analyzed by using analysis of variance, Tukey, and chi-square tests. Canals prepared with the F360 and OneShape systems were better centered compared with the Reciproc and WaveOne systems. Reciproc and WaveOne files removed significantly greater amounts of resin from the inner side of both curvatures. Preparation with Reciproc and OneShape files was significantly faster than with WaveOne and F360 files. All single-file instruments were safe to use and were able to prepare the canals efficiently. However, single-file systems that are less tapered seem to be more favorable when preparing S-shaped canals. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  4. Root Canal Cleaning Efficacy of Rotary and Hand Files Instrumentation in Primary Molars

    Science.gov (United States)

    Nazari Moghaddam, Kiumars; Mehran, Majid; Farajian Zadeh, Hamideh

    2009-01-01

    INTRODUCTION: Pulpectomy of primary teeth is commonly carried out with hand files and broaches, a tricky and time-consuming procedure. The purpose of this in vitro study was to compare the cleaning efficacy and instrumentation time for deciduous molars using hand K-files and the Flex Master rotary system. MATERIALS AND METHODS: In this study, 68 canals of 23 extracted primary molars with at least two-thirds intact roots and 7-12 mm length were selected. After preparing an access cavity, a K-file size #15 was introduced into the root canal and India ink was injected with an insulin syringe. Sixty samples were randomly divided into the experimental groups: in group I (n=30), root canals were prepared with hand K-files; in group II (n=30), rotary Flex Master files were used for instrumentation; and the 8 remaining samples formed group III, the negative controls. After clearing and root sectioning, the removal of India ink from the cervical, middle, and apical thirds was scored. Data were analyzed using Student's t-test and the Mann-Whitney U test. RESULTS: There was no significant difference between the experimental groups in cleaning efficacy at the cervical, middle and apical root canal thirds. Only the coronal third scored higher in the hand-instrumented group. Instrumentation with Flex Master rotary files was significantly less time consuming than with the hand-file technique. PMID:23940486

  5. Analysis of Radiation Field and Block Pattern for Optimal Size in Multileaf Collimator

    International Nuclear Information System (INIS)

    Ahn, Seoung Do; Yang, Kwang Mo; Yi, Byong Yong; Choi, Eun Kyong; Chang, Hye Sook

    1994-01-01

    The patterns of conventional radiation treatment fields and their shielding blocks were analysed to determine the optimal dimensions of the multileaf collimator (MLC), which is considered an essential tool for conformal therapy. A total of 1169 radiation fields from 303 patients (203 from Asan Medical Center, 50 from Baek Hospital and 50 from Hanyang Univ. Hospital) were analysed for this study, with case selection weighted by treatment site (from the Korean Society of Therapeutic Radiology, 1003 cases). Ninety-one percent of all fields have shielding blocks. The Y axis is defined as the leaf movement direction, and it is assumed that the MLC is installed in the cranial-caudal direction. The lengths along the X axis ranged from 4 cm to 40 cm (less than 21 cm for 95% of cases), and along the Y axis from 5 cm to 38 cm (less than 22 cm for 95% of cases). The shielding blocks extended less than 6 cm from the center of the field in 95% of the cases. The starting length of 95% of blocks is less than 10 cm along the X axis and 11 cm along the Y axis. Seventy-six percent of shielding blocks could be replaced by leaves moving along either the X or Y axis, 7.9% only along the Y axis, and 5.1% only along the X axis, so it is reasonable to orient the MLC in the Y direction. Ninety-five percent of patients can be treated with coplanar rotation therapy without changing the collimator angle. Eleven percent of cases were impossible to replace with MLC. The field covered should be larger than 21 cm x 22 cm. The MLC should be designed as a pair of 21 leaves, each 1 cm wide for an acceptable resolution and 17 cm long to enable the leaf to overtravel at least 6 cm from the treatment field center.

  6. Protecting your files on the DFS file system

    CERN Multimedia

    Computer Security Team

    2011-01-01

    The Windows Distributed File System (DFS) hosts user directories for all NICE users plus much more data. Files can be accessed from anywhere, via a dedicated web portal (http://cern.ch/dfs). Due to the ease of access to DFS within CERN, it is of utmost importance to properly protect access to sensitive data. As the use of DFS access control mechanisms is not obvious to all users, passwords, certificates or sensitive files might get exposed. At least this happened in the past to the Andrew File System (AFS, the Linux equivalent of DFS) and led to bad publicity due to a journalist accessing supposedly "private" AFS folders (SonntagsZeitung 2009/11/08). This problem does not only affect the individual user but also has a bad impact on CERN's reputation when it comes to IT security. Therefore, all departments and LHC experiments agreed recently to apply more stringent protections to all DFS user folders. The goal of this data protection policy is to assist users in pro...

  7. Protecting your files on the AFS file system

    CERN Multimedia

    2011-01-01

    The Andrew File System is a world-wide distributed file system linking hundreds of universities and organizations, including CERN. Files can be accessed from anywhere, via dedicated AFS client programs or via web interfaces that export the file contents on the web. Due to the ease of access to AFS it is of utmost importance to properly protect access to sensitive data in AFS. As the use of AFS access control mechanisms is not obvious to all users, passwords, private SSH keys or certificates have been exposed in the past. In one specific instance, this also led to bad publicity due to a journalist accessing supposedly "private" AFS folders (SonntagsZeitung 2009/11/08). This problem does not only affect the individual user but also has a bad impact on CERN's reputation when it comes to IT security. Therefore, all departments and LHC experiments agreed in April 2010 to apply more stringent folder protections to all AFS user folders. The goal of this data protection policy is to assist users in...

  8. Zebra: A striped network file system

    Science.gov (United States)

    Hartman, John H.; Ousterhout, John K.

    1992-01-01

    The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails, its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong to. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and elimination of parity updates.
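
    The parity scheme described above can be sketched with XOR over stripe fragments: the client's byte stream is cut into fixed-size fragments, one per data server, and an XOR parity fragment lets any single lost fragment be rebuilt. The fragment size and server count below are illustrative.

```python
# RAID-style XOR parity over Zebra-like stripe fragments; sizes are toy values.

FRAG = 4  # bytes per stripe fragment

def xor(frags):
    """XOR a list of equal-length fragments byte by byte."""
    out = bytearray(FRAG)
    for f in frags:
        for i, b in enumerate(f):
            out[i] ^= b
    return bytes(out)

def stripe(data, nservers):
    """Cut a client's byte stream into stripes of nservers fragments plus parity."""
    frags = [data[i:i + FRAG].ljust(FRAG, b"\0") for i in range(0, len(data), FRAG)]
    frags += [b"\0" * FRAG] * (-len(frags) % nservers)   # pad the last stripe
    return [(frags[i:i + nservers], xor(frags[i:i + nservers]))
            for i in range(0, len(frags), nservers)]

def rebuild(frags, parity, lost):
    # a lost fragment is the XOR of the surviving fragments and the parity
    return xor([f for i, f in enumerate(frags) if i != lost] + [parity])

stripes = stripe(b"clients write byte streams", 3)
frags, parity = stripes[0]
assert rebuild(frags, parity, lost=1) == frags[1]
```

    Because striping follows the write stream rather than file boundaries, the parity of a full stripe can be computed once at write time, which is the paper's "elimination of parity updates".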

  9. Using pilot data to size a two-arm randomized trial to find a nearly optimal personalized treatment strategy.

    Science.gov (United States)

    Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R

    2016-04-15

    A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single-stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single-stage personalized treatment strategy. The proposed method is based on inverting a plug-in projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.

  10. A Sustainability-Oriented Multiobjective Optimization Model for Siting and Sizing Distributed Generation Plants in Distribution Systems

    Directory of Open Access Journals (Sweden)

    Guang Chen

    2013-01-01

    Full Text Available This paper proposes a sustainability-oriented multiobjective optimization model for siting and sizing DG plants in distribution systems. Life cycle exergy (LCE) is used as a unified indicator of the entire system's environmental sustainability, and it is optimized as an objective function in the model. The other two objective functions are economic cost and expected power loss. Chance constraints are used to control the operation risks caused by the uncertain power loads and renewable energies. A semilinearized simulation method is proposed and combined with the Latin hypercube sampling (LHS) method to improve the efficiency of the probabilistic load flow (PLF) analysis, which is repeatedly performed to verify the chance constraints. A numerical study based on the modified IEEE 33-node system is performed to verify the proposed method. Numerical results show that the proposed semilinearized simulation method reduces about 93.3% of the calculation time of PLF analysis and guarantees satisfying accuracy. The results also indicate that the benefits for environmental sustainability of using DG plants can be effectively reflected by the proposed model, which helps the planner to make rational decisions towards sustainable development of the distribution system.
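
    The Latin hypercube sampling step used to draw the uncertain loads can be sketched in a few lines: each dimension gets exactly one stratified draw per interval, independently shuffled, so every 1-D projection covers the whole range evenly. The dimension and sample counts below are illustrative.

```python
# Minimal Latin hypercube sampler of the kind used to feed repeated
# probabilistic load-flow runs; n and dims are toy values.
import random

def latin_hypercube(n, dims, seed=0):
    rng = random.Random(seed)   # fixed seed for reproducibility
    cols = []
    for _ in range(dims):
        # one stratified point per interval [k/n, (k+1)/n), then shuffle
        col = [(k + rng.random()) / n for k in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))   # n points in [0, 1)^dims

pts = latin_hypercube(10, 2)
# each 1-D projection hits every one of the 10 intervals exactly once
assert sorted(int(x * 10) for x, _ in pts) == list(range(10))
```

    Compared with plain Monte Carlo, this stratification is what lets the chance constraints be verified with far fewer PLF evaluations.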

  11. Prototype of a file-based high-level trigger in CMS

    International Nuclear Information System (INIS)

    Bauer, G; Darlea, G-L; Gomez-Ceballos, G; Bawej, T; Chaze, O; Coarasa, J A; Deldicque, C; Dobson, M; Dupont, A; Gigi, D; Glege, F; Gomez-Reino, R; Hartl, C; Hegeman, J; Masetti, L; Behrens, U; Branson, J; Cittolin, S; Holzner, A; Erhan, S

    2014-01-01

    The DAQ system of the CMS experiment at the LHC is upgraded during the accelerator shutdown in 2013/14. To reduce the interdependency of the DAQ system and the high-level trigger (HLT), we investigate the feasibility of using a file-system-based HLT. Events of ∼1 MB size are built at the level-1 trigger rate of 100 kHz. The events are assembled by ∼50 builder units (BUs). Each BU writes the raw events at ∼2 GB/s to a local file system shared with O(10) filter-unit machines (FUs) running the HLT code. The FUs read the raw data from the file system, select O(1%) of the events, and write the selected events together with monitoring meta-data back to a disk. This data is then aggregated over several steps and made available for offline reconstruction and online monitoring. We present the challenges, technical choices, and performance figures from the prototyping phase. In addition, the steps to the final system implementation will be discussed.

  12. iTOUGH2 Universal Optimization Using the PEST Protocol

    International Nuclear Information System (INIS)

    Finsterle, S.A.

    2010-01-01

    , the PEST protocol [Doherty, 2007] has been implemented into iTOUGH2. This protocol enables communication between the application (which can be a single 'black-box' executable or a script or batch file that calls multiple codes) and iTOUGH2. The concept requires that for the application model: (1) Input is provided on one or more ASCII text input files; (2) Output is returned to one or more ASCII text output files; (3) The model is run using a system command (executable or script/batch file); and (4) The model runs to completion without any user intervention. For each forward run invoked by iTOUGH2, select parameters cited within the application model input files are then overwritten with values provided by iTOUGH2, and select variables cited within the output files are extracted and returned to iTOUGH2. It should be noted that the core of iTOUGH2, i.e., its optimization routines and related analysis tools, remains unchanged; it is only the communication format between input parameters, the application model, and output variables that are borrowed from PEST. The interface routines have been provided by Doherty [2007]. The iTOUGH2-PEST architecture is shown in Figure 1. This manual contains installation instructions for the iTOUGH2-PEST module, and describes the PEST protocol as well as the input formats needed in iTOUGH2. Examples are provided that demonstrate the use of model-independent optimization and analysis using iTOUGH2.

  13. Optimal dynamic control of resources in a distributed system

    Science.gov (United States)

    Shin, Kang G.; Krishna, C. M.; Lee, Yann-Hang

    1989-01-01

    The authors quantitatively formulate the problem of controlling resources in a distributed system so as to optimize a reward function and derive optimal control strategies using Markov decision theory. The control variables treated are quite general; they could be control decisions related to system configuration, repair, diagnostics, files, or data. Two algorithms for resource control in distributed systems are derived for time-invariant and periodic environments, respectively. A detailed example to demonstrate the power and usefulness of the approach is provided.
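
    The Markov-decision machinery behind such control strategies can be sketched with plain value iteration. The two-state serve/repair model, rewards, and transition probabilities below are invented for illustration and are not the paper's model.

```python
# Value iteration for a toy resource-control MDP: keep serving, or take
# the resource offline for repair. All numbers are illustrative.

def value_iteration(P, R, gamma=0.9, eps=1e-8):
    """P[a][s][s2]: transition probability; R[a][s]: expected reward."""
    n = len(P[0])
    V = [0.0] * n
    while True:
        # Q[s][a]: reward now plus discounted expected future value
        Q = [[R[a][s] + gamma * sum(P[a][s][s2] * V[s2] for s2 in range(n))
              for a in range(len(P))] for s in range(n)]
        V_new = [max(q) for q in Q]
        if max(abs(a - b) for a, b in zip(V, V_new)) < eps:
            return V_new, [q.index(max(q)) for q in Q]   # values, greedy policy
        V = V_new

# states: 0 = healthy, 1 = degraded; actions: 0 = serve, 1 = repair
P = [[[0.9, 0.1], [0.0, 1.0]],    # serve: a healthy node may degrade
     [[1.0, 0.0], [0.8, 0.2]]]    # repair: a degraded node likely recovers
R = [[1.0, 0.2], [-0.5, -0.5]]    # serving earns reward; repairing costs
V, policy = value_iteration(P, R)
print(policy)   # → [0, 1]: serve when healthy, repair when degraded
```

    The derived time-invariant policy is exactly the kind of optimal control strategy the paper obtains from Markov decision theory; the periodic-environment case would re-solve with time-indexed transition matrices.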

  14. Sizing solar home systems for optimal development impact

    International Nuclear Information System (INIS)

    Bond, M.; Fuller, R.J.; Aye, Lu

    2012-01-01

    The paper compares the development impact of three different sized solar home systems (SHS) (10, 40 and 80 Wp) installed in rural East Timor. It describes research aimed to determine whether the higher cost of the larger systems was justified by additional household benefits. To assess the development impact of these different sizes of SHS the research used a combination of participatory and quantitative tools. Participatory exercises were conducted with seventy-seven small groups of SHS users in twenty-four rural communities and supplemented with a household survey of 195 SHS users. The combined results of these evaluation processes enabled the three sizes of SHS to be compared for two types of benefits—those associated with carrying out important household tasks and attributes of SHS which were advantageous compared to the use of non-electric lighting sources. The research findings showed that the small, 10 Wp SHS provided much of the development impact of the larger systems. It suggests three significant implications for the design of SHS programs in contexts such as East Timor: provide more small systems rather than fewer large ones; provide lighting in the kitchen wherever possible; and carefully match SHS operating costs to the incomes of rural users. - Highlights: ► We compare development benefits for 3 sizes of solar home systems—10, 40 and 80 Wp. ► Benefit assessment uses a combination of qualitative and quantitative approaches. ► Small systems are found to provide much of the benefits of the larger systems. ► To maximise benefits systems should be fitted with luminaires in kitchen areas. ► Financial benefits are important to users and may not accrue for large systems.

  15. The Optimal Wavelengths for Light Absorption Spectroscopy Measurements Based on Genetic Algorithm-Particle Swarm Optimization

    Science.gov (United States)

    Tang, Ge; Wei, Biao; Wu, Decao; Feng, Peng; Liu, Juan; Tang, Yuan; Xiong, Shuangfei; Zhang, Zheng

    2018-03-01

    To select the optimal wavelengths in light extinction spectroscopy measurements, genetic algorithm-particle swarm optimization (GAPSO), based on the genetic algorithm (GA) and particle swarm optimization (PSO), is adopted. The change of the optimal wavelength positions under different feature size parameters and distribution parameters is evaluated. Moreover, the Monte Carlo method based on random probability is used to identify the number of optimal wavelengths, and good inversion of the particle size distribution is obtained. The method proved to have the advantage of resisting noise. In order to verify the feasibility of the algorithm, spectra with bands ranging from 200 to 1000 nm are computed. On this basis, measured data of standard particles are used to verify the algorithm.

  16. Economies of scale and optimal size of hospitals: Empirical results for Danish public hospitals

    DEFF Research Database (Denmark)

    Kristensen, Troels

    ... whether the current configuration of Danish hospitals is subject to scale economies that may justify such plans and to estimate an optimal hospital size. Methods: We estimate cost functions using panel data on total costs, DRG-weighted casemix, and number of beds for three years from 2004-2006. A short-run cost function is used to derive estimates of long-run scale economies by applying the envelope condition. Results: We identify moderate to significant long-run economies of scale when applying two alternative ... The optimal number of beds per hospital is estimated to be 275 beds per site. Sensitivity analysis to partial changes in model parameters yields a joint 95% confidence interval in the range 130-585 beds per site. Conclusions: The results indicate that it may be appropriate to consolidate the production of small...

  17. Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process

    Science.gov (United States)

    Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh

    2018-06-01

    Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, the result is geometrical distortion and chordal error. Parts manufactured with this file might not satisfy geometric dimensioning and tolerance requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is most suitable for the geometry under consideration. The wheel cap part is then manufactured on a Stratasys MOJO FDM machine, and the surface roughness of the part is measured on a Talysurf surface roughness tester.
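The general triangular midpoint subdivision that the study ranks best is easy to sketch on an STL-style triangle soup (a minimal illustration, not MeshLab's implementation): every triangle is split into four at its edge midpoints, so each pass quadruples the face count while a flat region's geometry is unchanged.

```python
import numpy as np

def midpoint_subdivide(triangles):
    """One pass of general triangular midpoint subdivision on an STL-style
    triangle soup: each triangle is replaced by four via its edge midpoints."""
    out = []
    for tri in triangles:
        a, b, c = (np.asarray(p, dtype=float) for p in tri)
        ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

# One flat triangle: 1 -> 4 -> 16 faces over two passes.
mesh = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
once = midpoint_subdivide(mesh)
twice = midpoint_subdivide(once)
print(len(once), len(twice))   # 4 16
```

On a curved part the new midpoint vertices would additionally be projected onto the underlying surface (as the modified butterfly and Loop schemes approximate), which is what actually reduces the chordal error.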

  18. JENDL Dosimetry File

    International Nuclear Information System (INIS)

    Nakazawa, Masaharu; Iguchi, Tetsuo; Kobayashi, Katsuhei; Iwasaki, Shin; Sakurai, Kiyoshi; Ikeda, Yujiro; Nakagawa, Tsuneo.

    1992-03-01

    The JENDL Dosimetry File based on JENDL-3 was compiled and integral tests of cross section data were performed by the Dosimetry Integral Test Working Group of the Japanese Nuclear Data Committee. Data stored in the JENDL Dosimetry File are the cross sections and their covariance data for 61 reactions. The cross sections were mainly taken from JENDL-3 and the covariances from IRDF-85. For some reactions, data were adopted from other evaluated data files. The data are given in the neutron energy region below 20 MeV in both point-wise and group-wise files in the ENDF-5 format. In order to confirm the reliability of the data, several integral tests were carried out: comparison with the data in IRDF-85 and with average cross sections measured in fission neutron fields, fast reactor spectra, DT neutron fields and Li(d, n) neutron fields. As a result, it has been found that the JENDL Dosimetry File gives better results than IRDF-85, although some problems remain to be improved in the future. The contents of the JENDL Dosimetry File and the results of the integral tests are described in this report. All of the dosimetry cross sections are shown in a graphical form. (author) 76 refs

  19. JENDL Dosimetry File

    Energy Technology Data Exchange (ETDEWEB)

    Nakazawa, Masaharu; Iguchi, Tetsuo [Tokyo Univ. (Japan). Faculty of Engineering; Kobayashi, Katsuhei [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.; Iwasaki, Shin [Tohoku Univ., Sendai (Japan). Faculty of Engineering; Sakurai, Kiyoshi; Ikeda, Yujiro; Nakagawa, Tsuneo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1992-03-15

    The JENDL Dosimetry File based on JENDL-3 was compiled and integral tests of cross section data were performed by the Dosimetry Integral Test Working Group of the Japanese Nuclear Data Committee. Data stored in the JENDL Dosimetry File are the cross sections and their covariance data for 61 reactions. The cross sections were mainly taken from JENDL-3 and the covariances from IRDF-85. For some reactions, data were adopted from other evaluated data files. The data are given in the neutron energy region below 20 MeV in both point-wise and group-wise files in the ENDF-5 format. In order to confirm the reliability of the data, several integral tests were carried out: comparison with the data in IRDF-85 and with average cross sections measured in fission neutron fields, fast reactor spectra, DT neutron fields and Li(d,n) neutron fields. As a result, it has been found that the JENDL Dosimetry File gives better results than IRDF-85, although some problems remain to be improved in the future. The contents of the JENDL Dosimetry File and the results of the integral tests are described in this report. All of the dosimetry cross sections are shown in a graphical form.

  20. The Health Assessment Longitudinal File imperative: foundation for improving the health of the force.

    Science.gov (United States)

    Kemper, Judith A; Donahue, Donald A; Harris, Judith S

    2003-08-01

    A smaller active duty force and an increased operational tempo have made the Reserve components (RC) essential elements in the accomplishment of the mission of the U.S. Army. One critical factor in meeting the mission is maintaining the optimal health of each soldier. Baseline health data about the RC are currently not being collected, even though increasing numbers of reserve soldiers are being activated. The Annual Health Certification and Survey is being developed as a way to meet the RCs' statutory requirement for annual certification of health while at the same time generating and tracking baseline data on each reservist in a longitudinal health file, the Health Assessment Longitudinal File. This article discusses the Annual Health Certification Questionnaire/Health Assessment Longitudinal File, which will greatly enhance the Army's ability to accurately certify the health status of the RC and track health in relation to training, mission activities, and deployment.

  1. Optimization strategies for discrete multi-material stiffness optimization

    DEFF Research Database (Denmark)

    Hvejsel, Christian Frier; Lund, Erik; Stolpe, Mathias

    2011-01-01

    The design of composite laminated lay-ups is formulated as a discrete multi-material selection problem. The design problem can be modeled as a non-convex mixed-integer optimization problem. Such problems are in general only solvable to global optimality for small to moderate sized problems. To attack ... which numerically confirm the sought properties of the new scheme in terms of convergence to a discrete solution.

  2. Optimal sizing and location of SVC devices for improvement of voltage profile in distribution network with dispersed photovoltaic and wind power plants

    International Nuclear Information System (INIS)

    Savić, Aleksandar; Đurišić, Željko

    2014-01-01

    Highlights: • Significant voltage variations arise in a distribution network with dispersed generation. • The use of SVC devices to improve the voltage profiles is an effective solution. • Number, size and location of SVC devices are optimized using a genetic algorithm. • The methodology is presented on an example of a real distribution system in Serbia. - Abstract: Intermittent power generation of wind turbines and photovoltaic plants creates voltage disturbances in power distribution networks which may not be acceptable to the consumers. To control the deviations of the nodal voltages, it is necessary to use fast dynamic control of the reactive power in the distribution network. Implementation of power electronic devices such as the Static Var Compensator (SVC) enables effective control of the nodal voltages in both the dynamic and the static state of the distribution network. This paper analyzes the optimal sizing and location of SVC devices using a genetic algorithm to improve the nodal voltage profile in a distribution network with dispersed photovoltaic and wind power plants. Practical application of the developed methodology was tested on an example of a real distribution network.
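The fitness driving such an optimization can be illustrated with a toy linearized feeder model (all numbers are hypothetical, and a brute-force scan stands in for the genetic algorithm, which it would have to match on a problem this small): capital cost grows with SVC size while a penalty term rewards flattening the voltage profile.

```python
import numpy as np

# Toy 6-bus radial feeder, linearized: v = v0 + dV/dQ * q, where q is the
# reactive injection (Mvar) of one SVC placed at a single bus. Both the
# voltage profile and the sensitivities are made-up illustrative values.
v0 = np.array([1.00, 0.97, 0.95, 0.93, 0.96, 0.91])          # p.u. under PV/wind swings
sens = np.diag([0.004, 0.008, 0.012, 0.016, 0.010, 0.020])   # dV/dQ per bus

def cost(bus, size_mvar, penalty=10.0):
    """Fitness: capital cost proportional to size, plus penalty on deviation."""
    v = v0 + sens[:, bus] * size_mvar
    deviation = np.abs(v - 1.0).sum()
    return 0.05 * size_mvar + penalty * deviation

best = min(((b, q) for b in range(6) for q in range(0, 11)),
           key=lambda bq: cost(*bq))
print(best)   # the (bus, Mvar) pair with the lowest combined cost on this toy data
```

The paper's GA searches the same kind of objective but over the number of SVCs as well as their sizes and locations, where exhaustive search is no longer feasible.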

  3. Optimal production lot size and reorder point of a two-stage supply chain while random demand is sensitive with sales teams' initiatives

    Science.gov (United States)

    Sankar Sana, Shib

    2016-01-01

    The paper develops a production-inventory model of a two-stage supply chain consisting of one manufacturer and one retailer to study production lot size/order quantity, reorder point and sales teams' initiatives, where demand of the end customers depends simultaneously on a random variable and on the sales teams' initiatives. The manufacturer produces the retailer's order quantity in one lot, in which the procurement cost per unit quantity follows a realistic convex function of the production lot size. In the chain, the cost of the sales team's initiatives/promotion efforts and the wholesale price of the manufacturer are negotiated at the points at which their optimum profits come nearest to their target profits. This study suggests that the management of firms determine the optimal order quantity/production quantity, reorder point and sales teams' initiatives/promotional effort in order to achieve their maximum profits. An analytical method is applied to determine the optimal values of the decision variables. Finally, numerical examples with graphical presentation and sensitivity analysis of the key parameters are presented to illustrate more insights of the model.
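The shape of the trade-off can be reproduced with a toy expected-profit function (the functional forms and all numbers are hypothetical stand-ins for the paper's model, which is solved analytically): a grid search over lot size and sales-team effort locates the profit-maximizing pair.

```python
import numpy as np

def expected_profit(q, effort, price=25.0, setup=100.0, hold=0.4):
    """Toy stand-in for the paper's model: expected demand rises with
    diminishing returns in sales-team effort, and the per-unit
    procurement cost is convex in the lot size q."""
    demand = 80.0 + 12.0 * np.sqrt(effort)      # effort-sensitive mean demand
    unit_cost = 8.0 + 200.0 / q + 0.01 * q      # convex in q (setup spread + scale penalty)
    sales = min(q, demand)                      # cannot sell more than is demanded
    return price * sales - unit_cost * q - setup - hold * q / 2 - effort

grid = [(q, e) for q in range(50, 301, 5) for e in range(0, 101, 5)]
q_star, e_star = max(grid, key=lambda g: expected_profit(*g))
print(q_star, e_star)
```

The sensitivity analysis of the paper corresponds to re-running such a search while perturbing the cost and demand parameters.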

  4. Comparison of three digital radiographic imaging systems for the visibility of endodontic files

    International Nuclear Information System (INIS)

    Park, Jong Won; Kim, Eun Kyung; Han, Won Jeong

    2004-01-01

    To compare three digital radiographic imaging sensors by evaluating the visibility of endodontic file tips with interobserver reproducibility and by assessing subjectively the clarity of images in comparison with x-ray film images. Forty-five extracted sound premolars were used for this study. Fifteen plaster blocks were made with three premolars each, and size 8, 10 and 15 K-flexofiles were inserted into the root canals of the premolars. They were radiographically exposed using periapical x-ray films (Kodak Insight Dental film, Eastman Kodak Company, Rochester, USA), Digora imaging plates (Soredex-Orion Co., Helsinki, Finland), CDX 2000HQ sensors (Biomedisys Co., Seoul, Korea), and CDR sensors (Schick Inc., Long Island, USA). The visibility of endodontic files was evaluated with interobserver reproducibility, which was calculated as the standard deviations of the X, Y coordinates of endodontic file tips measured on digital images by three oral and maxillofacial radiologists. The clarity of images was assessed subjectively using three grades, i.e., plus, equal, and minus, in comparison with the conventional x-ray film images. Interobserver reproducibility of endodontic file tips was the highest with the CDR sensor (p<0.05), except at the Y coordinate of the size 15 file. In the subjective assessment of image clarity, the plus grade was the most frequent for the CDR sensor at all sizes of endodontic files (p<0.05). The CDR sensor was superior to the other sensors, the CDX 2000HQ sensor and the Digora imaging plate, in the evaluation of interobserver reproducibility of endodontic file tips and the subjective assessment of image clarity.

  5. Qualitative and quantitative assessment of step size adaptation rules

    DEFF Research Database (Denmark)

    Krause, Oswin; Glasmachers, Tobias; Igel, Christian

    2017-01-01

    We present a comparison of step size adaptation methods for evolution strategies, covering recent developments in the field. Following recent work by Hansen et al. we formulate a concise list of performance criteria: a) fast convergence of the mean, b) near-optimal fixed point of the normalized step size, ... We find that cumulative step size adaptation (CSA) and two-point adaptation (TPA) provide reliable estimates of the optimal step size. We further find that removing the evolution path of CSA still leads to a reliable algorithm without the computational requirements of CSA.

  6. Ada Integrated Environment III Computer Program Development Specification. Volume III. Ada Optimizing Compiler.

    Science.gov (United States)

    1981-12-01

    Symbol Map: library-file.library-unit{.subunit}.SYMAP; Statement Map: library-file.library-unit{.subunit}.SMAP; Type Map: library-file.library-unit{.subunit}.TMAP. The library ... generator SYMAP (Symbol Map), code generator SMAP (Updated Statement Map), code generator TMAP (Type Map). A.3.5 The PUNIT Command. The PUNIT ... Core.Stmtmap) NAME Tmap (Core.Typemap) END. Example A-3: Compiler Command Stream for the Code Generator. Texas Instruments, Ada Optimizing Compiler.

  7. A general solution for optimal egg size during external fertilization, extended scope for intermediate optimal egg size and the introduction of Don Ottavio 'tango'

    NARCIS (Netherlands)

    Luttikhuizen, PC; Honkoop, PJC; Drent, J; van der Meer, J

    2004-01-01

    Egg sizes of marine invertebrates vary greatly, both within and between species. Among the proposed causes of this are a trade-off between egg size, egg number and survival probability of offspring, and a selection pressure exerted by sperm limitation during external fertilization. Although larger

  8. Sample size optimization in nuclear material control. 1

    International Nuclear Information System (INIS)

    Gladitz, J.

    1982-01-01

    Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)
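For the simplest case, a one-sided test for a mean shift under normality (a sketch, not the report's actual materials-balance equations), the minimum sample size follows directly from the normal quantiles of the false-alarm and non-detection probabilities:

```python
from math import ceil
from statistics import NormalDist

def min_sample_size(delta, sigma, alpha=0.05, beta=0.05):
    """Smallest n detecting a mean shift `delta` (the loss or diversion to be
    detected) when measurements have standard deviation `sigma`, with
    false-alarm probability alpha and non-detection probability beta:
        n >= ((z_{1-alpha} + z_{1-beta}) * sigma / delta)**2
    """
    z = NormalDist().inv_cdf
    return ceil(((z(1 - alpha) + z(1 - beta)) * sigma / delta) ** 2)

# Detect a shift of 1 unit against measurement noise of 2 units,
# with 5% false-alarm rate and 95% detection probability.
print(min_sample_size(delta=1.0, sigma=2.0))   # → 44
```

The dependence is the expected one: halving the shift to be detected quadruples the required sample size, and tightening alpha or beta grows n through the quantiles.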

  9. A File Archival System

    Science.gov (United States)

    Fanselow, J. L.; Vavrus, J. L.

    1984-01-01

    ARCH, file archival system for DEC VAX, provides for easy offline storage and retrieval of arbitrary files on DEC VAX system. System designed to eliminate situations that tie up disk space and lead to confusion when different programmers develop different versions of same programs and associated files.

  10. Optimal Sizing of Vanadium Redox Flow Battery Systems for Residential Applications Based on Battery Electrochemical Characteristics

    Directory of Open Access Journals (Sweden)

    Xinan Zhang

    2016-10-01

    The penetration of solar photovoltaic (PV) systems in residential areas contributes to the generation and usage of renewable energy. Despite its advantages, the PV system also creates problems caused by the intermittency of renewable energy. As suggested by researchers, such problems deteriorate the applicability of the PV system and have to be resolved by employing a battery energy storage system (BESS). With concern for the high investment cost, the choice of a cost-effective BESS with proper sizing is necessary. To this end, this paper proposes the employment of a vanadium redox flow battery (VRB), which possesses a long cycle life and high energy efficiency, for residential users with PV systems. It further proposes methods of computing the capital and maintenance cost of VRB systems and evaluating battery efficiency based on VRB electrochemical characteristics. Furthermore, by considering the cost and efficiency of VRB, the prevalent time-of-use electricity price, the solar feed-in tariff, the solar power profile and the user load pattern, an optimal sizing algorithm for VRB systems is proposed. Simulation studies are carried out to show the effectiveness of the proposed methods.

  11. TH-C-18A-12: Evaluation of the Impact of Body Size and Tube Output Limits in the Optimization of Fast Scanning with High-Pitch Dual Source CT

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez Giraldo, J [Siemens Medical Solutions USA, Inc (United States); Mileto, A.; Hurwitz, L.; Marin, D. [Duke University Medical Center, Durham NC (United States)

    2014-06-15

    Purpose: To evaluate the impact of body size and tube power limits in the optimization of fast scanning with high-pitch dual source CT (DSCT). Methods: A previously validated MERCURY phantom, made of polyethylene, with circular cross-sections of diameter 16, 23, 30 and 37 cm connected through tapered sections, was scanned using a second generation DSCT system. The DSCT operates with two independently controlled x-ray tube generators offering up to 200 kW power reserve (100 kW per tube). The entire length of the phantom (42 cm) was scanned with two protocols: A) a standard single-source CT (SSCT) protocol with a pitch of 0.8, and B) a DSCT protocol with high-pitch values ranging from 1.6 to 3.2 (0.2 steps). All scans used 120 kVp with 150 quality reference mAs using automatic exposure control. Scanner radiation output (CTDIvol) and effective mAs values were extracted retrospectively from DICOM files for each slice. Image noise was recorded. All variables were assessed relative to phantom diameter. Results: With standard-pitch SSCT, the scanner radiation output (and tube current) was progressively adapted with increasing size, from 6 mGy (120 mAs) up to 15 mGy (270 mAs) from the thinnest (16 cm) to the thickest diameter (37 cm). By comparison, using high pitch (3.2), the scanner output was bounded at about 8 mGy (140 mAs), independent of phantom diameter. Although the high-pitch mode delivered lower radiation output than standard pitch for the same scan, the image noise was higher, particularly for larger diameters. To match the radiation output adaptation of standard pitch, a high-pitch mode of 1.6 was needed, with the advantage of scanning twice as fast. Conclusion: To maximize the benefits of fast scanning with high-pitch DSCT, the body size and tube power limits of the system need to be considered such that a good balance between speed of acquisition and image quality is achieved. JCRG is an employee of Siemens Medical Solutions USA Inc.

  12. Comparative evaluation of debris extruded apically by using, Protaper retreatment file, K3 file and H-file with solvent in endodontic retreatment

    Directory of Open Access Journals (Sweden)

    Chetna Arora

    2012-01-01

    Aim: The aim of this study was to evaluate the apical extrusion of debris, comparing two engine-driven systems and a hand instrumentation technique during root canal retreatment. Materials and Methods: Forty-five human permanent mandibular premolars were prepared using the step-back technique and obturated with gutta-percha/zinc oxide eugenol sealer and the cold lateral condensation technique. The teeth were divided into three groups: Group A: Protaper retreatment file, Group B: K3 file, Group C: H-file with tetrachloroethylene. All the canals were irrigated with 20 ml distilled water during instrumentation. Debris extruded along with the irrigating solution during the retreatment procedure was carefully collected in preweighed Eppendorf tubes. The tubes were stored in an incubator for 5 days, placed in a desiccator and then re-weighed. The weight of dry debris was calculated by subtracting the weight of the tube before instrumentation from its weight after instrumentation. Data were analyzed using two-way ANOVA and a post hoc test. Results: There was a statistically significant difference in the apical extrusion of debris between hand instrumentation and the Protaper retreatment file and K3 file. The difference in the amount of debris extruded by the Protaper retreatment file and the K3 file instrumentation techniques was not statistically significant. All three instrumentation techniques produced apically extruded debris and irrigant. Conclusion: The best way to minimize the extrusion of debris is to adopt a crown-down technique; therefore the use of a rotary technique (Protaper retreatment file, K3 file) is recommended.

  13. Computer Forensics Method in Analysis of Files Timestamps in Microsoft Windows Operating System and NTFS File System

    Directory of Open Access Journals (Sweden)

    Vesta Sergeevna Matveeva

    2013-02-01

    All existing file browsers display three timestamps for every file in the NTFS file system, and many utilities can manipulate these temporal attributes to conceal traces of file use. However, every file in NTFS has eight timestamps, stored in its file record, which can be used to detect the substitution of attributes. The authors suggest a method of revealing the original timestamps after replacement, and an automated variant of it for the case of a set of files.
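A minimal heuristic comparison of the two timestamp sets can be sketched as follows (the field names and the specific indicators are illustrative, not the authors' method): the four $STANDARD_INFORMATION timestamps, which file browsers display and which manipulation utilities rewrite, are checked against the $FILE_NAME copies, which such utilities typically leave untouched.

```python
from datetime import datetime

def timestomp_indicators(si, fn):
    """Heuristic checks comparing the $STANDARD_INFORMATION timestamps
    against the $FILE_NAME copies stored in the same NTFS file record."""
    flags = []
    # A $SI creation time earlier than the $FN creation time suggests the
    # visible timestamps were back-dated after the file was created.
    if si["created"] < fn["created"]:
        flags.append("$SI creation predates $FN creation")
    # Many manipulation tools set whole seconds, zeroing sub-second precision.
    if all(t.microsecond == 0 for t in si.values()):
        flags.append("all $SI sub-second fields are zero")
    return flags

si = {"created": datetime(2020, 1, 1), "modified": datetime(2020, 1, 1),
      "accessed": datetime(2020, 1, 1), "mft_changed": datetime(2020, 1, 1)}
fn = {"created": datetime(2023, 6, 1, 9, 30, 12, 123456)}
print(timestomp_indicators(si, fn))
```

In a real forensic workflow both attribute sets would be parsed from the MFT record itself rather than supplied as dictionaries.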

  14. A cost-efficient method to optimize package size in emerging markets

    NARCIS (Netherlands)

    Gamez-Alban, H.M.; Soto-Cardona, O.C.; Mejia Argueta, C.; Sarmiento, A.T.

    2015-01-01

    Packaging links the entire supply chain and coordinates all participants in the process to give a flexible and effective response to customer needs in order to maximize satisfaction at optimal cost. This research proposes an optimization model to define the minimum total cost combination of outer

  15. Comparative Analysis of Battery Behavior with Different Modes of Discharge for Optimal Capacity Sizing and BMS Operation

    Directory of Open Access Journals (Sweden)

    Mazhar Abbas

    2016-10-01

    Battery-operated systems are always concerned with the proper management and sizing of a battery. A traditional Battery Management System (BMS) only includes battery-aware task scheduling based on the discharge characteristics of the whole battery pack and does not take into account the mode of the load being served by the battery. An efficient and intelligent BMS, on the other hand, should monitor the battery at the cell level and track the load with due consideration of the load mode. Depending upon the load modes, the common modes of discharge (MOD) of a battery identified so far are Constant Power Mode (CPM), Constant Current Mode (CCM) and Constant Impedance Mode (CIM). This paper comparatively analyzes the discharging behavior of batteries at the individual cell level for different load modes. The difference in discharging behavior from mode to mode motivates studying the mode-dependent behavior of a battery before its deployment in an application. Based on simulation results, optimal capacity sizing and BMS operation of the battery for an assumed situation in a remote microgrid are proposed.
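The mode dependence is visible in a minimal single-cell model (linear open-circuit voltage, constant internal resistance; all parameters are hypothetical): under a constant-power load the current rises as the terminal voltage falls, so the cell follows a different discharge trajectory than under constant current.

```python
# Minimal single-cell model: open-circuit voltage falls linearly with state of
# charge, internal resistance r is constant. Integrate until the terminal
# voltage hits the cutoff, once per mode of discharge.
def run_time(mode, level, capacity_ah=2.0, v_full=4.2, v_empty=3.0, r=0.05,
             cutoff=3.2, dt_s=1.0):
    soc, t = 1.0, 0.0
    while soc > 0:
        ocv = v_empty + (v_full - v_empty) * soc
        if mode == "CCM":
            i = level                                # constant current (A)
        else:                                        # "CPM": v*i = P with v = ocv - i*r
            disc = ocv * ocv - 4 * r * level
            if disc < 0:                             # load power no longer sustainable
                break
            i = (ocv - disc ** 0.5) / (2 * r)
        v = ocv - i * r
        if v < cutoff:
            break
        soc -= i * dt_s / 3600.0 / capacity_ah
        t += dt_s
    return t / 3600.0                                # hours to cutoff

print(run_time("CCM", 1.0), run_time("CPM", 3.7))
```

Constant impedance (CIM) would add a third branch with i = ocv / (R_load + r); comparing the three runtimes for nominally equal loads is exactly the cell-level comparison the paper performs with a richer battery model.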

  16. 76 FR 43679 - Filing via the Internet; Notice of Additional File Formats for efiling

    Science.gov (United States)

    2011-07-21

    ... list of acceptable file formats the four-character file extensions for Microsoft Office 2007/2010... files from Office 2007 or Office 2010 in an Office 2003 format prior to submission. Dated: July 15, 2011...

  17. Neighborhood size and local geographic variation of health and social determinants

    Directory of Open Access Journals (Sweden)

    Emch Michael

    2005-06-01

    Background: Spatial filtering using a geographic information system (GIS) is often used to smooth health and ecological data. Smoothing disease data can help us understand local (neighborhood) geographic variation and the ecological risk of diseases. Analyses that use small neighborhood sizes yield individualistic patterns, while large sizes reveal the global structure of the data, where local variation is obscured. Therefore, choosing an optimal neighborhood size is important for understanding ecological associations with diseases. This paper uses Hartley's test of homogeneity of variance (Fmax) as a methodological solution for selecting optimal neighborhood sizes. Data from a study area in Vietnam are used to test the suitability of this method. Results: The Hartley's Fmax test was applied to spatial variables for two enteric diseases and two socioeconomic determinants. Various neighbourhood sizes were tested using a two-step process to implement the Fmax test. First the variance of each neighborhood was compared to the highest neighborhood variance (upper, Fmax1), and then they were compared with the lowest neighborhood variance (lower, Fmax2). A significant value of Fmax1 indicates that the neighborhood does not reveal the global structure of the data; in contrast, a significant value of Fmax2 implies that the neighborhood data are not individualistic. The neighborhoods that lie between the lower and the upper limits are the optimal neighbourhood sizes. Conclusion: The results of the tests provide different neighbourhood sizes for different variables, suggesting that the optimal neighbourhood size is data dependent. In ecology, it is well known that observation scales may influence ecological inference. Therefore, selecting an optimal neighborhood size is essential for understanding disease ecologies. The optimal neighbourhood selection method tested in this paper can be useful in health and ecological studies.
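Hartley's statistic itself is just the ratio of the largest to the smallest group variance. Applied across candidate neighbourhood radii on synthetic data it can be sketched as follows (a real application would compare the ratios against tabulated Fmax critical values, as the paper does):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical point data: a noisy disease-rate surface at 400 locations.
xy = rng.uniform(0, 10, size=(400, 2))
rate = np.sin(xy[:, 0]) + 0.3 * rng.standard_normal(400)

def smoothed_variance(radius):
    """Variance of the spatially filtered values for one neighbourhood size."""
    smoothed = []
    for p in xy:
        mask = np.hypot(*(xy - p).T) <= radius   # circular neighbourhood
        smoothed.append(rate[mask].mean())
    return np.var(smoothed)

radii = [0.5, 1.0, 2.0, 4.0, 8.0]
variances = {r: smoothed_variance(r) for r in radii}
fmax = max(variances.values()) / min(variances.values())
print({r: round(v, 3) for r, v in variances.items()}, round(fmax, 2))
```

Small radii preserve individualistic variation (high variance), the largest radius smooths almost everything away; the radii whose variances fall between the two significance limits are the candidates the paper's procedure would retain.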

  18. UPIN Group File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Group Unique Physician Identifier Number (UPIN) File is the business entity file that contains the group practice UPIN and descriptive information. It does NOT...

  19. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    Science.gov (United States)

    Chao, Tian-Jy; Kim, Younghun

    2015-01-06

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.

  20. GIS-based approach for optimal siting and sizing of renewables considering techno-environmental constraints and the stochastic nature of meteorological inputs

    Science.gov (United States)

    Daskalou, Olympia; Karanastasi, Maria; Markonis, Yannis; Dimitriadis, Panayiotis; Koukouvinos, Antonis; Efstratiadis, Andreas; Koutsoyiannis, Demetris

    2016-04-01

    Following the legislative EU targets and taking advantage of its high renewable energy potential, Greece can obtain significant benefits from developing its water, solar and wind energy resources. In this context we present a GIS-based methodology for the optimal sizing and siting of solar and wind energy systems at the regional scale, which is tested in the Prefecture of Thessaly. First, we assess the wind and solar potential, taking into account the stochastic nature of the associated meteorological processes (i.e. wind speed and solar radiation, respectively), which is an essential component for both planning (i.e., type selection and sizing of photovoltaic panels and wind turbines) and management purposes (i.e., real-time operation of the system). For the optimal siting, we assess the efficiency and economic performance of the energy system, also accounting for a number of constraints, associated with topographic limitations (e.g., terrain slope, proximity to road and electricity grid network, etc.), the environmental legislation and other land use constraints. Based on this analysis, we investigate favorable alternatives using technical, environmental as well as financial criteria. The final outcome is GIS maps that depict the available energy potential and the optimal layout for photovoltaic panels and wind turbines over the study area. We also consider a hypothetical scenario of future development of the study area, in which we assume the combined operation of the above renewables with major hydroelectric dams and pumped-storage facilities, thus providing a unique hybrid renewable system, extended at the regional scale.

  1. Economic considerations in the optimal size and number of reserve sites

    NARCIS (Netherlands)

    Groeneveld, R.A.

    2005-01-01

    The debate among ecologists on the optimal number of reserve sites under a fixed maximum total reserve area-the single large or several small (SLOSS) problem-has so far neglected the economic aspects of the problem. This paper argues that economic considerations can affect the optimal number and

  2. 12 CFR 5.4 - Filing required.

    Science.gov (United States)

    2010-01-01

    ... CORPORATE ACTIVITIES Rules of General Applicability § 5.4 Filing required. (a) Filing. A depository institution shall file an application or notice with the OCC to engage in corporate activities and... advise an applicant through a pre-filing communication to send the filing or submission directly to the...

  3. Joint Economic Lot Sizing Optimization in a Supplier-Buyer Inventory System When the Supplier Offers Decremental Temporary Discounts

    Directory of Open Access Journals (Sweden)

    Diana Puspita Sari

    2012-02-01

    This research discusses mathematical models of joint economic lot size optimization in a supplier-buyer inventory system in a situation where the supplier offers decremental temporary discounts during a sale period. Here, the sale period consists of n phases and the discount offered descends over the phases: the highest discount is given when orders are placed in the first phase, while the lowest one is given when they are placed in the last phase. In this situation, the supplier attempts to attract the buyer to place orders as early as possible during the sale period, and the buyer responds to these offers by ordering a special quantity in one of the phases. In this paper, we propose such a forward buying model with discount-proportionally-distributed time phases. To examine the behaviour of the proposed model, we conducted numerical experiments, assuming three phases of discounts during the sale period. We then compared the total joint costs of a special order placed in each phase for two scenarios: the independent situation, with no coordination between the buyer and the supplier, and the coordinated model. Our results showed that the coordinated model outperforms the independent model in terms of total joint costs. We finally conducted a sensitivity analysis to examine further behaviour of the proposed model. Keywords: supplier-buyer inventory system, forward buying model, decremental temporary discounts, joint economic lot sizing, optimization.

  4. A simulator-independent optimization tool based on genetic algorithm applied to nuclear reactor design

    International Nuclear Information System (INIS)

    Abreu Pereira, Claudio Marcio Nascimento do; Schirru, Roberto; Martinez, Aquilino Senra

    1999-01-01

    This paper presents an engineering optimization tool based on a genetic algorithm, implemented according to the method proposed in recent work that demonstrated the feasibility of using this technique in nuclear reactor core design. The tool is simulator-independent in the sense that it can be customized to use most simulators that read their input parameters from formatted text files and write their outputs to text files. As nuclear reactor simulators generally use this kind of interface, the proposed tool can play an important role in nuclear reactor design. Research reactors often use non-conventional design approaches, creating situations in which the nuclear engineer faces new optimization problems. In such cases, a good optimization technique, together with an easy customization facility and a friendly man-machine interface, is very attractive. Here, the tool is described and some of its advantages are outlined. (author)
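
    The file-based coupling described in this abstract can be sketched as follows. This is a minimal sketch, not the tool's actual interface: the parameter names (`enrichment`, `pitch`), the file names, and the mock simulator's figure of merit are all hypothetical stand-ins.

```python
import os
import random
import tempfile

def write_input(path, params):
    # Write candidate parameters as a simple formatted text file.
    with open(path, "w") as f:
        for name, value in params.items():
            f.write(f"{name} = {value:.6f}\n")

def mock_simulator(in_path, out_path):
    # Stand-in for a reactor simulator: reads the formatted input file,
    # writes a figure of merit to a text output file.
    params = {}
    with open(in_path) as f:
        for line in f:
            name, value = line.split("=")
            params[name.strip()] = float(value)
    # Hypothetical objective: best "performance" at enrichment=3.0, pitch=1.3.
    merit = -((params["enrichment"] - 3.0) ** 2 + (params["pitch"] - 1.3) ** 2)
    with open(out_path, "w") as f:
        f.write(f"merit = {merit:.6f}\n")

def read_output(path):
    with open(path) as f:
        return float(f.read().split("=")[1])

def evaluate(params, workdir):
    in_path = os.path.join(workdir, "input.txt")
    out_path = os.path.join(workdir, "output.txt")
    write_input(in_path, params)
    mock_simulator(in_path, out_path)   # a real tool would spawn the simulator
    return read_output(out_path)

def genetic_search(generations=60, pop_size=20, seed=1):
    rng = random.Random(seed)
    workdir = tempfile.mkdtemp()
    pop = [{"enrichment": rng.uniform(1, 5), "pitch": rng.uniform(1, 2)}
           for _ in range(pop_size)]
    for _ in range(generations):
        # Elitist selection: keep the best half, breed the rest.
        scored = sorted(pop, key=lambda p: evaluate(p, workdir), reverse=True)
        elite = scored[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            # Averaging crossover plus Gaussian mutation.
            child = {k: (a[k] + b[k]) / 2 + rng.gauss(0, 0.05) for k in a}
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda p: evaluate(p, workdir))

best = genetic_search()
```

    Only `write_input` and `read_output` would need customizing to a given simulator's text formats; `mock_simulator` would be replaced by a subprocess call to the actual code.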

  5. Learning analytics in serious gaming: uncovering the hidden treasury of game log files

    NARCIS (Netherlands)

    Westera, Wim; Nadolski, Rob; Hummel, Hans

    2018-01-01

    This paper presents an exploratory analysis of existing log files of the VIBOA environmental policy games at Utrecht University. For reasons of statistical power we’ve combined student cohorts 2008, 2009, 2010, and 2011, which led to a sample size of 118 students. The VIBOA games are inquiry-based

  6. Huygens file service and storage architecture

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.; Stabell-Kulo, Tage; Stabell-Kulo, Tage

    1993-01-01

    The Huygens file server is a high-performance file server which is able to deliver multi-media data in a timely manner while also providing clients with ordinary “Unix” like file I/O. The file server integrates client machines, file servers and tertiary storage servers in the same storage

  8. A numerical approach for size optimization and performance prediction of solar PV-hybrid power systems

    International Nuclear Information System (INIS)

    Zahedi, A.; Calia, N.

    2001-10-01

    Iran is blessed with an abundance of sunlight almost all year round. So, with the right planning and strategies, coupled to the right technology and market development, the potential for new renewable energies, especially solar photovoltaics, as an alternative source of power looks promising and is constantly gaining popularity. Development and application of new renewable energy in Iran, however, is still in its infancy and will require active support by government, utilities and financing institutions. Some experts might argue that Iran has plenty of natural resources like oil and gas. We should not forget, however, that even in countries with cheap fossil energy, a PV system is an economical option for supplying electricity to remotely located communities and facilities. There are thus good reasons suggesting that, like many other countries in the world, Iran also needs to be active in the utilization of solar energy. The objectives of this paper are: to give a comprehensive overview of current solar photovoltaic energy technology (the authors of this paper believe that photovoltaics is the most appropriate renewable energy technology for Iran); and to present the results obtained from a study on the size optimization and cost calculation of photovoltaic systems for the climate conditions of Iran. The method presented in this paper can be used for systems of any size and application. A further objective of this paper is to present a numerical approach for evaluating the performance of PV-hybrid power systems. A method is developed to predict the performance of all components integrated into a PV-hybrid system. The system under investigation is a hybrid power system in which the integrated components are a PV array, a battery bank for backing up the system and a diesel generator set for supporting the battery bank. The state of charge of the batteries is used as a measure of the performance of the system. The running time of

  9. Optimal capacitor placement and sizing using combined fuzzy ...

    African Journals Online (AJOL)

    user

    The studies have specified that as much as 13% of total power generated is consumed as ... sizing is designed with the objective function, which minimises the power loss. ..... System Engineering in 2007 (Anna University) Tamil Nadu, India.

  10. Towards Optimal Buffer Size in Wi-Fi Networks

    KAUST Repository

    Showail, Ahmad

    2016-01-01

    extensively for wired networks. However, there is little work addressing the unique challenges of wireless environment. In this dissertation, we discuss buffer sizing challenges in wireless networks, classify the state-of-the-art solutions, and propose two

  11. KungFQ: a simple and powerful approach to compress fastq files.

    Science.gov (United States)

    Grassi, Elena; Di Gregorio, Federico; Molineris, Ivan

    2012-01-01

    Nowadays, storing data derived from deep sequencing experiments has become pivotal, and standard compression algorithms do not exploit their structure in a satisfying manner. A number of reference-based compression algorithms have been developed, but they are less adequate when approaching new species without fully sequenced genomes or non-genomic data. We developed a tool that takes advantage of fastq characteristics and encodes them in a binary format optimized to be further compressed with standard tools (such as gzip or lzma). The algorithm is straightforward and does not need any external reference file; it scans the fastq only once and has a constant memory requirement. Moreover, we added the possibility to perform lossy compression, losing some of the original information (IDs and/or qualities) but resulting in smaller files; it is also possible to define a quality cutoff under which corresponding base calls are converted to N. We achieve 2.82 to 7.77 compression ratios on various fastq files without losing information, and 5.37 to 8.77 losing IDs, which are often not used in common analysis pipelines. In this paper, we compare the algorithm's performance with known tools, usually obtaining higher compression levels.
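
    The stream-separation idea behind such tools can be illustrated with a simplified sketch. This is not KungFQ's actual on-disk format: the 2-bit base packing, the out-of-band list of N positions, and the per-stream gzip step are illustrative assumptions.

```python
import gzip

# Map bases to 2-bit codes; real tools handle N more cleverly -- here we
# record N positions out-of-band for simplicity (an illustrative choice).
CODES = {"A": 0, "C": 1, "G": 2, "T": 3}
BASES = "ACGT"

def pack_sequence(seq):
    """2-bit pack a DNA sequence, recording N positions separately."""
    n_positions = [i for i, base in enumerate(seq) if base == "N"]
    clean = seq.replace("N", "A")            # placeholder bits for N
    packed = bytearray()
    for i in range(0, len(clean), 4):
        chunk = clean[i:i + 4]
        byte = 0
        for base in chunk:
            byte = (byte << 2) | CODES[base]
        byte <<= 2 * (4 - len(chunk))        # pad the final byte
        packed.append(byte)
    return bytes(packed), n_positions, len(seq)

def unpack_sequence(packed, n_positions, length):
    out = []
    for byte in packed:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 3])
    seq = out[:length]
    for i in n_positions:
        seq[i] = "N"
    return "".join(seq)

def compress_records(records):
    """Split (id, sequence, quality) records into homogeneous streams,
    then compress each stream with a standard tool (gzip here)."""
    ids = "\n".join(r[0] for r in records).encode()
    seqs = b"".join(pack_sequence(r[1])[0] for r in records)
    quals = "\n".join(r[2] for r in records).encode()
    return tuple(gzip.compress(stream) for stream in (ids, seqs, quals))

packed, n_pos, length = pack_sequence("ACGTNACG")
roundtrip = unpack_sequence(packed, n_pos, length)
streams = compress_records([("@read1", "ACGT", "IIII")])
```

    Lossy modes like the one described in the abstract would simply drop the ID stream or quantize the quality stream before the final compression pass.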

  12. Apically extruded dentin debris by reciprocating single-file and multi-file rotary system.

    Science.gov (United States)

    De-Deus, Gustavo; Neves, Aline; Silva, Emmanuel João; Mendonça, Thais Accorsi; Lourenço, Caroline; Calixto, Camila; Lima, Edson Jorge Moreira

    2015-03-01

    This study aims to evaluate the apical extrusion of debris by two reciprocating single-file systems: WaveOne and Reciproc. A conventional multi-file rotary system was used as a reference for comparison. The hypotheses tested were (i) the reciprocating single-file systems extrude more debris than the conventional multi-file rotary system and (ii) the reciprocating single-file systems extrude similar amounts of dentin debris. After solid selection criteria, 80 mesial roots of lower molars were included in the present study. The use of four different instrumentation techniques resulted in four groups (n = 20): G1 (hand-file technique), G2 (ProTaper), G3 (WaveOne), and G4 (Reciproc). The apparatus used to collect apically extruded debris was a typical double-chamber collector. Statistical analysis was performed for multiple comparisons. No significant difference was found in the amount of debris extruded between the two reciprocating systems. In contrast, the conventional multi-file rotary system group extruded significantly more debris than both reciprocating groups. The hand instrumentation group extruded significantly more debris than all other groups. The present results yielded favorable input for both reciprocating single-file systems, inasmuch as they showed improved control of apically extruded debris. Apical extrusion of debris has been studied extensively because of its clinical relevance, particularly since it may cause flare-ups, brought on by the introduction of bacteria, pulpal tissue, and irrigating solutions into the periapical tissues.

  13. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges, scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte and exabyte scale. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding, we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster, or even a single machine, in reasonable time and with reasonable fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the classroom. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.

  14. 76 FR 52323 - Combined Notice of Filings; Filings Instituting Proceedings

    Science.gov (United States)

    2011-08-22

    .... Applicants: Young Gas Storage Company, Ltd. Description: Young Gas Storage Company, Ltd. submits tariff..., but intervention is necessary to become a party to the proceeding. The filings are accessible in the.... More detailed information relating to filing requirements, interventions, protests, and service can be...

  15. Optimal size of stochastic Hodgkin-Huxley neuronal systems for maximal energy efficiency in coding pulse signals

    Science.gov (United States)

    Yu, Lianchun; Liu, Liwei

    2014-03-01

    The generation and conduction of action potentials (APs) represents a fundamental means of communication in the nervous system and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in the population, as well as a number of ion channels in each neuron, that maximizes the energy efficiency. The energy efficiency also depends on the characteristics of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of ion channel random gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of neural systems when energy use is constrained.

  16. An efficient one-step condensation and activation strategy to synthesize porous carbons with optimal micropore sizes for highly selective CO₂ adsorption.

    Science.gov (United States)

    Wang, Jiacheng; Liu, Qian

    2014-04-21

    A series of microporous carbons (MPCs) were successfully prepared by an efficient one-step condensation and activation strategy using commercially available dialdehyde and diamine as carbon sources. The resulting MPCs have large surface areas (up to 1881 m(2) g(-1)), micropore volumes (up to 0.78 cm(3) g(-1)), and narrow micropore size distributions (0.7-1.1 nm). The CO₂ uptakes of the MPCs prepared at high temperatures (700-750 °C) are higher than those prepared under mild conditions (600-650 °C), because the former samples possess optimal micropore sizes (0.7-0.8 nm) that are highly suitable for CO₂ capture due to enhanced adsorbate-adsorbent interactions. At 1 bar, MPC-750 prepared at 750 °C demonstrates the best CO₂ capture performance and can efficiently adsorb CO₂ molecules at 2.86 mmol g(-1) and 4.92 mmol g(-1) at 25 and 0 °C, respectively. In particular, the MPCs with optimal micropore sizes (0.7-0.8 nm) have extremely high CO₂/N₂ adsorption ratios (47 and 52 at 25 and 0 °C, respectively) at 1 bar, and initial CO₂/N₂ adsorption selectivities of up to 81 and 119 at 25 °C and 0 °C, respectively, which are far superior to previously reported values for various porous solids. These excellent results, combined with good adsorption capacities and efficient regeneration/recyclability, make these carbons amongst the most promising sorbents reported so far for selective CO₂ adsorption in practical applications.

  17. Optimal placement, sizing, and daily charge/discharge of battery energy storage in low voltage distribution network with high photovoltaic penetration

    DEFF Research Database (Denmark)

    Jannesar, Mohammad Rasol; Sedighi, Alireza; Savaghebi, Mehdi

    2018-01-01

    Proper installation of rooftop photovoltaic generation in distribution networks can improve the voltage profile, reduce energy losses, and enhance reliability. On the other hand, problems regarding harmonic distortion, voltage magnitude, reverse power flow, and energy losses can arise when photovoltaic penetration is increased in a low voltage distribution network. A local battery energy storage system can mitigate these disadvantages and, as a result, improve system operation. For this purpose, the battery energy storage system is charged when photovoltaic production is more than the consumers' demands and discharged when the consumers' demands are increased. Since the price of a battery energy storage system is high, economic, environmental, and technical objectives should be considered together for its placement and sizing. In this paper, optimal placement, sizing, and daily (24 h) charge/discharge...
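
    The charging rule described in the abstract (charge on PV surplus, discharge on deficit) can be sketched as a simple hourly dispatch. The load and PV profiles, the 5 kWh capacity, and the sign convention for grid exchange are illustrative assumptions; the paper's optimal placement and sizing search is not reproduced here.

```python
def dispatch(pv_kw, load_kw, capacity_kwh, soc_kwh=0.0):
    """Charge the battery when PV exceeds load, discharge when load exceeds PV.
    Returns hourly grid exchange (positive = import, negative = reverse flow)
    and the final state of charge. Assumes 1-hour steps and ideal efficiency."""
    grid = []
    for pv, load in zip(pv_kw, load_kw):
        surplus = pv - load
        if surplus > 0:                          # charge with excess PV
            charge = min(surplus, capacity_kwh - soc_kwh)
            soc_kwh += charge
            grid.append(-(surplus - charge))     # leftover flows back to grid
        else:                                    # discharge to cover deficit
            discharge = min(-surplus, soc_kwh)
            soc_kwh -= discharge
            grid.append(-surplus - discharge)    # remainder imported from grid
    return grid, soc_kwh

# Made-up 6-hour profiles (kW): midday PV surplus, evening demand peak.
pv = [0, 2, 5, 5, 1, 0]
load = [1, 1, 2, 2, 3, 4]
grid, soc = dispatch(pv, load, capacity_kwh=5.0)
```

    With these numbers the battery absorbs most of the midday surplus (only 2 kWh of reverse flow at hour 3) and covers the early-evening peak before the grid steps in.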

  18. Subthreshold SPICE Model Optimization

    Science.gov (United States)

    Lum, Gregory; Au, Henry; Neff, Joseph; Bozeman, Eric; Kamin, Nick; Shimabukuro, Randy

    2011-04-01

    The first step in integrated circuit design is the simulation of said design in software to verify proper functionality and design requirements. Properties of the process are provided by fabrication foundries in the form of SPICE models. These SPICE models contain the electrical data and physical properties of the basic circuit elements. A limitation of these models is that the data collected by the foundry only accurately model the saturation region. This is fine for most users, but when operating devices in the subthreshold region the models are inadequate for accurate simulation results. This is why optimizing the current SPICE models to characterize the subthreshold region is so important. In order to accurately simulate this region of operation, MOSFETs of varying widths and lengths are fabricated and the electrical test data are collected. From the data collected, the parameters of the model files are optimized through parameter extraction rather than curve fitting. With the completed optimized models, the circuit designer is able to simulate circuit designs for the subthreshold region accurately.

  19. MOS-Based Multiuser Multiapplication Cross-Layer Optimization for Mobile Multimedia Communication

    Directory of Open Access Journals (Sweden)

    Shoaib Khan

    2007-01-01

    Full Text Available We propose a cross-layer optimization strategy that jointly optimizes the application layer, the data-link layer, and the physical layer of a wireless protocol stack using an application-oriented objective function. The cross-layer optimization framework provides efficient allocation of wireless network resources across multiple types of applications run by different users to maximize network resource usage and user perceived quality of service. We define a novel optimization scheme based on the mean opinion score (MOS as the unifying metric over different application classes. Our experiments, applied to scenarios where users simultaneously run three types of applications, namely voice communication, streaming video and file download, confirm that MOS-based optimization leads to significant improvement in terms of user perceived quality when compared to conventional throughput-based optimization.

  20. Evaluated neutronic file for indium

    International Nuclear Information System (INIS)

    Smith, A.B.; Chiba, S.; Smith, D.L.; Meadows, J.W.; Guenther, P.T.; Lawson, R.D.; Howerton, R.J.

    1990-01-01

    A comprehensive evaluated neutronic data file for elemental indium is documented. This file, extending from 10^-5 eV to 20 MeV, is presented in the ENDF/B-VI format and contains all neutron-induced processes necessary for the vast majority of neutronic applications. In addition, an evaluation of the 115In(n,n')115mIn dosimetry reaction is presented as a separate file. Attention is given to quantitative values, with corresponding uncertainty information. These files have been submitted for consideration as a part of the ENDF/B-VI national evaluated-file system. 144 refs., 10 figs., 4 tabs

  1. FHEO Filed Cases

    Data.gov (United States)

    Department of Housing and Urban Development — The dataset is a list of all the Title VIII fair housing cases filed by FHEO from 1/1/2007 - 12/31/2012 including the case number, case name, filing date, state and...

  2. A Novel Numerical Algorithm for Optimal Sizing of a Photovoltaic/Wind/Diesel Generator/Battery Microgrid Using Loss of Load Probability Index

    Directory of Open Access Journals (Sweden)

    Hussein A. Kazem

    2013-01-01

    Full Text Available This paper presents a method for determining the optimal sizes of a PV array, wind turbine, diesel generator, and storage battery installed in a building-integrated system. The objective of the proposed optimization is to design a system that can supply the building load demand at minimum cost and maximum availability. Mathematical models for the system components as well as meteorological variables such as solar energy, temperature, and wind speed are employed for this purpose. The results showed that the optimum sizing ratios (the daily energy generated by the source to the daily energy demand) for the PV array, wind turbine, diesel generator, and battery for a system located in Sohar, Oman, are 0.737, 0.46, 0.22, and 0.17, respectively. A case study represented by a system consisting of a 30 kWp PV array (36%), an 18 kWp wind farm (55%), and a 5 kVA diesel generator (9%) is presented. This system is supposed to power a 200 kWh/day load demand. It is found that the generated energy shares of the PV array, wind farm, and diesel generator are 36%, 55%, and 9%, respectively, while the cost of energy is 0.17 USD/kWh.
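
    The sizing-ratio definition used above (daily energy generated by a source divided by the daily energy demand) is plain arithmetic. A small sketch, where the daily generation figures are hypothetical numbers chosen to reproduce the reported PV and wind ratios:

```python
def sizing_ratio(daily_energy_generated_kwh, daily_demand_kwh):
    """Sizing ratio = daily energy generated by the source / daily demand."""
    return daily_energy_generated_kwh / daily_demand_kwh

demand = 200.0   # kWh/day load demand from the case study

# Hypothetical daily generation figures (kWh/day), chosen for illustration
# so that the ratios match the reported 0.737 (PV) and 0.46 (wind).
pv_gen = 147.4
wind_gen = 92.0

ratios = {
    "pv": sizing_ratio(pv_gen, demand),
    "wind": sizing_ratio(wind_gen, demand),
}

# Energy share of each source within the total generated mix.
total = pv_gen + wind_gen
shares = {name: gen / total for name, gen in
          {"pv": pv_gen, "wind": wind_gen}.items()}
```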

  3. 75 FR 42801 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Science.gov (United States)

    2010-07-22

    ... Organizations; International Securities Exchange, LLC; Notice of Filing and Immediate Effectiveness of Proposed... at or under the threshold are charged the constituent's prescribed execution fee. This waiver applies... members to execute large-sized FX options orders on the Exchange in a manner that is cost effective. The...

  4. An in vitro comparison of root canal transportation by reciproc file with and without glide path.

    Science.gov (United States)

    Nazarimoghadam, Kiumars; Daryaeian, Mohammad; Ramazani, Nahid

    2014-09-01

    The aim of ideal canal preparation is to prevent iatrogenic aberrations such as transportation. The aim of this study was to evaluate root canal transportation by the Reciproc file with and without a glide path. Thirty acrylic-resin blocks with a curvature of 60° and size #10 (2% taper) were assigned to two groups (n = 15). In group 1, the glide path was performed using stainless steel K-files sizes #10 and #15 at working length. In group 2, canals were prepared with the Reciproc file system at working length. Using digital imaging software (AutoCAD 2008), the pre-instrumentation and post-instrumentation digital images were superimposed, taking the landmarks as reference points. Then the radius of the internal and external curve of the specimens was calculated at three points, α, β and γ (1 mm to apex as α, 3 mm to apex as β, and 5 mm to apex as γ). The data were statistically analyzed using the independent t-test and Mann-Whitney U test with SPSS version 16. The glide path was found significant only for the external curve in the apical third of the canal, that is, 5 mm to apex (P = 0.005). In the other thirds, canal modification was not significant (P > 0.008). Canal transportation in the apical third of the canal seems to be significantly reduced when a glide path is performed using reciprocating files.

  5. ITP Adjuster 1.0: A New Utility Program to Adjust Charges in the Topology Files Generated by the PRODRG Server

    OpenAIRE

    Medeiros, Diogo de Jesus; Cortopassi, Wilian Augusto; Costa França, Tanos Celmar; Pimentel, André Silva

    2013-01-01

    The suitable computation of accurate atomic charges for the GROMACS topology *.itp files of small molecules, generated by the PRODRG server, is a tricky task because the server does not calculate atomic charges using an ab initio method. Usually, additional steps of structure optimization and charge calculation, followed by a tedious manual replacement of atomic charges in the *.itp file, are needed. In order to assist with this task, we report here ITP Adjuster 1.0, a utility program d...

  6. Siting and sizing of distributed generators based on improved simulated annealing particle swarm optimization.

    Science.gov (United States)

    Su, Hongsheng

    2017-12-18

    Distributed power grids generally contain multiple diverse types of distributed generators (DGs). Traditional particle swarm optimization (PSO) and simulated annealing PSO (SA-PSO) algorithms have some deficiencies in the site selection and capacity determination of DGs, such as slow convergence speed and easily falling into local optima. In this paper, an improved SA-PSO (ISA-PSO) algorithm is proposed by introducing the crossover and mutation operators of the genetic algorithm (GA) into SA-PSO, so that the capabilities of the algorithm are well embodied in global searching and local exploration. In addition, diverse types of DGs are made equivalent to four types of nodes in power flow calculation by the backward/forward sweep method, and reactive power sharing principles and allocation theory are applied to determine the initial reactive power value and execute subsequent correction, thus providing the algorithm a better start to speed up convergence. Finally, a mathematical model of minimum economic cost is established for the siting and sizing of DGs under the location and capacity uncertainties of each single DG. Its objective function considers the investment and operation cost of DGs, grid loss cost, annual electricity purchase cost, and environmental pollution cost, and the constraints include power flow, bus voltage, conductor current, and DG capacity. Through application to an IEEE 33-node distribution system, it is found that the proposed method can achieve desirable economic efficiency and a safer voltage level relative to traditional PSO and SA-PSO algorithms, and is a more effective planning method for the siting and sizing of DGs in distributed power grids.
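
    A minimal sketch of the hybrid idea: PSO velocity updates combined with a simulated-annealing acceptance test and a GA-style mutation step. The toy sphere objective, the coefficient values, and the cooling schedule are illustrative assumptions standing in for the paper's DG siting/sizing cost model.

```python
import math
import random

def sphere(x):
    # Toy objective standing in for the DG siting/sizing cost function.
    return sum(v * v for v in x)

def isa_pso(dim=2, swarm=20, iters=200, seed=3):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=sphere)[:]
    temp = 1.0
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # Standard PSO velocity update (inertia + cognitive + social).
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
            cand = [pos[i][d] + vel[i][d] for d in range(dim)]
            # GA-style mutation on a small fraction of moves.
            if rng.random() < 0.1:
                cand[rng.randrange(dim)] += rng.gauss(0, 0.5)
            # Simulated-annealing acceptance: allow some uphill moves early on.
            delta = sphere(cand) - sphere(pos[i])
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                pos[i] = cand
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
            if sphere(pos[i]) < sphere(gbest):
                gbest = pos[i][:]
        temp *= 0.97   # cooling schedule
    return gbest

best = isa_pso()
```

    In the paper's setting each particle would encode DG locations and capacities, and `sphere` would be replaced by the economic cost evaluated through a power flow calculation.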

  7. Moderately thin advertising models are optimal, most of the time: Moderation of the quadratic effect of model body-size on ad attitude by fashion leadership

    NARCIS (Netherlands)

    Janssen, D.M.; Paas, L.J.

    2014-01-01

    The authors hypothesize and find that an advertising model's body size has an inverted U-shaped relationship with ad attitude in the apparel product category, in which moderately thin advertising models are optimal. They assess the moderating effect of consumers' fashion leadership on this quadratic

  8. Geometrical Optimization Of Clinch Forming Process Using The Response Surface Method

    International Nuclear Information System (INIS)

    Oudjene, M.; Ben-Ayed, L.; Batoz, J.-L.

    2007-01-01

    The determination of optimum tool shapes in the clinch forming process is needed to achieve the required high quality of clinch joints. The design of the tools (punch and die) is crucial since the strength of the clinch joints is closely correlated to the tool geometry. To increase the strength of clinch joints, an automatic optimization procedure is developed. The objective function is defined in terms of the maximum value of the tensile force obtained at separation of the sheets. Feasibility constraints on the geometrical parameters are also taken into account. First, a Python script is used to generate the ABAQUS finite element model, run the computations and post-process the results, which are exported to an ASCII file. Then, this ASCII file is read by a FORTRAN program in which the response surface approximation and SQP algorithm are implemented. The results show the potential of the developed optimization procedure for improving the strength of clinch joints under tensile loading.
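
    The response-surface idea can be illustrated in one variable: fit a quadratic through a few simulated design points, then take the stationary point of the fitted surface as the candidate optimum. The `tensile_force` function below is a hypothetical stand-in for the ABAQUS runs, and the exact three-point fit replaces the least-squares fit and SQP step a real study would use.

```python
def quadratic_through(points):
    """Fit y = a*x^2 + b*x + c exactly through three (x, y) points
    by solving the 3x3 Vandermonde system in closed form."""
    (x1, y1), (x2, y2), (x3, y3) = points
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2
         + x1 * x2 * (x1 - x2) * y3) / denom
    return a, b, c

def tensile_force(radius):
    # Hypothetical "simulation" response: peak joint strength at radius = 2.5.
    return 10.0 - (radius - 2.5) ** 2

# Three design points play the role of finite element runs.
design = [(1.0, tensile_force(1.0)),
          (2.0, tensile_force(2.0)),
          (4.0, tensile_force(4.0))]
a, b, c = quadratic_through(design)
optimum = -b / (2 * a)   # stationary point of the fitted surface
```

    Because the fitted surface opens downward (a < 0), the stationary point is the predicted maximum of the tensile force; a real procedure would then verify it with a confirmation simulation.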

  9. 76 FR 62092 - Filing Procedures

    Science.gov (United States)

    2011-10-06

    ... INTERNATIONAL TRADE COMMISSION Filing Procedures AGENCY: International Trade Commission. ACTION: Notice of issuance of Handbook on Filing Procedures. SUMMARY: The United States International Trade Commission (``Commission'') is issuing a Handbook on Filing Procedures to replace its Handbook on Electronic...

  10. Cyclic Fatigue Resistance of Novel Rotary Files Manufactured from Different Thermal Treated Nickel-Titanium Wires in Artificial Canals.

    Science.gov (United States)

    Karataşlıoglu, E; Aydın, U; Yıldırım, C

    2018-02-01

    The aim of this in vitro study was to compare the static cyclic fatigue resistance of thermally treated rotary files with a conventional nickel-titanium (NiTi) rotary file. Four groups of 60 rotary files with similar file dimensions, geometries, and motion were selected. The groups were HyFlex [controlled memory wire (CM-Wire)], ProFile Vortex (M-Wire), Twisted File (R-Phase wire), and OneShape (conventional NiTi wire), tested using a custom-made static cyclic fatigue testing apparatus. The fracture time and fragment length of each file were recorded. Statistical analysis was performed using one-way analysis of variance and Tukey's test at the 95% confidence level (P = 0.05). The HyFlex group had a significantly higher mean cyclic fatigue resistance than the other three groups (P < 0.05). CM-Wire alloy showed the best performance in cyclic fatigue resistance, and NiTi alloy in R-Phase had the second highest fatigue resistance. CM and R-Phase manufacturing technology applied to the conventional NiTi alloy enhances the cyclic fatigue resistance of files that have similar design and size. M-Wire alloy did not show any superiority in cyclic fatigue resistance when compared with conventional NiTi wire.

  11. ONLINE MINIMIZATION OF VERTICAL BEAM SIZES AT APS

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yipeng

    2017-06-25

    In this paper, online minimization of the vertical beam sizes along the APS (Advanced Photon Source) storage ring is presented. A genetic algorithm (GA) was developed and employed for online optimization in the APS storage ring. A total of 59 families of skew quadrupole magnets were employed as knobs to adjust the coupling and the vertical dispersion in the APS storage ring. Starting from initially zero-current skew quadrupoles, small vertical beam sizes along the APS storage ring were achieved in a short optimization time of one hour. The optimization results from this method are briefly compared with those from LOCO (Linear Optics from Closed Orbits) response matrix correction.

  12. Evaluation of apical canal shapes produced sequentially during instrumentation with stainless steel hand and Ni-Ti rotary instruments using Micro-computed tomography

    Directory of Open Access Journals (Sweden)

    Woo-Jin Lee

    2011-05-01

    Full Text Available Objectives The purpose of this study was to determine the optimal master apical file size with minimal transportation and optimal efficiency in removing infected dentin. We evaluated the transportation of the canal center and the change in untouched areas after sequential preparation with #25 to #40 files using 3 different instruments: stainless steel K-type (SS K-file) hand files, ProFile, and LightSpeed, using micro-computed tomography (MCT). Materials and Methods Thirty extracted human mandibular molars with separated orifices and apical foramina on the mesial canals were used. Teeth were randomly divided into three groups: SS K-file, ProFile, and LightSpeed, and the root canals were instrumented using the corresponding instruments from #20 to #40. All teeth were scanned with MCT before and after instrumentation. Cross-section images were used to evaluate canal transportation and untouched areas at the 1-, 2-, 3-, and 5-mm levels from the apex. Data were statistically analyzed according to a 'repeated nested design' and the Mann-Whitney test (p = 0.05). Results In the SS K-file group, canal transportation was significantly increased above the #30 instrument. In the ProFile group, canal transportation was significantly increased after preparation with the #40 instrument at the 1- and 2-mm levels. The LightSpeed group showed better centering ability than the ProFile group after preparation with the #40 instrument at the 1- and 2-mm levels. Conclusions SS K-file, ProFile, and LightSpeed showed differences in the degree of apical transportation depending on the size of the master apical file.

  13. 12 CFR 1780.9 - Filing of papers.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Filing of papers. 1780.9 Section 1780.9 Banks... papers. (a) Filing. Any papers required to be filed shall be addressed to the presiding officer and filed... Director or the presiding officer. All papers filed by electronic media shall also concurrently be filed in...

  14. SU-F-T-465: Two Years of Radiotherapy Treatments Analyzed Through MLC Log Files

    Energy Technology Data Exchange (ETDEWEB)

    Defoor, D [University of Texas HSC SA, New Braunfels, TX (United States); Kabat, C; Papanikolaou, N [University of Texas HSC SA, San Antonio, TX (United States); Stathakis, S [Cancer Therapy and Research Center, San Antonio, TX (United States)

    2016-06-15

    Purpose: To present treatment statistics of a Varian Novalis Tx using more than 90,000 Varian Dynalog files collected over the past 2 years. Methods: Varian Dynalog files are recorded for every patient treated on our Varian Novalis Tx. The files are collected and analyzed daily to check interfraction agreement of treatment deliveries. This is accomplished by creating fluence maps from the data contained in the Dynalog files. From the Dynalog files we have also compiled statistics for treatment delivery times, MLC errors, gantry errors and collimator errors. Results: The mean treatment time for VMAT patients was 153 ± 86 seconds, while the mean treatment time for step & shoot was 256 ± 149 seconds. Patients' treatment times showed a variation of 0.4% over their treatment course for VMAT and 0.5% for step & shoot. The average field sizes were 40 cm2 and 26 cm2 for VMAT and step & shoot, respectively. VMAT beams contained an average overall leaf travel of 34.17 meters, and step & shoot beams averaged less than half of that at 15.93 meters. When comparing planned and delivered fluence maps generated using the Dynalog files, VMAT plans showed an average gamma passing percentage of 99.85 ± 0.47. Step & shoot plans showed an average gamma passing percentage of 97.04 ± 0.04. 5.3% of beams contained an MLC error greater than 1 mm and 2.4% had an error greater than 2 mm. The mean gantry speed for VMAT plans was 1.01 degrees/s with a maximum of 6.5 degrees/s. Conclusion: Varian Dynalog files are useful for monitoring machine performance and treatment parameters. The Dynalog files have shown that the performance of the Novalis Tx is consistent over the course of a patient's treatment, with only slight variations in treatment times and a low rate of MLC errors.

  15. Optimization of the fabrication of novel stealth PLA-based nanoparticles by dispersion polymerization using D-optimal mixture design.

    Science.gov (United States)

    Adesina, Simeon K; Wight, Scott A; Akala, Emmanuel O

    2014-11-01

    Nanoparticle size is important in drug delivery. Clearance of nanoparticles by cells of the reticuloendothelial system has been reported to increase with particle size. Further, nanoparticles should be small enough to avoid lung or spleen filtering effects. Endocytosis and accumulation in tumor tissue by the enhanced permeability and retention effect are also processes that are influenced by particle size. We present the results of studies designed to optimize cross-linked biodegradable stealth polymeric nanoparticles fabricated by dispersion polymerization. Nanoparticles were fabricated using different amounts of macromonomer, initiators, crosslinking agent and stabilizer in a dioxane/DMSO/water solvent system. Nanoparticle formation was confirmed by scanning electron microscopy (SEM). Particle size was measured by dynamic light scattering (DLS). A D-optimal mixture statistical experimental design was used for the experimental runs, followed by model generation (Scheffe polynomial) and optimization with the aid of computer software. Model verification was done by comparing particle size data of some suggested solutions to the predicted particle sizes. The data showed that average particle sizes follow the same trend as predicted by the model. Negative terms in the model corresponding to the cross-linking agent and stabilizer indicate the important factors for minimizing particle size.
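
A quadratic Scheffé polynomial of the kind fitted here can be sketched with ordinary least squares; the mixture runs and coefficient values below are synthetic and purely illustrative, not the paper's data:

```python
import numpy as np

def scheffe_design_matrix(X):
    """Quadratic Scheffé polynomial terms for a 3-component mixture:
    x1, x2, x3, x1*x2, x1*x3, x2*x3 (no intercept, since x1+x2+x3 = 1)."""
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# Synthetic mixture runs (component fractions sum to 1).
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
              [1/3, 1/3, 1/3]])
# Made-up "particle size" responses; negative interaction terms shrink size.
true_b = np.array([120.0, 150.0, 90.0, -40.0, 25.0, -10.0])
y = scheffe_design_matrix(X) @ true_b

# Fit the model coefficients by least squares.
b, *_ = np.linalg.lstsq(scheffe_design_matrix(X), y, rcond=None)
```

With noise-free synthetic data and seven runs for six coefficients, the fit recovers the generating coefficients exactly; on real data the same matrix is fitted to measured particle sizes.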

  16. Energy Optimal Control of Induction Motor Drives

    DEFF Research Database (Denmark)

    Abrahamsen, Flemming

    This thesis deals with energy optimal control of small and medium-size variable-speed induction motor drives, especially for Heating, Ventilation and Air-Conditioning (HVAC) applications. Optimized efficiency is achieved by adapting the magnetization level in the motor to the load, and the basic...... demonstrated that energy optimal control will sometimes improve and sometimes deteriorate the stability. Comparison of small and medium-size induction motor drives with permanent magnet motor drives indicated why, and in which applications, PM motors are especially good. Calculations of economical aspects...... improvement by energy optimal control for any standard induction motor drive between 2.2 kW and 90 kW. A simple method to evaluate the robustness against load disturbances was developed and used to compare the robustness of different motor types and sizes. Calculation of the oscillatory behavior of a motor...

  17. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    Science.gov (United States)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

    The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of the SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present the standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded into the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to conveniently view and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check consistency and to guide further deconvolution optimization. Deconvolved data, together with the loaded original measurement and SRM sensor response data, can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.
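
As a generic illustration of restoring a signal smoothed by a known sensor response (this is a textbook Wiener-style frequency-domain deconvolution, not the optimized algorithm of Oda and Xuan that UDECON implements), the core idea can be sketched as:

```python
import numpy as np

def wiener_deconvolve(measured, response, noise_power=1e-3):
    """Divide out the sensor response in the frequency domain, with a
    damping term `noise_power` so frequencies where the response is weak
    do not amplify noise."""
    n = len(measured)
    H = np.fft.rfft(response, n)          # sensor response spectrum
    Y = np.fft.rfft(measured, n)          # measured signal spectrum
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power)  # Wiener filter
    return np.fft.irfft(Y * G, n)
```

With a well-conditioned response and small `noise_power`, the restored signal closely matches the original; UDECON's grid-search and simplex options instead tune the trade-off (smoothness, length correction, position shift) automatically.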

  18. Optimization of routing strategies for data transfer in peer-to-peer networks

    International Nuclear Information System (INIS)

    Morioka, Atsushi; Igarashi, Akito

    2014-01-01

    Since peer-to-peer file-sharing systems have become popular, information traffic in these networks is increasing, causing various traffic problems. In this paper, we model some features of peer-to-peer networks and investigate the traffic problems. Peer-to-peer networks have two notable characteristics. One is that each peer frequently searches for a file and downloads it from a peer who has the requested file. To decide whether a peer has the requested file in our model of the search and download process, we introduce a file parameter P_j, which expresses the number of files stored in peer j. It is assumed that if P_j is large, peer j has many files and can meet other peers' requests with high probability. The other characteristic is that peers repeatedly leave and rejoin the network. Many researchers address traffic problems of data transfer in computer communication networks. To our knowledge, however, no reports focus on those in peer-to-peer networks whose topology changes with time. For routing paths of data transfer, the shortest paths are generally used in conventional computer networks. In this paper, we introduce a new optimal routing strategy that uses weights of peers to avoid traffic congestion. We find that the new routing strategy is superior to the shortest-path strategy in terms of congestion frequency in data transfer.
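
The abstract does not give the weight formula, but the contrast between shortest-path and weight-based routing can be sketched with Dijkstra's algorithm, where stepping onto a peer costs its (hypothetical) load value instead of one hop, so heavily loaded peers are routed around:

```python
import heapq

def cheapest_path(adj, cost, src, dst):
    """Dijkstra over a graph given as {node: [neighbours]}; stepping
    onto node v costs cost[v], so busy peers are avoided."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v in adj[u]:
            nd = d + cost[v]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```

Passing a uniform cost of 1 per node reproduces plain hop-count shortest paths; passing per-peer load values yields the congestion-avoiding routes the paper argues for.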

  19. 78 FR 21930 - Aquenergy Systems, Inc.; Notice of Intent To File License Application, Filing of Pre-Application...

    Science.gov (United States)

    2013-04-12

    ... Systems, Inc.; Notice of Intent To File License Application, Filing of Pre-Application Document, and Approving Use of the Traditional Licensing Process a. Type of Filing: Notice of Intent to File License...: November 11, 2012. d. Submitted by: Aquenergy Systems, Inc., a fully owned subsidiary of Enel Green Power...

  20. 12 CFR 16.33 - Filing fees.

    Science.gov (United States)

    2010-01-01

    ... Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY SECURITIES OFFERING DISCLOSURE RULES § 16.33 Filing fees. (a) Filing fees must accompany certain filings made under the provisions of this part... Comptroller of the Currency Fees published pursuant to § 8.8 of this chapter. (b) Filing fees must be paid by...

  1. 75 FR 4689 - Electronic Tariff Filings

    Science.gov (United States)

    2010-01-29

    ... elements ``are required to properly identify the nature of the tariff filing, organize the tariff database... (or other pleading) and the Type of Filing code chosen will be resolved in favor of the Type of Filing...'s wish expressed in its transmittal letter or in other pleadings, the Commission may not review a...

  2. 77 FR 23770 - Self-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Notice of Filing of...

    Science.gov (United States)

    2012-04-20

    ...: The financial markets as a whole should benefit from [limit order display] because the price discovery... revised tier sizes and corresponding liquidity minimum amounts are in the best interest of the market for...-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Notice of Filing of Amendment No. 1...

  3. Jurisdiction Size and Local Democracy

    DEFF Research Database (Denmark)

    Lassen, David Dreyer; Serritzlew, Søren

    2011-01-01

    Optimal jurisdiction size is a cornerstone of government design. A strong tradition in political thought argues that democracy thrives in smaller jurisdictions, but existing studies of the effects of jurisdiction size, mostly cross-sectional in nature, yield ambiguous results due to sorting effects and problems of endogeneity. We focus on internal political efficacy, a psychological condition that many see as necessary for high-quality participatory democracy. We identify a quasi-experiment, a large-scale municipal reform in Denmark, which allows us to estimate a causal effect of jurisdiction size...

  4. An analytical mechanical model to describe the response of NiTi rotary endodontic files in a curved root canal

    Energy Technology Data Exchange (ETDEWEB)

    Leroy, Agnes Marie Francoise [Department of Metallurgical and Materials Engineering, School of Engineering, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil); Department of Mechanical and Materials Engineering, Ecole des Ponts Paristech (ENPC), Champs-sur-Marne (France); Bahia, Maria Guiomar de Azevedo [Department of Restorative Dentistry, Faculty of Dentistry, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil); Ehrlacher, Alain [Department of Mechanical and Materials Engineering, Ecole des Ponts Paristech (ENPC), Champs-sur-Marne (France); Buono, Vicente Tadeu Lopes, E-mail: vbuono@demet.ufmg.br [Department of Metallurgical and Materials Engineering, School of Engineering, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil)

    2012-08-01

    Aim: To build a mathematical model describing the mechanical behavior of NiTi rotary files while they are rotating in a root canal. Methodology: The file was treated as a beam undergoing large transformations. The instrument was assumed to be rotating steadily in the root canal, and the geometry of the canal was considered a known parameter of the problem. The formulae of large-transformation mechanics then allowed the calculation of the Green-Lagrange strain field in the file. The non-linear mechanical behavior of NiTi was modeled as a continuous piecewise linear function, assuming that the material did not reach plastic deformation. Criteria locating the changes of behavior of NiTi were established, and the tension field in the file and the external efforts applied to it were calculated. The unknown torsion variable was deduced from the equilibrium equation system using a Coulomb contact law, which solved the problem over a cycle of rotation. Results: To verify that the model described reality well, three-point bending experiments were conducted on superelastic NiTi wires, and their results were compared with the theoretical ones. The model closely tracked the empirical results over the range of bending angles of interest. Conclusions: Knowing the geometry of the root canal, one is now able to write the equations of the strain and stress fields in the endodontic instrument, and to quantify the impact of each macroscopic parameter of the problem on its response. This should be useful for predicting failure of the files under rotating bending fatigue, and for optimizing the geometry of the files. - Highlights: ► A mechanical model of the behavior of a NiTi endodontic instrument was developed. ► The model was validated with results of three-point bending tests on NiTi wires. ► The model is appropriate for the optimization of instruments' geometry.

  5. An analytical mechanical model to describe the response of NiTi rotary endodontic files in a curved root canal

    International Nuclear Information System (INIS)

    Leroy, Agnès Marie Françoise; Bahia, Maria Guiomar de Azevedo; Ehrlacher, Alain; Buono, Vicente Tadeu Lopes

    2012-01-01

    Aim: To build a mathematical model describing the mechanical behavior of NiTi rotary files while they are rotating in a root canal. Methodology: The file was treated as a beam undergoing large transformations. The instrument was assumed to be rotating steadily in the root canal, and the geometry of the canal was considered a known parameter of the problem. The formulae of large-transformation mechanics then allowed the calculation of the Green–Lagrange strain field in the file. The non-linear mechanical behavior of NiTi was modeled as a continuous piecewise linear function, assuming that the material did not reach plastic deformation. Criteria locating the changes of behavior of NiTi were established, and the tension field in the file and the external efforts applied to it were calculated. The unknown torsion variable was deduced from the equilibrium equation system using a Coulomb contact law, which solved the problem over a cycle of rotation. Results: To verify that the model described reality well, three-point bending experiments were conducted on superelastic NiTi wires, and their results were compared with the theoretical ones. The model closely tracked the empirical results over the range of bending angles of interest. Conclusions: Knowing the geometry of the root canal, one is now able to write the equations of the strain and stress fields in the endodontic instrument, and to quantify the impact of each macroscopic parameter of the problem on its response. This should be useful for predicting failure of the files under rotating bending fatigue, and for optimizing the geometry of the files. - Highlights: ► A mechanical model of the behavior of a NiTi endodontic instrument was developed. ► The model was validated with results of three-point bending tests on NiTi wires. ► The model is appropriate for the optimization of instruments' geometry.

  6. 78 FR 75554 - Combined Notice of Filings

    Science.gov (United States)

    2013-12-12

    ...-000. Applicants: Young Gas Storage Company, Ltd. Description: Young Fuel Reimbursement Filing to be.... Protests may be considered, but intervention is necessary to become a party to the proceeding. eFiling is... qualifying facilities filings can be found at: http://www.ferc.gov/docs-filing/efiling/filing-req.pdf . For...

  7. Optimizing concentration of shifter additive for plastic scintillators of different size

    Science.gov (United States)

    Adadurov, A. F.; Zhmurin, P. N.; Lebedev, V. N.; Titskaya, V. D.

    2009-02-01

    This paper concerns the influence of a wavelength-shifting (secondary) luminescent additive (LA2) on the light yield of polystyrene-based plastic scintillator (PS), taking self-absorption into account. Calculations of the light-yield dependence on the concentration of 1,4-bis(2-(5-phenyloxazolyl))-benzene (POPOP) as LA2 were made for various path lengths of photons in PS. It is shown that there is an optimal POPOP concentration (Copt) which provides a maximum light yield for a given path length. This optimal concentration is determined by the competition between luminescence and self-absorption processes. Copt values were calculated for PS of different dimensions. For small PS, Copt ≈ 0.02%, which agrees with the standard value of POPOP concentration. For larger PS dimensions, the optimal POPOP concentration decreases (to Copt ≈ 0.006% for a 320×30×2 cm sample), reducing the light yield from PS by almost 35%.
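
The trade-off can be illustrated with a toy model (the functional forms and constants below are assumptions for illustration, not the paper's calculation): photon capture saturates with concentration, while self-absorption grows with both concentration and path length, so the optimum shifts to lower concentrations for larger scintillators:

```python
import math

def light_yield(c, path_cm, k_conv=100.0, k_abs=3.0):
    """Hypothetical light-yield model: capture of primary photons
    saturates with concentration c (in %), while self-absorption
    attenuates the shifted light over the photon path length (cm)."""
    capture = 1.0 - math.exp(-k_conv * c)
    transmission = math.exp(-k_abs * c * path_cm)
    return capture * transmission

def c_opt(path_cm):
    """Grid-search the concentration (0.0001% .. 0.2%) maximizing yield."""
    grid = [i * 1e-4 for i in range(1, 2000)]
    return max(grid, key=lambda c: light_yield(c, path_cm))
```

With these (assumed) constants, a short path gives an optimum near the conventional 0.02%, and the optimum drops markedly for meter-scale path lengths, qualitatively matching the reported trend.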

  8. Optimal siting of capacitors in radial distribution network using Whale Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    D.B. Prakash

    2017-12-01

    Full Text Available At present, continuous efforts are being made to bring down the line losses of electrical distribution networks. Proper allocation of capacitors is therefore of utmost importance, because it helps in reducing line losses and maintaining the bus voltage, which in turn improves the stability and reliability of the system. In this paper the Whale Optimization Algorithm (WOA) is used to find the optimal sizing and placement of capacitors for a typical radial distribution system. Multiple objectives, such as operating cost reduction and power loss minimization, with inequality constraints on voltage limits, are considered, and the proposed algorithm is validated by applying it to standard radial systems: the IEEE-34 bus and IEEE-85 bus radial distribution test systems. The results obtained are compared with those of existing algorithms. The results show that the proposed algorithm is more effective in bringing down operating costs and in maintaining a better voltage profile. Keywords: Whale Optimization Algorithm (WOA), Optimal allocation and sizing of capacitors, Power loss reduction and voltage stability improvement, Radial distribution system, Operating cost minimization
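
A minimal sketch of the standard WOA update rules (encircling prey, spiral bubble-net, and random exploration) on a toy objective; the paper's capacitor-placement objective and voltage constraints are not reproduced here:

```python
import math
import random

def woa_minimize(f, dim, bounds, n_whales=20, iters=200, seed=1):
    """Minimal Whale Optimization Algorithm sketch: candidates move
    toward the best solution found so far (encircling or log-spiral),
    or toward a random whale for exploration; the best is kept greedily."""
    rng = random.Random(seed)
    lo, hi = bounds
    clamp = lambda v: max(lo, min(hi, v))
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_whales)]
    best = min(X, key=f)[:]
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters              # control parameter: 2 -> 0
        for x in X:
            A = 2.0 * a * rng.random() - a
            C = 2.0 * rng.random()
            if rng.random() < 0.5:
                # |A| < 1: exploit around the best; otherwise explore
                ref = best if abs(A) < 1 else X[rng.randrange(n_whales)]
                x[:] = [clamp(ref[j] - A * abs(C * ref[j] - x[j]))
                        for j in range(dim)]
            else:                               # log-spiral toward the best
                l = rng.uniform(-1.0, 1.0)
                x[:] = [clamp(abs(best[j] - x[j]) * math.exp(l)
                              * math.cos(2.0 * math.pi * l) + best[j])
                        for j in range(dim)]
            if f(x) < f(best):
                best = x[:]
    return best
```

For capacitor placement, `f` would evaluate operating cost plus power loss from a load-flow solution, with penalty terms for bus-voltage violations.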

  9. Small file aggregation in a parallel computing system

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
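
The offset/length index described above can be sketched in a few lines; the function names `aggregate` and `unpack` are illustrative, not the patent's API:

```python
import io

def aggregate(files):
    """Pack many small files into one blob plus offset/length metadata.
    `files` maps a file name to its bytes content; returns the packed
    blob and a metadata dict {name: (offset, length)}."""
    blob = io.BytesIO()
    meta = {}
    for name, data in files.items():
        meta[name] = (blob.tell(), len(data))  # record where this file lands
        blob.write(data)
    return blob.getvalue(), meta

def unpack(blob, meta, name):
    """Recover one original file from the aggregated blob."""
    offset, length = meta[name]
    return blob[offset:offset + length]
```

In a parallel file system the point of this layout is that thousands of tiny per-process files become one large sequential write, while the metadata still lets any single file be sliced back out.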

  10. 5 CFR 1203.13 - Filing pleadings.

    Science.gov (United States)

    2010-01-01

    ... delivery, by facsimile, or by e-filing in accordance with § 1201.14 of this chapter. If the document was... submitted by e-filing, it is considered to have been filed on the date of electronic submission. (e... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Filing pleadings. 1203.13 Section 1203.13...

  11. A note on discriminating equally optimal semi-Latin squares for sixteen treatments in blocks of size four

    International Nuclear Information System (INIS)

    Chigbu, P.E.

    2004-08-01

    A semi-Latin square for sixteen treatments in blocks of size four is like a 4×4 Latin square except that there are four treatments in each cell and each of the sixteen treatments occurs once in each row and once in each column. In the literature, three squares of this class have been found to be A-, D- and E-optimal, and an analytic approach has been adopted to further distinguish these optimal ones with the view of identifying the best for experimentation. With this analytic approach the 'best' square was identified; however, the approach neither provided a common basis for discriminating among the three squares nor a further classification of the other two good squares. In this paper, therefore, a numerical approach is adopted, which basically involves the computation of the generalized inverses of the information matrices of these squares. Each of the generalized inverses satisfies the Moore-Penrose inverse properties. Thereafter, a square is considered most preferable among others if it has the maximum number of minimum-variance simple treatment contrasts as well as the minimum number of distinct pairwise treatment variances. Above all, a mini-league table for the three squares is ascertained. (author)

  12. Molecular Level Design Principle behind Optimal Sizes of Photosynthetic LH2 Complex: Taming Disorder through Cooperation of Hydrogen Bonding and Quantum Delocalization.

    Science.gov (United States)

    Jang, Seogjoo; Rivera, Eva; Montemayor, Daniel

    2015-03-19

    The light harvesting 2 (LH2) antenna complex from purple photosynthetic bacteria is an efficient natural excitation energy carrier with well-known symmetric structure, but the molecular level design principle governing its structure-function relationship is unknown. Our all-atomistic simulations of nonnatural analogues of LH2 as well as those of a natural LH2 suggest that nonnatural sizes of LH2-like complexes could be built. However, stable and consistent hydrogen bonding (HB) between bacteriochlorophyll and the protein is shown to be possible only near naturally occurring sizes, leading to significantly smaller disorder than for nonnatural ones. Extensive quantum calculations of intercomplex exciton transfer dynamics, sampled for a large set of disorder, reveal that taming the negative effect of disorder through a reliable HB as well as quantum delocalization of the exciton is a critical mechanism that makes LH2 highly functional, which also explains why the natural sizes of LH2 are indeed optimal.

  13. PFS: a distributed and customizable file system

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.

    1996-01-01

    In this paper we present our ongoing work on the Pegasus File System (PFS), a distributed and customizable file system that can be used for off-line file system experiments and on-line file system storage. PFS is best described as an object-oriented component library from which either a true file

  14. 76 FR 61351 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-10-04

    ... MBR Baseline Tariff Filing to be effective 9/22/2011. Filed Date: 09/22/2011. Accession Number... submits tariff filing per 35.1: ECNY MBR Re-File to be effective 9/22/2011. Filed Date: 09/22/2011... Industrial Energy Buyers, LLC submits tariff filing per 35.1: NYIEB MBR Re-File to be effective 9/22/2011...

  15. Deceit: A flexible distributed file system

    Science.gov (United States)

    Siegel, Alex; Birman, Kenneth; Marzullo, Keith

    1989-01-01

    Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.

  16. Optimal Laser Phototherapy Parameters for Pain Relief.

    Science.gov (United States)

    Kate, Rohit J; Rubatt, Sarah; Enwemeka, Chukuka S; Huddleston, Wendy E

    2018-03-27

    Studies on laser phototherapy for pain relief have used parameters that vary widely and have reported varying outcomes. The purpose of this study was to determine the optimal parameter ranges of laser phototherapy for pain relief by analyzing data aggregated from the existing primary literature. Original studies were gathered from available sources and were screened to meet the pre-established inclusion criteria. The included articles were then subjected to meta-analysis using Cohen's d statistic for determining treatment effect size. From these studies, ranges of the reported parameters that always resulted in large effect sizes were determined. These optimal ranges were evaluated for their accuracy using a leave-one-article-out cross-validation procedure. A total of 96 articles met the inclusion criteria for meta-analysis and yielded 232 effect sizes. The average effect size was highly significant: d = +1.36 (confidence interval [95% CI] = 1.04-1.68). Among all the parameters, total energy was found to have the greatest effect on pain relief and had the most prominent optimal ranges of 120-162 and 15.36-20.16 J, which always resulted in large effect sizes. The cross-validation accuracy of the optimal ranges for total energy was 68.57% (95% CI = 53.19-83.97). Fewer and less prominent optimal ranges were obtained for the energy density and duration parameters. None of the remaining parameters was found to be independently related to pain relief outcomes. The findings of the meta-analysis indicate that laser phototherapy is highly effective for pain relief. Based on the analysis of parameters, total energy can be optimized to yield the largest effect on pain relief.
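
Cohen's d, the effect-size statistic used in this meta-analysis, is the difference of group means divided by the pooled standard deviation; a minimal sketch with made-up outcome scores:

```python
import math

def cohens_d(treated, control):
    """Cohen's d: mean difference over the pooled standard deviation
    (sample variances weighted by their degrees of freedom)."""
    n1, n2 = len(treated), len(control)
    m1 = sum(treated) / n1
    m2 = sum(control) / n2
    v1 = sum((x - m1) ** 2 for x in treated) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

By the usual convention, |d| around 0.8 or above counts as a large effect, which is the threshold behind the paper's "always resulted in large effect sizes" criterion.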

  17. 10 CFR 110.89 - Filing and service.

    Science.gov (United States)

    2010-01-01

    ...: Rulemakings and Adjudications Staff or via the E-Filing system, following the procedure set forth in 10 CFR 2.302. Filing by mail is complete upon deposit in the mail. Filing via the E-Filing system is completed... residence with some occupant of suitable age and discretion; (2) Following the requirements for E-Filing in...

  18. 49 CFR 1104.6 - Timely filing required.

    Science.gov (United States)

    2010-10-01

    ... offers next day delivery to Washington, DC. If the e-filing option is chosen (for those pleadings and documents that are appropriate for e-filing, as determined by reference to the information on the Board's Web site), then the e-filed pleading or document is timely filed if the e-filing process is completed...

  19. DICOM supported software configuration by XML files

    International Nuclear Information System (INIS)

    LucenaG, Bioing Fabian M; Valdez D, Andres E; Gomez, Maria E; Nasisi, Oscar H

    2007-01-01

    A method is proposed for configuring informatics systems that support the DICOM standard using XML files. The difference from other proposals is that this system does not encode the information of a DICOM object file, but encodes the standard itself in an XML file. The development itself is the format of these XML files, designed so that they can support what DICOM specifies in multiple languages. In this way, the same configuration file (or files) can be used in different systems. Together with the generated XML configuration file, we also wrote a set of CSS and XSL files, so the same file can be visualized in a standard browser as a query system for the DICOM standard; this emergent use was not a main objective, but it provides great utility and versatility. We also present some usage examples of the configuration file, mainly in relation to loading DICOM information objects. Finally, in the conclusions we show the utility the system has already provided when the DICOM standard changed from the 2006 to the 2007 edition.
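
The idea of encoding the standard itself (rather than object data) in XML can be sketched as follows; the element and attribute names here are hypothetical, not the paper's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of a configuration file that encodes part of the
# DICOM data dictionary itself (tag, value representation, name).
CONFIG = """<dicom-standard>
  <element tag="0010,0010" vr="PN" name="PatientName"/>
  <element tag="0010,0020" vr="LO" name="PatientID"/>
</dicom-standard>"""

def load_dictionary(xml_text):
    """Build a {tag: (vr, name)} lookup from the XML-encoded standard."""
    root = ET.fromstring(xml_text)
    return {e.get("tag"): (e.get("vr"), e.get("name"))
            for e in root.findall("element")}
```

Because the file is plain XML, the same document can drive an application's DICOM parsing and, with a stylesheet, render in a browser as a human-readable reference, which is the dual use the abstract describes.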

  20. I/O Performance Characterization of Lustre and NASA Applications on Pleiades

    Science.gov (United States)

    Saini, Subhash; Rappleye, Jason; Chang, Johnny; Barker, David Peter; Biswas, Rupak; Mehrotra, Piyush

    2012-01-01

    In this paper we study the performance of the Lustre file system using five scientific and engineering applications representative of the NASA workload on large-scale supercomputing systems such as NASA's Pleiades. In order to facilitate the collection of Lustre performance metrics, we have developed a software tool that exports a wide variety of client- and server-side metrics using SGI's Performance Co-Pilot (PCP), and generates a human-readable report on key metrics at the end of a batch job. These performance metrics are (a) amount of data read and written, (b) number of files opened and closed, and (c) remote procedure call (RPC) size distribution (4 KB to 1024 KB, in powers of 2) for I/O operations. The RPC size distribution measures the efficiency of the Lustre client and can pinpoint problems such as small write sizes, disk fragmentation, etc. These extracted statistics are useful in determining the I/O pattern of an application and can assist in identifying possible improvements to users' applications. Information on the number of file operations enables a scientist to optimize the I/O performance of their applications. The amount of I/O data helps users choose the optimal stripe size and stripe count to enhance I/O performance. In this paper, we demonstrate the usefulness of this tool on Pleiades for five production-quality NASA scientific and engineering applications. We compare the latency of read and write operations under Lustre to that with NFS by tracing system calls and signals. We also investigate the read and write policies and study the effect of page cache size on I/O operations. We examine the performance impact of Lustre stripe size and stripe count, along with a performance evaluation of file-per-process and a single shared file accessed by all processes for the NASA workload using the parameterized IOR benchmark.
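
The RPC size distribution described above amounts to bucketing request sizes into power-of-two bins from 4 KB to 1024 KB; a minimal sketch of the binning (not the PCP-based tool itself):

```python
def rpc_histogram(sizes_bytes):
    """Bucket I/O request sizes into power-of-two bins from 4 KB to
    1024 KB; a request falls into the smallest bin that holds it, and
    anything larger is clamped into the top bin."""
    bins = [4 * 1024 * 2 ** i for i in range(9)]   # 4 KB .. 1024 KB
    hist = {b: 0 for b in bins}
    for s in sizes_bytes:
        for b in bins:
            if s <= b:
                hist[b] += 1
                break
        else:
            hist[bins[-1]] += 1
    return hist
```

A histogram dominated by the small bins is the signature of the small-write inefficiency the paper says this metric can pinpoint.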

  1. LOG FILE ANALYSIS AND CREATION OF MORE INTELLIGENT WEB SITES

    Directory of Open Access Journals (Sweden)

    Mislav Šimunić

    2012-07-01

    Full Text Available To enable the successful performance of any company or business system, both worldwide and in the Republic of Croatia, among the many problems relating to its operations, and particularly to maximum utilization and efficiency of the Internet as a medium for running business (especially in terms of marketing), companies should make the best possible use of present-day global trends and the advantages of sophisticated technologies and approaches to running a business. Bearing in mind daily increasing competition and an ever more demanding market, this paper offers a scientific and practical contribution to continuous analysis of the demand market and adaptation to it by analyzing log files and by acting retroactively on the web site. A log file is a carrier of numerous data and indicators that should be used in the best possible way to improve the entire business operations of a company. However, this is not always simple and easy. Web sites differ in size, purpose, and the technology used to design them. For this very reason, the analytic frameworks should be able to cover any web site and at the same time leave room for analyzing and investigating the specific characteristics of each web site, and account for its dynamics by analyzing the log file records. These considerations were the basis for this paper.
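
As a minimal illustration of the log-file analysis discussed above (assuming Apache-style Common Log Format records, which the paper does not specify), counting successful page requests might look like:

```python
import re
from collections import Counter

# Common Log Format: host ident user [time] "request" status bytes
LOG_RE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+) [^"]*" (\d{3}) \d+')

def page_hits(lines):
    """Count successful (2xx) requests per URL path."""
    hits = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group(2).startswith("2"):
            hits[m.group(1)] += 1
    return hits
```

Aggregating such counts over time is the kind of demand signal the paper proposes feeding back into the web site's structure and content.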

  2. 12 CFR 908.25 - Filing of papers.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Filing of papers. 908.25 Section 908.25 Banks... RULES OF PRACTICE AND PROCEDURE IN HEARINGS ON THE RECORD General Rules § 908.25 Filing of papers. (a) Filing. Any papers required to be filed shall be addressed to the presiding officer and filed with the...

  3. Optimizing concentration of shifter additive for plastic scintillators of different size

    Energy Technology Data Exchange (ETDEWEB)

    Adadurov, A.F. [Institute for Scintillating materials, NPC Institute for Single Crystals, NAN of Ukraine, Lenin Avenue 61, 61001 Kharkov (Ukraine)], E-mail: adadurov@isma.kharkov.ua; Zhmurin, P.N.; Lebedev, V.N.; Titskaya, V.D. [Institute for Scintillating materials, NPC Institute for Single Crystals, NAN of Ukraine, Lenin Avenue 61, 61001 Kharkov (Ukraine)

    2009-02-11

    This paper concerns the influence of a wavelength-shifting (secondary) luminescent additive (LA{sub 2}) on the light yield of polystyrene-based plastic scintillator (PS), taking self-absorption into account. Calculations of the light yield dependence on the concentration of 1,4-bis(2-(5-phenyloxazolyl))benzene (POPOP) as LA{sub 2} were made for various path lengths of photons in PS. It is shown that there is an optimal POPOP concentration (C{sub opt}), which provides a maximum light yield for a given path length. This optimal concentration is determined by the competition between luminescence and self-absorption processes. C{sub opt} values were calculated for PS of different dimensions. For small PS, C{sub opt}{approx}0.02%, which agrees with the common (standard) value of POPOP concentration. For larger PS dimensions, the optimal POPOP concentration decreases (to C{sub opt}{approx}0.006% for a 320x30x2 cm sample), with self-absorption reducing the light yield from the PS by almost 35%.
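
    The competition between capture and self-absorption can be illustrated with a toy model. In the sketch below, both the functional form and the constants are invented for illustration; only the qualitative behavior, the optimum shifting to lower concentration for longer photon paths, mirrors the paper's result.

```python
# Toy model: raising shifter concentration c improves capture of primary
# photons but increases self-absorption over the photon path length d.
# k_capture and k_selfabs are invented constants, not fitted values.
import math

def light_yield(c, d, k_capture=300.0, k_selfabs=8.0):
    capture = 1.0 - math.exp(-k_capture * c)   # fraction of photons shifted
    escape = math.exp(-k_selfabs * c * d)      # fraction escaping self-absorption
    return capture * escape

def optimal_concentration(d, grid=None):
    grid = grid or [i * 1e-4 for i in range(1, 400)]  # scan 0.0001 .. 0.0399
    return max(grid, key=lambda c: light_yield(c, d))

# Longer path lengths push the optimum toward lower concentrations,
# as reported for the large scintillator samples.
print(optimal_concentration(d=2.0) > optimal_concentration(d=30.0))  # True
```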

  4. PFS: a distributed and customizable file system

    OpenAIRE

    Bosch, H.G.P.; Mullender, Sape J.

    1996-01-01

    In this paper we present our ongoing work on the Pegasus File System (PFS), a distributed and customizable file system that can be used for off-line file system experiments and on-line file system storage. PFS is best described as an object-oriented component library from which either a true file system or a file-system simulator can be constructed. Each of the components in the library is easily replaced by another implementation to accommodate a wide range of applications.

  5. Assessment of apically extruded debris produced by the single-file ProTaper F2 technique under reciprocating movement.

    Science.gov (United States)

    De-Deus, Gustavo; Brandão, Maria Claudia; Barino, Bianca; Di Giorgi, Karina; Fidel, Rivail Antonio Sergio; Luna, Aderval Severino

    2010-09-01

    This study was designed to quantitatively evaluate the amount of dentin debris extruded from the apical foramen, comparing the conventional sequence of ProTaper Universal nickel-titanium (NiTi) files with the single-file ProTaper F2 technique. Thirty mesial roots of lower molars were selected, and the use of different instrumentation techniques resulted in 3 groups (n=10 each). In G1, a crown-down hand-file technique was used; in G2, the conventional ProTaper Universal technique was used; and in G3, the ProTaper F2 file was used in a reciprocating motion. The apical finish preparation was equivalent to ISO size 25. An apparatus was used to evaluate the apically extruded debris. Statistical analysis was performed using 1-way analysis of variance and Tukey multiple comparisons. No significant difference was found in the amount of debris extruded between the conventional sequence of ProTaper Universal NiTi files and the single-file ProTaper F2 technique (P>.05). In contrast, the hand instrumentation group extruded significantly more debris than both NiTi groups (P<.05). The present results yielded favorable input for the F2 single-file technique in terms of apically extruded debris, inasmuch as it is the most simple and cost-effective instrumentation approach. Copyright (c) 2010 Mosby, Inc. All rights reserved.

  6. Detecting Malicious Code by Binary File Checking

    Directory of Open Access Journals (Sweden)

    Marius POPA

    2014-01-01

    Full Text Available The object, library and executable code is stored in binary files. Functionality of a binary file is altered when its content or program source code is changed, causing undesired effects. A direct content change is possible when the intruder knows the structural information of the binary file. The paper describes the structural properties of binary object files, how their content can be controlled by a possible intruder, and the ways to identify malicious code in such files. Because object files are inputs to linking processes, early detection of malicious content is crucial to avoid infection of the binary executable files.

  7. Formalizing a hierarchical file system

    NARCIS (Netherlands)

    Hesselink, Wim H.; Lali, Muhammad Ikram

    An abstract file system is defined here as a partial function from (absolute) paths to data. Such a file system determines the set of valid paths. It allows the file system to be read and written at a valid path, and it allows the system to be modified by the Unix operations for creation, removal,

  8. 77 FR 74839 - Combined Notice of Filings

    Science.gov (United States)

    2012-12-18

    ..., LP. Description: National Grid LNG, LP submits tariff filing per 154.203: Adoption of NAESB Version 2... with Order to Amend NAESB Version 2.0 Filing to be effective 12/1/2012. Filed Date: 12/11/12. Accession...: Refile to comply with Order on NAESB Version 2.0 Filing to be effective 12/1/2012. Filed Date: 12/11/12...

  9. Formalizing a Hierarchical File System

    NARCIS (Netherlands)

    Hesselink, Wim H.; Lali, M.I.

    2009-01-01

    In this note, we define an abstract file system as a partial function from (absolute) paths to data. Such a file system determines the set of valid paths. It allows the file system to be read and written at a valid path, and it allows the system to be modified by the Unix operations for removal

  10. Optimizing the grain size distribution for talc-magnesite ore flotation

    Directory of Open Access Journals (Sweden)

    Škvarla Jiří

    2001-06-01

    Full Text Available Flotation is the only separation method with a universal range of application. Along with the separation of particulate valuable or hazardous components from primary and secondary mineral raw materials, it is used in biotechnologies and water cleaning. The success of flotation separation crucially depends on the particle size distribution, or composition, of the ore charge entering the process. The paper deals with the flotation treatment of talc-magnesite ore. The main components of the ore, i.e. talc and magnesite, differ appreciably in their grindability and floatability. For such a raw material, grinding of the charge plays a very important role in the process. The (unwanted) influence of ultrafine particles on the course of the flotation process is well known. On the other hand, in order to liberate and subsequently selectively separate both components, a maximum particle size has to be respected. The influence of artificial samples of selected particle size fractions on the flotation efficiency has been studied experimentally by quantitative evaluation of the flotation products. The flotation experiments on these samples provided information not obtainable from traditional flotation tests. An adverse effect of the 0 – 0.04 mm size fraction was revealed, appreciably decreasing the flotation selectivity. These results are of theoretical and practical importance.

  11. 77 FR 35371 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-06-13

    .... Applicants: Duke Energy Miami Fort, LLC. Description: MBR Filing to be effective 10/1/2012. Filed Date: 6/5...-000. Applicants: Duke Energy Piketon, LLC. Description: MBR Filing to be effective 10/1/2012. Filed...-1959-000. Applicants: Duke Energy Stuart, LLC. Description: MBR Filing to be effective 10/1/2012. Filed...

  12. Modeling and sizing a Storage System coupled with intermittent renewable power generation

    International Nuclear Information System (INIS)

    Bridier, Laurent

    2016-01-01

    This thesis aims at presenting an optimal management and sizing of an Energy Storage System (ESS) paired up with Intermittent Renewable Energy Sources (IReN). Firstly, we developed a technical-economic model of the system which is associated with three typical scenarios of utility grid power supply: hourly smoothing based on a one-day-ahead forecast (S1), guaranteed power supply (S2) and combined scenarios (S3). This model takes the form of a large-scale non-linear optimization program. Secondly, four heuristic strategies are assessed and lead to an optimized management of the power output with storage according to the reliability, productivity, efficiency and profitability criteria. This ESS optimized management is called 'Adaptive Storage Operation' (ASO). When compared to a mixed integer linear program (MILP), this optimized operation that is practicable under operational conditions gives rapidly near-optimal results. Finally, we use the ASO in ESS optimal sizing for each renewable energy: wind, wave and solar (PV). We determine the minimal sizing that complies with each scenario, by inferring the failure rate, the viable feed-in tariff of the energy, and the corresponding compliant, lost or missing energies. We also perform sensitivity analysis which highlights the importance of the ESS efficiency and of the forecasting accuracy and the strong influence of the hybridization of renewables on ESS technical-economic sizing. (author) [fr

  13. Virtual file system for PSDS

    Science.gov (United States)

    Runnels, Tyson D.

    1993-01-01

    This is a case study. It deals with the use of a 'virtual file system' (VFS) for Boeing's UNIX-based Product Standards Data System (PSDS). One of the objectives of PSDS is to store digital standards documents. The file-storage requirements are that the files must be rapidly accessible, stored for long periods of time - as though they were paper, protected from disaster, and accumulative to about 80 billion characters (80 gigabytes). This volume of data will be approached in the first two years of the project's operation. The approach chosen is to install a hierarchical file migration system using optical disk cartridges. Files are migrated from high-performance media to lower performance optical media based on a least-frequency-used algorithm. The optical media are less expensive per character stored and are removable. Vital statistics about the removable optical disk cartridges are maintained in a database. The assembly of hardware and software acts as a single virtual file system transparent to the PSDS user. The files are copied to 'backup-and-recover' media whose vital statistics are also stored in the database. Seventeen months into operation, PSDS is storing 49 gigabytes. A number of operational and performance problems were overcome. Costs are under control. New and/or alternative uses for the VFS are being considered.
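
    The least-frequency-used migration policy mentioned above can be sketched as follows. Field names, the capacity threshold, and the tie-breaking rule are illustrative assumptions, not details of the PSDS implementation.

```python
# Sketch of a least-frequently-used migration policy: when fast storage
# exceeds a capacity threshold, files with the fewest accesses are moved
# to optical media first. Catalog entries below are invented.

def pick_migration_candidates(files, capacity_bytes):
    """files: list of dicts with 'name', 'size' and 'accesses'.
    Return names to migrate so remaining usage fits capacity_bytes."""
    used = sum(f["size"] for f in files)
    to_move = []
    # Least-frequently-used first; ties broken by larger size to free space faster.
    for f in sorted(files, key=lambda f: (f["accesses"], -f["size"])):
        if used <= capacity_bytes:
            break
        to_move.append(f["name"])
        used -= f["size"]
    return to_move

catalog = [
    {"name": "std-001.pdf", "size": 50, "accesses": 40},
    {"name": "std-002.pdf", "size": 80, "accesses": 2},
    {"name": "std-003.pdf", "size": 30, "accesses": 2},
]
print(pick_migration_candidates(catalog, capacity_bytes=90))  # ['std-002.pdf']
```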

  14. Optimizing a reconfigurable material via evolutionary computation

    Science.gov (United States)

    Wilken, Sam; Miskin, Marc Z.; Jaeger, Heinrich M.

    2015-08-01

    Rapid prototyping by combining evolutionary computation with simulations is becoming a powerful tool for solving complex design problems in materials science. This method of optimization operates in a virtual design space that simulates potential material behaviors and after completion needs to be validated by experiment. However, in principle an evolutionary optimizer can also operate on an actual physical structure or laboratory experiment directly, provided the relevant material parameters can be accessed by the optimizer and information about the material's performance can be updated by direct measurements. Here we provide a proof of concept of such direct, physical optimization by showing how a reconfigurable, highly nonlinear material can be tuned to respond to impact. We report on an entirely computer controlled laboratory experiment in which a 6 × 6 grid of electromagnets creates a magnetic field pattern that tunes the local rigidity of a concentrated suspension of ferrofluid and iron filings. A genetic algorithm is implemented and tasked to find field patterns that minimize the force transmitted through the suspension. Searching within a space of roughly 10^10 possible configurations, after testing only 1500 independent trials the algorithm identifies an optimized configuration of layered rigid and compliant regions.
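
    As a hypothetical illustration of the search loop described above, the sketch below runs a genetic algorithm over on/off patterns of a 6 x 6 magnet grid. The objective function is a stand-in, since in the actual experiment fitness was a measured transmitted force; the population size, mutation rate, and one-point crossover are generic choices, not the authors' settings.

```python
# Generic genetic algorithm over binary field patterns (36 magnets on/off).
# transmitted_force() is a placeholder objective invented for this sketch.
import random

random.seed(0)
N = 36  # 6 x 6 grid, each magnet on (1) or off (0)

def transmitted_force(pattern):
    # Placeholder: counts equal adjacent magnets, so alternating patterns
    # score lowest. The real experiment measured force through the suspension.
    return sum(1 for i in range(N - 1) if pattern[i] == pattern[i + 1])

def evolve(pop_size=40, generations=60, mutation=0.05):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=transmitted_force)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N)          # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N):                    # bit-flip mutation
                if random.random() < mutation:
                    child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=transmitted_force)

best = evolve()
print(transmitted_force(best))
```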

  15. Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size

    KAUST Repository

    Hadjimichael, Yiannis

    2016-09-08

    Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order two and three) with variable step size, and prove their optimality, stability, and convergence. The choice of step size for multistep SSP methods is an interesting problem because the allowable step size depends on the SSP coefficient, which in turn depends on the chosen step sizes. The description of the methods includes an optimal step-size strategy. We prove sharp upper bounds on the allowable step size for explicit SSP linear multistep methods and show the existence of methods with arbitrarily high order of accuracy. The effectiveness of the methods is demonstrated through numerical examples.
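
    For background, the step-size restriction at the heart of the abstract can be stated as follows (this is standard SSP theory, not the paper's new result): if forward Euler is strongly stable under a convex functional, an SSP method with coefficient \(\mathcal{C}\) preserves that stability for steps up to \(\mathcal{C}\) times the forward Euler limit.

```latex
% Forward Euler assumption: strong stability in a convex functional \|\cdot\|
\| u + \Delta t\, F(u) \| \le \| u \|
  \qquad \text{for } 0 \le \Delta t \le \Delta t_{\mathrm{FE}} .
% An SSP method with SSP coefficient \mathcal{C} then guarantees
\| u^{n+1} \| \le \max_{j \ge 0} \| u^{n-j} \|
  \qquad \text{whenever } \Delta t_n \le \mathcal{C}\, \Delta t_{\mathrm{FE}} .
```

    For variable-step multistep methods, \(\mathcal{C}\) itself depends on the recent step-size ratios, which is exactly the circular dependence between allowable step size and SSP coefficient that the abstract describes.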

  16. One new route to optimize the oxidation resistance of TiC/hastelloy (Ni-based alloy) composites applied for intermediate temperature solid oxide fuel cell interconnect by increasing graphite particle size

    Science.gov (United States)

    Qi, Qian; Liu, Yan; Wang, Lujie; Zhang, Hui; Huang, Jian; Huang, Zhengren

    2017-09-01

    TiC/hastelloy composites with suitable thermal expansion and excellent electrical conductivity are promising candidates for IT-SOFC interconnects. In this paper, TiC/hastelloy composites are fabricated by in-situ reactive infiltration, and the oxidation resistance of the composites is optimized by increasing the graphite particle size. Results show that increasing the graphite particle size from 1 μm to 40 μm reduces the TiC particle size from 2.68 μm to 2.22 μm by affecting the formation process of TiC. Moreover, the decrease of TiC particle size accelerates the formation of a dense and continuous TiO2/Cr2O3 oxide layer, which brings down the mass gain (800 °C/100 h) from 2.03 mg cm^-2 to 1.18 mg cm^-2. Meanwhile, the coefficient of thermal expansion decreases from 11.15 × 10^-6 °C^-1 to 10.80 × 10^-6 °C^-1, and the electrical conductivity remains about 5800 S cm^-1 at 800 °C. Therefore, increasing the graphite particle size is a simple and effective route to optimize the oxidation resistance of the composites while keeping suitable thermal expansion and good electrical conductivity.

  17. 76 FR 63291 - Combined Notice Of Filings #1

    Science.gov (United States)

    2011-10-12

    ... filing per 35: MBR Tariff to be effective 9/23/2011. Filed Date: 09/23/2011. Accession Number: 20110923.... submits tariff filing per 35: MBR Tariff to be effective 9/23/2011. Filed Date: 09/23/2011. Accession.... submits tariff filing per 35: MBR Tariff to be effective 9/23/2011. Filed Date: 09/23/2011. Accession...

  18. Titanium-II: an evaluated nuclear data file

    International Nuclear Information System (INIS)

    Philis, C.; Howerton, R.; Smith, A.B.

    1977-06-01

    A comprehensive evaluated nuclear data file for elemental titanium is outlined including definition of the data base, the evaluation procedures and judgments, and the final evaluated results. The file describes all significant neutron-induced reactions with elemental titanium and the associated photon-production processes to incident neutron energies of 20.0 MeV. In addition, isotopic-reaction files, consistent with the elemental file, are separately defined for those processes which are important to applied considerations of material-damage and neutron-dosimetry. The file is formulated in the ENDF format. This report formally documents the evaluation and, together with the numerical file, is submitted for consideration as a part of the ENDF/B-V evaluated file system. 20 figures, 9 tables

  19. Motor unit recruitment by size does not provide functional advantages for motor performance.

    Science.gov (United States)

    Dideriksen, Jakob L; Farina, Dario

    2013-12-15

    It is commonly assumed that the orderly recruitment of motor units by size provides a functional advantage for the performance of movements compared with a random recruitment order. On the other hand, the excitability of a motor neuron depends on its size and this is intrinsically linked to its innervation number. A range of innervation numbers among motor neurons corresponds to a range of sizes and thus to a range of excitabilities ordered by size. Therefore, if the excitation drive is similar among motor neurons, the recruitment by size is inevitably due to the intrinsic properties of motor neurons and may not have arisen to meet functional demands. In this view, we tested the assumption that orderly recruitment is necessarily beneficial by determining if this type of recruitment produces optimal motor output. Using evolutionary algorithms and without any a priori assumptions, the parameters of neuromuscular models were optimized with respect to several criteria for motor performance. Interestingly, the optimized model parameters matched well known neuromuscular properties, but none of the optimization criteria determined a consistent recruitment order by size unless this was imposed by an association between motor neuron size and excitability. Further, when the association between size and excitability was imposed, the resultant model of recruitment did not improve the motor performance with respect to the absence of orderly recruitment. A consistent observation was that optimal solutions for a variety of criteria of motor performance always required a broad range of innervation numbers in the population of motor neurons, skewed towards the small values. These results indicate that orderly recruitment of motor units in itself does not provide substantial functional advantages for motor control. Rather, the reason for its near-universal presence in human movements is that motor functions are optimized by a broad range of innervation numbers.

  20. Preservation of root canal anatomy using self-adjusting file instrumentation with glide path prepared by 20/0.02 hand files versus 20/0.04 rotary files

    Science.gov (United States)

    Jain, Niharika; Pawar, Ajinkya M.; Ukey, Piyush D.; Jain, Prashant K.; Thakur, Bhagyashree; Gupta, Abhishek

    2017-01-01

    Objectives: To compare the relative axis modification and canal concentricity after glide path preparation with a 20/0.02 hand K-file (NITIFLEX®) and a 20/0.04 rotary file (HyFlex™ CM), with subsequent instrumentation with the 1.5 mm self-adjusting file (SAF). Materials and Methods: One hundred and twenty ISO 15, 0.02 taper, Endo Training Blocks (Dentsply Maillefer, Ballaigues, Switzerland) were acquired and randomly divided into the following two groups (n = 60): Group 1, establishing a glide path to a 20/0.02 hand K-file (NITIFLEX®) followed by instrumentation with the 1.5 mm SAF; and Group 2, establishing a glide path to a 20/0.04 rotary file (HyFlex™ CM) followed by instrumentation with the 1.5 mm SAF. Pre- and post-instrumentation digital images were processed with MATLAB R 2013 software to identify the central axis, and then superimposed using digital imaging software (Picasa 3.0 software, Google Inc., California, USA) taking five landmarks as reference points. Student's t-test for pairwise comparisons was applied with the level of significance set at 0.05. Results: Training blocks instrumented with the 20/0.04 rotary file and SAF were associated with less deviation in the canal axis (at all five marked points), representing better canal concentricity compared to those in which the glide path was established with 20/0.02 hand K-files followed by SAF instrumentation. Conclusion: Canal geometry is better maintained after SAF instrumentation with a prior glide path established with a 20/0.04 rotary file. PMID:28855752

  1. Influence of axial movement on fatigue of ProFile Ni-Ti rotary instruments: an in vitro evaluation.

    Science.gov (United States)

    Avoaka, Marie-Chantal; Haïkel, Youssef

    2010-05-01

    The aim of this study was to evaluate the influence of axial movement and the angle of curve (in degrees) on the fatigue of nickel-titanium (Ni-Ti) ProFile rotary endodontic instruments. Ni-Ti ProFile rotary instruments (Maillefer SA, Ballaigues, Switzerland), 25 mm long, in the range of ISO sizes 15 to 40 with two tapers (0.4 and 0.6), were evaluated. They were divided into two groups: instruments with axial movement and instruments without axial movement. The system used to test fatigue was maintained in mechanical conditions as close as possible to the clinical situation. The axial movement was on the order of 2 mm in the corono-apical direction with a frequency of 1 Hz. The concave radii, incorporating a notched V-form for guiding the instruments, were 5, 7.5 and 10 mm. The rotary system was mounted on an electric handpiece and rotated at 350 rpm as recommended by the manufacturers. The instruments were rotated until their separation, and the time, in seconds, was recorded. Statistical evaluation was undertaken using a two-way t-test to identify significant differences (p < 0.05) between the engine-driven ProFile instruments incorporating an axial movement and the instruments without axial movement with the same radius of curvature, size and taper. The incorporation of the axial movement significantly increases the life-span of the ProFile rotary instruments. This should reduce the risk of instrument separation during endodontic treatment.

  2. 76 FR 28018 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-05-13

    ... tariff filing per 35.13(a)(2)(iii: Information Policy Revisions to be effective 6/20/ 2011. Filed Date... Interconnection, L.L.C. Description: PJM Interconnection, L.L.C. submits tariff filing per 35.13(a)(2)(iii: Queue... New Mexico submits tariff filing per 35.13(a)(2)(iii: PNM LGIP Filing to be effective 7/5/2011. Filed...

  3. 75 FR 62381 - Combined Notice of Filings #2

    Science.gov (United States)

    2010-10-08

    ... filing per 35.12: MeadWestvaco Virginia MBR Filing to be effective 9/ 28/2010. Filed Date: 09/29/2010... submits tariff filing per 35.12: City Power MBR Tariff to be effective 9/30/2010. Filed Date: 09/29/2010... Baseline MBR Tariff to be effective 9[sol]29[sol]2010. Filed Date: 09/29/2010. Accession Number: 20100929...

  4. Optimal unit sizing of a hybrid renewable energy system for isolated applications; Optimalite des elements d'un systeme decentralise de production d'energie electrique

    Energy Technology Data Exchange (ETDEWEB)

    Morales, D

    2006-07-15

    In general, the methods used to design a renewable energy production system overestimate the size of the generating units. These methods increase the investment cost and the production cost of energy. The work presented in this thesis proposes a methodology to optimally size a renewable energy system. This study shows that the classic approach, based only on a long-term analysis of the system's behaviour, is not sufficient, and a complementary methodology based on a short-term analysis is proposed. A numerical simulation was developed in which the mathematical models of the solar panel, the wind turbines and the battery are integrated. The daily average solar energy per m^2 is decomposed into a series of hourly energy values using the Collares-Pereira equations. The time series analysis of the wind speed is made using the Monte Carlo simulation method. The second part of this thesis makes a detailed analysis of an isolated wind energy production system. The average energy produced by the system depends on the generator's rated power, the total swept area of the wind turbine, the gearbox's transformation ratio, the battery voltage and the wind speed probability function. The study proposes a methodology to determine the optimal matching between the rated power of the permanent magnet synchronous machine and the wind turbine's rotor size. This is done taking into account the average electrical energy produced over a period of time. (author)

  5. Finding the Energy Efficient Curve: Gate Sizing for Minimum Power under Delay Constraints

    Directory of Open Access Journals (Sweden)

    Yoni Aizik

    2011-01-01

    Full Text Available A design scenario examined in this paper assumes that a circuit has been designed initially for high speed, and it is redesigned for low power by downsizing of the gates. In recent years, as power consumption has become a dominant issue, new optimizations of circuits are required for saving energy. This is done by trading off some speed in exchange for reduced power. For each feasible speed, an optimization problem is solved in this paper, finding new sizes for the gates such that the circuit satisfies the speed goal while dissipating minimal power. Energy/delay gain (EDG is defined as a metric to quantify the most efficient tradeoff. The EDG of the circuit is evaluated for a range of reduced circuit speeds, and the power-optimal gate sizes are compared with the initial sizes. Most of the energy savings occur at the final stages of the circuits, while the largest relative downsizing occurs in middle stages. Typical tapering factors for power efficient circuits are larger than those for speed-optimal circuits. Signal activity and signal probability affect the optimal gate sizes in the combined optimization of speed and power.

  6. Integrated topology and shape optimization in structural design

    Science.gov (United States)

    Bremicker, M.; Chirehdast, M.; Kikuchi, N.; Papalambros, P. Y.

    1990-01-01

    Structural optimization procedures usually start from a given design topology and vary its proportions or boundary shapes to achieve optimality under various constraints. Two different categories of structural optimization are distinguished in the literature, namely sizing and shape optimization. A major restriction in both cases is that the design topology is considered fixed and given. Questions concerning the general layout of a design (such as whether a truss or a solid structure should be used) as well as more detailed topology features (e.g., the number and connectivities of bars in a truss or the number of holes in a solid) have to be resolved by design experience before formulating the structural optimization model. Design quality of an optimized structure still depends strongly on engineering intuition. This article presents a novel approach for initiating formal structural optimization at an earlier stage, where the design topology is rigorously generated in addition to selecting shape and size dimensions. A three-phase design process is discussed: an optimal initial topology is created by a homogenization method as a gray level image, which is then transformed to a realizable design using computer vision techniques; this design is then parameterized and treated in detail by sizing and shape optimization. A fully automated process is described for trusses. Optimization of two dimensional solid structures is also discussed. Several application-oriented examples illustrate the usefulness of the proposed methodology.

  7. Framework for optimizing chlorine dose in small- to medium-sized ...

    African Journals Online (AJOL)

    2015-10-05

    Oct 5, 2015 ... should also be addressed in the modelling process. The main objective of the ... The hydraulic simulation model outputs include flows at the junctions .... MATLAB is used in this study to build Mamdani Systems for optimizing ...

  8. Ultrasound transmission spectroscopy: in-line sizing of nanoparticles

    NARCIS (Netherlands)

    Neer, P.L.M.J. van; Volker, A.W.F.; Pierre, G.; Bouvet, F.; Crozat, S.

    2014-01-01

    Nanoparticles are increasingly used in a number of applications, e.g. coatings or paints. To optimize nanoparticle production in-line quantitative measurements of their size distribution and concentration are needed. Ultrasound-based methods are especially suited for in-line particle sizing. These

  9. Optimal Sizing and Control Strategy of renewable hybrid systems PV-Diesel Generator-Battery: application to the case of Djanet city of Algeria

    Directory of Open Access Journals (Sweden)

    Adel Yahiaoui

    2017-05-01

    Full Text Available A method for the optimal sizing of a hybrid system consisting of a photovoltaic (PV) panel, a diesel generator, battery banks and a load is considered in this paper. To this end a novel approach is proposed: a methodology for the design and simulation of the behavior of a hybrid PV-diesel-battery bank system to electrify an isolated rural site, Djanet, in the Illizi province of southern Algeria. This methodology is based on the concept of the loss of power supply probability. Sizing and simulation are performed using MATLAB. The technique developed in this study determines the number of photovoltaic panels, diesel generators and batteries needed to cover the energy deficit and respond to the growing energy demand of rural residents. The obtained results demonstrate the superior capabilities of the proposed method.
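
    The loss of power supply probability criterion can be sketched as a simple hourly energy-balance simulation. The code below is a minimal illustration under assumed profiles and a lossless battery model, not the thesis's MATLAB implementation.

```python
# Sketch of a loss-of-power-supply-probability (LPSP) evaluation: simulate
# the battery state over an hourly series and report the fraction of demand
# that could not be served. Profiles and parameters are invented.

def lpsp(generation, load, battery_capacity, soc0=0.0):
    """generation/load: hourly energy series (kWh); returns unmet/total load."""
    soc = soc0
    unmet = 0.0
    for g, l in zip(generation, load):
        balance = g - l
        if balance >= 0:
            soc = min(battery_capacity, soc + balance)  # charge surplus
        else:
            draw = min(soc, -balance)                   # discharge to cover deficit
            soc -= draw
            unmet += (-balance) - draw
    return unmet / sum(load)

gen = [5, 6, 0, 0]
load = [2, 2, 3, 4]
print(lpsp(gen, load, battery_capacity=7))  # 0.0 -> this sizing meets the load
print(lpsp(gen, load, battery_capacity=5))  # > 0 -> battery too small
```

    A sizing routine of the kind described would sweep panel, generator, and battery counts and keep the cheapest combination whose LPSP falls below a target threshold.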

  10. Hybrid chaotic ant swarm optimization

    International Nuclear Information System (INIS)

    Li Yuying; Wen Qiaoyan; Li Lixiang; Peng Haipeng

    2009-01-01

    Chaotic ant swarm optimization (CASO) is a powerful chaos search algorithm that is used to find the global optimum solution in a search space. However, the CASO algorithm has some disadvantages, such as lower solution precision and longer computational time, when solving complex optimization problems. To resolve these problems, an improved CASO, called hybrid chaotic ant swarm optimization (HCASO), is proposed in this paper. The new algorithm introduces a preselection operator and a discrete recombination operator into the CASO; meanwhile, in the evolution equation it replaces the best position found by an ant and its neighbors with the best position found by the preselection and discrete recombination operators. In tests on five benchmark functions of large dimensionality, the experimental results show that the new method greatly enhances solution accuracy and stability, and significantly reduces the computational time and computer memory compared to the CASO. In addition, a sensitivity study with respect to swarm size shows that the results improve as the swarm size increases, and a scalability study yields some relations between problem dimension and swarm size.

  11. Evaluation of canal transportation after preparation with Reciproc single-file systems with or without glide path files.

    Science.gov (United States)

    Aydin, Ugur; Karataslioglu, Emrah

    2017-01-01

    Canal transportation is a common sequela of the use of rotary instruments. The purpose of the present study was to evaluate the degree of transportation after the use of Reciproc single-file instruments with or without glide path files. Thirty resin blocks with L-shaped canals were divided into three groups (n = 10). Group 1 - canals were prepared with the Reciproc-25 file. Group 2 - glide path file G1 was used before Reciproc. Group 3 - glide path files G1 and G2 were used before Reciproc. Pre- and post-instrumentation images were superimposed under a microscope, and the resin removed from the inner and outer surfaces of the root canal was calculated at 10 points. Statistical analysis was performed with the Kruskal-Wallis test and post hoc Dunn test. For the coronal and middle one-thirds, there was no significant difference among groups (P > 0.05). For the apical section, transportation in Group 1 was significantly higher than in the other groups (P < 0.05). Using glide path files before the Reciproc single-file system reduced the degree of apical canal transportation.

  12. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, which couples the EnKF with information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies provide more accurate parameter estimation and state prediction than conventional sampling strategies. Optimal sampling designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated: overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to other hydrological problems.
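The EnKF analysis step underlying methods like SEOD can be sketched compactly. The example below is a generic stochastic EnKF update for a linear observation operator, intended only to illustrate the mechanics; the matrix `H`, the noise levels, and the ensemble size are all invented stand-ins, not values or operators from the paper.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_std, rng):
    """One stochastic EnKF analysis step (textbook form, not SEOD itself).

    ensemble : (n_ens, n_par) prior parameter samples
    obs      : (n_obs,) observed data
    H        : (n_obs, n_par) linear observation operator (toy stand-in
               for a hydrological forward model)
    obs_std  : observation-error standard deviation
    """
    n_ens = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)       # parameter anomalies
    Y = X @ H.T                                # predicted-observation anomalies
    C_yy = Y.T @ Y / (n_ens - 1) + obs_std**2 * np.eye(len(obs))
    C_xy = X.T @ Y / (n_ens - 1)
    K = C_xy @ np.linalg.inv(C_yy)             # Kalman gain
    perturbed = obs + rng.normal(0.0, obs_std, (n_ens, len(obs)))
    innovations = perturbed - ensemble @ H.T
    return ensemble + innovations @ K.T

rng = np.random.default_rng(0)
truth = np.array([1.5, -0.5])
H = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
obs = H @ truth + rng.normal(0.0, 0.05, 3)
prior = rng.normal(0.0, 1.0, (200, 2))
post = enkf_update(prior, obs, H, 0.05, rng)
```

In an optimal-design loop, one would score candidate measurement locations by an information metric (e.g., relative entropy between prior and posterior ensembles) before committing to the next observation.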

  13. 77 FR 13587 - Combined Notice of Filings

    Science.gov (United States)

    2012-03-07

    .... Applicants: Transcontinental Gas Pipe Line Company. Description: Annual Electric Power Tracker Filing... Company. Description: 2012 Annual Fuel and Electric Power Reimbursement to be effective 4/1/2012. Filed... submits tariff filing per 154.403: Storm Surcharge 2012 to be effective 4/1/2012. Filed Date: 3/1/12...

  14. 10 CFR 2.302 - Filing of documents.

    Science.gov (United States)

    2010-01-01

    ... this part shall be electronically transmitted through the E-Filing system, unless the Commission or... all methods of filing have been completed. (e) For filings by electronic transmission, the filer must... digital ID certificates, the NRC permits participants in the proceeding to access the E-Filing system to...

  15. 5 CFR 1201.14 - Electronic filing procedures.

    Science.gov (United States)

    2010-01-01

    ... form. (b) Matters subject to electronic filing. Subject to the registration requirement of paragraph (e) of this section, parties and representatives may use electronic filing (e-filing) to do any of the...). (d) Internet is sole venue for electronic filing. Following the instructions at e-Appeal Online, the...

  16. The File System Interface is an Anachronism

    OpenAIRE

    Ellard, Daniel

    2003-01-01

    Contemporary file systems implement a set of abstractions and semantics that are suboptimal for many (if not most) purposes. The philosophy of using the simple mechanisms of the file system as the basis for a vast array of higher-level mechanisms leads to inefficient and incorrect implementations. We propose several extensions to the canonical file system model, including explicit support for lock files, indexed files, and resource forks, and the benefit of session semantics for write updates...

  17. Integrated solar energy system optimization

    Science.gov (United States)

    Young, S. K.

    1982-11-01

    The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar powered greenhouse.

  18. High School and Beyond: Twins and Siblings' File Users' Manual, User's Manual for Teacher Comment File, Friends File Users' Manual.

    Science.gov (United States)

    National Center for Education Statistics (ED), Washington, DC.

    These three users' manuals are for specific files of the High School and Beyond Study, a national longitudinal study of high school sophomores and seniors in 1980. The three files are computerized databases that are available on magnetic tape. As one component of base year data collection, information identifying twins, triplets, and some non-twin…

  19. Firm Size as Moderator to Non-Linear Leverage-Performance Relation: An Emerging Market Review

    Directory of Open Access Journals (Sweden)

    Umar Farooq

    2017-08-01

    Such losses are more prominent for small firms. Results also show that the leverage-performance relation is nonlinear for medium and large firms. However, these firms are not targeting the optimal level and are overleveraged, which ultimately decreases their profits. Financial managers of small firms should therefore avoid debt financing, while managers of large and medium firms need to adjust their debt ratios toward the optimal level.

  20. Menggabungkan Beberapa File Dalam SPSS/PC

    Directory of Open Access Journals (Sweden)

    Syahrudji Naseh

    2012-09-01

    Full Text Available Broadly speaking, computer software falls into five large groups: word processors, spreadsheets, databases, statistics packages, and animation/desktop publishing. Each has its strengths and weaknesses. dBase III+, the most popular database package, can hold only 128 variables per file. Consequently, the data from a large questionnaire such as Susenas (the National Socio-Economic Survey) or SKRT (the Household Health Survey) cannot be stored in a single file; they are usually split across many files, for example file1.dbf, file2.dbf, and so on. The problem is then how to merge some variables in file1.dbf with some variables in file5.dbf. This paper discusses that problem.
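The merge the abstract describes, recombining variables split across file1.dbf, file5.dbf, and so on, is a key-based join on a shared case identifier. A minimal Python sketch is shown below (SPSS/PC's own file-matching commands are not reproduced here; the inline CSV fragments and the `case_id` field are invented for illustration):

```python
import csv
import io

# Two fragments of one survey, split because the original database
# limited each file to 128 variables; "case_id" links the records.
file1 = "case_id,age,sex\n1,34,F\n2,51,M\n"
file5 = "case_id,income,region\n1,1200,West\n2,800,East\n"

def read_rows(text):
    """Index the rows of one fragment by the shared key."""
    return {row["case_id"]: row for row in csv.DictReader(io.StringIO(text))}

rows1, rows5 = read_rows(file1), read_rows(file5)
merged = []
for case_id, row in rows1.items():
    combined = dict(row)
    combined.update(rows5.get(case_id, {}))  # join on the shared key
    merged.append(combined)
```

Each merged record now carries the variables from both fragments, which is exactly what combining file1.dbf with file5.dbf achieves.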

  1. Optimization of protein fractionation by skim milk microfiltration: Choice of ceramic membrane pore size and filtration temperature.

    Science.gov (United States)

    Jørgensen, Camilla Elise; Abrahamsen, Roger K; Rukke, Elling-Olav; Johansen, Anne-Grethe; Schüller, Reidar B; Skeie, Siv B

    2016-08-01

    The objective of this study was to investigate how ceramic membrane pore size and filtration temperature influence the protein fractionation of skim milk by cross flow microfiltration (MF). Microfiltration was performed at a uniform transmembrane pressure with constant permeate flux to a volume concentration factor of 2.5. Three different membrane pore sizes, 0.05, 0.10, and 0.20 µm, were used at a filtration temperature of 50°C. Furthermore, at pore size 0.10 µm, 2 different filtration temperatures were investigated: 50 and 60°C. The transmission of proteins increased with increasing pore size, giving the permeate from MF with the 0.20-µm membrane a significantly higher concentration of native whey proteins compared with the permeates from the 0.05- and 0.10-µm membranes (0.50, 0.24, and 0.39%, respectively). Significant amounts of caseins permeated the 0.20-µm membrane (1.4%), giving a permeate with a whitish appearance and a casein distribution (αS2-CN: αS1-CN: κ-CN: β-CN) similar to that of skim milk. The 0.05- and 0.10-µm membranes were able to retain all caseins (only negligible amounts were detected). A permeate free from casein is beneficial in the production of native whey protein concentrates and in applications where transparency is an important functional characteristic. Microfiltration of skim milk at 50°C with the 0.10-µm membrane resulted in a permeate containing significantly more native whey proteins than the permeate from MF at 60°C. The more rapid increase in transmembrane pressure and the significantly lower concentration of caseins in the retentate at 60°C indicated that a higher concentration of caseins deposited on the membrane, and consequently reduced the native whey protein transmission. Optimal protein fractionation of skim milk into a casein-rich retentate and a permeate with native whey proteins was obtained by 0.10-µm MF at 50°C. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  2. 76 FR 70651 - Fee for Filing a Patent Application Other Than by the Electronic Filing System

    Science.gov (United States)

    2011-11-15

    ... government; or (3) preempt tribal law. Therefore, a tribal summary impact statement is not required under... 0651-AC64 Fee for Filing a Patent Application Other Than by the Electronic Filing System AGENCY: United..., that is not filed by electronic means as prescribed by the Director of the United States Patent and...

  3. A simple and optimal ancestry labeling scheme for trees

    DEFF Research Database (Denmark)

    Dahlgaard, Søren; Knudsen, Mathias Bæk Tejs; Rotbart, Noy Galil

    2015-01-01

    We present a lg n + 2 lg lg n + 3 ancestry labeling scheme for trees. The problem was first presented by Kannan et al. [STOC 88’] along with a simple 2 lg n solution. Motivated by applications to XML files, the label size was improved incrementally over the course of more than 20 years by a series...
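The simple 2 lg n scheme mentioned above can be sketched concretely: label each node with its DFS entry and exit times, and ancestry reduces to interval containment. The code below is an illustrative reconstruction of that classic scheme, not the improved lg n + 2 lg lg n + 3 labels of the paper.

```python
def label_tree(children, root=0):
    """Assign each node a (start, end) DFS interval.

    u is an ancestor of v  iff  start[u] <= start[v] and end[v] <= end[u].
    Two words of lg n bits each -- the simple 2 lg n scheme that the
    improved labels build on. (Here every node counts as its own ancestor.)
    """
    labels, clock = {}, 0
    stack = [(root, False)]
    while stack:
        node, done = stack.pop()
        if done:
            labels[node] = (labels[node], clock)  # close the interval
        else:
            labels[node] = clock                  # record entry time
            clock += 1
            stack.append((node, True))
            for c in reversed(children.get(node, [])):
                stack.append((c, False))
    return labels

def is_ancestor(labels, u, v):
    (su, eu), (sv, ev) = labels[u], labels[v]
    return su <= sv and ev <= eu

# tree: 0 -> {1, 2}, 1 -> {3}
labels = label_tree({0: [1, 2], 1: [3]})
```

For XML files, this lets an ancestor query between any two elements be answered from the labels alone, without touching the document; the research question is how far below 2 lg n the label size can be pushed.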

  4. Earnings Public-Use File, 2006

    Data.gov (United States)

    Social Security Administration — Social Security Administration released Earnings Public-Use File (EPUF) for 2006. File contains earnings information for individuals drawn from a systematic random...

  5. 12 CFR 509.10 - Filing of papers.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Filing of papers. 509.10 Section 509.10 Banks... IN ADJUDICATORY PROCEEDINGS Uniform Rules of Practice and Procedure § 509.10 Filing of papers. (a) Filing. Any papers required to be filed, excluding documents produced in response to a discovery request...

  6. Magnetic nanoparticles for power absorption: Optimizing size, shape and magnetic properties

    International Nuclear Information System (INIS)

    Gonzalez-Fernandez, M.A.; Torres, T.E.; Andres-Verges, M.; Costo, R.; Presa, P. de la; Serna, C.J.; Morales, M.P.; Marquina, C.; Ibarra, M.R.; Goya, G.F.

    2009-01-01

    We present a study on the magnetic properties of naked and silica-coated Fe3O4 nanoparticles with sizes between 5 and 110 nm. Their efficiency as heating agents was assessed through specific power absorption (SPA) measurements as a function of particle size and shape. The results show a strong dependence of the SPA on particle size, with a maximum around 30 nm, as expected for a Néel relaxation mechanism in single-domain particles. The SiO2 shell thickness was found to play an important role in the SPA mechanism by hindering the heat outflow, thus decreasing the heating efficiency. It is concluded that a compromise between good heating efficiency and surface functionality for biomedical purposes can be attained by making the SiO2 functional coating as thin as possible. - Graphical Abstract: The magnetic properties of Fe3O4 nanoparticles from 5 to 110 nm are presented, and their efficiency as heating agents is discussed as a function of particle size, shape and surface functionalization.
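The Néel mechanism invoked above has a characteristic relaxation time given by the standard textbook expression (a general relation, not a result of this paper):

```latex
\tau_N = \tau_0 \exp\!\left(\frac{K V}{k_B T}\right)
```

where \(\tau_0 \approx 10^{-10}\)-\(10^{-9}\) s is the attempt time, \(K\) the magnetic anisotropy constant, \(V\) the particle volume, and \(k_B T\) the thermal energy. Heating is most efficient when \(\tau_N\) matches the period of the applied AC field; since \(\tau_N\) grows exponentially with volume, this matching occurs only in a narrow size window, which is why the SPA peaks at an intermediate size (here around 30 nm) rather than growing monotonically.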

  7. On the optimal sizing of batteries for electric vehicles and the influence of fast charge

    Science.gov (United States)

    Verbrugge, Mark W.; Wampler, Charles W.

    2018-04-01

    We provide a brief summary of advanced battery technologies and a framework (i.e., a simple model) for assessing electric-vehicle (EV) architectures and associated costs to the customer. The end result is a qualitative model that can be used to calculate the optimal EV range (which maps back to the battery size and performance), including the influence of fast charge. We are seeing two technological pathways emerging: fast-charge-capable batteries versus batteries with much higher energy densities (and specific energies) but without the capability to fast charge. How do we compare and contrast the two alternatives? This work seeks to shed light on the question. We consider costs associated with the cells, added mass due to the use of larger batteries, and charging, three factors common in such analyses. In addition, we consider a new cost input, namely, the cost of adaption, corresponding to the days a customer would need an alternative form of transportation, as the EV would not have sufficient range on those days.
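The cost framework described above can be caricatured in a few lines: battery and mass-penalty costs grow with range, while the adaptation cost (days needing an alternative vehicle) shrinks with it, so total cost has an interior minimum. Every number and functional form below is invented for illustration; none come from the paper.

```python
import math

# Illustrative assumptions only -- not values from the paper.
BATTERY_COST_PER_KM = 40.0      # $ of pack per km of added range
MASS_PENALTY_PER_KM = 5.0       # $ lifetime energy cost of hauling extra pack
ADAPTATION_COST_PER_DAY = 60.0  # $ per day an alternative vehicle is needed
YEARS = 10

def days_range_insufficient(range_km):
    """Toy demand model: expected days/year whose driving exceeds the range."""
    return 365 * math.exp(-range_km / 150.0)

def total_cost(range_km):
    """Range-dependent costs to the customer: pack + mass penalty + adaptation."""
    return ((BATTERY_COST_PER_KM + MASS_PENALTY_PER_KM) * range_km
            + YEARS * ADAPTATION_COST_PER_DAY * days_range_insufficient(range_km))

# Pick the range (km) minimizing total cost over a coarse grid.
best = min(range(50, 801, 10), key=total_cost)
```

A fast-charge-capable battery would enter this sketch by shrinking the adaptation term (long trips become feasible with charging stops), shifting the optimum toward smaller packs, which is exactly the trade-off the paper frames against higher-energy-density cells.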

  8. An Improved Method for Reconfiguring and Optimizing Electrical Active Distribution Network Using Evolutionary Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Nur Faziera Napis

    2018-05-01

    Full Text Available The presence of optimally sized distributed generation (DG) with suitable distribution network reconfiguration (DNR) in the electrical distribution network has advantages for voltage support, power loss reduction, deferment of new transmission lines and distribution structure, and system stability improvement. However, installing a DG unit of non-optimal size with a non-optimal DNR may lead to higher power losses, power quality problems, voltage instability and increased operational cost. Thus, appropriate DG and DNR planning is essential and is the objective of this research. An effective heuristic optimization technique, named improved evolutionary particle swarm optimization (IEPSO), is proposed. The objective function is formulated to minimize the total power losses (TPL) and to improve the voltage stability index (VSI). The voltage stability index is determined for three load demand levels, namely light, nominal, and heavy load, with proper optimal DNR and DG sizing. The performance of the proposed technique is compared with other optimization techniques, namely particle swarm optimization (PSO) and iteration particle swarm optimization (IPSO). Four case studies on the IEEE 33-bus and IEEE 69-bus distribution systems have been conducted to validate the effectiveness of the proposed IEPSO. The optimization results show that the best performance is achieved by the IEPSO technique, with power loss reduction of up to 79.26% and a 58.41% improvement in the voltage stability index. Moreover, IEPSO has the fastest computational time for all load conditions compared with the other algorithms.
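As a reference point for the PSO variants compared above, a bare-bones global-best PSO fits in a few lines. This is the textbook baseline algorithm, not the IEPSO of the abstract, and the inertia/acceleration constants are conventional defaults rather than the paper's settings.

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best particle swarm optimizer (baseline sketch)."""
    rnd = random.Random(seed)
    lo, hi = bounds
    pos = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best position
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# Example: minimize the 5-dimensional sphere function on [-10, 10]^5.
best, best_f = pso(lambda x: sum(v * v for v in x), dim=5, bounds=(-10, 10))
```

Variants such as IPSO and IEPSO modify the velocity update and population management of exactly this loop; for a power-network application, `f` would instead evaluate losses and voltage stability for a candidate DNR/DG configuration.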

  9. 47 CFR 61.14 - Method of filing publications.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Method of filing publications. 61.14 Section 61...) TARIFFS Rules for Electronic Filing § 61.14 Method of filing publications. (a) Publications filed... date of a publication received by the Electronic Tariff Filing System will be determined by the date...

  10. Joint optimization of condition-based maintenance and production lot-sizing

    NARCIS (Netherlands)

    Peng, H.; van Houtum, G.J.J.A.N.

    2016-01-01

    Due to the development of sensor technologies nowadays, condition-based maintenance (CBM) programs can be established and optimized based on the data collected through condition monitoring. The CBM activities can significantly increase the uptime of a machine. However, they should be conducted in a

  11. Inversion of particle size distribution by spectral extinction technique using the attractive and repulsive particle swarm optimization algorithm

    Directory of Open Access Journals (Sweden)

    Qi Hong

    2015-01-01

    Full Text Available The particle size distribution (PSD) plays an important role in environmental pollution detection and human health protection, in phenomena such as fog, haze and soot. In this study, the Attractive and Repulsive Particle Swarm Optimization (ARPSO) algorithm and the basic PSO were applied to retrieve the PSD. The spectral extinction technique, coupled with the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law, was employed to investigate the retrieval of the PSD. Three commonly used monomodal PSDs, i.e. the Rosin-Rammler (R-R), normal (N-N) and logarithmic normal (L-N) distributions, were studied in the dependent model. An optimal wavelength selection algorithm was then proposed. To study the accuracy and robustness of the inverse results, several characteristic parameters were employed. The research revealed that ARPSO achieved more accurate results and a faster convergence rate than the basic PSO, even with random measurement error. Moreover, the investigation demonstrated that inversions using four incident laser wavelengths were more accurate and robust than those using two wavelengths. The research also found that increasing the interval between the selected incident laser wavelengths made the inverse results more accurate, even in the presence of random error.

  12. Cyclic fatigue resistance of RaCe and Mtwo rotary files in continuous rotation and reciprocating motion.

    Science.gov (United States)

    Vadhana, Sekar; SaravanaKarthikeyan, Balasubramanian; Nandini, Suresh; Velmurugan, Natanasabapathy

    2014-07-01

    The purpose of this study was to evaluate and compare the cyclic fatigue resistance of RaCe (FKG Dentaire, La Chaux-de-Fonds, Switzerland) and Mtwo (VDW, Munich, Germany) rotary files in continuous rotation and reciprocating motion. A total of 60 new rotary Mtwo and RaCe files (ISO size = 25, taper = 0.06, length = 25 mm) were selected and randomly divided into 4 groups (n = 15 each): Mtc (Mtwo NiTi files in continuous rotation), Rc (RaCe NiTi files in continuous rotation), Mtr (Mtwo NiTi files in reciprocating motion), and Rr (RaCe NiTi files in reciprocating motion). A cyclic fatigue testing device was fabricated with a 60° angle of curvature and a 5-mm radius. All instruments were rotated or reciprocated until fracture occurred. The time taken for each instrument to fracture and the length of the broken fragments were recorded. All the fractured files were analyzed under a scanning electron microscope to detect the mode of fracture. The Kolmogorov-Smirnov test was used to assess the normality of sample distribution, and statistical analysis was performed using the independent sample t test. The time taken for the instruments of the Mtr and Rr groups to fail under cyclic loading was significantly longer than for the Mtc and Rc groups (P < 0.05). Scanning electron microscopic analysis revealed a ductile mode of fracture. The length of the fractured segments was between 5 and 6 mm, with no statistically significant difference among the experimental groups. Mtwo and RaCe rotary instruments showed a significantly higher cyclic fatigue resistance in reciprocating motion compared with continuous rotation. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  13. 12 CFR 263.10 - Filing of papers.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Filing of papers. 263.10 Section 263.10 Banks... OF PRACTICE FOR HEARINGS Uniform Rules of Practice and Procedure § 263.10 Filing of papers. (a) Filing. Any papers required to be filed, excluding documents produced in response to a discovery request...

  14. 12 CFR 308.10 - Filing of papers.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Filing of papers. 308.10 Section 308.10 Banks... AND PROCEDURE Uniform Rules of Practice and Procedure § 308.10 Filing of papers. (a) Filing. Any papers required to be filed, excluding documents produced in response to a discovery request pursuant to...

  15. 29 CFR 1981.103 - Filing of discrimination complaint.

    Science.gov (United States)

    2010-07-01

    ... constitute the violations. (c) Place of filing. The complaint should be filed with the OSHA Area Director... or she has been discriminated against by an employer in violation of the Act may file, or have filed..., but may be filed with any OSHA officer or employee. Addresses and telephone numbers for these...

  16. 77 FR 66458 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-11-05

    ... Service Company of Colorado. Description: 2012--10--26 PSCo MBR Filing to be effective 12/26/ 2012. Filed...--SPS MBR Filing to be effective 12/26/2012. Filed Date: 10/26/12. Accession Number: 20121026-5123...: Revised Application for MBR Authorization to be effective 10/16/2012. Filed Date: 10/25/12. Accession...

  17. 75 FR 66075 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-10-27

    ....12: Baseline MBR Concurrence to be effective 10/8/2010. Filed Date: 10/19/2010. Accession Number... Company submits tariff filing per 35.12: Baseline MBR Concurrence to be effective 10/8/2010. Filed Date... Power Company submits tariff filing per 35.12: Baseline MBR Concurrence to be effective 10/8/2010. Filed...

  18. The Global File System

    Science.gov (United States)

    Soltis, Steven R.; Ruwart, Thomas M.; O'Keefe, Matthew T.

    1996-01-01

    The global file system (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via networks such as Fibre Channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility such that the previous disadvantages of shared disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies, whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of in the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.

  19. [Optimize preparation of compound licorice microemulsion with D-optimal design].

    Science.gov (United States)

    Ma, Shu-Wei; Wang, Yong-Jie; Chen, Cheng; Qiu, Yue; Wu, Qing

    2018-03-01

    In order to increase the solubility of the essential oil in compound licorice microemulsion and improve the efficacy of the decoction for treating chronic eczema, this experiment prepared the decoction as a microemulsion. The essential oil was used as the oil phase of the microemulsion and the extract as the water phase. The microemulsion region and the maximum water capacity ratio were obtained by plotting pseudo-ternary phase diagrams, in order to determine the appropriate surfactant and cosurfactant and the Km value, i.e. the mass ratio of surfactant to cosurfactant. With particle size and skin retention of active ingredients as indices, the microemulsion prescription was optimized by the D-optimal design method, and the in vitro release behavior of the optimized prescription was investigated. The results showed that the microemulsion was optimal with Tween-80 as the surfactant and anhydrous ethanol as the cosurfactant. When the Km value was 1, the area of the microemulsion region was largest, and an extract concentration of 0.5 g·mL⁻¹ had the lowest effect on the particle size distribution of the microemulsion. The final optimized formulation was as follows: 9.4% Tween-80, 9.4% anhydrous ethanol, 1.0% peppermint oil and 80.2% of 0.5 g·mL⁻¹ extract. The microemulsion prepared under these conditions had low viscosity, good stability and high skin retention of the drug; the in vitro release experiment showed that the microemulsion had a sustained-release effect on glycyrrhizic acid and liquiritin, essentially achieving the expected purpose of the project. Copyright© by the Chinese Pharmaceutical Association.

  20. Optimization of particle trapping and patterning via photovoltaic tweezers: role of light modulation and particle size

    International Nuclear Information System (INIS)

    Matarrubia, J; García-Cabañes, A; Plaza, J L; Agulló-López, F; Carrascosa, M

    2014-01-01

    The roles of light modulation m and particle size in the morphology and spatial resolution of nanoparticle patterns obtained by photovoltaic tweezers on Fe:LiNbO3 have been investigated. The impact of m when using spherical as well as non-spherical (anisotropic) nanoparticles deposited on the sample surface has been elucidated. Light modulation is a key parameter determining the particle profile contrast, which is optimum for spherical particles and high m values (m ∼ 1). The minimum reachable particle periodicities are also investigated, with periodic patterns obtained down to 3.5 µm, a value at least one order of magnitude shorter than those obtained in previously reported experiments. The results are successfully explained and discussed in light of previously reported models of photorefraction, including nonlinear carrier transport and dielectrophoretic trapping. From the results, a number of rules for optimizing particle patterning are derived. (paper)